Primary key violation when moving from the primary identity range to the secondary range

Hi Experts,
I've observed a strange issue in our environment.
We are using SQL Server 2008 R2 with SP2.
Whenever a table moves from the primary range to the secondary range of its identity values, the application crashes with a message like the one below.
Violation of PRIMARY KEY constraint 'PK6'. Cannot insert duplicate key in object 'dbo.TD_TRANN'. The duplicate key value is (17868679).
The statement has been terminated.
OR
Violation of UNIQUE KEY constraint 'IX_TDS_COST'. Cannot insert duplicate key in object 'dbo.TDS_COST'. The duplicate key value is (17, 19431201).
Identity ranges are auto-managed by replication, and the agents run continuously.
Please suggest.
Cheers, Vinod Mallolu

Well, this is pretty simple. There are two types of subscriptions (server and client) in merge replication, and when adding an article you provide the following parameters:
@pub_identity_range
@identity_range
You can check the details of these parameters in the following article: http://msdn.microsoft.com/en-us/library/ms174329.aspx
Snippet:
[ @pub_identity_range= ] pub_identity_range
Controls the identity range size allocated to a Subscriber with a server subscription when automatic identity range management is used. This identity range is reserved for a republishing Subscriber to allocate to its own Subscribers.
pub_identity_range is bigint, with a default of NULL. You must specify this parameter if identityrangemanagementoption is auto or if auto_identity_range is true.
[ @identity_range= ] identity_range
Controls the identity range size allocated both to the Publisher and to the Subscriber when automatic identity range management is used.
identity_range is bigint, with a default of NULL. You must specify this parameter if identityrangemanagementoption is auto or if auto_identity_range is true.
So, for example, if you are adding a server-type subscription, the @pub_identity_range value is used when assigning the range to that subscriber; if it is a client-type subscription, the @identity_range value is used.
You can run the following query to check the range assigned to each publisher and subscriber:
SELECT B.SUBSCRIBER_SERVER, B.DB_NAME, A.*
FROM MSMERGE_IDENTITY_RANGE A, SYSMERGESUBSCRIPTIONS B
WHERE A.SUBID = B.SUBID
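For reference, a minimal sketch (publication name and range values are purely illustrative) of how these parameters are supplied when adding an article with automatic identity range management:
EXEC sp_addmergearticle
    @publication = N'MyPublication',
    @article = N'TD_TRANN',
    @source_owner = N'dbo',
    @source_object = N'TD_TRANN',
    @identityrangemanagementoption = N'auto',
    @identity_range = 100000,      -- range size for the Publisher and client subscriptions
    @pub_identity_range = 1000000, -- range size for server (republishing) subscriptions
    @threshold = 80;               -- percentage used before a new range is assigned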
This should answer your other question as well.
Vikas Rana | Please mark as solved if I've answered your question, and vote for it as helpful to help other users find a solution quicker. This posting is provided "AS IS" with no warranties, and confers no rights.

Similar Messages

  • Issue with INSERT INTO, throws primary key violation error even if the target table is empty

    Hi,
    I am running a simple
    INSERT INTO Table1 (column1, column2, ....., columnN)
    SELECT column1, column2, ....., columnN FROM Table2
    Table1 and Table2 have the same definition (schema).
    Table1 is empty and Table2 has all the data. Column1 is the primary key and there is NO identity column.
    This statement still throws a primary key violation error. I am clueless about this.
    How can this happen when the target table is totally empty?
    Chintu

    No, that's not true.
    Either you're not inserting into the right table, or some other trigger code is firing in the background and inserting into a table that causes a PK violation.
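    If a background trigger is the suspect, a quick sanity check (sketch only; 'dbo.Table1' stands in for the real target table name) is to list any triggers defined on the target table:
    SELECT t.name AS trigger_name, t.is_disabled
    FROM sys.triggers AS t
    WHERE t.parent_id = OBJECT_ID(N'dbo.Table1');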
    Please Mark This As Answer if it solved your issue
    Please Vote This As Helpful if it helps to solve your issue
    Visakh
    My Wiki User Page
    My MSDN Page
    My Personal Blog
    My Facebook Page

  • How to defer the primary key validation to commit time

    Hi
    Is it possible to defer the primary key validation to commit time? I don't know why the framework checks the unique key constraint immediately after inserting the row and before committing it. This causes a "Too many objects match the primary key oracle.jbo.Key[null]" error if the user presses the create-new-record button multiple times before filling in and saving the previous records.
    Thanks,
    Ferez

    Dear M.Jabr,
    Many thanks for your reply. I have access to the database, but I would prefer an ADF workaround for this problem rather than a DB workaround. I am not sure, but I think there should be a way to defer or disable the primary key constraint in ADF.
    Anyway, I tried to make the primary key constraint DEFERRABLE in the DB using PL/SQL Developer, but an error occurred ("the name is used by another object") and I don't know why.
    Thanks,
    Ferez
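    For reference, a rough sketch (table and constraint names are hypothetical) of how a primary key is typically recreated as deferrable on the database side; the "name is used by another object" error often comes from the old unique index still holding the constraint's name, which DROP INDEX avoids:
    ALTER TABLE emp DROP CONSTRAINT emp_pk DROP INDEX;
    ALTER TABLE emp ADD CONSTRAINT emp_pk
        PRIMARY KEY (emp_id)
        DEFERRABLE INITIALLY DEFERRED;   -- checked at commit time instead of per statement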

  • Primary key violation exception in auto increment column

    Hi All,
    I am facing an issue in a multi-threaded environment.
    I am getting a primary key violation exception on an auto-increment column. I have a table whose primary key is the auto-increment column, and a trigger that populates this column.
    5 threads are running and inserting data into the table, and they randomly throw a primary key violation exception.
    create table example (
    id number not null,
    name varchar2(30)
    );
    alter table example
    add constraint PK1example primary key (id);
    create sequence example_id_seq start with 1 increment by 1;
    create or replace trigger example_insert
    before insert on example
    for each row
    begin
    select example_id_seq.nextval into :new.id from dual;
    end;
    /
    Any idea how to handle an auto-increment column (trigger) in a multi-threaded environment?
    Thanks,

    user13566109 wrote:
    >> Thanks All,
    Problem was in approach; removed the trigger and placed a seq.nextval in the insert query. It has resolved the issue. <<
    I very much suspect that that was not the issue.
    The trigger would execute for each insertion and the nextval would have been unique for each insertion (that's how sequences work in Oracle), so that wouldn't have been causing duplicates.
    I suspect, more likely, that you had some other code somewhere that was using another sequence, or some other method of generating the keys, that was also inserting into the same table, so there was a conflict between the sources of the generated keys.
    The way you coded it above is a perfectly normal way to assign primary keys from a sequence, and it is not a problem in a multi-user/multi-threaded environment.
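    For completeness, the approach the poster switched to (sketched here with the same example objects) simply references the sequence directly in the insert instead of relying on the trigger:
    INSERT INTO example (id, name)
    VALUES (example_id_seq.NEXTVAL, 'some name');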

  • Primary key violation when inserting records with insert_identity and ident_current+x

    My problem is this: I have data in a couple of temporary tables, including relations (one table referencing records from another table by using temporary keys).
    Next, I would like to insert the data from the temp tables into my production tables, which have the same structure but their own identity keys. Hence, I need to translate the 'temporary' keys into regular identity keys in my production tables.
    This is made more difficult by the fact that the system is highly concurrent, i.e. multiple sessions may try to insert data this way simultaneously.
    So far we have been running the following solution, using a combination of identity_insert and ident_current:
    create table doc(id int identity primary key, number varchar(100))
    create table pos (id int identity primary key, docid int references doc(id), qty int)
    create table #doc(idx int, number varchar(100))
    create table #pos (docidx int, qty int)
    insert #doc select 1, 'D1'
    insert #doc select 2, 'D2'
    insert #pos select 1, 10
    insert #pos select 1, 12
    insert #pos select 2, 32
    insert #pos select 2, 9
    declare @docids table(ID int)
    set identity_insert doc on
    insert doc (id,number)
    output inserted.ID into @docids
    select ident_current('doc')+idx,number from #doc
    set identity_insert doc off
    -- Since scope_identity() is not reliable, we get the inserted identity values this way:
    declare @docID int = (select min(ID) from @docids)
    insert pos (docid,qty) select @docID+docidx-1, qty from #pos
    Since the call to ident_current() is located directly in the insert statement, we always have an implicit transaction, which should be thread-safe to a certain extent.
    We never had a problem with this solution for years, until recently we started running into occasional primary key violations. After some research it turned out that there were concurrent sessions trying to insert records in this way.
    Does anybody have an explanation for the primary key violations, or an alternative solution for the problem?
    Thank you
    David
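    One commonly used alternative (not suggested in this thread; it reuses the table names from the example above) is to let SQL Server generate the identity values and capture the mapping between temporary and real keys with MERGE ... OUTPUT, whose OUTPUT clause, unlike INSERT's, may reference source columns; this avoids IDENT_CURRENT entirely:
    DECLARE @map TABLE (idx int, id int);
    -- ON 1 = 0 never matches, so every #doc row is inserted and the generated
    -- identity is captured together with the temporary key.
    MERGE doc AS t
    USING #doc AS s
        ON 1 = 0
    WHEN NOT MATCHED THEN
        INSERT (number) VALUES (s.number)
    OUTPUT s.idx, inserted.id INTO @map (idx, id);
    INSERT pos (docid, qty)
    SELECT m.id, p.qty
    FROM #pos AS p
    JOIN @map AS m ON m.idx = p.docidx;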

    >> My problem is this: I have data in a couple of temporary tables including relations (one table referencing records [sic] from another table by using temporary keys [sic]). <<
    NO, your problem is that you have no idea how RDBMS and SQL work. 
    1. Rows are not anything like records; this is a basic concept.
    2. Temp tables are how old magnetic tape files mimicked scratch tapes. SQL programmers use CTEs, views, derived tables, etc.
    3. Keys are a subset of attributes of an entity, fundamental characteristics of them! A key cannot be temporary by definition. 
    >> Next, I would like to insert the data from the temp tables to my production tables which have the same structure but their own IDENTITY keys. Hence, I need to translate the 'temporary' keys to regular IDENTITY keys in my productive tables. <<
    NO, you just get worse. IDENTITY is a 1970's Sybase/UNIX dialect, based on the sequential file structure used on 16-bit minicomputers back then. It counts the physical insertion attempts (not even successes!) and has nothing to do with a logical data model. This is a mag tape drive model of 1960's EDP, and not RDBMS.
    >> This is even more difficult because the system is highly concurrent, i.e. multiple sessions may try to insert data that way simultaneously. <<
    Gee, that is how magnetic tapes work, with queues. This is one of many reasons competent SQL programmers do not use IDENTITY.
    >> So far we were running the following solution, using a combination of IDENTITY_INSERT and IDENT_CURRENT: <<
    This is a kludge, not a solution. 
    There is no such thing as a generic “id” in RDBMS; it has to be “<something in particular>_id” to be valid. You have no idea what the ISO-11179 rules are. Even worse, your generic “id” changes names from table to table! By magic, it starts as a “Doc”, then becomes a “Pos” in the next table! Does it wind up as a “doc_id”? Can it become an automobile? A squid? Lady Gaga?
    This is the first principle of any data model; it is based on the Law of Identity; remember that from Freshman Logic 101? A data element has one and only one name in a model.
    And finally, you do not know the correct syntax for INSERT INTO, so you use the 1970's Sybase/UNIX dialect! The ANSI/ISO Standard uses a table constructor:
    INSERT INTO Doc VALUES (1, 'D1'), (2, 'D2'); 
    >> We never had a problem with this solution for years until recently when we were running in occasional PRIMARY KEY violations. After some research it turned out, that there were concurrent sessions trying to insert records [sic] in this way. <<
    “No matter how far you have gone down the wrong road, turn around.” -- Turkish proverb. 
    You have been mimicking a mag tape file system and have not written correct SQL. It has caught up with you in a way you can see. Throw out this garbage and do it right. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • JPA: Attr ... mapped to a primary key column in the DB. Update not allowed.

    Let me just say that I posted a bug report for this here:
    https://glassfish.dev.java.net/issues/show_bug.cgi?id=3937
    But I'm also posting the info here, so that people who search on this forum may get some help:
    TopLink (both Essentials and 11g) has a problem with flushing entities
    containing a self-reference relationship. When flushing, the following exception
    occurs:
    Exception [TOPLINK-7251] (Oracle TopLink Essentials - 2.1 (Build b14-fcs
    (12/17/2007))): oracle.toplink.essentials.exceptions.ValidationException
    Exception Description: The attribute [id] of class [test.jpa.entities.Person] is
    mapped to a primary key column in the database. Updates are not allowed.
    No manual updates have been made to ANY primary key.
    What I'm doing is:
    1. I instantiate a new entity.
    2. I start a transaction
    3. I persist the new entity.
    4. I read an existing entity from the DB.
    5. I let the existing entity point to the new entity via the self-reference
    relationship.
    6. I flush the persistence context.
    7. I issue commit(), and the exception occurs. (I have provided the stack traces for various versions of TopLink below.)
    This is a clear bug.
    Here are some additional observations:
    1. Reproduced on the following versions of TopLink:
    1.1. Oracle TopLink Essentials - 2.0 (Build b58g-fcs (09/07/2007))
    1.2. Oracle TopLink Essentials - 2.1 (Build b14-fcs (12/17/2007))
    1.3. Oracle TopLink - 11g Technology Preview 3 (11.1.1.0.0) (Build 071214)
    2. Reproducible both on Java SE and Java EE. (I tested on Oracle Application Server)
    3. Reproducible with and without class weaving
    4. Reproducible regardless of whether the JPA annotations are on fields or on
    methods
    5. Reproducible regardless of whether "cascade={CascadeType.PERSIST}" is used or
    not.
    6. Reproducible regardless of the fetch type of the self-reference relationship
    (EAGER or LAZY).
    Also:
    1. Without flushing, the bug doesn't occur. That is, if I commit without
    flushing, it works.
    2. Without setting the self-reference relationship, the bug doesn't occur.
    This is an issue that appears when using BOTH self-reference relationship AND
    flushing.
    Best regards,
    Bisser
    The message was edited by bisser:
    Added info that the exception occurs "when I issue commit()" on step 7.

    I'm extremely surprised that you couldn't reproduce the error. It reproduces every time I run the Test Scenario that I described above.
    You could download a sample Eclipse project that reproduces the error from here: https://glassfish.dev.java.net/issues/show_bug.cgi?id=3937
    For the log below I used TopLink, version: Oracle TopLink Essentials - 2.0 (Build b58g-fcs (09/07/2007)).
    Could you, please, tell me what version you use and I will try the Test Case on it.
    Here's the FINEST log:
    [TopLink Finest]: 2008.01.09 07:35:58.094--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--property=toplink.weaving; value=false
    [TopLink Finest]: 2008.01.09 07:35:59.312--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--property=toplink.orm.throw.exceptions; default value=true
    [TopLink Finer]: 2008.01.09 07:35:59.312--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--Searching for default mapping file in file:/D:/dev/bull/jpa_pk_bug/bin/
    [TopLink Config]: 2008.01.09 07:35:59.547--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--The alias name for the entity class [class test.jpa.entities.Person] is being defaulted to: Person.
    [TopLink Config]: 2008.01.09 07:35:59.594--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--The column name for element [private java.lang.Long test.jpa.entities.Person.id] is being defaulted to: ID.
    [TopLink Config]: 2008.01.09 07:35:59.609--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--The column name for element [private java.lang.String test.jpa.entities.Person.name] is being defaulted to: NAME.
    [TopLink Config]: 2008.01.09 07:35:59.641--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--The target entity (reference) class for the many to one mapping element [test.jpa.entities.Person test.jpa.entities.Person.mgr] is being defaulted to: class test.jpa.entities.Person.
    [TopLink Config]: 2008.01.09 07:35:59.703--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--The primary key column name for the mapping element [test.jpa.entities.Person test.jpa.entities.Person.mgr] is being defaulted to: ID.
    [TopLink Finest]: 2008.01.09 07:35:59.703--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--end predeploying Persistence Unit Test; state Predeployed; factoryCount 0
    [TopLink Finer]: 2008.01.09 07:35:59.703--Thread(Thread[Main Thread,5,main])--cmp_init_transformer_is_null
    [TopLink Finest]: 2008.01.09 07:35:59.703--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--begin predeploying Persistence Unit Test; state Predeployed; factoryCount 0
    [TopLink Finest]: 2008.01.09 07:35:59.703--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--end predeploying Persistence Unit Test; state Predeployed; factoryCount 1
    [TopLink Finest]: 2008.01.09 07:35:59.719--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--begin deploying Persistence Unit Test; state Predeployed; factoryCount 1
    [TopLink Finest]: 2008.01.09 07:35:59.734--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--property=toplink.logging.level; value=FINEST; translated value=FINEST
    [TopLink Finest]: 2008.01.09 07:35:59.734--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--property=toplink.logging.level; value=FINEST; translated value=FINEST
    [TopLink Finest]: 2008.01.09 07:35:59.750--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--property=toplink.jdbc.user; value=rms
    [TopLink Finest]: 2008.01.09 07:35:59.750--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--property=toplink.jdbc.password; value=xxxxxx
    [TopLink Finest]: 2008.01.09 07:36:00.766--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--property=toplink.target-database; value=Oracle; translated value=oracle.toplink.essentials.platform.database.oracle.OraclePlatform
    [TopLink Finest]: 2008.01.09 07:36:00.781--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--property=toplink.jdbc.driver; value=oracle.jdbc.OracleDriver
    [TopLink Finest]: 2008.01.09 07:36:00.781--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--property=toplink.jdbc.url; value=jdbc:oracle:thin:@//10.20.6.126:1521/region2
    [TopLink Info]: 2008.01.09 07:36:00.797--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--TopLink, version: Oracle TopLink Essentials - 2.0 (Build b58g-fcs (09/07/2007))
    [TopLink Config]: 2008.01.09 07:36:00.812--ServerSession(1968077)--Connection(5182312)--Thread(Thread[Main Thread,5,main])--connecting(DatabaseLogin(
         platform=>OraclePlatform
         user name=> "rms"
         datasource URL=> "jdbc:oracle:thin:@//10.20.6.126:1521/region2"
    [TopLink Config]: 2008.01.09 07:36:01.797--ServerSession(1968077)--Connection(4252099)--Thread(Thread[Main Thread,5,main])--Connected: jdbc:oracle:thin:@//10.20.6.126:1521/region2
         User: RMS
         Database: Oracle  Version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
         Driver: Oracle JDBC driver  Version: 10.2.0.1.0
    [TopLink Config]: 2008.01.09 07:36:01.797--ServerSession(1968077)--Connection(5744890)--Thread(Thread[Main Thread,5,main])--connecting(DatabaseLogin(
         platform=>OraclePlatform
         user name=> "rms"
         datasource URL=> "jdbc:oracle:thin:@//10.20.6.126:1521/region2"
    [TopLink Config]: 2008.01.09 07:36:01.875--ServerSession(1968077)--Connection(5747801)--Thread(Thread[Main Thread,5,main])--Connected: jdbc:oracle:thin:@//10.20.6.126:1521/region2
         User: RMS
         Database: Oracle  Version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
         Driver: Oracle JDBC driver  Version: 10.2.0.1.0
    [TopLink Config]: 2008.01.09 07:36:01.891--ServerSession(1968077)--Connection(5760373)--Thread(Thread[Main Thread,5,main])--connecting(DatabaseLogin(
         platform=>OraclePlatform
         user name=> "rms"
         datasource URL=> "jdbc:oracle:thin:@//10.20.6.126:1521/region2"
    [TopLink Config]: 2008.01.09 07:36:01.969--ServerSession(1968077)--Connection(5763284)--Thread(Thread[Main Thread,5,main])--Connected: jdbc:oracle:thin:@//10.20.6.126:1521/region2
         User: RMS
         Database: Oracle  Version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
         Driver: Oracle JDBC driver  Version: 10.2.0.1.0
    [TopLink Config]: 2008.01.09 07:36:01.969--ServerSession(1968077)--Connection(5497095)--Thread(Thread[Main Thread,5,main])--connecting(DatabaseLogin(
         platform=>OraclePlatform
         user name=> "rms"
         datasource URL=> "jdbc:oracle:thin:@//10.20.6.126:1521/region2"
    [TopLink Config]: 2008.01.09 07:36:02.047--ServerSession(1968077)--Connection(5500006)--Thread(Thread[Main Thread,5,main])--Connected: jdbc:oracle:thin:@//10.20.6.126:1521/region2
         User: RMS
         Database: Oracle  Version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
         Driver: Oracle JDBC driver  Version: 10.2.0.1.0
    [TopLink Config]: 2008.01.09 07:36:02.047--ServerSession(1968077)--Connection(5512041)--Thread(Thread[Main Thread,5,main])--connecting(DatabaseLogin(
         platform=>OraclePlatform
         user name=> "rms"
         datasource URL=> "jdbc:oracle:thin:@//10.20.6.126:1521/region2"
    [TopLink Config]: 2008.01.09 07:36:02.125--ServerSession(1968077)--Connection(5514977)--Thread(Thread[Main Thread,5,main])--Connected: jdbc:oracle:thin:@//10.20.6.126:1521/region2
         User: RMS
         Database: Oracle  Version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
         Driver: Oracle JDBC driver  Version: 10.2.0.1.0
    [TopLink Config]: 2008.01.09 07:36:02.125--ServerSession(1968077)--Connection(5527528)--Thread(Thread[Main Thread,5,main])--connecting(DatabaseLogin(
         platform=>OraclePlatform
         user name=> "rms"
         datasource URL=> "jdbc:oracle:thin:@//10.20.6.126:1521/region2"
    [TopLink Config]: 2008.01.09 07:36:02.203--ServerSession(1968077)--Connection(5530440)--Thread(Thread[Main Thread,5,main])--Connected: jdbc:oracle:thin:@//10.20.6.126:1521/region2
         User: RMS
         Database: Oracle  Version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
         Driver: Oracle JDBC driver  Version: 10.2.0.1.0
    [TopLink Config]: 2008.01.09 07:36:02.203--ServerSession(1968077)--Connection(5542993)--Thread(Thread[Main Thread,5,main])--connecting(DatabaseLogin(
         platform=>OraclePlatform
         user name=> "rms"
         datasource URL=> "jdbc:oracle:thin:@//10.20.6.126:1521/region2"
    [TopLink Config]: 2008.01.09 07:36:02.281--ServerSession(1968077)--Connection(5545904)--Thread(Thread[Main Thread,5,main])--Connected: jdbc:oracle:thin:@//10.20.6.126:1521/region2
         User: RMS
         Database: Oracle  Version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
         Driver: Oracle JDBC driver  Version: 10.2.0.1.0
    [TopLink Finest]: 2008.01.09 07:36:02.312--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--sequencing connected, state is Preallocation_NoTransaction_State
    [TopLink Info]: 2008.01.09 07:36:02.484--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--file:/D:/dev/bull/jpa_pk_bug/bin/-Test login successful
    [TopLink Finest]: 2008.01.09 07:36:02.484--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--end deploying Persistence Unit Test; state Deployed; factoryCount 1
    [TopLink Finer]: 2008.01.09 07:36:02.516--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--client acquired
    [TopLink Finest]: 2008.01.09 07:36:02.531--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--Execute query DoesExistQuery()
    [TopLink Finest]: 2008.01.09 07:36:02.547--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--PERSIST operation called on: test.jpa.entities.Person@563c8c.
    [TopLink Finest]: 2008.01.09 07:36:02.562--ClientSession(5666151)--Thread(Thread[Main Thread,5,main])--Execute query ValueReadQuery()
    [TopLink Fine]: 2008.01.09 07:36:02.594--ServerSession(1968077)--Connection(5747801)--Thread(Thread[Main Thread,5,main])--SELECT PERSONS_ID_SEQ.NEXTVAL FROM DUAL
    [TopLink Finest]: 2008.01.09 07:36:03.297--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--sequencing preallocation for PERSONS_ID_SEQ: objects: 1 , first: 5, last: 5
    [TopLink Finest]: 2008.01.09 07:36:03.312--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--assign sequence to the object (5 -> test.jpa.entities.Person@563c8c)
    [TopLink Finest]: 2008.01.09 07:36:03.328--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--Execute query ReadObjectQuery(test.jpa.entities.Person)
    [TopLink Fine]: 2008.01.09 07:36:03.438--ServerSession(1968077)--Connection(4252099)--Thread(Thread[Main Thread,5,main])--SELECT ID, NAME, MGR_ID FROM Persons WHERE (ID = ?)
         bind => [1]
    [TopLink Finest]: 2008.01.09 07:36:03.531--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--Register the existing object test.jpa.entities.Person@3a4484
    [TopLink Finer]: 2008.01.09 07:36:03.625--ClientSession(5666151)--Connection(5763284)--Thread(Thread[Main Thread,5,main])--begin transaction
    [TopLink Finest]: 2008.01.09 07:36:03.625--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--Execute query UpdateObjectQuery(test.jpa.entities.Person@3a57fa)
    [TopLink Finest]: 2008.01.09 07:36:03.641--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--Execute query WriteObjectQuery(test.jpa.entities.Person@563c8c)
    [TopLink Fine]: 2008.01.09 07:36:03.656--ClientSession(5666151)--Connection(5763284)--Thread(Thread[Main Thread,5,main])--INSERT INTO Persons (ID, NAME, MGR_ID) VALUES (?, ?, ?)
         bind => [5, Boss, null]
    [TopLink Fine]: 2008.01.09 07:36:03.688--ClientSession(5666151)--Connection(5763284)--Thread(Thread[Main Thread,5,main])--UPDATE Persons SET MGR_ID = ? WHERE (ID = ?)
         bind => [5, 1]
    [TopLink Finer]: 2008.01.09 07:36:03.703--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--begin unit of work commit
    [TopLink Finest]: 2008.01.09 07:36:03.703--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--Execute query UpdateObjectQuery(test.jpa.entities.Person@563c8c)
    [TopLink Warning]: 2008.01.09 07:36:03.812--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--Local Exception Stack:
    Exception [TOPLINK-7251] (Oracle TopLink Essentials - 2.0 (Build b58g-fcs (09/07/2007))): oracle.toplink.essentials.exceptions.ValidationException
    Exception Description: The attribute [id] of class [test.jpa.entities.Person] is mapped to a primary key column in the database. Updates are not allowed.
         at oracle.toplink.essentials.exceptions.ValidationException.primaryKeyUpdateDisallowed(ValidationException.java:2222)
         at oracle.toplink.essentials.mappings.foundation.AbstractDirectMapping.writeFromObjectIntoRowWithChangeRecord(AbstractDirectMapping.java:750)
         at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.buildRowForUpdateWithChangeSet(ObjectBuilder.java:948)
         at oracle.toplink.essentials.internal.queryframework.DatabaseQueryMechanism.updateObjectForWriteWithChangeSet(DatabaseQueryMechanism.java:1263)
         at oracle.toplink.essentials.queryframework.UpdateObjectQuery.executeCommitWithChangeSet(UpdateObjectQuery.java:91)
         at oracle.toplink.essentials.internal.queryframework.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:390)
         at oracle.toplink.essentials.queryframework.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:109)
         at oracle.toplink.essentials.queryframework.DatabaseQuery.execute(DatabaseQuery.java:628)
         at oracle.toplink.essentials.queryframework.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:555)
         at oracle.toplink.essentials.queryframework.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:138)
         at oracle.toplink.essentials.queryframework.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:110)
         at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2233)
         at oracle.toplink.essentials.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:952)
         at oracle.toplink.essentials.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:909)
         at oracle.toplink.essentials.internal.sessions.CommitManager.commitChangedObjectsForClassWithChangeSet(CommitManager.java:309)
         at oracle.toplink.essentials.internal.sessions.CommitManager.commitAllObjectsWithChangeSet(CommitManager.java:195)
         at oracle.toplink.essentials.internal.sessions.AbstractSession.writeAllObjectsWithChangeSet(AbstractSession.java:2657)
         at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.commitToDatabase(UnitOfWorkImpl.java:1044)
         at oracle.toplink.essentials.internal.ejb.cmp3.base.RepeatableWriteUnitOfWork.commitToDatabase(RepeatableWriteUnitOfWork.java:403)
         at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.commitToDatabaseWithChangeSet(UnitOfWorkImpl.java:1126)
         at oracle.toplink.essentials.internal.ejb.cmp3.base.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:107)
         at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:856)
         at oracle.toplink.essentials.internal.ejb.cmp3.transaction.base.EntityTransactionImpl.commit(EntityTransactionImpl.java:102)
         at oracle.toplink.essentials.internal.ejb.cmp3.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:60)
         at test.jpa.TestPkBug.runTest(TestPkBug.java:53)
         at test.jpa.TestPkBug.main(TestPkBug.java:95)
    [TopLink Finer]: 2008.01.09 07:36:03.828--ClientSession(5666151)--Connection(5763284)--Thread(Thread[Main Thread,5,main])--rollback transaction
    [TopLink Finer]: 2008.01.09 07:36:03.844--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--release unit of work
    [TopLink Finer]: 2008.01.09 07:36:03.844--UnitOfWork(5663897)--Thread(Thread[Main Thread,5,main])--initialize identitymaps
    [TopLink Finer]: 2008.01.09 07:36:03.844--ClientSession(5666151)--Thread(Thread[Main Thread,5,main])--client released
    [TopLink Finest]: 2008.01.09 07:36:03.844--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--begin undeploying Persistence Unit Test; state Deployed; factoryCount 1
    [TopLink Finest]: 2008.01.09 07:36:03.844--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--sequencing disconnected
    [TopLink Config]: 2008.01.09 07:36:03.844--ServerSession(1968077)--Connection(4252099)--Thread(Thread[Main Thread,5,main])--disconnect
    [TopLink Finer]: 2008.01.09 07:36:03.859--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--initialize identitymaps
    [TopLink Info]: 2008.01.09 07:36:03.859--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--file:/D:/dev/bull/jpa_pk_bug/bin/-Test logout successful
    [TopLink Config]: 2008.01.09 07:36:03.859--ServerSession(1968077)--Connection(5747801)--Thread(Thread[Main Thread,5,main])--disconnect
    [TopLink Config]: 2008.01.09 07:36:03.859--ServerSession(1968077)--Connection(5182312)--Thread(Thread[Main Thread,5,main])--disconnect
    [TopLink Config]: 2008.01.09 07:36:03.859--ServerSession(1968077)--Connection(5500006)--Thread(Thread[Main Thread,5,main])--disconnect
    [TopLink Config]: 2008.01.09 07:36:03.875--ServerSession(1968077)--Connection(5514977)--Thread(Thread[Main Thread,5,main])--disconnect
    [TopLink Config]: 2008.01.09 07:36:03.875--ServerSession(1968077)--Connection(5530440)--Thread(Thread[Main Thread,5,main])--disconnect
    [TopLink Config]: 2008.01.09 07:36:03.875--ServerSession(1968077)--Connection(5545904)--Thread(Thread[Main Thread,5,main])--disconnect
    [TopLink Config]: 2008.01.09 07:36:03.891--ServerSession(1968077)--Connection(5763284)--Thread(Thread[Main Thread,5,main])--disconnect
    [TopLink Finest]: 2008.01.09 07:36:03.891--ServerSession(1968077)--Thread(Thread[Main Thread,5,main])--end undeploying Persistence Unit Test; state Undeployed; factoryCount 0
    Exception in thread "Main Thread" javax.persistence.RollbackException: Exception [TOPLINK-7251] (Oracle TopLink Essentials - 2.0 (Build b58g-fcs (09/07/2007))): oracle.toplink.essentials.exceptions.ValidationException
    Exception Description: The attribute [id] of class [test.jpa.entities.Person] is mapped to a primary key column in the database. Updates are not allowed.
         at oracle.toplink.essentials.internal.ejb.cmp3.transaction.base.EntityTransactionImpl.commit(EntityTransactionImpl.java:120)
         at oracle.toplink.essentials.internal.ejb.cmp3.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:60)
         at test.jpa.TestPkBug.runTest(TestPkBug.java:53)
         at test.jpa.TestPkBug.main(TestPkBug.java:95)
    Caused by: Exception [TOPLINK-7251] (Oracle TopLink Essentials - 2.0 (Build b58g-fcs (09/07/2007))): oracle.toplink.essentials.exceptions.ValidationException
    Exception Description: The attribute [id] of class [test.jpa.entities.Person] is mapped to a primary key column in the database. Updates are not allowed.
         at oracle.toplink.essentials.exceptions.ValidationException.primaryKeyUpdateDisallowed(ValidationException.java:2222)
         at oracle.toplink.essentials.mappings.foundation.AbstractDirectMapping.writeFromObjectIntoRowWithChangeRecord(AbstractDirectMapping.java:750)
         at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.buildRowForUpdateWithChangeSet(ObjectBuilder.java:948)
         at oracle.toplink.essentials.internal.queryframework.DatabaseQueryMechanism.updateObjectForWriteWithChangeSet(DatabaseQueryMechanism.java:1263)
         at oracle.toplink.essentials.queryframework.UpdateObjectQuery.executeCommitWithChangeSet(UpdateObjectQuery.java:91)
         at oracle.toplink.essentials.internal.queryframework.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:390)
         at oracle.toplink.essentials.queryframework.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:109)
         at oracle.toplink.essentials.queryframework.DatabaseQuery.execute(DatabaseQuery.java:628)
         at oracle.toplink.essentials.queryframework.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:555)
         at oracle.toplink.essentials.queryframework.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:138)
         at oracle.toplink.essentials.queryframework.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:110)
         at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2233)
         at oracle.toplink.essentials.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:952)
         at oracle.toplink.essentials.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:909)
         at oracle.toplink.essentials.internal.sessions.CommitManager.commitChangedObjectsForClassWithChangeSet(CommitManager.java:309)
         at oracle.toplink.essentials.internal.sessions.CommitManager.commitAllObjectsWithChangeSet(CommitManager.java:195)
         at oracle.toplink.essentials.internal.sessions.AbstractSession.writeAllObjectsWithChangeSet(AbstractSession.java:2657)
         at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.commitToDatabase(UnitOfWorkImpl.java:1044)
         at oracle.toplink.essentials.internal.ejb.cmp3.base.RepeatableWriteUnitOfWork.commitToDatabase(RepeatableWriteUnitOfWork.java:403)
         at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.commitToDatabaseWithChangeSet(UnitOfWorkImpl.java:1126)
         at oracle.toplink.essentials.internal.ejb.cmp3.base.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:107)
         at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:856)
         at oracle.toplink.essentials.internal.ejb.cmp3.transaction.base.EntityTransactionImpl.commit(EntityTransactionImpl.java:102)
         ... 3 more
    EDIT: Are you using the EXACT Test Case as I have described it in the previous posts? It's important that you commit(), and not rollback(), the transaction after the flush.
    EDIT: Updated the log because I found out that I had made a small change to the original Test Case while I was trying to find a workaround. The current log is produced by the EXACT Test Case I described in my previous posts.
    Message was edited by:
    bisser

  • Primary key column as the key of the cache

    Hi All,
    I am implementing a cache backed by a HibernateCacheStore for a table. The cache key is the primary key of the table and the value is the entire row in object form; the primary key is generated by Hibernate.
    When I insert new rows through Coherence (i.e. cache.put), what persistence strategy should I use to set the cache entries?
    I cannot use cache.put(key, value), because the key is the primary key of the table, which has not been generated yet.
    And if I use session.save(value), the persisted value is not in the cache.
    Is there any mechanism to map the cache key to the primary key column of the table?
    Regards
    S

    Hi -
    Using Hibernate as a CacheStore for Coherence, the key of the cache entry is also the key of the entity
    being stored via Hibernate. For this reason keys for new cache entries must be generated ahead of time.
    A UUID can be used. The commons component of the Incubator project has a key generation utility that
    might be worth looking at.
    /Mark

  • Primary Key Data type in Time Dimension????

    I have to create a Time dimension with day grain in a data warehouse system, and I don’t know what the best data type for the primary key is...
    For example
    1) I could use the NUMBER(8) datatype, so the dates would be: 20050114, 20050115, 20050116.... Then in the fact tables I would use the NUMBER(8) datatype in the date fields... But in my reporting tools I would have to apply the to_date function to show the dates in the right format.
    2) Or I could use the DATE datatype, so the dates would be: 01/14/2005, 01/15/2005, 01/16/2005.... Then in the fact tables I would use the DATE datatype in the date fields...
    Is a DATE primary key a bad datatype? (Very slow?)
    What is the best primary key data type for a Time dimension?
    Thanks!

    <quote>I have to create a Time dimension with day grain</quote>
    OK.
    <quote>But in my reporting tools I have to put the to_date function to show the dates in the right format</quote>
    Why? ... if you’ve decided to have a Day dimension table what is stopping you from having the day represented as a DATE column in there? (plus all the other "right formats" you may need). The join keys should only be used for … joining.
    <quote> It’s the Date primary key a bad datatype? (Very slow)</quote>
    No … DATE or NUMBER won’t make any noticeable difference when used as the join key between the time dimension and the fact table.
    Some see advantages in having the DATE FK in the fact table …
    1. One can have range partitioning in the fact using real DATEs
    2. One can get Day-derived info right from the fact table … that is, the join to the Day dimension is not needed
    I don’t (see them as advantages) … for #1, range partitioning by some measure of time is still achievable as long as the PK values on the Day dimension are immutable (as they should) … and I don’t see #2 as an advantage for the DW end-users.
    Personally, I prefer a surrogate numeric PK … but not things like 20050129.
    <William>If you need dates then use dates. They are more robust, you can never accidentally have November 43rd</William>
    Of course this cannot be about the fact table, since there the column is constrained; so this is about accidentally getting "November 43rd, 2004" into the Day dimension when the PK is the numeric 20041143. True, but is a DATE PK more robust in this case? No: one could accidentally insert to_date('29-Jan-2005','DD-Mon-YYYY') and to_date('29-Jan-2005 00:00:01','DD-Mon-YYYY HH24:MI:SS'), and that would not be very good, would it? In both cases one would need something extra to fully protect the integrity of the Day dimension (mind you, loading the Day dimension probably happens once a year under the supervision of the most technical people on the project).
    There is no best PK data type for the Day dimension (between NUMBER and DATE); they are both workable solutions... go with what you’re comfortable with.
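    A minimal sketch (object names are illustrative, not from the thread) of the layout described above: a surrogate numeric join key plus the real calendar day stored as a DATE attribute in the dimension:
    CREATE TABLE day_dim (
        day_key   NUMBER NOT NULL,   -- surrogate key, used only for joining
        day_date  DATE   NOT NULL,   -- the actual calendar day (plus any other formats you need)
        CONSTRAINT day_dim_pk PRIMARY KEY (day_key),
        CONSTRAINT day_dim_date_uk UNIQUE (day_date)
    );
    CREATE TABLE sales_fact (
        day_key   NUMBER NOT NULL REFERENCES day_dim (day_key),
        amount    NUMBER
    );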

  • Unique key violation i.e. primary key violation in JSF page

    Hi, I am using JDeveloper 11.1.1.5.0 and working with an ADF Fusion web application. I have created an entity object and a view object. In the view object I have a primary key and I have set its type to DBSequence. The problem is that although the attribute is a DBSequence, the JSF page shows a unique key violation, i.e. a primary key violation. Please help me.

    So, do you have a trigger in the database that updates the value of the primary key? Or do you assign the value in some other way?
    You can check [url http://download.oracle.com/docs/cd/E16162_01/web.1112/e16182/toc.htm]the docs for information about the ways it can be done.

  • Knowing the primary key columns of the given table

    Hi,
    How can I get the primary key columns of the given table?
    Regards,
    Sachin R.K.

    You can find the constraint_name in all_constraints/user_constraints for constraint_type = 'P' (which is the primary key constraint).
    Then see which columns belong to that constraint_name in the all_cons_columns/user_cons_columns view.
    Below is the example
    select acc.column_name
    from all_cons_columns acc, all_constraints ac
    where acc.constraint_name = ac.constraint_name
    and acc.table_name = 'DEPT' and acc.owner = 'SCOTT'
    and ac.constraint_type = 'P'
    Hope this helps
    Srinivasa Medam

  • What should I do to setup a new MacBook pro for first time when moving from a PC?

    What should I do to setup a new MacBook pro for first time when moving from a PC?

    You have to use Migration Assistant if you want your files transferred from your PC to the Mac. This link will be helpful:
    http://support.apple.com/kb/HT4796

  • I have a power pc (g5) computer that I will soon be replacing with a current i5 or i7 mini. How do I transfer the Time Machine files from the internal hard drive on the G5 to an external drive that I will later use with the Mini?

    I have a Power PC G5 computer that I will soon be replacing with a current i5 or i7 Mini. How do I transfer the Time Machine files from the internal hard drive on the G5 to an external drive that I will later use with the Mini?

    Hi, likely the easiest is to just pull the drive & get something like this...
    Get MacScan...
    http://www.apple.com/downloads/macosx/networking_security/macscan.html
    http://eshop.macsales.com/item/NewerTech/U3NVSPATA/
    But if you have a good external drive already, just clone it.
    Get carbon copy cloner to make an exact copy of your old HD to the New one...
    http://www.bombich.com/software/ccc.html
    Or SuperDuper...
    http://www.shirt-pocket.com/SuperDuper/

  • How can I use the Time Machine Backups from my Old Computer?

    I have two months of Time Machine backups made using my old Macintosh computer, which died and I no longer have. I have now purchased a new computer and am trying to use the Time Machine backups from the original computer, but it will not recognise them. How can I get my new MacBook Pro to use the Time Machine backups from my old computer? I phoned the Apple help line and they said I cannot use them and would have to delete them. This sounds crazy if you can only use a Time Machine backup with one computer and have to delete it when you buy a new Mac. Surely there must be some way to transfer ownership (not files) from the old Mac to the new one?
    Thanks
    Richard

    Thanks Kevin for the suggestion posted on MacOSXHints how to "Repair Time Machine after logic board changes".
    http://www.macosxhints.com/article.php?story=20080128003716101
    It seems as though I am not alone in my frustration with the way Time Machine uses the MAC address of a computer to tell one system from another. This means that if you have your Mac repaired with a new logic board, or replace your system with a new one, you can't resume backups where you left off. Reading through the reports of readers who applied the fix using Terminal, it appears that the fix does not always work. I contacted Apple again, but they were no help. Surely Apple should come out with a solution, as more and more people use Time Machine/Time Capsule.
    It is CRAZY that after changing computers or swapping the logic board you cannot resume your Time Machine backups.

  • I am setting up a new Time Capsule to replace another hard drive.  Is there any way to include the Time Machine history from the old drive on the new Time Capsule?

    I am setting up a new Time Capsule to replace another hard drive.  Is there any way to include the Time Machine history from the old drive on the new Time Capsule?

    Check item #18 of Time Machine FAQ, particularly the material in #3 of the "How to" section.
    By the way, you've been misled by poor field labeling on this forum into typing a large part of your message into the field intended for the subject.  In the future just type a short summary of your post into that field and type the whole message into the field below that.

  • For the Primary Key Field with the combination of other fields

    Hai
    I have a problem creating a trigger for the following.
    The problem goes like this:
    I have a table with the fields
    fld1 (varchar2(6)), fld2 (varchar2(20)), fld3 (number)
    fld1 is the primary key.
    I am inserting values into this table.
    At insert time, fld1 should be derived from part of the value of fld2:
    say, if the new value for fld2 is "SAM & CO", then it should take the first letter and then be followed by a sequence number.
    I.e., fld1 is the combination of
    a letter + the first letter of fld2 + a sequence
    fld1  fld2  fld3
    CS001 SSSSS 324234
    CP001 PPPPP 5345435
    CS002 SSSS  53543543
    Here the sequence should vary depending on the letter in fld1:
    if it is P, the sequence should be the next number among the fld1 values for P.
    I.e., if I add a value like this
    insert into tname(fld2,fld3) values('PQQQQ',34343)
    then it should be inserted as
    CP002 PQQQQ 34343
    I need a solution for this.
    Thanks
    Gaya3
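    A minimal sketch (not from the thread) of the key format described above, assuming the table is tname(fld1, fld2, fld3); the per-letter counter comes from a COUNT(*) over existing rows, which is simple to read but not safe under concurrent inserts:
    INSERT INTO tname (fld1, fld2, fld3)
    SELECT 'C' || SUBSTR('PQQQQ', 1, 1)
               || LPAD((SELECT COUNT(*) + 1
                        FROM tname
                        WHERE SUBSTR(fld1, 2, 1) = SUBSTR('PQQQQ', 1, 1)), 3, '0'),
           'PQQQQ',
           34343
    FROM dual;
    -- With the sample data shown above, this would insert: CP002 PQQQQ 34343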

    There are not enough details to be sure since you have not provided the mappings. From just the error, it looks like you are using the tableC.tableA_ID field as the foreign key in the ManyToOne relationship to A, but have marked it as insertable=false, writeable=false, meaning that it cannot be updated or used for inserts.
    Either make it writable (set the settings to true), or add another basic mapping/attribute in the entity for TableC that maps to the field which you can use to set when you insert a new tableC entity. A simple example is available at
    http://wiki.eclipse.org/EclipseLink/Examples/JPA/2.0/DerivedIdentifiers
    Best Regards,
    Chris
