ODI Selective Reverse throws Primary Key Violation error
Hi John,
Hope you are doing good.
The version of ODI is 10.1.3.4.5
I was trying to reverse-engineer an MS SQL table using selective reverse and it throws the error below:
com.microsoft.sqlserver.jdbc.SQLServerException: Violation of PRIMARY KEY constraint 'PK_COL'. Cannot insert duplicate key in object 'dbo.SNP_COL'.
Also the error says - Technology or Driver does not support Reverse engineering
I reversed the same table yesterday and it went fine. But today I wanted to reverse another table from the same database and it gives this error. I tried deleting the model itself and reversing both tables, and now it is not reversing the first one either.
Please suggest.
Thanks,
Sravan
Dear Sravan,
It looks like you have performed the same reverse engineering more than once. If the problem is duplicated rows in the SNP_* repository tables, you could delete the duplicate rows via PL/SQL and try the reverse again.
Tell me if this does not work.
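The duplicate-cleanup idea above can be sketched as follows. This is an illustrative sketch only, using SQLite via Python so it is runnable: the real SNP_COL table in the ODI work repository has many more columns, and the I_TABLE/COL_NAME pair used as the "key" here is an assumption — always check the actual duplicates (and take a repository backup) before deleting anything.

```python
import sqlite3

# Hypothetical, simplified stand-in for the ODI repository table SNP_COL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SNP_COL (I_TABLE INTEGER, COL_NAME TEXT)")
con.executemany("INSERT INTO SNP_COL VALUES (?, ?)",
                [(1, "ID"), (1, "NAME"), (1, "NAME"), (2, "ID")])

# 1. Find the duplicated (table, column) pairs before touching anything.
dupes = con.execute("""
    SELECT I_TABLE, COL_NAME, COUNT(*) AS n
    FROM SNP_COL
    GROUP BY I_TABLE, COL_NAME
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # -> [(1, 'NAME', 2)]

# 2. Keep one copy of each pair and delete the extras. rowid is SQLite's
#    built-in row identifier; Oracle has its own ROWID pseudocolumn.
con.execute("""
    DELETE FROM SNP_COL
    WHERE rowid NOT IN (SELECT MIN(rowid)
                        FROM SNP_COL
                        GROUP BY I_TABLE, COL_NAME)
""")
print(con.execute("SELECT COUNT(*) FROM SNP_COL").fetchone()[0])  # -> 3
```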
Similar Messages
-
Issue with INSERT INTO, throws primary key violation error even if the target table is empty
Hi,
I am running a simple
INSERT INTO Table1 (column1, column2, ..., columnN)
SELECT column1, column2, ..., columnN FROM Table2
Table1 and Table2 have the same definition (schema).
Table1 is empty and Table2 has all the data. Column1 is the primary key and there is NO identity column.
This statement still throws a primary key violation error. I am clueless about this.
How can this happen when the target table is totally empty?
Chintu
No, that's not true.
Either you're not inserting into the right table, or some other trigger code is getting fired in the background which inserts into a table and causes a PK violation.
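A third thing worth ruling out: duplicate values of the key column inside Table2 itself will trigger the same error even when Table1 is completely empty. A quick pre-flight check, sketched with SQLite via Python (table and column names are placeholders, not the poster's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (c1 INTEGER PRIMARY KEY, c2 TEXT)")
con.execute("CREATE TABLE t2 (c1 INTEGER, c2 TEXT)")  # source: no PK enforced
con.executemany("INSERT INTO t2 VALUES (?, ?)",
                [(1, "a"), (2, "b"), (2, "b2")])  # key 2 appears twice

# Duplicate keys hiding in the source -- these break the INSERT..SELECT.
dupes = con.execute("""
    SELECT c1, COUNT(*) FROM t2 GROUP BY c1 HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # -> [(2, 2)]

# The INSERT..SELECT from the question fails on the duplicate,
# even though the target table t1 is empty.
insert_failed = False
try:
    con.execute("INSERT INTO t1 (c1, c2) SELECT c1, c2 FROM t2")
except sqlite3.IntegrityError:
    insert_failed = True
print(insert_failed)  # -> True
```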
Visakh
-
SSIS Upsert Primary Key Violation error
Hello,
I created an SSIS package with a Lookup transform. My source is an OLE DB Source; the destination is an OLE DB Destination on the Lookup no-match output, and an OLE DB Command on the Lookup match output.
I am inserting data into the destination database if the records do not exist, or running a stored proc in the OLE DB Command if the records already exist.
Still I am receiving a
Violation of PRIMARY KEY constraint 'PK_'. Cannot insert duplicate key in object 'dbo.tablename'. The duplicate key value is (734eb1e6-9987-41b6-a143-0039726df29d)."
Can someone tell me where I am going wrong? Should I change the Lookup transform to a different transform?
Experts, I need your inputs.
Thanks a ton!
The MERGE command inserts, updates, or deletes:
http://agilebi.com/jwelch/2007/07/05/sql-server-2008-using-merge-from-ssis/#86
http://technet.microsoft.com/en-us/library/bb522522.aspx
http://technet.microsoft.com/en-us/library/cc280522.aspx
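The pattern those MERGE articles describe can be sketched in miniature. The sketch below uses SQLite via Python so it is runnable; SQLite spells the idea ON CONFLICT rather than MERGE (SQL Server would use a real MERGE statement), and the table and columns are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (pk TEXT PRIMARY KEY, val TEXT)")
con.execute("INSERT INTO target VALUES ('k1', 'old')")

rows = [("k1", "new"), ("k2", "fresh")]
# Insert new keys and update existing ones in a single statement --
# no separate lookup step, so no window for a duplicate-key race.
con.executemany("""
    INSERT INTO target (pk, val) VALUES (?, ?)
    ON CONFLICT(pk) DO UPDATE SET val = excluded.val
""", rows)
print(sorted(con.execute("SELECT pk, val FROM target").fetchall()))
# -> [('k1', 'new'), ('k2', 'fresh')]
```

The key point for the SSIS scenario: a single atomic upsert statement avoids the gap between the Lookup's read and the destination's write, which is where the duplicate-key error sneaks in.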
Kalman Toth, Database & OLAP Architect
-
Hi all...
I have created a master table and a detail table. The detail table can have multiple records corresponding to a code in the master table. This is the behavior I want, but it gives a primary key violation error in the detail table due to the default Code field in the table.
Thanks in advance
Hi,
Good way are creating of UDO using these tables:
- Master = MASTER_DATA;
- Details = MASTER_DATA_ROWS.
If you import data you need to give values to some SAP attributes.
Look at this thread: "How to import data from excel to UserDefined Fields". Maybe it can help you.
Regards
Sierdna S. -
Primary key violation exception in auto increment column
Hi All,
I am facing one issue in Multi threaded environment.
I am getting a primary key violation exception on an auto-increment column. I have a table whose primary key is the auto-increment column, and I have a trigger which populates this column.
5 threads are running and inserting data into the table, and a primary key violation exception is thrown randomly.
create table example (
  id number not null,
  name varchar2(30)
);
alter table example
  add constraint PK1example primary key (id);
create sequence example_id_seq start with 1 increment by 1;
create or replace trigger example_insert
before insert on example
for each row
begin
  select example_id_seq.nextval into :new.id from dual;
end;
/
Any idea how to handle an auto-increment column (trigger) in a multi-threaded environment?
Thanks,
user13566109 wrote:
Thanks All,
Problem was in the approach; I removed the trigger and placed seq.nextval in the insert query. It has resolved the issue.
I very much suspect that that was not the issue.
The trigger would execute for each insertion and the nextval would have been unique for each insertion (that's how sequences work in oracle), so that wouldn't have been causing duplicates.
I suspect, more likely, that you had some other code somewhere that was using another sequence or some other method of generating the keys that was also inserting into the same table, so there was a conflict in the sources of the sequences being generated.
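That failure mode is easy to demonstrate: writers that share one sequence never collide, while a second code path that invents its own keys eventually does. A runnable sketch (Python with SQLite standing in for Oracle; an itertools counter plays the role of example_id_seq, and the "rogue" writer is hypothetical):

```python
import itertools
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE example (id INTEGER PRIMARY KEY, name TEXT)")

seq = itertools.count(1)  # stand-in for example_id_seq.nextval

# Several writers sharing the same sequence: every key is unique.
for name in ("thread-1", "thread-2", "thread-1", "thread-2"):
    con.execute("INSERT INTO example VALUES (?, ?)", (next(seq), name))

# A separate code path that generates its own keys (a max(id)+1 style
# value computed earlier) collides with a key the sequence already issued.
collided = False
try:
    stale_key = 3  # "computed" before the rows above were committed
    con.execute("INSERT INTO example VALUES (?, ?)", (stale_key, "rogue"))
except sqlite3.IntegrityError:
    collided = True
print(collided)  # -> True
```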
The way you showed you had coded it above is a perfectly normal way to assign primary keys from a sequence, and it is not a problem in a multi-user/threaded environment. -
My problem is this: I have data in a couple of temporary tables including
relations (one table referencing records from another table by using
temporary keys).
Next, I would like to insert the data from the temp tables into my production tables, which have the same structure but their own identity keys. Hence, I need to translate the 'temporary' keys to regular identity keys in my production tables.
This is even more difficult because the system is highly concurrent, i.e. multiple sessions may try to insert
data that way simultaneously.
So far we were running the following solution, using a combination of
identity_insert and ident_current:
create table doc(id int identity primary key, number varchar(100))
create table pos (id int identity primary key, docid int references doc(id), qty int)
create table #doc(idx int, number varchar(100))
create table #pos (docidx int, qty int)
insert #doc select 1, 'D1'
insert #doc select 2, 'D2'
insert #pos select 1, 10
insert #pos select 1, 12
insert #pos select 2, 32
insert #pos select 2, 9
declare @docids table(ID int)
set identity_insert doc on
insert doc (id,number)
output inserted.ID into @docids
select ident_current('doc')+idx,number from #doc
set identity_insert doc off
-- Since scope_identity() is not reliable, we get the inserted identity values this way:
declare @docID int = (select min(ID) from @docids)
insert pos (docid,qty) select @docID+docidx-1, qty from #pos
Since the call to ident_current() is located directly in the insert statement, we always have an implicit transaction, which should be thread safe to a certain extent.
We never had a problem with this solution for years, until recently we ran into occasional primary key violations. After some research it turned out that there were concurrent sessions trying to insert records this way.
Does anybody have an explanation for the primary key violations or an alternative solution for the problem?
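One race-free alternative worth sketching (an editorial sketch, not the poster's code): insert the parent rows one at a time, capture the identity value each row actually received, and translate the child keys through that explicit map, instead of computing offsets from IDENT_CURRENT(), whose value another session can change between your read and your insert. SQL Server can capture generated keys with an OUTPUT clause; the runnable version below uses SQLite's lastrowid via Python, with the doc/pos shape from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY AUTOINCREMENT, number TEXT)")
con.execute("CREATE TABLE pos (id INTEGER PRIMARY KEY AUTOINCREMENT, "
            "docid INTEGER REFERENCES doc(id), qty INTEGER)")

tmp_doc = [(1, "D1"), (2, "D2")]               # #doc: (idx, number)
tmp_pos = [(1, 10), (1, 12), (2, 32), (2, 9)]  # #pos: (docidx, qty)

# Insert each doc row and record the identity value it actually received.
key_map = {}
for idx, number in tmp_doc:
    cur = con.execute("INSERT INTO doc (number) VALUES (?)", (number,))
    key_map[idx] = cur.lastrowid

# Translate the temporary keys through the map -- no arithmetic on
# ident_current(), so concurrent sessions cannot interleave into our range.
con.executemany("INSERT INTO pos (docid, qty) VALUES (?, ?)",
                [(key_map[docidx], qty) for docidx, qty in tmp_pos])
print(con.execute("SELECT docid, qty FROM pos ORDER BY docid, qty").fetchall())
# -> [(1, 10), (1, 12), (2, 32), (2, 9)] ordered by docid then qty
```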
Thank you
David
>> My problem is this: I have data in a couple of temporary tables including relations (one table referencing records [sic] from another table by using temporary keys [sic]). <<
NO, your problem is that you have no idea how RDBMS and SQL work.
1. Rows are not anything like records; this is a basic concept.
2. Temp tables are how old magnetic tape files mimicked scratch tapes. SQL programmers use CTEs, views, derived tables, etc.
3. Keys are a subset of attributes of an entity, fundamental characteristics of them! A key cannot be temporary by definition.
>> Next, I would like to insert the data from the temp tables to my production tables which have the same structure but their own IDENTITY keys. Hence, I need to translate the 'temporary' keys to regular IDENTITY keys in my productive tables. <<
NO, you just get worse. IDENTITY is a 1970's Sybase/UNIX dialect, based on the sequential file structure used on 16-bit minicomputers back then. It counts the physical insertion attempts (not even successes!) and has nothing to do with a logical data model. This is a mag tape drive model of 1960's EDP, and not RDBMS.
>> This is even more difficult because the system is highly concurrent, i.e. multiple sessions may try to insert data that way simultaneously. <<
Gee, that is how magnetic tapes work, with queues. This is one of many reasons competent SQL programmers do not use IDENTITY.
>> So far we were running the following solution, using a combination of IDENTITY_INSERT and IDENT_CURRENT: <<
This is a kludge, not a solution.
There is no such thing as a generic “id” in RDBMS; it has to be “<something in particular>_id” to be valid. You have no idea what the ISO-11179 rules are. Even worse, your generic “id” changes names from table to table! By magic, it starts as a “Doc”, then becomes a “Pos” in the next table! Does it wind up as a “doc_id”? Can it become an automobile? A squid? Lady Gaga?
This is the first principle of any data model; it is based on the Law of Identity; remember that from Freshman Logic 101? A data element has one and only one name in a model.
And finally, you do not know the correct syntax for INSERT INTO, so you use the 1970's Sybase/UNIX dialect! The ANSI/ISO Standard uses a table constructor:
INSERT INTO Doc VALUES (1, 'D1'), (2, 'D2');
>> We never had a problem with this solution for years until recently when we were running in occasional PRIMARY KEY violations. After some research it turned out, that there were concurrent sessions trying to insert records [sic] in this way. <<
“No matter how far you have gone down the wrong road, turn around.” -- Turkish proverb.
You have been mimicking a mag tape file system and have not written correct SQL. It has caught up with you in a way you can see. Throw out this garbage and do it right.
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
in Sets / Trees and Hierarchies in SQL -
hi,
SQL> alter table SERVRECD4 enable constraint SERVRECD4_I0;
alter table SERVRECD4 enable constraint SERVRECD4_I0
ERROR at line 1:
ORA-02437: cannot validate (MUSADMIN.SERVRECD4_I0) - primary key violated
SQL> select constraint_name from user_constraints where status<>'ENABLED';
CONSTRAINT_NAME
SERVRECD4_I0
I tried to check duplicates in this composite PK using following query:
select FK_SERVRECD1NORAPA, FK_SERVRECD1FK_JO0, FK_SERVRECD1FK_JOB, FK_SERVRECD1ID from SERVRECD4 where rowid NOT IN (select MIN(rowid) from SERVRECD4 group by FK_SERVRECD1NORAPA, FK_SERVRECD1FK_JO0, FK_SERVRECD1FK_JOB, FK_SERVRECD1ID);
Result : No rows returned.
Then I checked, and the count returned by the query below matches the table's row count, which means there are NO duplicates; so I can't figure out why trying to ENABLE the constraint raises the above error.
SQL> select count(min(rowid)) from SERVRECD4 group by FK_SERVRECD1NORAPA, FK_SERVRECD1FK_JO0, FK_SERVRECD1FK_JOB, FK_SERVRECD1ID;
COUNT(MIN(ROWID))
2729402
SQL> select count(*) from SERVRECD4;
COUNT(*)
2729402
Constraints Info:
Name : SERVRECD4_I0
Type : PRIMARY
Table
Columns : FK_SERVRECD1NORAPA,FK_SERVRECD1FK_JO0, FK_SERVRECD1FK_JOB, FK_SERVRECD1ID
Disabled : YES
Deferrable : NO
Initially : NO
Deferred : NO
Validate : NO
It's a 10gR2 database on Linux.
thx
A little modification of my earlier query:
select FK_SERVRECD1NORAPA,
FK_SERVRECD1FK_JO0,
FK_SERVRECD1FK_JOB,
FK_SERVRECD1ID ,
count(*)
from SERVRECD4
group by FK_SERVRECD1NORAPA,
FK_SERVRECD1FK_JO0,
FK_SERVRECD1FK_JOB,
FK_SERVRECD1ID
having count(trim(FK_SERVRECD1NORAPA)||
             trim(FK_SERVRECD1FK_JO0)||
             trim(FK_SERVRECD1FK_JOB)||
             trim(FK_SERVRECD1ID)) > 1;
N.B.: Not Tested.....
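Since the duplicate checks come back clean, one more cause worth ruling out: ORA-02437 is also raised when any of the key columns contains a NULL, because a primary key implies NOT NULL as well as uniqueness, and a GROUP BY duplicate query will never flag such rows. Sketched below with SQLite via Python, simplified to two key columns (the real check would name the four FK_SERVRECD1* columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE servrecd4 (k1 TEXT, k2 TEXT)")
con.executemany("INSERT INTO servrecd4 VALUES (?, ?)",
                [("a", "x"), ("b", None), ("c", "y")])

# The usual duplicate check finds nothing...
dupes = con.execute("""
    SELECT k1, k2 FROM servrecd4 GROUP BY k1, k2 HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # -> []

# ...but a NULL in a would-be key column still blocks a primary key.
nulls = con.execute("""
    SELECT COUNT(*) FROM servrecd4 WHERE k1 IS NULL OR k2 IS NULL
""").fetchone()[0]
print(nulls)  # -> 1
```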
Regards.
Satyaki De. -
How to avoid primary key insert error and pipe those would-be error rows to a separate table?
Hi All,
Question: How can I ID duplicate values in a particular column before generating a "Violation of PRIMARY KEY constraint" error?
Background: this SSIS package pulls rows from a remote server table for an insert to a local table. The local table is truncated in step 1 of this package. One of the source columns, "ProductName," is a varchar(50) NOT NULL, with no
constraints at all. In the destination table, that column has a primary key constraint. Even so, we don't expect duplicate primary key inserts due to the source data query. Nevertheless, I've been
tasked with identifying any duplicate ProductName values that may try to get inserted, piping them all to a "DuplicateInsertAttempt_ProductName" table, and sending an email to the interested parties. Since I have no way of knowing which row
should be imported and which should not, I assume the best method is to pipe all rows with a duplicate ProductName out so somebody else can determine which is right and which is wrong, at which point we'll need to improve the query of the source table.
What's the proper way to do this? I assume the "DuplicateInsertAttempt_ProductName" table needs identical schema to the import target table, but without any constraints. I also assume I must ID the duplicate values before attempting
the import so that no error is generated, but I'm not sure how to do this.
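For the detection step, a grouped count over ProductName does the job: any group with more than one row goes, in full, to the duplicates table (since we cannot know which copy is right), and only singleton groups proceed to the insert. A sketch of that split with SQLite via Python — the staging-table shape and column names here are illustrative assumptions, not the actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staging (ProductName TEXT NOT NULL, Price REAL)")
con.executemany("INSERT INTO staging VALUES (?, ?)",
                [("widget", 1.0), ("gadget", 2.0), ("widget", 1.5)])

# Every row whose ProductName occurs more than once -- all copies,
# since somebody else must decide which one is correct.
dupes = con.execute("""
    SELECT ProductName, Price FROM staging
    WHERE ProductName IN (SELECT ProductName FROM staging
                          GROUP BY ProductName HAVING COUNT(*) > 1)
""").fetchall()

# Only the unambiguous rows are safe to insert into the PK-constrained table.
clean = con.execute("""
    SELECT ProductName, Price FROM staging
    WHERE ProductName IN (SELECT ProductName FROM staging
                          GROUP BY ProductName HAVING COUNT(*) = 1)
""").fetchall()
print(sorted(dupes))  # -> [('widget', 1.0), ('widget', 1.5)]
print(clean)          # -> [('gadget', 2.0)]
```

In SSIS terms this is two data flows (or a Multicast with two conditional paths) driven by the same grouped count, run before the insert so no error is ever raised.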
Any help would be greatly appreciated.
Thanks,
Eric
Agree about preventing a dupe or other error on some inconsequential dimension from killing a data mart load that takes a few hrs to run and is an important reporting system.
I looked into using the error output before, but I think it came up a bit short...
This is going from memory from a few years ago, but the columnid that comes out of the error data flow is an internal id for the column in the buffer that can't easily be used to get the column name.
No 'in flight'/in-process way exists to get the column name via something like thisbuffer.column[columnid].name, unfortunately.
In theory, the only way to get the column name was to initialise another version of the package (by loading the .dtsx XML) using the SMO .NET libraries. I say "in theory" because I stopped considering it an option at that point.
And the error code is fairly generic as well if i remember correctly. It's not the error that comes out of the db (Violation of UNIQUE KEY constraint 'x'. Cannot insert duplicate key in object 'dbo.y'. The duplicate key value is (y).) It's a generic
'insert failed'/'a constraint failed' type msg.
I usually leave the default ssis logging to handle all errors (and log them in the sysssislog table), and then I explicitly handle specific exceptions like dupes that I don't want to fail package/parent on error
Jakub @ Adelaide, Australia
-
Unique key violation i.e. primary key violation in JSF page
Hi, I am using JDeveloper 11.1.1.5.0 and working with an ADF Fusion web application. I have created an entity object and a view object. In the view object I have a primary key, and I have set its type to DBSequence. The problem is that although the attribute is a DBSequence, the JSF page shows a unique key violation, i.e. a primary key violation. Please help me.
So, do you have a trigger in the database that updates the value of the primary key? Or do you assign the value in some other way?
You can check the docs ( http://download.oracle.com/docs/cd/E16162_01/web.1112/e16182/toc.htm ) for information about the ways it can be done. -
Unique Key Violation error while updating table
Hi All,
I am having a problem with a UNIQUE CONSTRAINT. I am trying to update a table and getting the violation error. Here is the overview. We have a table called ActivityAttendee with the following columns. What makes this hard to debug is that the table has over 23 million records. How can I catch where my query is going wrong?
ActivityAttendeeID INT PRIMARY KEY IDENTITY(1,1)
,ActivityID INT NOT NULL (Foreign key to parent table Activity)
,AtendeeTypeCodeID INT NOT NULL
,ObjectID INT NOT NULL
,EmailAddress VARCHAR(255) NULL
UNIQUE KEY is on ActivityID,AtendeeTypeCodeID,ObjectID,EmailAddress
We have a requirement where we need to update the ObjectID. There is a new mapping where I dump that into a temp table #tempActivityMapping (intObjectID INT NOT NULL, intNewObjectID INT NULL)
The problem is ActivityAttendee table might already have the new ObjectID and the unique combination.
For example: ActivityAttendee Table have the following rows
1,1,1,1,NULL
2,1,1,2,NULL
3,1,1,4,'abc'
AND the temp table has 2,1
So essentially, when I update in this scenario, it should ignore the second row, because updating it would violate the key since the first record already has the exact value. When I ran my query on test data it worked fine. But for 23 million records it's going wrong somewhere and I am unable to debug it. Here is my query:
UPDATE AA
SET AA.ObjectID = TMP.NewObjectID
FROM dbo.ActivityAttendee AA
INNER JOIN #tmpActivityMapping TMP ON AA.ObjectID = TMP.ObjectID
WHERE TMP.NewObjectID IS NOT NULL
AND NOT EXISTS(SELECT 1
FROM dbo.ActivityAttendee AA1
WHERE AA1.ActivityID = AA.ActivityID
AND AA1.AttendeeTypeCodeID = AA.AttendeeTypeCodeID
AND AA1.ObjectID = TMP.NewObjectID
AND ISNULL(AA1.EmailAddress,'') = ISNULL(AA.EmailAddress,''))
>> I am having problem with UNIQUE CONSTRAINT. I am trying to update a table and getting the violation error. Here is the over view. We have a table called Activity_Attendee. <<
Your problem is schema design. Singular table names tell us there is only one of them in the set. Activities are one kind of entity; Attendees are a totally different kind of entity. Where are those tables? Then they can have a relationship, which will be a third table with REFERENCES to the other two.
Your table is total garbage. Think about how absurd “attendee_type_code_id” is. You have never read a single thing about data modeling. An attribute can be “attendee_type”, “attendee_code” or “attendee_id”, but not that horrible mess. I have used something like this in one of my books to demonstrate the wrong way to do RDBMS, as a joke, but you did it for real. The postfix is called an attribute property in the ISO-11179 standards.
You also do not know that RDBMS is not OO. We have keys and not OIDs; but bad programmers use the IDENTITY table property (NOT a column!). By definition, it cannot be a key; let me say that again, by definition.
>> ActivityAttendee has the following columns. The problem to debug is this table has over 23 million records [sic: rows are not records]<<
Where did you get “UNIQUE KEY” as syntax in SQL?? What math are you doing with the attendee_id? That is the only reason to make it INTEGER. I will guess that you meant attendee_type and have not taken the time to create an abbreviation encoding for it.
The term “parent/child” table is wrong! That was network databases, not RDBMS. We have referenced and referencing tables. Totally different concept!
CREATE TABLE Attendees
(attendee_id CHAR(10) NOT NULL PRIMARY KEY,
 attendee_type INTEGER NOT NULL --- bad design
   CHECK (attendee_type BETWEEN ?? AND ??),
 email_address VARCHAR(255));
CREATE TABLE Activities
(activity_id CHAR(10) NOT NULL PRIMARY KEY);
Now the relationship table. I have to make a guess about the cardinality being 1:1, 1:m or n:m.
CREATE TABLE Attendance_Roster
(attendee_id CHAR(10) NOT NULL --- UNIQUE??
   REFERENCES Attendees (attendee_id),
 activity_id CHAR(10) NOT NULL --- UNIQUE??
   REFERENCES Activities (activity_id),
 PRIMARY KEY (attendee_id, activity_id)); --- wild guess!
>> UNIQUE KEY is on activity_id, attendee_type_code_id_value_category, object_id, email_address <<
Aside from the incorrect “UNIQUE KEY” syntax, think about having things like an email_address in a key. This is what we SQL people call a non-key attribute.
>> We have a requirement where we need to update the ObjectID. There is a new mapping where I dump that into a temp table #tempActivityMapping (intObjectID INTEGER NOT NULL, intNewObjectID INTEGER NULL) <<
Mapping?? We do not have that concept in RDBMS. Also, putting metadata prefixes like “int_” on names is called a “tibble” and we SQL people laugh (or cry) when we see it.
Then you have the old proprietary Sybase UPDATE .. FROM .. syntax. Google it; it is flawed and will fail.
Please stop programming until you have a basic understanding of RDBMS versus OO and traditional file systems. Look at my credits; when I tell you, I think I have some authority.
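Style debates aside, there is a concrete way the original UPDATE can fail that is worth spelling out: the NOT EXISTS guard is evaluated against the pre-update state of the table, so two rows remapped to the same new value by one statement can collide with each other even though neither conflicted with any pre-existing row. A minimal reproduction (SQLite via Python; the schema is cut down to the key column, and the guard is omitted because it would not have filtered either row anyway):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE aa (id INTEGER PRIMARY KEY, objectid INTEGER, "
            "UNIQUE (objectid))")
con.executemany("INSERT INTO aa VALUES (?, ?)", [(1, 10), (2, 20)])
con.execute("CREATE TABLE map (objectid INTEGER, newobjectid INTEGER)")
# Two old keys mapped onto the SAME new key -- possible in a 23M-row mapping.
con.executemany("INSERT INTO map VALUES (?, ?)", [(10, 99), (20, 99)])

# No existing row holds 99, so a pre-update NOT EXISTS check passes for
# both rows -- yet updating both to 99 in one statement must fail.
failed = False
try:
    con.execute("""
        UPDATE aa
        SET objectid = (SELECT newobjectid FROM map
                        WHERE map.objectid = aa.objectid)
        WHERE objectid IN (SELECT objectid FROM map)
    """)
except sqlite3.IntegrityError:
    failed = True
print(failed)  # -> True
```

So one debugging step for the 23-million-row case is to check #tmpActivityMapping for distinct old ObjectIDs that map to the same NewObjectID within one ActivityID/AttendeeTypeCodeID/EmailAddress group.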
-
Problem with primary key violation in master-detail screens
Hi,
I found a problem/bug in master-detail screens in which the PK of the detail table consists of the PK of the master table and an additional column, e.g. a manually entered sequence 'in parent'.
I will use the following simple scenario to explain the problem (it's easy to reproduce):
PROJECT table
# id (PK)
* name
PROJECT REQUIREMENTS table
# prj_id (PK)
# sequence_id (PK)
* description
Just create the BC EO, VO and AM and set both display properties of the prj_id attribute of the project requirements VO to hidden.
Create a new screen in the application structure file in which you can select a project (table-form layout) and display the details (table layout) on the same page.
With this basic setup you can generate the app to enter, update and delete projects and their requirements.
The problem occurs if you have a project with at least 1 requirement stored in the database and you try to enter a second requirement with an existing sequence_id within the project. This is a use case in which an end-user enters wrong data.
So assume we have in the database:
prj_id sequence_id description
====== =========== ===========
1 1 req1
and the end-user enters (prj_id is entered automatically as it's not displayed):
1 1 req2 >> user should have entered sequence_id 2...
Step 1. If you try to save an error will be displayed:
JBO-25013: Too many objects match the primary key oracle.jbo.Key[227300 1 ].
And the sequence_id is emptied automatically.
Step 2. So the end-user re-enters the sequence_id but fills in 2 now and saves.
Another error is displayed: JBO-27014 sequence_id in AppModule is required
How strange? Everything is filled in already.
Step 3. If you just hit save again (without changing anything) you got a transaction completed successfully.
I checked the logfiles and noticed an exception after executing step 2.
oracle.jbo.AttrValException: JBO-27014: Attribute SequenceId in AppModule.ProjectRequirementsView2 is required at oracle.jbo.AttrValException.<init>(AttrValException.java) at oracle.jbo.server.JboMandatoryAttributesValidator.validate(JboMandatoryAttributesValidator.java) at oracle.jbo.server.EntityDefImpl.validate(EntityDefImpl.java:2051) at oracle.jbo.server.EntityImpl.validateEntity(EntityImpl.java:1373) at mypackage1.ProjectRequirementsImpl.validateEntity(JwTekeningnummerImpl.java:273) at oracle.jbo.server.EntityImpl.validate(EntityImpl.java:1508) at oracle.jbo.server.EntityImpl.validateChildren(EntityImpl.java:1232) at oracle.jbo.server.EntityImpl.validateEntity(EntityImpl.java:1339) at oracle.jbo.server.EntityImpl.validate(EntityImpl.java:1508) at oracle.jbo.server.DBTransactionImpl.validate(DBTransactionImpl.java:3965) at oracle.adf.model.bc4j.DCJboDataControl.validate(DCJboDataControl.java:967) at oracle.adf.model.binding.DCBindingContainer.validateInputValues(DCBindingContainer.java:1683) at oracle.jheadstart.view.adfuix.JhsInitModelListener.validateInputValues(JhsInitModelListener.java:193) at oracle.jheadstart.view.adfuix.JhsInitModelListener._doModelUpdate(JhsInitModelListener.java:166) at oracle.jheadstart.view.adfuix.JhsInitModelListener.eventStarted(JhsInitModelListener.java:92) at oracle.cabo.servlet.AbstractPageBroker._fireUIXRequestEvent(Unknown Source) at oracle.cabo.servlet.AbstractPageBroker.handleRequest(Unknown Source) at oracle.cabo.servlet.ui.BaseUIPageBroker.handleRequest(Unknown Source) at oracle.adf.controller.struts.actions.StrutsUixLifecycle$NonRenderingPageBroker.handleRequest(StrutsUixLifecycle.java:325) at oracle.cabo.servlet.PageBrokerHandler.handleRequest(Unknown Source) at oracle.adf.controller.struts.actions.StrutsUixLifecycle._runUixController(StrutsUixLifecycle.java:215) at oracle.adf.controller.struts.actions.StrutsUixLifecycle.processUpdateModel(StrutsUixLifecycle.java:106) at 
oracle.jheadstart.controller.strutsadf.action.JhsStrutsUixLifecycle.processUpdateModel(JhsStrutsUixLifecycle.java:140) at oracle.adf.controller.struts.actions.DataAction.processUpdateModel(DataAction.java:317) at oracle.jheadstart.controller.strutsadf.action.JhsDataAction.processUpdateModel(JhsDataAction.java:622) at oracle.adf.controller.struts.actions.DataAction.processUpdateModel(DataAction.java:508) at oracle.adf.controller.lifecycle.PageLifecycle.handleLifecycle(PageLifecycle.java:112) at oracle.adf.controller.struts.actions.StrutsUixLifecycle.handleLifecycle(StrutsUixLifecycle.java:70) at oracle.adf.controller.struts.actions.DataAction.handleLifecycle(DataAction.java:223) at oracle.jheadstart.controller.strutsadf.action.JhsDataAction.handleLifecycle(JhsDataAction.java:389) at oracle.adf.controller.struts.actions.DataAction.execute(DataAction.java:155) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1485) at oracle.jheadstart.controller.strutsadf.JhsActionServlet.process(JhsActionServlet.java:127) at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:527) at javax.servlet.http.HttpServlet.service(HttpServlet.java:765) at javax.servlet.http.HttpServlet.service(HttpServlet.java:853) at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:65) at oracle.security.jazn.oc4j.JAZNFilter.doFilter(Unknown Source) at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:16) at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:239) at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:20) at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:239) at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:20) at 
oracle.jheadstart.controller.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java) at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:645) at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:322) at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:790) at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:270) at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:112) at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:192) at java.lang.Thread.run(Thread.java:534)
This also explains why I got an error message within the application at step 2, but not the expected error message!
From the logfile I can see something is going wrong in the oracle.jbo.server.EntityImpl.validateEntity method.
To get more insight into what was going on, I overrode the validateEntity method in my EO Impl class:
protected void validateEntity()
{
  System.out.println("MARCEL>> sequenceId=" + getSequenceId());
  System.out.println("MARCEL>> description=" + getDescription());
  super.validateEntity();
}
In step 1 validateEntity is not called.
In step 2 the sequenceId = null (and description = req2).
I expected the sequenceId to be 2 now, as I entered this. Furthermore, if I look at the (JhsActionServlet) request parameters in the log, I can see that the value of the sequenceId was 2. I guess something is going wrong at this point in converting the request parameters to the EO.
In step 3 the sequenceId = 2 (as expected).
Note that I'm using the evaluation copy version of JHeadstart 10.1.2.
If this was patched in any later build, could you please tell which changes I have to make to the JHS sources to solve this problem.
Regards,
Marcel
Marcel,
I cannot reproduce this in version 10.1.2.2. We did not "fix" this, although changes in the runtime might have fixed this "silently". I suggest you upgrade to 10.1.2.2 and see whether you still get the error.
Steven Davelaar,
JHeadstart Team. -
Composite Primary Key Mapping error
I have the following code:
public class PreviousStepEJB3PK implements Serializable {
    private static final long serialVersionUID = 3024775815042084864L;
    public Long id;
    public Long previousId;

    public PreviousStepEJB3PK() {
    }

    public PreviousStepEJB3PK(Long id, Long previousId) {
        this.id = id;
        this.previousId = previousId;
    }

    public boolean equals(Object other) {
        if (other instanceof PreviousStepEJB3PK) {
            final PreviousStepEJB3PK otherPreviousStepPK = (PreviousStepEJB3PK) other;
            final boolean areEqual = (otherPreviousStepPK.id.equals(id)
                && otherPreviousStepPK.previousId.equals(previousId));
            return areEqual;
        }
        return false;
    }

    public int hashCode() {
        return super.hashCode();
    }
}

@Entity
@Table(name = "OS_CURRENTSTEP_PREV")
@NamedQuery(name = "findById",
    query = "select object(o) from PreviousCurrentStepEJB3 o where o.id = ?1")
@IdClass(PreviousStepEJB3PK.class)
public class PreviousCurrentStepEJB3 implements Serializable {
    private static final long serialVersionUID = 1717698904412346878L;

    @Id
    @Column(name = "ID", nullable = false)
    private Long id;

    @Id
    @Column(name = "PREVIOUS_ID", nullable = false)
    private Long previousId;

    public PreviousCurrentStepEJB3() {
    }
}
I'm using Eclipse to deploy my application to an OC4J. When I launch the deploy I get the following error:
07/07/05 11:42:47 Caused by: Exception [TOPLINK-7150] (Oracle TopLink Essentials - 2006.8 (Build 060829)): oracle.toplink.essentials.exceptions.ValidationException
Exception Description: Invalid composite primary key specification. The names of the primary key fields or properties in the primary key class [ar.com.eds.mcd.fawkes.model.PreviousStepEJB3PK] and those of the entity bean class [class ar.com.eds.mcd.fawkes.model.PreviousCurrentStepEJB3] must correspond and their types must be the same. Also, ensure that you have specified id elements for the corresponding attributes in XML and/or an @Id on the corresponding fields or properties of the entity class.
07/07/05 11:42:47 at oracle.toplink.essentials.exceptions.ValidationException.invalidCompositePKSpecification(ValidationException.java:995)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataValidator.throwInvalidCompositePKSpecification(MetadataValidator.java:119)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.ClassAccessor.validatePrimaryKey(ClassAccessor.java:1463)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.ClassAccessor.process(ClassAccessor.java:463)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProcessor.processAnnotations(MetadataProcessor.java:196)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.processORMetadata(EntityManagerSetupImpl.java:993)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:501)
07/07/05 11:42:47 at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createContainerEntityManagerFactory(EntityManagerFactoryProvider.java:152)
I ran into a similar problem. I was able to work around it by putting the @Id annotations on the accessors for the primary key fields rather than on the fields themselves. This seems like a bug to me.
-
Primary Key Violation at the time of Moving from primary range to secondary range.
Hi Experts,
I've observed a strange issue in our environment.
we are using sql server 2008 R2 with SP2.
whenever a table moves from its primary identity range to the secondary range, the application crashes with a message like the one below.
Violation of PRIMARY KEY constraint 'PK6'. Cannot insert duplicate key in object 'dbo.TD_TRANN'. The duplicate key value is (17868679).
The statement has been terminated.
OR
Violation of UNIQUE KEY constraint 'IX_TDS_COST'. Cannot insert duplicate key in object 'dbo.TDS_COST'. The duplicate key value is (17, 19431201).
Identity ranges are auto-managed by replication, and the agents run continuously.
please suggest.
Cheers, Vinod Mallolu
Well, this is pretty simple. There are two types of subscriptions (server and client) in merge replication, and while adding an article you provide the following parameters:
@pub_identity_range
@identity_range
You can check the details of the above parameters in the following article: http://msdn.microsoft.com/en-us/library/ms174329.aspx
Snippet
[ @pub_identity_range= ]
pub_identity_range
Controls the identity range size allocated to a Subscriber with a server subscription when automatic identity range management is used. This identity range is reserved for a republishing Subscriber to allocate to its own Subscribers.
pub_identity_range is bigint, with a default of NULL. You must specify this parameter if
identityrangemanagementoption is auto or if
auto_identity_range is true.
[ @identity_range= ]
identity_range
Controls the identity range size allocated both to the Publisher and to the Subscriber when automatic identity range management is used.
identity_range is bigint, with a default of NULL. You must specify this parameter if
identityrangemanagementoption is auto or if
auto_identity_range is true.
So, for example, if you are adding a "Server" type subscription, the @pub_identity_range value is used when assigning the range to that subscriber. If it is a "Client" type subscription, the @identity_range value is used instead.
You can run the following query to check the range assigned to each publisher and subscriber:
SELECT B.SUBSCRIBER_SERVER, B.DB_NAME, A.*
FROM MSMERGE_IDENTITY_RANGE A
JOIN SYSMERGESUBSCRIPTIONS B ON A.SUBID = B.SUBID
This should answer your other question as well.
Vikas Rana -
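The range-allocation behavior described above can be sketched as a small model. This is a hypothetical simplification, not SQL Server's actual implementation: a central allocator hands each node a disjoint identity range, and as long as ranges stay disjoint no two nodes can generate the same key.

```java
// Simplified model of automatic identity range management: each node
// (publisher or subscriber) receives a disjoint [start, end) identity range.
// RANGE_SIZE stands in for a value like @identity_range = 1000.
import java.util.HashSet;
import java.util.Set;

public class IdentityRangeDemo {
    static long nextRangeStart = 1;       // allocator state (hypothetical)
    static final long RANGE_SIZE = 1000;  // like @identity_range = 1000

    // Hand out the next disjoint [start, end) range.
    static long[] allocateRange() {
        long[] range = {nextRangeStart, nextRangeStart + RANGE_SIZE};
        nextRangeStart += RANGE_SIZE;
        return range;
    }

    // Check whether two [start, end) ranges overlap.
    static boolean overlaps(long[] a, long[] b) {
        return a[0] < b[1] && b[0] < a[1];
    }

    public static void main(String[] args) {
        long[] publisher = allocateRange();
        long[] subscriber = allocateRange();
        // Disjoint ranges mean inserts on either side cannot collide:
        Set<Long> keys = new HashSet<>();
        for (long id = publisher[0]; id < publisher[1]; id++) keys.add(id);
        boolean collision = false;
        for (long id = subscriber[0]; id < subscriber[1]; id++) {
            if (!keys.add(id)) collision = true;
        }
        System.out.println("collision = " + collision);
    }
}
```

The duplicate-key errors in the question arise when this invariant is broken, e.g. when a node keeps inserting past its assigned range boundary instead of requesting a new range.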
SET NOCOUNT ON causing primary key violation exception from client code
Hi,
We have a stored procedure in SQL Server 2012 that contains two INSERT statements, as below:
BEGIN
    SET NOCOUNT ON;
    IF (@Status = '1')
    BEGIN
        INSERT INTO dcmnt_mstr
            (trn_id, sub_no, usr_id, ref_id, email_id, reg_no, ordrd_date, ordr_cust_ref, email_status, chrg_status, upd_user, row_created_by)
        VALUES
            (@TranId, @SubsriberNo, @UserId, @RefId, @EmailId, '00000000', GETDATE(), @RefText, '0', '0', 'Bulk Order', @AppId)
    END
    INSERT INTO dcmnt_bulk
        (trn_id, reg_no, pkg_type, proc_status, upd_user)
    VALUES
        (@TranId, @RegNo, @PkgType, '0', 'Bulk Order')
END
Using the SSMS 2012 query editor we are able to execute this SP successfully, but when it is executed from the client code (Java) we get an error like the one below:
com.jnetdirect.jsql.x: Violation of PRIMARY KEY constraint 'PK_dcmnt_mstr'.
Cannot insert duplicate key in object 'dbo.dcmnt_mstr'. The duplicate key value is (5421e73993b46c15).
The same works fine from the client code once the SET NOCOUNT ON; statement is removed from the SP.
Could someone help identify the root cause?
Thank you
It looks like the application code is examining the returned row count and retrying the insert if it is less than 1, so you need to either specify SET NOCOUNT OFF (the default) or change the app code.
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
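The retry pattern the answer suspects can be illustrated with a small stand-in. This is a hypothetical sketch, not the actual JDBC client: a HashSet plays the role of the table, and -1 models "no row count returned" when NOCOUNT suppresses the DONE_IN_PROC count. The retry-on-low-count logic then re-runs a successful insert and hits the primary key.

```java
// Hypothetical model of a client that retries an INSERT whenever the reported
// row count is below 1. With SET NOCOUNT ON the server returns no count, so
// even a successful insert gets retried, and the retry violates the PK.
import java.util.HashSet;
import java.util.Set;

public class NoCountRetryDemo {
    static Set<String> table = new HashSet<>();  // stand-in for dcmnt_mstr

    // Returns the row count the client sees; -1 models "no count returned".
    static int insert(String key, boolean noCountOn) {
        if (!table.add(key)) {
            throw new IllegalStateException("Violation of PRIMARY KEY constraint");
        }
        return noCountOn ? -1 : 1;
    }

    // The suspected client pattern: retry when fewer than 1 row is reported.
    static String insertWithRetry(String key, boolean noCountOn) {
        try {
            if (insert(key, noCountOn) < 1) {
                insert(key, noCountOn); // retry -> duplicate key when NOCOUNT is ON
            }
            return "ok";
        } catch (IllegalStateException e) {
            return e.getMessage();
        }
    }
}
```

This matches the observed symptom: removing SET NOCOUNT ON restores the row count, the retry never fires, and the error disappears without any change to the INSERT statements themselves.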
Toplink Essentials + Composite Primary Key Mapping Error
Exception [TOPLINK-7150] (Oracle TopLink Essentials - 2006.8 (Build 060829)): oracle.toplink.essentials.exceptions.ValidationException
Exception Description: Invalid composite primary key specification. The names of the primary key fields or properties in the primary key class [ar.com.eds.mcd.fawkes.model.PreviousStepEJB3PK] and those of the entity bean class [class ar.com.eds.mcd.fawkes.model.PreviousCurrentStepEJB3] must correspond and their types must be the same. Also, ensure that you have specified id elements for the corresponding attributes in XML and/or an @Id on the corresponding fields or properties of the entity class.
Is it a Toplink bug?
I have the following code:
public class PreviousStepEJB3PK implements Serializable {
    private static final long serialVersionUID = 3024775815042084864L;

    public Long id;
    public Long previousId;

    public PreviousStepEJB3PK() {
    }

    public PreviousStepEJB3PK(Long id, Long previousId) {
        this.id = id;
        this.previousId = previousId;
    }

    public boolean equals(Object other) {
        if (other instanceof PreviousStepEJB3PK) {
            final PreviousStepEJB3PK otherPreviousStepPK = (PreviousStepEJB3PK) other;
            final boolean areEqual = (otherPreviousStepPK.id.equals(id) && otherPreviousStepPK.previousId.equals(previousId));
            return areEqual;
        }
        return false;
    }

    public int hashCode() {
        return super.hashCode();
    }
}
@Entity
@Table(name = "OS_CURRENTSTEP_PREV")
@NamedQuery(name = "findById",
            query = "select object(o) from PreviousCurrentStepEJB3 o where o.id = ?1")
@IdClass(PreviousStepEJB3PK.class)
public class PreviousCurrentStepEJB3 implements Serializable {
    private static final long serialVersionUID = 1717698904412346878L;

    @Id
    @Column(name = "ID", nullable = false)
    private Long id;

    @Id
    @Column(name = "PREVIOUS_ID", nullable = false)
    private Long previousId;

    public PreviousCurrentStepEJB3() {
    }
}
I'm using Eclipse to deploy my application to OC4J. When I launch the deployment I get the following error:
07/07/05 11:42:47 Caused by: Exception [TOPLINK-7150] (Oracle TopLink Essentials - 2006.8 (Build 060829)): oracle.toplink.essentials.exceptions.ValidationException
Exception Description: Invalid composite primary key specification. The names of the primary key fields or properties in the primary key class [ar.com.eds.mcd.fawkes.model.PreviousStepEJB3PK] and those of the entity bean class [class ar.com.eds.mcd.fawkes.model.PreviousCurrentStepEJB3] must correspond and their types must be the same. Also, ensure that you have specified id elements for the corresponding attributes in XML and/or an @Id on the corresponding fields or properties of the entity class.
07/07/05 11:42:47 at oracle.toplink.essentials.exceptions.ValidationException.invalidCompositePKSpecification(ValidationException.java:995)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataValidator.throwInvalidCompositePKSpecification(MetadataValidator.java:119)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.ClassAccessor.validatePrimaryKey(ClassAccessor.java:1463)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.ClassAccessor.process(ClassAccessor.java:463)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProcessor.processAnnotations(MetadataProcessor.java:196)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.processORMetadata(EntityManagerSetupImpl.java:993)
07/07/05 11:42:47 at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:501)
07/07/05 11:42:47 at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createContainerEntityManagerFactory(EntityManagerFactoryProvider.java:152)
I ran into a similar problem. I was able to work around it by putting the @Id annotations on the accessors for the primary key fields rather than on the fields themselves. This seems like a bug to me.
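A composite key class that satisfies the validator must mirror the entity's @Id field names and types, and it should also honor the equals/hashCode contract (the posted class returns super.hashCode(), which breaks Set/Map lookups of the key even when equals matches). The sketch below shows the shape; the JPA annotations are omitted here so it compiles without a persistence jar, and the class name StepPK is illustrative.

```java
// Sketch of an @IdClass-style composite key: field names and types mirror the
// entity's @Id fields, and hashCode is derived from the same fields as equals,
// so equal keys share a hash code.
import java.io.Serializable;
import java.util.Objects;

public class CompositeKeyDemo {
    public static class StepPK implements Serializable {
        private static final long serialVersionUID = 1L;

        public Long id;          // must match the entity field name and type
        public Long previousId;  // must match the entity field name and type

        public StepPK() {
        }

        public StepPK(Long id, Long previousId) {
            this.id = id;
            this.previousId = previousId;
        }

        @Override
        public boolean equals(Object other) {
            if (!(other instanceof StepPK)) return false;
            StepPK o = (StepPK) other;
            // Objects.equals is null-safe, unlike calling id.equals directly.
            return Objects.equals(id, o.id) && Objects.equals(previousId, o.previousId);
        }

        @Override
        public int hashCode() {
            // Derived from the key fields, so equal keys hash identically.
            return Objects.hash(id, previousId);
        }
    }
}
```

Whether @Id goes on the fields or on the accessors must also be consistent across the entity: mixing field access and property access in one class is what the accessor workaround above sidesteps.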