UNIQUE constraint vs checking before INSERT
I have a SQL Server table RealEstate with columns Id, Property, and Property_Value. This table has about 5-10 million rows and may grow further in the future. I want to insert a row only if the combination of Id, Property, Property_Value does not already exist in this table.
Example Table -
1,Rooms,5
1,Bath,2
1,Address,New York
2,Rooms,2
2,Bath,1
2,Address,Miami
Inserting 2,Address,Miami should NOT be allowed. But, 2,Price,2billion is okay. I am curious to know which is the "best" way to do this and
why. The why part is most important to me.
1. Check if a row exists before you insert it.
2. Set a unique constraint on all 3 columns and let the database do the checking for you.
Is there any scenario where one would be better than the other?
Thanks.
Why?
Because the database engine does exactly what you want: it is designed to enforce this in a way that anticipates collisions from simultaneous inserts and allows only a single row for any given combination of values. If you choose to manage this at the
application level - which is the alternative you propose - then EVERY application that attempts to insert rows must be designed to check both immediately before insertion and immediately afterwards (since these inserts can occur simultaneously and you must
allow for communication delays between database and client). And since we know that programmers are not infallible (many other adjectives come to mind as well), there is a high probability that the duplicate-checking logic will fail. And do
not forget that there are many ways of inserting data into the table - it is not just your front-end application that must use this logic, but every other application used to manage the data (such as SSMS, SSIS, bcp, etc.)
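A minimal T-SQL sketch of the constraint approach (the constraint name and the error-handling pattern are illustrative, not from the answer):

```sql
-- Illustrative constraint name; adjust to your naming convention.
ALTER TABLE RealEstate
    ADD CONSTRAINT UQ_RealEstate UNIQUE (Id, Property, Property_Value);

-- Attempt the insert and let the engine reject duplicates atomically.
BEGIN TRY
    INSERT INTO RealEstate (Id, Property, Property_Value)
    VALUES (2, 'Price', '2billion');
END TRY
BEGIN CATCH
    -- 2627 = violation of a UNIQUE constraint in SQL Server;
    -- swallow duplicates, re-raise anything else.
    IF ERROR_NUMBER() <> 2627
        THROW;
END CATCH;
```

Unlike check-then-insert, this cannot race: two simultaneous inserts of the same key serialize on the index, and exactly one succeeds.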
Similar Messages
-
Unique constraint error on delete/insert
Hi,
I am using JDeveloper 11.1.1.3.0. I have an ADF table where we can copy lines and delete lines. I get a unique constraint error when I save. It looks like the insert operation is happening before the delete operation. Is there a way to set the execution order so that the logical behaviour is delete/update/insert?
Thanks
SV

Hi,
The unique constraint is not from the primary key. There are three columns in the table (batch_id, line_number, line_type) which must be unique. In the UI, the user can delete lines, update lines and add lines, and finally click the save button, which does the commit. On delete, the line_number gets re-numbered. So when committing, the unique error occurs because the line number already exists. It looks like the insert is happening before the update/delete. I cannot commit after each delete/update/insert; I have to do it only when the user clicks the save button at the end. Is there a way to control the order of execution?
SR -
Duplicate record check before inserting records
Hi All
I want to show an user friendly message instead of (oracle.jbo.TooManyObjectsException: JBO-25013: Too many objects match the primary key oracle.jbo.Key). So in my EO i have written the following code:
OADBTransaction transaction = getOADBTransaction();
Object[] empNumberKey = {value};
EntityDefImpl empDefinition = XXXXempEOImpl.getDefinitionObject();
XXXXempEOImpl empNo =
    (XXXXempEOImpl) empDefinition.findByPrimaryKey(transaction,
                                                   new Key(empNumberKey));
if (empNo != null) {
    throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
                                 getEntityDef().getFullName(),
                                 getPrimaryKey(), "CompanyNumber",
                                 value, "AK",
                                 "FWK_TBX_T_EMP_ID_UNIQUE");
}
setAttributeInternal(COMPANYNUMBER, value);
My observation: when a duplicate empNumber such as '0011' is passed, the error message is not thrown, but a duplicate empNumber like '5411' does throw the error. So does new Key(empNumberKey) chop off leading 0's? Please note that in the database the values are stored as '0011'. Please advise. The validation fails only when the value has leading 0's.

You need to create a SELECT command before the INSERT and check the result returned by ExecuteScalar; this returns the record count, from which you can decide whether to insert or not.
Check the below example:
http://stackoverflow.com/questions/15320544/how-to-check-if-record-exists-if-not-insert-using-vb-net
Fouad Roumieh -
Unique constraint violation error
Hello All,
I have a procedure called FHM_DASHBOARD_PROC which inserts data into a table called FHM_DASHBOARD_F, fetching records from several tables. However, for a particular type of record, the data is not being inserted because of a unique constraint violation.
the procedure is:
create or replace
PROCEDURE FHM_DASHBOARD_PROC AS
DB_METRICS_CNT1Z number;
--V_PODNAME varchar2(10);
V_KI_CODE_DB_STATSZ varchar2(50);
V_ERRORSTRING varchar2(100);
--CURSOR PODNAME_CUR IS SELECT PODNAME,SHORTNAME FROM CRMODDEV.POD_DATA WHERE PODSTATUS_ID=1 AND PODTYPE_ID=1 ORDER BY PODNAME;
-- DB STATS
BEGIN
-- OPEN PODNAME_CUR;
-- LOOP
-- FETCH PODNAME_CUR INTO V_PODNAME,V_POD_SHORTNAME ;
-- EXIT WHEN PODNAME_CUR%NOTFOUND;
BEGIN
SELECT COUNT(*) INTO DB_METRICS_CNT1Z FROM FHM_DB_METRICS_F A, FHM_DB_D B where A.DBNAME=B.DBNAME and PODNAME=V_PODNAME AND DB_DATE=TRUNC(SYSDATE-1);
DBMS_OUTPUT.PUT_LINE('DB_METRICS_CNT1Z :'|| DB_METRICS_CNT1Z);
IF DB_METRICS_CNT1Z >0 THEN
DBMS_OUTPUT.PUT_LINE('DB STATS');
INSERT INTO FHM_DASHBOARD_F(PODNAME,DASH_DATE,KI_CODE,KI_VALUE,KI_STATUS)
(SELECT PODNAME, DASH_DATE AS CU_DATE, KI.KI_CODE, NVL(PF.KI_VALUE,0),
CASE
WHEN PF.KI_VALUE = ki.warning_threshold then 2
when PF.KI_VALUE=0 then 0
ELSE 1
END AS ALERT_STATUS
FROM
(SELECT PODNAME,DB_DATE AS DASH_DATE,decode(a.stats_last_status,'SUCCEEDED',1,'FAILED',2,'STOPPED',2,NULL,0) KI_VALUE from
FHM_DB_METRICS_F a,fhm_db_d b where a.dbname=b.dbname and podname='XYZ' and db_date=TRUNC(SYSDATE-1) and dbtype='OLTP')PF,
FHM_KEY_INDICATOR_D KI where PF.PODNAME=KI.POD_NAME AND KI.TIER_CODE=3 AND KI.KI_NAME='DB_STATS'
AND (PF.PODNAME,TRUNC(PF.DASH_DATE),KI.KI_CODE) NOT IN (SELECT PODNAME,DASH_DATE,KI_CODE FROM FHM_DASHBOARD_F));
COMMIT;
ELSE
SELECT KI_CODE INTO V_KI_CODE_DB_STATSZ FROM FHM_KEY_INDICATOR_D WHERE POD_NAME=V_PODNAME AND KI_NAME='DB_STATS';
DBMS_OUTPUT.PUT_LINE('V_KI_CODE_DB_STATSZ :'||V_KI_CODE_DB_STATSZ);
INSERT INTO FHM_DASHBOARD_F(PODNAME,DASH_DATE,KI_CODE,KI_VALUE,KI_STATUS) VALUES(V_PODNAME,TRUNC(SYSDATE-1),V_KI_CODE_DB_STATSZ,0,0);
COMMIT;
END IF;
EXCEPTION
WHEN OTHERS THEN
V_ERRORSTRING :='INSERT INTO FHM_DASHBOARD_F_ERROR_LOG(POD_NAME,KI_NAME,ERRORNO,ERRORMESSAGE,DATETIME) VALUES
('''||V_PODNAME||''',''DB_STATS'','''||SQLCODE||''','''||SQLERRM||''',SYSDATE)';
EXECUTE IMMEDIATE V_ERRORSTRING;
COMMIT;
END;
--END LOOP;
--CLOSE PODNAME_CUR;
END;
END FHM_DASHBOARD_PROC;

and the table the data is being inserted into is:
CREATE TABLE "CRMODDEV"."FHM_DASHBOARD_F"
"PODNAME" VARCHAR2(25 BYTE) NOT NULL ENABLE,
"DASH_DATE" DATE,
"KI_CODE" NUMBER NOT NULL ENABLE,
"KI_VALUE" NUMBER,
"KI_STATUS" NUMBER,
CONSTRAINT "FHM_DASHBOARD_F_DATE_PK" PRIMARY KEY ("DASH_DATE", "PODNAME", "KI_CODE") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING COMPUTE STATISTICS STORAGE(INITIAL 4194304 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "CRMODDEV_IDX" ENABLE,
CONSTRAINT "FHM_DASHBOARD_F_KI_CODE_FK" FOREIGN KEY ("KI_CODE") REFERENCES "CRMODDEV"."FHM_KEY_INDICATOR_D" ("KI_CODE") ENABLE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS NOLOGGING STORAGE
INITIAL 3145728 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
TABLESPACE "CRMODDEV_TBL" ENABLE ROW MOVEMENT;

The primary key constraint is FHM_DASHBOARD_F_DATE_PK and is on 3 columns of the table: DASH_DATE, PODNAME, KI_CODE.
And this is the query used in the Procedure for inserting the data into the table
(SELECT PODNAME, DASH_DATE AS CU_DATE, KI.KI_CODE, NVL(PF.KI_VALUE,0),
CASE
WHEN PF.KI_VALUE = ki.warning_threshold then 2
when PF.KI_VALUE=0 then 0
ELSE 1
END AS ALERT_STATUS
From
(Select Podname,Db_Date As Dash_Date,Decode(A.Stats_Last_Status,'SUCCEEDED',1,'FAILED',2,'STOPPED',2,Null,0) Ki_Value From -- Added Distinct
FHM_DB_METRICS_F a,fhm_db_d b where a.dbname=b.dbname and podname in ('XYZ') and db_date = TRUNC(SYSDATE-2) and dbtype='OLTP')PF,
Fhm_Key_Indicator_D Ki Where Pf.Podname=Ki.Pod_Name And Ki.Tier_Code=3 And Ki.Ki_Name='DB_STATS'
And (Pf.Podname,Trunc(Pf.Dash_Date),Ki.Ki_Code) Not In (Select Podname,Dash_Date,Ki_Code From Fhm_Dashboard_F));

It gives *2 records* as the result:
XYZ 20-JAN-12 2521 1 1
XYZ 20-JAN-12 2521 1 1
So it gives a unique constraint violation error while inserting. Then I changed the insert above by adding a DISTINCT clause, after which the query returns only ONE record. However, that record is also not being inserted into the table; it gives the same error.
Now the question is How shall I insert this record into the table successfully ?
Though the message is long, I have given you the full structure of the object/procedure and the error.
Thank you in advance.

When you have 5 columns in the result set, adding DISTINCT is not the solution, as you may get the same error again.
Check the target table to see whether the data already exists before inserting; if not, check the table structure for unique constraints created on other columns.
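That existence check can also be folded into the insert itself with an Oracle MERGE - a sketch only, reusing the names from the procedure above; the thread itself does not propose this. Note that the original NOT IN guard compares TRUNC(PF.DASH_DATE) against an un-truncated DASH_DATE from FHM_DASHBOARD_F, which is another way duplicates can slip through; the sketch truncates consistently:

```sql
MERGE INTO fhm_dashboard_f t
USING (
  SELECT DISTINCT pf.podname,
         TRUNC(pf.dash_date) AS dash_date,
         ki.ki_code,
         NVL(pf.ki_value, 0) AS ki_value,
         CASE
           WHEN pf.ki_value = ki.warning_threshold THEN 2
           WHEN pf.ki_value = 0 THEN 0
           ELSE 1
         END AS ki_status
  FROM (SELECT podname, db_date AS dash_date,
               DECODE(a.stats_last_status, 'SUCCEEDED', 1,
                      'FAILED', 2, 'STOPPED', 2, NULL, 0) AS ki_value
        FROM fhm_db_metrics_f a, fhm_db_d b
        WHERE a.dbname = b.dbname
          AND podname = 'XYZ'
          AND db_date = TRUNC(SYSDATE - 1)
          AND dbtype = 'OLTP') pf,
       fhm_key_indicator_d ki
  WHERE pf.podname = ki.pod_name
    AND ki.tier_code = 3
    AND ki.ki_name = 'DB_STATS'
) s
ON (t.podname = s.podname
    AND t.dash_date = s.dash_date
    AND t.ki_code = s.ki_code)
WHEN NOT MATCHED THEN
  INSERT (podname, dash_date, ki_code, ki_value, ki_status)
  VALUES (s.podname, s.dash_date, s.ki_code, s.ki_value, s.ki_status);
```

Rows whose (PODNAME, DASH_DATE, KI_CODE) already exist are simply skipped, so the primary key FHM_DASHBOARD_F_DATE_PK is never violated.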
select *from <table_name>
where
DASH_DATE=date '2012-01-20'
and PODNAME='XYZ'
and KI_CODE=2521; -
Dear All
I have an ADF-BC / JSF project in JDeveloper 11.1.2.3.0 (latest). I have an EO with a PK constraint in the DB on 2 fields (userid & Roleid), and I implemented a bundle to handle the error message with a jbo error code; it works fine in the AM test.
I also have a VO containing an LOV on one of this unique constraint's columns (Roleid). I dropped the VO onto a JSF page as an af:table, as below, with an input text with list of values for Roleid and autoSubmit=true, and I face unexpected behavior from the LOV attribute when entering a repeated value.
When I enter a repeated value, it gives me the error message I created in the bundle, and everything is OK up to this point.
But when I tab out of the input text with list of values, it goes back to the old value, perhaps because validation fired in the background; still not a problem.
When I try to do anything else, it still gives me the duplicate-key error message.
I change the value again to a correct one to avoid the duplication error and, to my surprise, I still get the error and it shows me the repeated value again!!
Simply put, it still saves the old repeated value even after I corrected it. Please, can anyone help me understand what is happening and how to solve it?
Attribute in EO :
<Attribute
Name="RoleId"
Precision="10"
ColumnName="ROLE_ID"
SQLType="VARCHAR"
Type="java.lang.String"
ColumnType="VARCHAR2"
TableName="USER_ROLES"
PrimaryKey="true">
<DesignTime>
<Attr Name="_DisplaySize" Value="10"/>
</DesignTime>
<validation:ExistsValidationBean
Name="RoleId_Rule_1"
ResId="CS.model.BC.EO.UserRolesEO.RoleId_Rule_1"
OperandType="EO"
AssocName="CS.model.BC.ASS.UsersRolesFk2ASS"/>
</Attribute>
Interface af : table
<af:table value="#{bindings.UserRoles2.collectionModel}" var="row"
rows="#{bindings.UserRoles2.rangeSize}"
emptyText="#{bindings.UserRoles2.viewable ? 'No data to display.' : 'Access Denied.'}"
fetchSize="#{bindings.UserRoles2.rangeSize}"
rowBandingInterval="0"
filterModel="#{bindings.UserRoles2Query.queryDescriptor}"
queryListener="#{bindings.UserRoles2Query.processQuery}"
filterVisible="true" varStatus="vs"
selectedRowKeys="#{bindings.UserRoles2.collectionModel.selectedRow}"
selectionListener="#{bindings.UserRoles2.collectionModel.makeCurrent}"
rowSelection="single" id="t1" columnSelection="none"
columnStretching="column:c3">
<af:column sortProperty="#{bindings.UserRoles2.hints.RoleId.name}"
filterable="true" sortable="true"
headerText="#{bindings.UserRoles2.hints.RoleId.label}"
id="c2">
<af:inputListOfValues id="roleIdId"
popupTitle="Search and Select: #{bindings.UserRoles2.hints.RoleId.label}"
value="#{row.bindings.RoleId.inputValue}"
model="#{row.bindings.RoleId.listOfValuesModel}"
required="#{bindings.UserRoles2.hints.RoleId.mandatory}"
columns="#{bindings.UserRoles2.hints.RoleId.displayWidth}"
shortDesc="#{bindings.UserRoles2.hints.RoleId.tooltip}"
autoSubmit="true" editMode="select">
<f:validator binding="#{row.bindings.RoleId.validator}"/>
</af:inputListOfValues>
</af:column>
<af:column sortProperty="#{bindings.UserRoles2.hints.RoleName.name}"
sortable="true"
headerText="#{bindings.UserRoles2.hints.RoleName.label}"
id="c3">
<af:outputFormatted value="#{row.bindings.RoleName.inputValue}"
id="of7" partialTriggers="roleIdId"/>
</af:column>
<af:column sortProperty="#{bindings.UserRoles2.hints.Active.name}"
filterable="true" sortable="true"
headerText="#{bindings.UserRoles2.hints.Active.label}"
id="c4">
<af:outputFormatted value="#{row.bindings.Active.inputValue}"
id="of8" partialTriggers="roleIdId"/>
</af:column>
</af:table>
Edited by: user8854969 on Oct 7, 2012 1:34 PM

I believe there is a little confusion here. The error I am encountering has to do with a unique constraint violation, not a foreign key constraint. If I have the data:
PK  FK  sequence
1   5   1
2   5   2
3   5   3
with a unique constraint on (FK, sequence), and want to change it to:
PK  FK  sequence
1   5   1
4   5   2   -- insert
2   5   3   -- update on sequence
3   5   4   -- update on sequence
I am currently getting a unique constraint violation because the insert is issued before the updates, and the updates alone cause problems because they are issued out of order (i.e. if I do the shifting operation without the insertion of a new record). -
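One common workaround for this insert-before-update renumbering (my suggestion, not from the thread; table and column names are illustrative) is to move the conflicting rows out of the way first, so no intermediate state violates the (FK, sequence) constraint:

```sql
-- Step 1: shift the rows being renumbered to values that cannot
-- collide with any real sequence number (illustrative offset).
UPDATE items SET seq = seq + 1000 WHERE fk = 5 AND pk IN (2, 3);

-- Step 2: insert the new row and assign the final sequence values.
INSERT INTO items (pk, fk, seq) VALUES (4, 5, 2);
UPDATE items SET seq = 3 WHERE pk = 2;
UPDATE items SET seq = 4 WHERE pk = 3;
```

All four statements run in one transaction; the constraint is satisfied after every individual statement, regardless of the order the framework issues them in within each step.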
Unique constraint on an indexed field
I have a table with ALOC and ZLOC fields. They are starting points and destinations, respectively. If you can imagine a circuit, however, they represent the same wire whether the ALOC comes first or the ZLOC. New circuits come in and are inserted with an ALOC and a ZLOC, both 8 characters each. They are not unique individually. But they go in together in a master table that is constrained unique. So far everything is OK. The problem comes when you have an ALOC and ZLOC combination as in the first line below. You cannot enter another one like it. However, according to business and common sense rules, the second line also cannot be entered, because it represents the same wire, just turned around. Only 1 version of what is below can be entered into this 17 character field at a time. If ALOC and ZLOC are the same, of course, it only needs to be checked once as it is the same location and would also be the same if turned around.
GNBONCEU-BURLNCDA
BURLNCDA-GNBONCEU
Does anyone have any hints at how I can work this? Somehow, I have to check if the first version is unique. If it is not, it may be unique the other way around, so I would not enter it into what we call the MASTER field. Of course, if it is unique on the first pass, it won't go in.
Thanks for any ideas.

The REVERSE() function will not be helpful in this case.
You should create a mechanism that does this check before inserting into the table.
If the format of the input value is hyphen-delimited, you might want to create a function that breaks the string at the hyphen and reverses the halves. Then check in the database whether either of the strings (normal or reversed) exists.
A trigger in combination with function will serve your purpose. -
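An alternative the reply does not mention (my suggestion, with an assumed table name): since Oracle supports unique indexes on expressions, the pair can be normalized with LEAST/GREATEST so both orientations of the same wire map to the same index key:

```sql
-- Assumed table: circuits(aloc VARCHAR2(8), zloc VARCHAR2(8)).
-- LEAST/GREATEST order the pair canonically, so
-- ('GNBONCEU','BURLNCDA') and ('BURLNCDA','GNBONCEU')
-- produce the same key and the second insert fails with ORA-00001.
CREATE UNIQUE INDEX circuits_wire_uq
  ON circuits (LEAST(aloc, zloc), GREATEST(aloc, zloc));
```

This pushes the business rule into the database itself, so no application-side check can forget it, and the ALOC = ZLOC case needs no special handling.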
Mapping failed due to Unique Constraint
Hi,
We are using Oracle Financial Analytics and noticed that one of the mapping SIL_PositionDimenstionHierarchy failed due to Unique Constraint.
Mapping was inserting data into W_POSITION_DH table.
Is there a way to find the source table from where it is inserting the record?
I don't have much Informatica experience.
Thanks,
Poojak

Hi,
Our servers are hosted by Oracle On Demand. I have opened a ticket with Oracle. They have asked us to apply a patch to increase the version of the mapping.
Please see update from Analyst.
"Development team is requesting to apply patch number 9782718, which solves this issue as well.
You need to go to Patches and Updates tab on Support Portal, click on Oracle, Siebel and Hyperion products link and do a simple search with patch number 9782718. The patch files contain detailed instructions.
Please test on your Test environment first before applying to Production. Let us know if you have any issues.
Thanks,
Poojak -
Hello,
I need a (NUMBER) column to be unique but allowing to have one exception (the 0).
Is that possible?
If not as a constraint: any other way within the DB ?
Thanks!

SQL> create table t (a number)
Table created.
SQL> create unique index t_idx on t (decode(a,0,null,a))
Index created.
SQL> insert into t values (1)
1 row created.
SQL> insert into t values (2)
1 row created.
SQL> insert into t values (1)
Error at line 12
ORA-00001: unique constraint (T_IDX) violated
SQL> insert into t values (0)
1 row created.
SQL> insert into t values (0)
1 row created.
SQL> drop table t
Table dropped. -
Insert called before delete in a collection with unique constraint
Hi all,
I have a simple @OneToMany private mapping:
private Collection<Item> items;

@OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
public Collection<Item> getItems() {
    return items;
}

public void setItems(Collection<Item> items) {
    this.items = items;
}

public void customize(ClassDescriptor classDescriptor) throws Exception {
    OneToManyMapping mapping = (OneToManyMapping)
        classDescriptor.getMappingForAttributeName("items");
    mapping.privateOwnedRelationship();
}
I have a unique constraint on my Items table that a certain value cannot be duplicated.
My problem appears when I remove a previously saved item from the collection and add a new item containing the same data, at the same time.
After I save the parent and do a flush, I receive SQLIntegrityConstraintViolationException because TopLink performs first an insert query instead of deleting the existing item.
I tested the application and everything went fine with: remove item / save parent / insert item / save parent
I checked on the Internet and the documentation but didn't find anything similar to my problem. I tried debugging TopLink's internal calls but I'm missing some general ideas about all the inner workings and don't know what to look for. I use TopLink version: Oracle TopLink Essentials - 2.1 (Build b60e-fcs (12/23/2008))
Does anyone have a hint of what to look for?
Edited by: wise_guybg on Sep 25, 2009 4:01 PM

Thank you for the suggestions, James.
As I mentioned briefly I have done some debugging but couldn't understand how collections are updated. What I did find out is that setShouldPerformDeletesFirst() doesn't come into play in this case because this is not a consecutive change on entities.
What I have in my case is a collection inside an entity that the user has tampered with and now TopLink has to do a merge. I cannot call flush() in the middle since the user has not approved that the changes made to the entity should be saved.
I see that for TopLink it's not easy to figure out the order in which changes were made to a collection. Here is pseudo-code of when the constraint is touched:
entity.items.remove(a)
entity.items.add(b)
merge(entity)
And here is code that executes without a problem:
entity.items.remove(a)
merge(entity)
entity.items.add(b)
merge(entity)
So once again, I think that collection changes are managed differently but I don't find a way to tell TopLink how to handle them. Any ideas? -
Unique constraint violation - Finding inserted row before commit
Hi,
I have a scenario where I need to insert rows into a table - contact - for different employees under different departments. I may insert the same employee contact multiple times, with a single commit at the end if there is no such contact in the table. How do I find out whether the same employee record has already been inserted?
The unique constraint is on emp id + department id in the contact table. So I face the issue when I commit: it finds that the same emp id + dept id contact has been inserted multiple times.
Please let me know how to handle it?
Regards,
Dhamo.

Hi,
What exactly do you want to achieve? Do you want to display a message to the user? Do you want to prevent posting the contact if it already exists?
Regards -
Insert result of query into a table with unique constraint
Hi,
I have a query result that I would like to store in a table. The target table has a unique constraint. In MySQL you can do
insert IGNORE into myResultTable <...select statement...>
The IGNORE clause means if inserting a row would violate a unique or primary key constraint, do not insert the row, but continue inserting the rest of the query. Leaving the IGNORE clause out would cause the insert to fail and an error to return.
I would like to do this in oracle... that is insert the results of a query that are not already in the target table. What is the best way to do this? One way is use a procedural language and loop through the first query, checking to see if each row is a duplicate before inserting it. I would think this would be slow if there are lots of records. Other options...
insert into myTargetTable
select value from mySourceTable where ... and not exists (select 'x' from myTargetTable where value = mySourceTable.value)
insert into myTargetTable
select mySourceTable.value
from myTargetTable RIGHT JOIN mySourceTable
ON myTargetTable.value = mySourceTable.value
where ...
and myTargetTable.value IS NULL
any other suggestions?
Thanks,
Simon

Try doing a MINUS instead of NOT EXISTS, i.e. Source MINUS Target.
Disabling the constraint will not help you since this will allow the duplicate rows to be inserted into the table. I don't think you want this.
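A sketch of the MINUS suggestion, reusing the names from the post (assuming the inserted column is called value):

```sql
-- Insert only source values not already present in the target.
INSERT INTO myTargetTable (value)
SELECT value FROM mySourceTable
MINUS
SELECT value FROM myTargetTable;
```

Note that MINUS also removes duplicates within mySourceTable itself, and it does not guard against a concurrent session inserting the same key between the SELECT and the INSERT; on Oracle 11g and later, the IGNORE_ROW_ON_DUPKEY_INDEX hint is the closest analogue of MySQL's INSERT IGNORE.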
--kalpana -
TopLink inserts when it should update, unique constraint exception
The title says most of it. I am creating a series of objects and then updating them in rapid succession. It would be great to handle all the values during the insert, but it's not possible for this process. The majority of the time, the cached object is updated correctly and no problem occurs, but every once in a while TopLink tries to re-insert the previously inserted object, instead of updating it. Obviously this throws a unique constraint exception for the PK, and boots me out of the process.
I can refreshObject and then it works fine. I'm looking for the underlying cause though. I want to be able to use the cache!
Thanks!!
Aaron
Oracle JDBC driver Version: 10.2.0.3.0
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
TopLink Release-Designation: 10g Release 3

Hello,
How are you obtaining these objects to update them? The likely cause is that you are running out of memory and the objects in the shared identity map are getting garbage collected due to weak references. This means that when they get registered, TopLink can't find them in the cache so assumes they are new (the default existence check is check cache).
If this is the case, there are a few options.
1) Increase the size of the cache for the class in question to something more appropriate for your application, or use a FullIdentityMap so nothing ever gets pushed out. Of course, a FullIdentityMap should not be used lightly, as it prevents its objects from being GC'd and has consequences for related objects, as described in:
Caching Causing Memory Leak Effect
Both these options will require more memory, though; if garbage collection is already clearing out the references because you are low on memory, this might make GC run more frequently.
2) Increase the JVM memory. This assumes that GC is clearing the unused weak references from your cache because it is low on memory; GC can still occur, so it doesn't guarantee the problem will improve.
3) Read the object in through the UnitOfWork before making changes (instead of using registerObject on existing objects), use registerExistingObject for known existing objects, or use the UnitOfWork mergeClone method. Merge should cause the object to be read from the database if it is not in the cache, but this depends on the existence options used.
Best Regards,
Chris -
Unique constraint error but key when inserting unique keys!
Hi,
I am trying to update an existing database with some older data. I am getting an error complaining that a unique constraint is being violated, although no violation is evident after testing.
I am puzzled by this and was wondering if someone could give me extra info on other possible causes.
Googling suggests this error is caused by duplicate primary keys, which I am not inserting, yet I still get the error.
I run this query, then the update, and get the output below:
desc test_sales;
desc sales_order;
SELECT order_id FROM sales_order WHERE order_id IN (SELECT order_id FROM test_Sales);
INSERT INTO sales_order
(order_id,order_date,customer_id, ship_date,total)
SELECT
order_id,
order_date,
cust_id,
ship_date,
total
FROM
test_sales;
desc test_sales;
Name                           Null     Type
------------------------------ -------- ------------
CUST_ID                                 NUMBER(6)
OLD_SYSTEM_ID                           VARCHAR2(25)
DESCRIPTION                             VARCHAR2(35)
ORDER_DATE                              DATE
SHIP_DATE                               DATE
QUANTITY                                NUMBER
ORDER_ID                       NOT NULL NUMBER(4)
ITEM_ID                        NOT NULL NUMBER(4)
SITE_COUNT                              NUMBER(2)
TOTAL                                   NUMBER(8,2)
PRODUCT_CODE                            NUMBER(6)
LIST_PRICE                              NUMBER(8,2)
12 rows selected
desc sales_order;
Name                           Null     Type
------------------------------ -------- ------------
ORDER_ID                       NOT NULL NUMBER(4)
ORDER_DATE                              DATE
CUSTOMER_ID                             NUMBER(6)
SHIP_DATE                               DATE
TOTAL                                   NUMBER(8,2)
5 rows selected
ORDER_ID
0 rows selected
Error starting at line 7 in command:
INSERT INTO sales_order
(order_id,order_date,customer_id, ship_date,total)
SELECT
order_id,
order_date,
cust_id,
ship_date,
total
FROM
test_sales
Error report:
SQL Error: ORA-00604: error occurred at recursive SQL level 1
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 8
ORA-00001: unique constraint (MICHAELKELLY.SYS_C00210356) violated

Message was edited by: Mike1981

ORA-00001: unique constraint (MICHAELKELLY.SYS_C00210356) violated
=> check dba_cons_columns to see which columns the constraint actually exists on
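That lookup might be sketched as follows (the constraint name comes straight from the ORA-00001 message; the DBA_ view assumes DBA privileges - use USER_CONS_COLUMNS otherwise):

```sql
-- Find which table and columns the system-generated constraint covers.
SELECT owner, table_name, column_name, position
FROM   dba_cons_columns
WHERE  constraint_name = 'SYS_C00210356';
```

The ORA-00604/ORA-06502 wrapping also suggests the failure happens in recursive SQL (e.g. a trigger on sales_order), so checking for triggers on the target table would be a sensible next step.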
Insert / delete fails on unique constraint (order problem)
Hi dear Kodo team,
within a transaction, we are deleting a certain jdo object. The deletion is
undone later (within the same tx), resulting in creating a new object with
the same contents as the deleted one (gets another oid, of course). In our
database, there is a unique constraint on two mapped fields of this object.
Unfortunately, kodo issues the INSERT statement before the DELETE
statement, thus producing a unique constraint violation.
Is there a way to influence the statement order? Is there any solution other
than disabling / deferring this constraint?
Thanks in advance,
Contus

Besides Oracle, we are using our DB schema on platforms that do not
support deferred constraints. That's why we would like to know if there is
a way to influence statement order without removing those constraints.
Thanks,
Contus
Alex Roytman wrote:
In general the best option is to use deferred constraints on all your FK
and unique constraints. You still get complete referential integrity but
do not need Kodo to order your statements for you. If you are already
doing it but still get this error message and you are using Oracle 9.2, it
seems to be a bug in the Oracle JDBC driver. I submitted it to Oracle - they accepted
it and are supposedly working on a resolution.
"contus" <[email protected]> wrote in message
news:bulku6$qad$[email protected]..
Hi dear Kodo team,
within a transaction, we are deleting a certain jdo object. The deletionis
undone later (within the same tx), resulting in creating a new object
with the same contents as the deleted one (gets another oid, of course).
In our database, there is a unique constraint on two mapped fields of
thisobject.
Unfortunately, kodo issues the INSERT statement before the DELETE
statement, thus producing a unique constraint violation.
Is there a way to influence the statement order? Is there any solution other
than disabling / deferring this constraint?
Thanks in advance,
Contus -
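For reference, on platforms that do support it (Oracle, for example), the deferred-constraint option discussed above looks roughly like this (table and column names are illustrative):

```sql
-- A deferred constraint is checked only at COMMIT, so an INSERT
-- issued before the matching DELETE no longer fails mid-transaction.
ALTER TABLE widget
    ADD CONSTRAINT widget_pair_uq UNIQUE (field_a, field_b)
    DEFERRABLE INITIALLY DEFERRED;
```

The trade-off is exactly the one raised in the thread: DEFERRABLE must be declared when the constraint is created, and not every database the schema targets supports it.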
Reattempt of insert after a ORA-0001 unique constraint violation
Hi,
I'm inserting into a table. The primary key on this table is made up of the user id and a one-up transaction number. Unfortunately, I cannot change the design of this table.
Because I have to query the table to get the next transaction number before I insert into the table, I sometimes get a ORA-0001 (unique constraint violation) error because some other session grabbed the next transaction number and committed before I did.
To deal with this I retry the insert, that is, read the table again for the next tran number and insert. I allow for this up to 3 times. If after the third attempt I fail again, I rollback.
I'm seeing 3 records in the table.
So here are my questions: Do I need to rollback when I get the ORA-0001 error? I thought I wouldn't have to. If I do, why? The insert failed, how could the commit statement commit 3 records?
Thanks!No, the userid and transaction numbers are not the same (combined) for each of the 3 rows.
Here is the logic to retry again when I get a ORA-0001:
PROCEDURE insert_record(
    table1_rec_in  IN  table1%ROWTYPE,
    tran_number    OUT table1.trans_number%TYPE,
    attempt_number IN  PLS_INTEGER)
IS
    next_tran_number table1.trans_number%TYPE;
BEGIN
    SELECT NVL(MAX(trans_number), 0) + 1
      INTO next_tran_number
      FROM table1
     WHERE userid = table1_rec_in.table1_userid;

    INSERT INTO table1
        (userid, trans_number, amount, transdate)
    VALUES
        (table1_rec_in.userid,
         next_tran_number,
         table1_rec_in.amount,
         SYSDATE);

    tran_number := next_tran_number;
EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
        IF attempt_number < 3 THEN
            DECLARE
                next_attempt_number PLS_INTEGER;
            BEGIN
                next_attempt_number := attempt_number + 1;
                insert_record(table1_rec_in,
                              tran_number,
                              next_attempt_number);
            END;
        ELSE
            RAISE unable_to_insert_rec;
        END IF;
    WHEN OTHERS THEN
        RAISE unable_to_insert_rec;
END;
I'm using recursion to try the insert again. Is this the source of my problems? I don't see it and can't reproduce it.