Problem adding a unique constraint

Dear all,
The problem I am facing while adding a unique constraint (based on 2 columns) is that the table already has duplicate records. I am unable to delete those duplicate records because the table has several child tables, and those child tables are parents of other tables, and so on. I don't want to use ON DELETE CASCADE. Any suggestions?

Hi,
Whether you delete the duplicates is up to you, but it can be done: disable the foreign key constraints involved, delete the duplicate rows, and then re-enable the constraints. For example:
SQL> ED
Wrote file afiedt.buf
  1  select constraint_name,constraint_type,STATUS
  2  from user_constraints
  3* where table_name='EMP'
SQL> /
CONSTRAINT_NAME                C STATUS
PK_EMP                         P ENABLED
FK_DEPTNO                      R ENABLED
SQL> ALTER TABLE EMP DISABLE CONSTRAINT FK_DEPTNO;
Table altered.
SQL> select constraint_name,constraint_type,STATUS
  2  from user_constraints
  3  where table_name='EMP'
  4  ;
CONSTRAINT_NAME                C STATUS
PK_EMP                         P ENABLED
FK_DEPTNO                      R DISABLED
-San
Edited by: san on Feb 24, 2011 1:02 PM
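
If you do go ahead and delete the duplicates once the foreign keys are disabled, one common pattern is a ROWID anti-join that keeps one row per key. This is only a sketch, assuming the two constrained columns are called COL1 and COL2 (hypothetical names); note that any child rows still pointing at the deleted parents will prevent the foreign keys from being re-enabled afterwards:

delete from emp
 where rowid not in (select min(rowid)
                       from emp
                      group by col1, col2);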

Similar Messages

  • Adding a UNIQUE Constraint to an existing table

    Hi,
    I got this issue because of existing data. In my table I want to add a unique constraint on two columns, but I cannot add it because of the existing repeating data. And I cannot do a data repair to fix the repeating data, since the customer is relying on this data, so we cannot get the decision to repair it.
    As a solution I tried this method: adding a function that checks for repeating data before inserting into the table. It works fine for a single user, but with multiple users it does not work, because users can log in to the database and do their transactions simultaneously.
    My question is: is there a way in Oracle to add a constraint that applies only to data added in the future and not to the old existing data?
    Thanks,
    Darex.

    user9359353 wrote:
    Hi,
    As a solution I try this method, by adding a function to check the repeating data before inserting the data to the table. It's working fine for a single user but when it comes to multiple users it's not working because users can log in to the database and do their transactions simultaneously.
    Show us what is "not working". If you are calling this function from a trigger the correct way, the first person to commit will have their data inserted and the next person will not be able to insert.
    Edit: you may want to have a read through this thread, where I was encountering a problem with multi-row validation using triggers:
    Row level validation dependant on other rows?
    note this post:
    Rob van Wijk wrote:
    Hi WhiteHat,
    Here are two blogposts of mine about this subject that you might find useful.
    One with some guidelines about implementing entity rules: http://rwijk.blogspot.com/2008/08/implementing-entity-rules.html
    And one with an example how to implement (among others) an overlap check in the context of another option for your question, the product RuleGen: http://rwijk.blogspot.com/2008/05/rulegen-test.html
    Hope this helps.
    Regards,
    Rob.
    Edited by: WhiteHat on Jun 8, 2011 3:51 PM
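
    For the record, Oracle can enforce a unique constraint for newly added rows only, while tolerating the existing duplicates, by backing the constraint with a non-unique index and adding it with ENABLE NOVALIDATE. A rough sketch, using hypothetical table and column names:

    create index my_tab_uk_ix on my_tab (col1, col2);
    alter table my_tab add constraint my_tab_uk
      unique (col1, col2)
      using index my_tab_uk_ix
      enable novalidate;

    The existing duplicates stay in place, but any new or modified row that would introduce a further duplicate is rejected with ORA-00001.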

  • Error while adding Image: ORA-00001: unique constraint

    Dear all,
    I have an error while adding images to MDM that I can't explain. I want to add 7231 images. About 6983 run fine; the rest throw this error.
    Error: Service 'SRM_MDM_CATALOG', Schema 'SRMMDMCATALOG2_m000', ERROR CODE=1 ||| ORA-00001: unique constraint (SRMMDMCATALOG2_M000.IDATA_6_DATAID) violated
    Last CMD: INSERT INTO A2i_Data_6 (PermanentId, DataId, DataGroupId, Description_L3, CodeName, Name_L3) VALUES (:1, :2, :3, :4, :5, :6)
    Name=PermanentId; Type=9; Value=1641157; ArraySize=0; NullInd=0;
    Name=DataId; Type=5; Value=426458; ArraySize=0; NullInd=0;
    Name=DataGroupId; Type=4; Value=9; ArraySize=0; NullInd=0;
    Name=Description_L3; Type=2; Value=; ArraySize=0; NullInd=0;
    Name=CodeName; Type=2; Value=207603_Img8078_gif; ArraySize=0; NullInd=0;
    Name=Name_L3; Type=2; Value=207603_Img8078.gif; ArraySize=0; NullInd=0;
    Error: Service 'SRM_MDM_CATALOG', Schema 'SRMMDMCATALOG2_m000', ERROR CODE=1 ||| ORA-00001: unique constraint (SRMMDMCATALOG2_M000.IDATA_6_DATAID) violated
    Last CMD: INSERT INTO A2i_Data_6 (PermanentId, DataId, DataGroupId, Description_L3, CodeName, Name_L3) VALUES (:1, :2, :3, :4, :5, :6)
    Name=PermanentId; Type=9; Value=1641157; ArraySize=0; NullInd=0;
    Name=DataId; Type=5; Value=426458; ArraySize=0; NullInd=0;
    Name=DataGroupId; Type=4; Value=9; ArraySize=0; NullInd=0;
    Name=Description_L3; Type=2; Value=; ArraySize=0; NullInd=0;
    Name=CodeName; Type=2; Value=207603_Img8085_gif; ArraySize=0; NullInd=0;
    Name=Name_L3; Type=2; Value=207603_Img8085.gif; ArraySize=0; NullInd=0;
    I checked all the data. There is no such dataset in the database. Can anybody give me a hint on how to avoid this error?
    One thing I wonder about: the PermanentId is always the same, but I can't do anything about that.
    BR
    Roman
    Edited by: Roman Becker on Jan 13, 2009 12:59 AM

    Hi Ritam,
    For such issues, please create a new thread or directly email the author rather than dragging up a very old thread; it is unlikely that the resolution would be the same, as the database/application/etc. releases would most probably be very different.
    For now I will close this thread as unanswered.
    SAP SRM Moderators.

  • Problem with update query on unique constraint(java programmer)

    Hi ,
    I created a table called mytab with two columns:
    1) no 2) name
    The column no is unique. There are two records:
    mytab
    no name
    1 a
    2 b
    I want to swap the values: update the record with no = 2 to 1, and the record with no = 1 to 2.
    I am using executeBatch:
    str1="update mytab set no=1 where no=2"
    str2="update mytab set no=2 where no=1";
    st.addBatch(str1);
    st.addBatch(st2);
    st.executeBatch();
    and I am getting this exception:
    java.sql.BatchUpdateException: error occurred during batching: ORA-00001: unique constraint (EPRDEV.SYS_C0017184) violated
    Is there any solution?
    Please don't suggest setting the first record to NULL, then changing the second, and so on;
    there are too many records for that.
    Is there any solution other than using NULL as an intermediate value?
    Please help.
    With regards

    Well, you can almost do it. You see, although str1 will set column no to 1 where no = 2, you will then have two rows where no = 1, so str2 will update two rows to 2. If you can avoid that problem, then a deferrable unique constraint supported by a non-unique index will let you do what you want.
    eg.
    SQL>  create table t (no number, name varchar2(10));
    Table created.
    SQL> create index t_x on t(no);
    Index created.
    SQL> alter table t add constraint t_ux unique (no) deferrable using index;
    Table altered.
    SQL> insert into t values (1, 'a');
    1 row created.
    SQL> insert into t values (2, 'b');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from t;
            NO NAME
             1 a
             2 b
    2 rows selected.
    SQL> set constraint t_ux deferred;
    Constraint set.
    SQL> update t set no = 2 where no =1;
    1 row updated.
    SQL> update t set no = 1 where no = 2;
    2 rows updated.
    SQL> rollback;
    Rollback complete.
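
    A side note on the original swap: because Oracle checks a non-deferred unique constraint at the end of each statement rather than row by row, the two values can also be swapped in a single UPDATE, with no deferrable constraint needed. A sketch against the same T table:

    update t set no = decode(no, 1, 2, 2, 1) where no in (1, 2);
    commit;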

  • Insert / delete fails on unique constraint (order problem)

    Hi dear Kodo team,
    within a transaction, we are deleting a certain jdo object. The deletion is
    undone later (within the same tx), resulting in creating a new object with
    the same contents as the deleted one (gets another oid, of course). In our
    database, there is a unique constraint on two mapped fields of this object.
    Unfortunately, kodo issues the INSERT statement before the DELETE
    statement, thus producing a unique constraint violation.
    Is there a way to influence the statement order? Is there any solution other
    than disabling / deferring this constraint?
    Thanks in advance,
    Contus

    Besides Oracle, we are using our db schema on db platforms that do not
    support deferred constraints. That's why we would like to know if there is
    a way to influence statement order without removing those constraints.
    Thanks,
    Contus
    Alex Roytman wrote:
    In general the best option is to use deferred constraints on all your FK
    and unique constraints. You still get complete referential integrity but
    do not need Kodo to order your statements for you. If you are already
    doing that but still get this error message and you are using Oracle 9.2, it
    seems to be a bug in the Oracle JDBC driver. I submitted it to Oracle; they accepted
    it and are supposedly working on a resolution.
    "contus" <[email protected]> wrote in message
    news:bulku6$qad$[email protected]..
    Hi dear Kodo team,
    within a transaction, we are deleting a certain jdo object. The deletionis
    undone later (within the same tx), resulting in creating a new object
    with the same contents as the deleted one (gets another oid, of course).
    In our database, there is a unique constraint on two mapped fields of
    thisobject.
    Unfortunately, kodo issues the INSERT statement before the DELETE
    statement, thus producing a unique constraint violation.
    Is there a way to influence the statement order? Is there any solutionother
    than disabling / deferring this constraint?
    Thanks in advance,
    Contus

  • Cannot recreate unique constraint in 11gR2

    In 10.1.0.2, I was able to first drop a unique constraint then recreate it as follows:
    alter table OTPC_BLD_CHANGESET drop constraint UNQ_OTPC_BLD_CHANGESET_0;
    ALTER TABLE OTPC_BLD_CHANGESET ADD CONSTRAINT UNQ_OTPC_BLD_CHANGESET_0 UNIQUE (NAME, OWNER, COMMIT_DATE);
    However, when I tried to do the same thing in 11.2.0.1, it complained:
    ERROR at line 1:
    ORA-00955: name is already used by an existing object
    Could anyone tell me how I can recreate a unique constraint by dropping it and creating it? Thanks.

    sb92075 wrote:
    The problem is that YOU insist on using an object name that already exists.
    Not exactly. If you look at the original post it says:
    alter table OTPC_BLD_CHANGESET drop constraint UNQ_OTPC_BLD_CHANGESET_0;
    ALTER TABLE OTPC_BLD_CHANGESET ADD CONSTRAINT UNQ_OTPC_BLD_CHANGESET_0 UNIQUE (NAME, OWNER, COMMIT_DATE);
    However, something doesn't add up here. You can't get ORA-00955 while adding a constraint, since a constraint is not an object (a table is, while a constraint isn't). If, for some reason, the drop constraint failed, then the add constraint would result in:
    SQL> create table emp1 as select * from emp
      2  /
    Table created.
    SQL> alter table emp1
      2    add constraint emp1_uk
      3      unique(empno)
      4  /
    Table altered.
    SQL> alter table emp1
      2    add constraint emp1_uk
      3      unique(empno)
      4  /
        unique(empno)
    ERROR at line 3:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-06502: PL/SQL: numeric or value error
    ORA-06512: at line 5
    ORA-02261: such unique or primary key already exists in the table
    If the constraint was of a different type:
    SQL> alter table emp1
      2    add constraint emp1_uk
      3      check(1 = 1)
      4  /
      add constraint emp1_uk
    ERROR at line 2:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-06502: PL/SQL: numeric or value error
    ORA-06512: at line 5
    ORA-02264: name already used by an existing constraint
    SQL>
    So the OP needs to provide a snippet of SQL*Plus code, with errors, showing what was done.
    SY.

  • ORA-00001:Unique Constraint while inserting 2 similar records from source

    Hi,
    in TEST1 there are records:
    10 20 ABC
    10 20 DEF
    I am trying to insert into TEST, which has CODE as its primary key.
    declare
    type cur is ref cursor;
    cur_t cur;
    type v_t is table of TEST%rowtype;
    tab v_t;
    v_act_cnt_str VARCHAR2(4000);
    v_act_cnt NUMBER:=0;
    BEGIN
    v_act_cnt_str:=' SELECT COUNT(*) '||' FROM TEST '||' WHERE '||'('||CODE||')'||' IN '||'('||'SELECT '||CODE||' FROM TEST1'||')';
    DBMS_OUTPUT.PUT_LINE('The Actual Count String is'||v_act_cnt_str);
    EXECUTE IMMEDIATE v_act_cnt_str INTO v_act_cnt;
    open cur_t for select * from TEST1 ORDER BY ROWNUM;
    loop
    fetch cur_t bulk collect into tab limit 10000;
    if v_act_cnt=0 THEN
    forall i in 1..tab.count
    insert into TEST values tab(i);
    commit;
    ELSE
    v_merge_act_str :=
    'MERGE INTO TEST '||
    ' DEST' || ' USING TEST1 '||
    ' SRC' || ' ON (' || DEST.CODE=SRC.CODE || ')' ||
    ' WHEN MATCHED THEN ';
    first_str := 'UPDATE ' || ' SET ' ||
    'DEST.NAME=SRC.NAME,DEST.DEPT_NAME=SRC.DEPT_NAME;
    execute immediate v_merge_act_str || first_str;
    v_merge_act_str := '';
    first_str := '';
    commit;
    END IF;
    end loop;
    END;
    It's giving this error:
    ORA-00001: unique constraint (PK_TEST1) violated
    Any help would be much appreciated.
    Edited by: user598986 on Sep 22, 2009 4:20 AM
    Edited by: user598986 on Sep 22, 2009 4:22 AM

    Your code makes absolutely no sense whatsover. The whole point of MERGE is that it allows us to conditionally apply records from a source table as inserts or updates to a target table. So why have you coded two separate statements? And why are you using such horrible dynamic SQL?
    Sorry to unload on you, but you seem to have made your code unnecessarily complicated, and that in turn makes it unnecessarily hard to debug. As an added "bonus", this approach will also perform considerably slower than a single MERGE statement. SQL is all about set operations. Don't do anything procedurally that can be done in a set.
    Cheers, APC
    blog: http://radiofreetooting.blogspot.com
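
    For illustration, a single MERGE along the lines APC suggests might look like the sketch below. It assumes the NAME and DEPT_NAME columns referenced in the original code and de-duplicates the source on CODE first, since two source rows with the same CODE (as in the sample data) would otherwise still violate the primary key or raise ORA-30926:

    merge into test dest
    using (select code, name, dept_name
             from (select t.*,
                          row_number() over (partition by code order by name) rn
                     from test1 t)
            where rn = 1) src
    on (dest.code = src.code)
    when matched then
      update set dest.name = src.name, dest.dept_name = src.dept_name
    when not matched then
      insert (code, name, dept_name)
      values (src.code, src.name, src.dept_name);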

  • Uniques constraint violation error while executing statspack.snap

    Hi,
    I have configured a job to run the statspack snap at an interval of 20 minutes from 6:00 PM to 3:00 AM. To perform this task, I have two crontab scripts: one to execute the job at 6 PM and another to break the job at 3 AM. My Oracle version is 9.2.0.7 and the OS is AIX 5.3.
    My execute scripts look like:
    sqlplus perfstat/perfstat <<EOF
    exec dbms_job.broken(341,FALSE);
    exec dbms_job.run(341);
    EOF
    The problem is that the job works fine on weekdays, but on weekends it gets aborted with the error:
    ORA-12012: error on auto execute of job 341
    ORA-00001: unique constraint (PERFSTAT.STATS$SQL_SUMMARY_PK) violated
    ORA-06512: at "PERFSTAT.STATSPACK", line 1361
    ORA-06512: at "PERFSTAT.STATSPACK", line 2471
    ORA-06512: at "PERFSTAT.STATSPACK", line 91
    ORA-06512: at line 1
    After looking on Metalink, I came to know that this is listed as bug 2784796, which was fixed in 10g.
    My question is: why is there no issue on weekdays using the same script? There is no activity on the db on weekends, and the online backup starts quite late at night.
    Thanks
    Anky

    The reasons for hitting this bug are explained in Metalink, "...cursors with same sql text (at least 31 first characters), same hash_value but a different parent cursor...", you can also find the workaround in Note:393300.1.
    Enrique

  • Unique Constraint error while executing statspack.snap procedure

    The following error popped up when I was trying to execute the statspack.snap procedure as the perfstat user:
    ORA-00001: unique constraint (PERFSTAT.STATS$LATCH_CHILDREN_PK) violated
    ORA-06512: at "PERFSTAT.STATSPACK", line 1619
    ORA-06512: at "PERFSTAT.STATSPACK", line 71
    ORA-06512: at line 1
    How can I resolve this problem? All the constraints and objects for this user were created by running the Oracle-supplied script 'spcreate.sql'.
    If anyone knows how to handle such a situation, please help me out.

    SQL> execute statspack.snap (i_snap_level=>10);
    ERROR at line 1:
    ORA-00001: unique constraint (PERFSTAT.STATS$LATCH_CHILDREN_PK) violated
    ORA-06512: at "PERFSTAT.STATSPACK", line 1619
    ORA-06512: at "PERFSTAT.STATSPACK", line 71
    ORA-06512: at line 1
    Cause
    -- It's because of bug # 2384758.
    "STATSPACK.SNAP GIVES ORA-1 ON STATS$LATCH_CHILDREN_PK WHEN I_SNAP_LEVEL=>10"
    -- The STATS$LATCH_CHILDREN table has a primary key constraint on (snap_id, dbid, instance_number, latch#, child#).
    Fix
    -- This is fixed in 9.0.2 and will not be backported to earlier versions because level 10 is not a normal level to set unless requested by Oracle Support.

  • Insert called before delete in a collection with unique constraint

    Hi all,
    I have a simple @OneToMany private mapping:
    private Collection<Item> items;
    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    public Collection<Item> getItems() {
        return items;
    }
    public void setItems(Collection<Item> items) {
        this.items = items;
    }
    public void customize(ClassDescriptor classDescriptor) throws Exception {
        OneToManyMapping mapping = (OneToManyMapping)
            classDescriptor.getMappingForAttributeName("items");
        mapping.privateOwnedRelationship();
    }
    I have a unique constraint on my Items table that a certain value cannot be duplicated.
    My problem appears when I remove a previously saved item from the collection and add a new item containing the same data, at the same time.
    After I save the parent and do a flush, I receive SQLIntegrityConstraintViolationException because TopLink performs first an insert query instead of deleting the existing item.
    I tested the application and everything went fine with: remove item / save parent / insert item / save parent
    I checked on the Internet and the documentation but didn't find anything similar to my problem. I tried debugging TopLink's internal calls but I'm missing some general ideas about all the inner workings and don't know what to look for. I use TopLink version: Oracle TopLink Essentials - 2.1 (Build b60e-fcs (12/23/2008))
    Does anyone have a hint of what to look for?
    Edited by: wise_guybg on Sep 25, 2009 4:01 PM
    Edited by: wise_guybg on Oct 5, 2009 11:22 AM

    Thank you for the suggestions, James.
    As I mentioned briefly I have done some debugging but couldn't understand how collections are updated. What I did find out is that setShouldPerformDeletesFirst() doesn't come into play in this case because this is not a consecutive change on entities.
    What I have in my case is a collection inside an entity that the user has tampered with and now TopLink has to do a merge. I cannot call flush() in the middle since the user has not approved that the changes made to the entity should be saved.
    I see that for TopLink it's not easy to figure out the order in which changes were made to a collection. Here is pseudo-code of when the constraint is touched:
    entity.items.remove(a)
    entity.items.add(b)
    merge(entity)
    And here is code that executes without a problem:
    entity.items.remove(a)
    merge(entity)
    entity.items.add(b)
    merge(entity)
    So once again, I think that collection changes are managed differently but I don't find a way to tell TopLink how to handle them. Any ideas?

  • PK with TIMESTAMP causes insert unique constraint error at DST switch

    Hi,
    I have a test that inserts rows in a table that has a TIMESTAMP in its PK.
    When inserting rows that cross over the November DST change, it tries to insert these dates:
    --- Sun Nov 02 02:00:00 EST 2008
    --- Sun Nov 02 01:00:00 EST 2008
    --- Sun Nov 02 01:00:00 EDT 2008
    This is the output of 3 different UTC dates. We can see that there are 2 x 1am, but they differ in their DST. They are really 3 different UTC dates.
    But I get this error:
    --- Expected error: ORA-00001: unique constraint (DDLTEST1.SYS_C00142622) violated
    But I can get around that error and can insert these same dates if I set my JVM to UTC. The inserts work so I suspect this to be a JDBC issue.
    I am using the Oracle Thin driver in a spring app:
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
    <property name="url" value="jdbc:oracle:thin:@localhost:db" />
    I can post the sample code but wonder if there is an obvious answer to this.
    Note that MySql had the same problem. I fixed it by specifying the UTC timezone in the connection string, like this:
    jdbc:mysql://[host]/[db]?useLegacyDatetimeCode=false&useTimezone=true&sessionVariables=time_zone='UTC'
    Any idea how to get around that problem without setting JVM to UTC ?
    Claude
    Edited by: user2678899 on Jun 10, 2009 10:09
    I removed workaround #2, which was wrong
    Edited by: user2678899 on Jun 10, 2009 10:23

    Timur Akhmadeev wrote:
    Hi,
    I suspect this to be a JDBC issue.
    Nope, this is your schema design gap. It breaks the main principle of a PK: to always uniquely identify a row in a table. Your PK doesn't satisfy that, since it depends on the client's settings.
    Why is setting the JVM to UTC working? This is the part that confuses me.
    I create the 3 dates in Eastern time, then change the JVM default to UTC: the 3 inserts work.
    I create the 3 dates in Eastern time, then leave the JVM to Eastern time , the 3rd insert gets the unique constraint error.
    To me, the PK principle is not broken: these are 3 different UTC dates.
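
    To illustrate what both posters are describing: a plain TIMESTAMP column stores only the local wall-clock value, so the two distinct UTC instants that both display as 01:00 on November 2 collapse to the same stored value once the driver converts them using the JVM's Eastern time zone. A sketch against a hypothetical table:

    create table dst_test (ts timestamp primary key);
    insert into dst_test values (timestamp '2008-11-02 02:00:00'); -- 07:00 UTC (02:00 EST)
    insert into dst_test values (timestamp '2008-11-02 01:00:00'); -- 06:00 UTC (01:00 EST)
    insert into dst_test values (timestamp '2008-11-02 01:00:00'); -- 05:00 UTC (01:00 EDT): same wall-clock value, ORA-00001

    Running the JVM in UTC avoids the collision because the three instants then map to three distinct wall-clock values (07:00, 06:00 and 05:00).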

  • Handling unique constraint exception

    Hi,
    I have a problem with handling a constraint exception. I'm wondering whether there is a way to get the value that violates the unique constraint. The use case is the following: a user creates a document and then adds 50 records, manually entering inventory and serial numbers that should be unique. When he tries to commit, because he entered, let's say, an inventory number that already exists in the database, I get an exception, and I can only show him the error message "you've entered non-unique inventory or serial numbers". He then has to go record by record and try to figure out where the mistake is. It would be much easier if I could tell him that there is a duplicate inventory and serial number in record number 34. Is this possible?
    thanks in advance,
    Tomislav.

    Tomislav,
    May be a good case to use a custom DBTransactionImpl, as detailed [url http://forums.oracle.com/forums/thread.jspa?threadID=700740&tstart=0]here. Something like this, perhaps (just off the top of my head):
      public void postChanges(TransactionEvent te)
      {
        try
        {
          super.postChanges(te);
        }
        catch (DMLConstraintException ex)
        {
          Row r = ex.getEntityRow();
          AttributeDef attrs[] = r.getStructureDef().getAttributeDefs();
          int numAttrs = attrs.length;
          String msg = "Duplicate key found. Primary key values: ";
          for (int i = 0; i < numAttrs; i++)
          {
            if (attrs[i].isPrimaryKey())
              msg = msg + attrs[i].getName() + ":" + r.getAttribute(i).toString() + " ";
          }
          throw new JboException(msg);
        }
      }
    Note that this code would catch all DML exceptions, not just unique key violations. That would be left as an exercise for the reader ;)
    John

  • Table with unique Constraint

    Dear All
    I have an ADF-BC / JSF project (JDeveloper 11.1.2.3.0, latest). I have an EO with a PK constraint in the database on 2 fields (userid & Roleid), and I implemented a bundle to handle the error message with a JBO error code; it works fine in the AM test.
    I also have a VO with an LOV on one attribute of this unique constraint (Roleid). I dropped the VO on a JSF page as an af:table (see below), with an input text with list of values for roleid and autoSubmit = true, and I see unexpected behavior from the LOV attribute when entering a repeated value.
    When I enter a repeated value, it gives me the error message I created in the bundle, and everything is OK up to this point.
    But when I tab out of the input text with list of values, it goes back to the old value, as validation may have fired in the background; that is still not a problem.
    When I then try to do anything else, it keeps giving me the error message about the duplicated key.
    I change the value again to a correct value to avoid the duplication error message and, to my surprise, I still get the error message showing the repeated value!
    Simply put, it still saves the old repeated value even after I have corrected it. Please, can anyone help me understand what is happening and how to solve it?
    Attribute in EO :
    <Attribute
    Name="RoleId"
    Precision="10"
    ColumnName="ROLE_ID"
    SQLType="VARCHAR"
    Type="java.lang.String"
    ColumnType="VARCHAR2"
    TableName="USER_ROLES"
    PrimaryKey="true">
    <DesignTime>
    <Attr Name="_DisplaySize" Value="10"/>
    </DesignTime>
    <validation:ExistsValidationBean
    Name="RoleId_Rule_1"
    ResId="CS.model.BC.EO.UserRolesEO.RoleId_Rule_1"
    OperandType="EO"
    AssocName="CS.model.BC.ASS.UsersRolesFk2ASS"/>
    </Attribute>
    Interface af : table
    <af:table value="#{bindings.UserRoles2.collectionModel}" var="row"
    rows="#{bindings.UserRoles2.rangeSize}"
    emptyText="#{bindings.UserRoles2.viewable ? 'No data to display.' : 'Access Denied.'}"
    fetchSize="#{bindings.UserRoles2.rangeSize}"
    rowBandingInterval="0"
    filterModel="#{bindings.UserRoles2Query.queryDescriptor}"
    queryListener="#{bindings.UserRoles2Query.processQuery}"
    filterVisible="true" varStatus="vs"
    selectedRowKeys="#{bindings.UserRoles2.collectionModel.selectedRow}"
    selectionListener="#{bindings.UserRoles2.collectionModel.makeCurrent}"
    rowSelection="single" id="t1" columnSelection="none"
    columnStretching="column:c3">
    <af:column sortProperty="#{bindings.UserRoles2.hints.RoleId.name}"
    filterable="true" sortable="true"
    headerText="#{bindings.UserRoles2.hints.RoleId.label}"
    id="c2">
    <af:inputListOfValues id="roleIdId"
    popupTitle="Search and Select: #{bindings.UserRoles2.hints.RoleId.label}"
    value="#{row.bindings.RoleId.inputValue}"
    model="#{row.bindings.RoleId.listOfValuesModel}"
    required="#{bindings.UserRoles2.hints.RoleId.mandatory}"
    columns="#{bindings.UserRoles2.hints.RoleId.displayWidth}"
    shortDesc="#{bindings.UserRoles2.hints.RoleId.tooltip}"
    autoSubmit="true" editMode="select">
    <f:validator binding="#{row.bindings.RoleId.validator}"/>
    </af:inputListOfValues>
    </af:column>
    <af:column sortProperty="#{bindings.UserRoles2.hints.RoleName.name}"
    sortable="true"
    headerText="#{bindings.UserRoles2.hints.RoleName.label}"
    id="c3">
    <af:outputFormatted value="#{row.bindings.RoleName.inputValue}"
    id="of7" partialTriggers="roleIdId"/>
    </af:column>
    <af:column sortProperty="#{bindings.UserRoles2.hints.Active.name}"
    filterable="true" sortable="true"
    headerText="#{bindings.UserRoles2.hints.Active.label}"
    id="c4">
    <af:outputFormatted value="#{row.bindings.Active.inputValue}"
    id="of8" partialTriggers="roleIdId"/>
    </af:column>
    </af:table>
    Edited by: user8854969 on Oct 7, 2012 1:34 PM
    Edited by: user8854969 on Oct 7, 2012 2:16 PM

    I believe there is a little confusion here. The error I am encountering has to do with a unique constraint violation and not a foreign key constraint. If I have the data:
    PK FK sequence
    1 5 1
    2 5 2
    3 5 3
    with a unique constraint on (FK, sequence) and want to change it to:
    PK FK sequence
    1 5 1
    4 5 2 --insert
    2 5 3 --update on sequence
    3 5 4 --update on sequence
    I am currently getting a unique constraint violation because the insert is issued before the updates, and the updates alone cause problems because they are issued out of order (i.e. if I do the shifting operation without the insertion of a new record).
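
    For this insert-before-update ordering problem, the deferrable unique constraint shown earlier in this thread list is the usual workaround: the check is postponed until commit, so the intermediate duplicates produced by out-of-order statements no longer matter. A sketch with hypothetical table and column names:

    alter table child_table add constraint child_uk
      unique (fk_id, seq_no)
      deferrable initially deferred;

    (Alternatively, declare it DEFERRABLE INITIALLY IMMEDIATE and issue SET CONSTRAINT child_uk DEFERRED before the batch of DML.)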

  • Unique constraint violation on version enabled table

    Hi!
    We're facing a strange problem with a version-enabled table that has a unique constraint on one column. If we rename an object stored in the table (the name attribute of the object is the one that has the unique constraint on the respective column) and then rename it back to the old name again, we get an ORA-00001 unique constraint violation on the execution of an update trigger.
    If the constraint were simply applied to the now version-enabled table exactly as before, I would understand why this happens, but shouldn't Workspace Manager take care of something like that when a table with unique constraints is version-enabled? (The documentation also says it does.) Taking versioning into account, it's not that we try to insert another object with the same name; it's the same object, at another point in time, getting its old name back.
    We assume this to be a pretty standard scenario when using versioned data.
    Is this some kind of bug, or are we just missing something important here?
    more information:
    - versioning is enabled on all tables with VIEW_WO_OVERWRITE and no valid time support
    - database version is 10.2.0.1.0
    - wm installation output:
    ALLOW_CAPTURE_EVENTS OFF
    ALLOW_MULTI_PARENT_WORKSPACES OFF
    ALLOW_NESTED_TABLE_COLUMNS OFF
    CR_WORKSPACE_MODE OPTIMISTIC_LOCKING
    FIRE_TRIGGERS_FOR_NONDML_EVENTS ON
    NONCR_WORKSPACE_MODE OPTIMISTIC_LOCKING
    NUMBER_OF_COMPRESS_BATCHES 50
    OWM_VERSION 10.2.0.1.0
    UNDO_SPACE UNLIMITED
    USE_TIMESTAMP_TYPE_FOR_HISTORY ON
    - all operations are done on LIVE workspace
    any help is appreciated.
    EDIT: We found out the following: the table we are talking about is the only table where the unique constraint is left, so there must have been a problem during version enabling. On another Oracle installation we did everything the same way, the unique constraint wasn't left there, and everything works fine.
    regards,
    Andreas Schilling
    Message was edited by:
    aschilling


  • Performance Impact of Unique Constraint on a Date Column

    In a table I have a compound unique constraint which extends over 3 columns. As a part of functionality I need to add another DATE column to this unique constraint.
    I would like to know the performance implications of adding a DATE column to the unique constraint. Would the DATE column behave like another VARCHAR2 or NUMBER column, or would it degrade the performance significantly?
    Thanks
    Message was edited by:
    user627808

    What performance are you concerned about degrading? Inserts? Or queries? If you're talking about queries, what sort of access path are you concerned about?
    Are you concerned that merely changing the definition of the unique constraint would impact performance? Or are you worried that whatever functional change you are making would impact performance (i.e. if you are now retaining historical data in the table rather than just updating it)?
    Regardless of the performance impact, unique indexes (and unique constraints) need to be correct. If you need to allow duplicates on the 3 current columns with different dates, then you would need to change the unique constraint definition regardless of the performance impact. Fast and wrong generally isn't going to be preferable to slow and right.
    Generally, though, there probably is no reason to be terribly concerned about performance here. Indexing a date is no different than indexing any other primitive data type.
    Justin
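
    In case it helps, redefining the constraint is just a drop and re-add, and the DATE column is indexed like any other scalar column. A sketch with hypothetical names:

    alter table my_table drop constraint my_table_uk;
    alter table my_table add constraint my_table_uk
      unique (col1, col2, col3, effective_date);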
