Table with Unique Constraint

Dear all,
I have an ADF BC / JSF project in JDeveloper 11.1.2.3.0 (latest). I have an EO whose underlying table has a composite primary key constraint on two columns (UserId and RoleId), and I implemented a resource bundle to map the JBO error code to a friendly message; it works fine in the AM tester.
I also have a VO with a LOV on one of the constrained attributes (RoleId). I dropped the VO onto a JSF page as an af:table (shown below), with an input text with list of values for RoleId and autoSubmit="true", and I see unexpected behavior from the LOV attribute when a repeated value is entered:
- When I enter a repeated value, I get the error message I created in the bundle. Everything is OK so far.
- When I tab out of the input text with list of values, it reverts to the old value, presumably because validation fired in the background. Still not a problem.
- Whenever I try to do anything else, it keeps giving me the duplicate key error message.
- I change the value back to a correct one to avoid the duplication error, and to my surprise I still get the error message, and the repeated value is shown again!
In short, it keeps the old repeated value no matter how I correct it. Can anyone help me understand what is happening and how to solve it?
The attribute definition in the EO:
<Attribute
Name="RoleId"
Precision="10"
ColumnName="ROLE_ID"
SQLType="VARCHAR"
Type="java.lang.String"
ColumnType="VARCHAR2"
TableName="USER_ROLES"
PrimaryKey="true">
<DesignTime>
<Attr Name="_DisplaySize" Value="10"/>
</DesignTime>
<validation:ExistsValidationBean
Name="RoleId_Rule_1"
ResId="CS.model.BC.EO.UserRolesEO.RoleId_Rule_1"
OperandType="EO"
AssocName="CS.model.BC.ASS.UsersRolesFk2ASS"/>
</Attribute>
The af:table in the JSF page:
<af:table value="#{bindings.UserRoles2.collectionModel}" var="row"
rows="#{bindings.UserRoles2.rangeSize}"
emptyText="#{bindings.UserRoles2.viewable ? 'No data to display.' : 'Access Denied.'}"
fetchSize="#{bindings.UserRoles2.rangeSize}"
rowBandingInterval="0"
filterModel="#{bindings.UserRoles2Query.queryDescriptor}"
queryListener="#{bindings.UserRoles2Query.processQuery}"
filterVisible="true" varStatus="vs"
selectedRowKeys="#{bindings.UserRoles2.collectionModel.selectedRow}"
selectionListener="#{bindings.UserRoles2.collectionModel.makeCurrent}"
rowSelection="single" id="t1" columnSelection="none"
columnStretching="column:c3">
<af:column sortProperty="#{bindings.UserRoles2.hints.RoleId.name}"
filterable="true" sortable="true"
headerText="#{bindings.UserRoles2.hints.RoleId.label}"
id="c2">
<af:inputListOfValues id="roleIdId"
popupTitle="Search and Select: #{bindings.UserRoles2.hints.RoleId.label}"
value="#{row.bindings.RoleId.inputValue}"
model="#{row.bindings.RoleId.listOfValuesModel}"
required="#{bindings.UserRoles2.hints.RoleId.mandatory}"
columns="#{bindings.UserRoles2.hints.RoleId.displayWidth}"
shortDesc="#{bindings.UserRoles2.hints.RoleId.tooltip}"
autoSubmit="true" editMode="select">
<f:validator binding="#{row.bindings.RoleId.validator}"/>
</af:inputListOfValues>
</af:column>
<af:column sortProperty="#{bindings.UserRoles2.hints.RoleName.name}"
sortable="true"
headerText="#{bindings.UserRoles2.hints.RoleName.label}"
id="c3">
<af:outputFormatted value="#{row.bindings.RoleName.inputValue}"
id="of7" partialTriggers="roleIdId"/>
</af:column>
<af:column sortProperty="#{bindings.UserRoles2.hints.Active.name}"
filterable="true" sortable="true"
headerText="#{bindings.UserRoles2.hints.Active.label}"
id="c4">
<af:outputFormatted value="#{row.bindings.Active.inputValue}"
id="of8" partialTriggers="roleIdId"/>
</af:column>
</af:table>

I believe there is a little confusion here. The error I am encountering has to do with a unique constraint violation and not a foreign key constraint. If I have the data:
PK   FK   sequence
1    5    1
2    5    2
3    5    3
with a unique constraint on (FK, sequence) and want to change it to:
PK   FK   sequence
1    5    1
4    5    2   -- insert
2    5    3   -- update on sequence
3    5    4   -- update on sequence
I am currently getting a unique constraint violation because the insert is issued before the updates, and the updates alone cause problems because they are issued out of order (i.e. if I do the shifting operation without the insertion of a new record).
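For what it's worth, one database-level way around this statement-ordering problem is to make the unique constraint deferrable, so uniqueness is checked once at commit time instead of after each statement. A minimal sketch with hypothetical names (the actual table and constraint are not shown in the post):

    ALTER TABLE child_table DROP CONSTRAINT child_uk;
    ALTER TABLE child_table
      ADD CONSTRAINT child_uk UNIQUE (fk_id, seq_no)
      DEFERRABLE INITIALLY DEFERRED;

With INITIALLY DEFERRED, the insert and the out-of-order sequence updates can be issued in any order within the transaction; the duplicate check only runs at COMMIT.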

Similar Messages

  • Insert result of query into a table with unique constraint

    Hi,
    I have a query result that I would like to store in a table. The target table has a unique constraint. In MySQL you can do
    insert IGNORE into myResultTable <...select statement...>
    The IGNORE clause means if inserting a row would violate a unique or primary key constraint, do not insert the row, but continue inserting the rest of the query. Leaving the IGNORE clause out would cause the insert to fail and an error to return.
    I would like to do this in oracle... that is insert the results of a query that are not already in the target table. What is the best way to do this? One way is use a procedural language and loop through the first query, checking to see if each row is a duplicate before inserting it. I would think this would be slow if there are lots of records. Other options...
    insert into myTargetTable
    select value from mySourceTable
    where ...
    and not exists (select 'x' from myTargetTable
                    where myTargetTable.value = mySourceTable.value);
    or
    insert into myTargetTable
    select mySourceTable.value
    from myTargetTable RIGHT JOIN mySourceTable
    ON myTargetTable.value = mySourceTable.value
    where ...
    and myTargetTable.value IS NULL;
    any other suggestions?
    Thanks,
    Simon

    Try doing a MINUS instead of NOT EXISTS, i.e. Source MINUS Target.
    Disabling the constraint will not help you, since that would allow the duplicate rows to be inserted into the table. I don't think you want this.
    --kalpana
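    Since 10gR2 there is also DML error logging, which is probably the closest Oracle analogue to MySQL's INSERT IGNORE: rows that would violate the constraint are diverted to an error table instead of failing the whole statement. A sketch against the names used above:

        -- one-time setup; creates ERR$_MYTARGETTABLE by default
        EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('MYTARGETTABLE');

        INSERT INTO myTargetTable
        SELECT value FROM mySourceTable
        LOG ERRORS INTO err$_myTargetTable ('skip-dups') REJECT LIMIT UNLIMITED;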

  • Insert called before delete in a collection with unique constraint

    Hi all,
    I have a simple @OneToMany private mapping:
    private Collection<Item> items;

    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    public Collection<Item> getItems() {
        return items;
    }

    public void setItems(Collection<Item> items) {
        this.items = items;
    }

    public void customize(ClassDescriptor classDescriptor) throws Exception {
        OneToManyMapping mapping = (OneToManyMapping)
            classDescriptor.getMappingForAttributeName("items");
        mapping.privateOwnedRelationship();
    }
    I have a unique constraint on my Items table that a certain value cannot be duplicated.
    My problem appears when I remove a previously saved item from the collection and add a new item containing the same data, at the same time.
    After I save the parent and do a flush, I receive SQLIntegrityConstraintViolationException because TopLink performs first an insert query instead of deleting the existing item.
    I tested the application and everything went fine with: remove item / save parent / insert item / save parent
    I checked on the Internet and the documentation but didn't find anything similar to my problem. I tried debugging TopLink's internal calls but I'm missing some general ideas about all the inner workings and don't know what to look for. I use TopLink version: Oracle TopLink Essentials - 2.1 (Build b60e-fcs (12/23/2008))
    Does anyone have a hint of what to look for?

    Thank you for the suggestions James
    As I mentioned briefly I have done some debugging but couldn't understand how collections are updated. What I did find out is that setShouldPerformDeletesFirst() doesn't come into play in this case because this is not a consecutive change on entities.
    What I have in my case is a collection inside an entity that the user has tampered with and now TopLink has to do a merge. I cannot call flush() in the middle since the user has not approved that the changes made to the entity should be saved.
    I see that for TopLink it's not easy to figure out the order in which changes were made to a collection. Here is pseudo-code of when the constraint is touched:
    entity.items.remove(a)
    entity.items.add(b)
    merge(entity)
    And here is code that executes without a problem:
    entity.items.remove(a)
    merge(entity)
    entity.items.add(b)
    merge(entity)
    So once again, I think that collection changes are managed differently but I don't find a way to tell TopLink how to handle them. Any ideas?

  • SIL_GLCOGSFact Failed with Unique Constraint

    Hi,
    A Task SIL_GLCOGSFact failed with ORA-00001: unique constraint (DW.W_GL_COGS_F_U1) violated
    while inserting into table W_GL_COGS_F.
    There are no records in the fact table at all, so there is no way such an error should occur. So weird.
    Please help me out.
    Roger
    Error in Session log as following:
    Database driver error...
    Function Name : Execute
    SQL Stmt : INSERT INTO W_GL_COGS_F(GL_ACCOUNT_WID,BUDGT_ORG_WID,CUSTOMER_WID,CUSTOMER_FIN_PROFL_WID,TERRITORY_WID,SALES_GROUP_ORG_WID,CUSTOMER_CONTACT_WID,CUSTOMER_SOLD_TO_LOC_WID,CUSTOMER_SHIP_TO_LOC_WID,CUSTOMER_BILL_TO_LOC_WID,CUSTOMER_PAYER_LOC_WID,SUPPLIER_WID,SUPPLIER_ACCOUNT_WID,SALES_REP_WID,SERVICE_REP_WID,ACCOUNT_REP_WID,PRODUCT_WID,SALES_PRODUCT_WID,INVENTORY_PRODUCT_WID,SUPPLIER_PRODUCT_WID,COMPANY_LOC_WID,PLANT_LOC_WID,SALES_OFC_LOC_WID,LEDGER_WID,COMPANY_ORG_WID,BUSN_AREA_ORG_WID,CTRL_AREA_ORG_WID,FIN_AREA_ORG_WID,SALES_ORG_WID,PURCH_ORG_WID,ISSUE_ORG_WID,DOC_TYPE_WID,CLRNG_DOC_TYPE_WID,POSTING_TYPE_WID,CLR_POST_TYPE_WID,COST_CENTER_WID,PROFIT_CENTER_WID,BANK_WID,PAY_TERMS_WID,TRANSACTION_DT_WID,TRANSACTION_TM_WID,CONVERSION_DT_WID,ORDERED_ON_DT_WID,INVOICED_ON_DT_WID,DELIVERED_ON_DT_WID,CUSTOMER_REQUEST_DT_WID,GOODS_ISSUE_DT_WID,STOCK_XFER_DT_WID,CLEARING_DOC_DT_WID,BASELINE_DT_WID,PLANNING_DT_WID,ACCOUNT_DOC_ID,COGS_DOC_AMT,COGS_LOC_AMT,XACT_QTY,UOM_CODE,DB_CR_IND,ACCT_DOC_NUM,ACCT_DOC_ITEM,ACCT_DOC_SUB_ITEM,CLEARING_DOC_NUM,CLEARING_DOC_ITEM,SALES_ORDER_NUM,SALES_ORDER_ITEM,SALES_SCH_LINE,INVOICE_NUM,INVOICE_ITEM,DELIVERY_DOC_NUM,DELIVERY_DOC_ITEM,GI_DOC_NUM,GI_DOC_ITEM,STO_DOC_NUM,STO_DOC_ITEM,DOC_HEADER_TEXT,LINE_ITEM_TEXT,ALLOCATION_NUM,FED_BALANCE_ID,BALANCE_ID,DOC_STATUS_WID,POSTED_ON_DT_WID,POSTED_ON_TM_WID,CLEARED_ON_DT_WID,GL_RECONCILED_ON_DT,DOC_CURR_CODE,LOC_CURR_CODE,LOC_EXCHANGE_RATE,GLOBAL1_EXCHANGE_RATE,GLOBAL2_EXCHANGE_RATE,GLOBAL3_EXCHANGE_RATE,CREATED_BY_WID,CHANGED_BY_WID,CREATED_ON_DT,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,AUX3_CHANGED_ON_DT,AUX4_CHANGED_ON_DT,DELETE_FLG,W_INSERT_DT,W_UPDATE_DT,DATASOURCE_NUM_ID,ETL_PROC_WID,INTEGRATION_ID,TENANT_ID,X_CUSTOM,GL_RECONCILED_ON_PROC_WID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    Database driver error...
    Function Name : Execute Multiple
    SQL Stmt : INSERT INTO W_GL_COGS_F(GL_ACCOUNT_WID,BUDGT_ORG_WID,CUSTOMER_WID,CUSTOMER_FIN_PROFL_WID,TERRITORY_WID,SALES_GROUP_ORG_WID,CUSTOMER_CONTACT_WID,CUSTOMER_SOLD_TO_LOC_WID,CUSTOMER_SHIP_TO_LOC_WID,CUSTOMER_BILL_TO_LOC_WID,CUSTOMER_PAYER_LOC_WID,SUPPLIER_WID,SUPPLIER_ACCOUNT_WID,SALES_REP_WID,SERVICE_REP_WID,ACCOUNT_REP_WID,PRODUCT_WID,SALES_PRODUCT_WID,INVENTORY_PRODUCT_WID,SUPPLIER_PRODUCT_WID,COMPANY_LOC_WID,PLANT_LOC_WID,SALES_OFC_LOC_WID,LEDGER_WID,COMPANY_ORG_WID,BUSN_AREA_ORG_WID,CTRL_AREA_ORG_WID,FIN_AREA_ORG_WID,SALES_ORG_WID,PURCH_ORG_WID,ISSUE_ORG_WID,DOC_TYPE_WID,CLRNG_DOC_TYPE_WID,POSTING_TYPE_WID,CLR_POST_TYPE_WID,COST_CENTER_WID,PROFIT_CENTER_WID,BANK_WID,PAY_TERMS_WID,TRANSACTION_DT_WID,TRANSACTION_TM_WID,CONVERSION_DT_WID,ORDERED_ON_DT_WID,INVOICED_ON_DT_WID,DELIVERED_ON_DT_WID,CUSTOMER_REQUEST_DT_WID,GOODS_ISSUE_DT_WID,STOCK_XFER_DT_WID,CLEARING_DOC_DT_WID,BASELINE_DT_WID,PLANNING_DT_WID,ACCOUNT_DOC_ID,COGS_DOC_AMT,COGS_LOC_AMT,XACT_QTY,UOM_CODE,DB_CR_IND,ACCT_DOC_NUM,ACCT_DOC_ITEM,ACCT_DOC_SUB_ITEM,CLEARING_DOC_NUM,CLEARING_DOC_ITEM,SALES_ORDER_NUM,SALES_ORDER_ITEM,SALES_SCH_LINE,INVOICE_NUM,INVOICE_ITEM,DELIVERY_DOC_NUM,DELIVERY_DOC_ITEM,GI_DOC_NUM,GI_DOC_ITEM,STO_DOC_NUM,STO_DOC_ITEM,DOC_HEADER_TEXT,LINE_ITEM_TEXT,ALLOCATION_NUM,FED_BALANCE_ID,BALANCE_ID,DOC_STATUS_WID,POSTED_ON_DT_WID,POSTED_ON_TM_WID,CLEARED_ON_DT_WID,GL_RECONCILED_ON_DT,DOC_CURR_CODE,LOC_CURR_CODE,LOC_EXCHANGE_RATE,GLOBAL1_EXCHANGE_RATE,GLOBAL2_EXCHANGE_RATE,GLOBAL3_EXCHANGE_RATE,CREATED_BY_WID,CHANGED_BY_WID,CREATED_ON_DT,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,AUX3_CHANGED_ON_DT,AUX4_CHANGED_ON_DT,DELETE_FLG,W_INSERT_DT,W_UPDATE_DT,DATASOURCE_NUM_ID,ETL_PROC_WID,INTEGRATION_ID,TENANT_ID,X_CUSTOM,GL_RECONCILED_ON_PROC_WID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    WRITER_1_*_1> CMN_1761 Timestamp Event: [Tue Nov 17 18:50:31 2009]
    WRITER_1_*_1> WRT_8425 ERROR: Writer execution failed.
    WRITER_1_*_1> CMN_1761 Timestamp Event: [Tue Nov 17 18:50:31 2009]
    WRITER_1_*_1> WRT_8114
    Row # [1] in bad file
    WRITER_1_*_1> CMN_1053 : Rowdata: ( RowType=0(insert) Src Rowid=42 Targ Rowid=42
    GL_ACCOUNT_WID (GL_ACCOUNT_WID:Double:): "271134.0000000000"
    BUDGT_ORG_WID (BUDGT_ORG_WID:Double:): "0.00000000000000"
    CUSTOMER_WID (CUSTOMER_WID:Double:): "0.00000000000000"
    CUSTOMER_FIN_PROFL_WID (CUSTOMER_FIN_PROFL_WID:Double:): "0.00000000000000"
    TERRITORY_WID (TERRITORY_WID:Double:): "0.00000000000000"
    SALES_GROUP_ORG_WID (SALES_GROUP_ORG_WID:Double:): "0.00000000000000"
    CUSTOMER_CONTACT_WID (CUSTOMER_CONTACT_WID:Double:): "0.00000000000000"
    CUSTOMER_SOLD_TO_LOC_WID (CUSTOMER_SOLD_TO_LOC_WID:Double:): "0.00000000000000"
    CUSTOMER_SHIP_TO_LOC_WID (CUSTOMER_SHIP_TO_LOC_WID:Double:): "0.00000000000000"
    CUSTOMER_BILL_TO_LOC_WID (CUSTOMER_BILL_TO_LOC_WID:Double:): "0.00000000000000"
    CUSTOMER_PAYER_LOC_WID (CUSTOMER_PAYER_LOC_WID:Double:): "0.00000000000000"
    SUPPLIER_WID (SUPPLIER_WID:Double:): "0.00000000000000"
    SUPPLIER_ACCOUNT_WID (SUPPLIER_ACCOUNT_WID:Double:): "0.00000000000000"
    SALES_REP_WID (SALES_REP_WID:Double:): "0.00000000000000"
    SERVICE_REP_WID (SERVICE_REP_WID:Double:): "0.00000000000000"
    ACCOUNT_REP_WID (ACCOUNT_REP_WID:Double:): "0.00000000000000"
    PRODUCT_WID (PRODUCT_WID:Double:): "107667.0000000000"
    SALES_PRODUCT_WID (SALES_PRODUCT_WID:Double:): "5093039.000000000"
    INVENTORY_PRODUCT_WID (INVENTORY_PRODUCT_WID:Double:): "4376743.000000000"
    SUPPLIER_PRODUCT_WID (SUPPLIER_PRODUCT_WID:Double:): "0.00000000000000"
    COMPANY_LOC_WID (COMPANY_LOC_WID:Double:): "0.00000000000000"
    PLANT_LOC_WID (PLANT_LOC_WID:Double:): "2147.000000000000"
    SALES_OFC_LOC_WID (SALES_OFC_LOC_WID:Double:): "0.00000000000000"
    LEDGER_WID (LEDGER_WID:Double:): "2030.000000000000"
    COMPANY_ORG_WID (COMPANY_ORG_WID:Double:): "0.00000000000000"
    BUSN_AREA_ORG_WID (BUSN_AREA_ORG_WID:Double:): "0.00000000000000"
    CTRL_AREA_ORG_WID (CTRL_AREA_ORG_WID:Double:): "0.00000000000000"
    FIN_AREA_ORG_WID (FIN_AREA_ORG_WID:Double:): "0.00000000000000"
    SALES_ORG_WID (SALES_ORG_WID:Double:): "0.00000000000000"
    PURCH_ORG_WID (PURCH_ORG_WID:Double:): "0.00000000000000"
    ISSUE_ORG_WID (ISSUE_ORG_WID:Double:): "0.00000000000000"
    DOC_TYPE_WID (DOC_TYPE_WID:Double:): "6111.000000000000"
    CLRNG_DOC_TYPE_WID (CLRNG_DOC_TYPE_WID:Double:): "0.00000000000000"
    POSTING_TYPE_WID (POSTING_TYPE_WID:Double:): "0.00000000000000"
    CLR_POST_TYPE_WID (CLR_POST_TYPE_WID:Double:): "0.00000000000000"
    COST_CENTER_WID (COST_CENTER_WID:Double:): "3317.000000000000"
    PROFIT_CENTER_WID (PROFIT_CENTER_WID:Double:): "2025.000000000000"
    BANK_WID (BANK_WID:Double:): "0.00000000000000"
    PAY_TERMS_WID (PAY_TERMS_WID:Double:): "0.00000000000000"
    TRANSACTION_DT_WID (TRANSACTION_DT_WID:Double:): "20090211.00000000"
    TRANSACTION_TM_WID (TRANSACTION_TM_WID:Double:): "0.00000000000000"
    CONVERSION_DT_WID (CONVERSION_DT_WID:Double:): "0.00000000000000"
    ORDERED_ON_DT_WID (ORDERED_ON_DT_WID:Double:): "0.00000000000000"
    INVOICED_ON_DT_WID (INVOICED_ON_DT_WID:Double:): "0.00000000000000"
    DELIVERED_ON_DT_WID (DELIVERED_ON_DT_WID:Double:): "0.00000000000000"
    CUSTOMER_REQUEST_DT_WID (CUSTOMER_REQUEST_DT_WID:Double:): "0.00000000000000"
    GOODS_ISSUE_DT_WID (GOODS_ISSUE_DT_WID:Double:): "0.00000000000000"
    STOCK_XFER_DT_WID (STOCK_XFER_DT_WID:Double:): "0.00000000000000"
    CLEARING_DOC_DT_WID (CLEARING_DOC_DT_WID:Double:): "0.00000000000000"
    BASELINE_DT_WID (BASELINE_DT_WID:Double:): "0.00000000000000"
    PLANNING_DT_WID (PLANNING_DT_WID:Double:): "0.00000000000000"
    ACCOUNT_DOC_ID (ACCOUNT_DOC_ID:UniChar.80:): "2899322~12389~Feb-09~230"
    COGS_DOC_AMT (COGS_DOC_AMT:Double:): "30.00000000000000"
    COGS_LOC_AMT (COGS_LOC_AMT:Double:): "30.00000000000000"
    XACT_QTY (XACT_QTY:Double:): "1.000000000000000"
    UOM_CODE (UOM_CODE:UniChar.50:): ""
    DB_CR_IND (DB_CR_IND:UniChar.30:): "DEBIT"
    ACCT_DOC_NUM (ACCT_DOC_NUM:UniChar.30:): "(NULL)"
    ACCT_DOC_ITEM (ACCT_DOC_ITEM:Double:): "(NULL)"
    ACCT_DOC_SUB_ITEM (ACCT_DOC_SUB_ITEM:Double:): "(NULL)"
    CLEARING_DOC_NUM (CLEARING_DOC_NUM:UniChar.30:): "(NULL)"
    CLEARING_DOC_ITEM (CLEARING_DOC_ITEM:Double:): "(NULL)"
    SALES_ORDER_NUM (SALES_ORDER_NUM:UniChar.30:): "(NULL)"
    SALES_ORDER_ITEM (SALES_ORDER_ITEM:Double:): "(NULL)"
    SALES_SCH_LINE (SALES_SCH_LINE:Double:): "(NULL)"
    INVOICE_NUM (INVOICE_NUM:UniChar.30:): "(NULL)"
    INVOICE_ITEM (INVOICE_ITEM:Double:): "(NULL)"
    DELIVERY_DOC_NUM (DELIVERY_DOC_NUM:UniChar.30:): "(NULL)"
    DELIVERY_DOC_ITEM (DELIVERY_DOC_ITEM:Double:): "(NULL)"
    GI_DOC_NUM (GI_DOC_NUM:UniChar.30:): "(NULL)"
    GI_DOC_ITEM (GI_DOC_ITEM:Double:): "(NULL)"
    STO_DOC_NUM (STO_DOC_NUM:UniChar.30:): "(NULL)"
    STO_DOC_ITEM (STO_DOC_ITEM:Double:): "(NULL)"
    DOC_HEADER_TEXT (DOC_HEADER_TEXT:UniChar.255:): "(NULL)"
    LINE_ITEM_TEXT (LINE_ITEM_TEXT:UniChar.255:): "(NULL)"
    ALLOCATION_NUM (ALLOCATION_NUM:UniChar.30:): "(NULL)"
    FED_BALANCE_ID (FED_BALANCE_ID:UniChar.320:): "12389~10004~BUDGET~~10004~~"
    BALANCE_ID (BALANCE_ID:UniChar.320:): "10004~12389~"
    DOC_STATUS_WID (DOC_STATUS_WID:Double:): "102006.0000000000"
    POSTED_ON_DT_WID (POSTED_ON_DT_WID:Double:): "20090211.00000000"
    POSTED_ON_TM_WID (POSTED_ON_TM_WID:Double:): "20090211.00000000"
    CLEARED_ON_DT_WID (CLEARED_ON_DT_WID:Double:): "0.00000000000000"
    GL_RECONCILED_ON_DT (GL_RECONCILED_ON_DT:Date:): "(NULL)"
    DOC_CURR_CODE (DOC_CURR_CODE:UniChar.30:): "USD"
    LOC_CURR_CODE (LOC_CURR_CODE:UniChar.30:): "USD"
    LOC_EXCHANGE_RATE (LOC_EXCHANGE_RATE:Double:): "1.000000000000000"
    GLOBAL1_EXCHANGE_RATE (GLOBAL1_EXCHANGE_RATE:Double:): "1.000000000000000"
    GLOBAL2_EXCHANGE_RATE (GLOBAL2_EXCHANGE_RATE:Double:): "33.96700000000000"
    GLOBAL3_EXCHANGE_RATE (GLOBAL3_EXCHANGE_RATE:Double:): "1.000000000000000"
    CREATED_BY_WID (CREATED_BY_WID:Double:): "11254.00000000000"
    CHANGED_BY_WID (CHANGED_BY_WID:Double:): "11254.00000000000"
    CREATED_ON_DT (CREATED_ON_DT:Date:): "02/11/2009 14:56:23"
    CHANGED_ON_DT (CHANGED_ON_DT:Date:): "(NULL)"
    AUX1_CHANGED_ON_DT (AUX1_CHANGED_ON_DT:Date:): "02/11/2009 14:56:23"
    AUX2_CHANGED_ON_DT (AUX2_CHANGED_ON_DT:Date:): "(NULL)"
    AUX3_CHANGED_ON_DT (AUX3_CHANGED_ON_DT:Date:): "(NULL)"
    AUX4_CHANGED_ON_DT (AUX4_CHANGED_ON_DT:Date:): "(NULL)"
    DELETE_FLG (DELETE_FLG:UniChar.1:): "N"
    W_INSERT_DT (W_INSERT_DT:Date:): "11/17/2009 18:39:40"
    W_UPDATE_DT (W_UPDATE_DT:Date:): "11/17/2009 18:39:40"
    DATASOURCE_NUM_ID (DATASOURCE_NUM_ID:Double:): "41.00000000000000"
    ETL_PROC_WID (ETL_PROC_WID:Double:): "16.00000000000000"
    INTEGRATION_ID (INTEGRATION_ID:UniChar.80:): "46214900~12389~2899322~Feb-09~230"
    TENANT_ID (TENANT_ID:UniChar.80:): "DEFAULT"
    X_CUSTOM (X_CUSTOM:UniChar.10:): "(NULL)"
    GL_RECONCILED_ON_PROC_WID (GL_RECONCILED_ON_PROC_WID:Double:): "(NULL)"
    )

    ETL commits every 10k records, I believe. It might be hitting the duplicates while trying to insert the first 10k records, so check whether that unique column combination has any duplicates in the staging table.
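    A quick way to check, assuming the staging table is W_GL_COGS_FS and that the unique index W_GL_COGS_F_U1 covers INTEGRATION_ID and DATASOURCE_NUM_ID (both assumptions; verify the actual columns in ALL_IND_COLUMNS):

        SELECT integration_id, datasource_num_id, COUNT(*)
        FROM w_gl_cogs_fs
        GROUP BY integration_id, datasource_num_id
        HAVING COUNT(*) > 1;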

  • Help with unique constraint ERROR!!!

    This error occurs after invoking a stored procedure. When it tries to insert, the PK table goes berserk:
    ORA-00001: unique constraint (HASUNI.THA_OTHER_ACTIVITY_PK) violated
    ORA-06512: at "HASUNI.POPULATE_GDW_ACTIVITY", line 203
    ORA-06512: at "HASUNI.GDW_MASTER", line 21
    ORA-06512: at line 9
    I tried to clear the PK table to get past the unique constraint error, but I can't seem to delete contents.
    PLEASE HELP!!!!

    The other queries are erroring out. It doesn't like the HASUNI.THA_OTHER_ACTIVITY_PK. It says this is an undefined table.
    Insert Statement:
    INSERT INTO HASUNI.THA_OTHER_ACTIVITY <-- line 203
    SELECT
    viewActivity.GROUPID,
    SUBSTR(viewActivity.MEMBERID,11,9)||SUBSTR(viewActivity.MEMBERID,21,9),
    viewActivity.ACTIVITYSEQUENCE,
    val_first_day,
    TO_DATE('31DEC9999'),
    viewActivity.PRODUCTID,
    viewActivity.PRODPLANTYPE,
    SUBSTR(viewActivity.MEMBERID,11,9),
    SUBSTR(viewActivity.MEMBERID,21,9),
    viewActivity.CLASSVAL,
    viewActivity.PCC,
    viewActivity.BRANCH,
    viewActivity.ARC,
    viewActivity.LOCATION,
    viewActivity.MEMBERID,
    viewActivity.DOB,
    viewActivity.GENDER,
    viewActivity.MEMBERSTATUS,
    viewActivity.CHPNID,
    viewActivity.SSN,
    NULL,
    NULL,
    SUBSTR(viewActivity.ZIPCODE,1,5),
    SUBSTR(viewActivity.ZIPCODE,7,4),
    viewActivity.ST,
    viewActivity.SOURCESYSTEMID,
    viewActivity.SOURCE,
    viewActivity.INITIATEDBY,
    viewActivity.OUTCOMES,
    viewActivity.OUTCOMESDATE,
    viewActivity.INITACTDATE,
    viewActivity.INITACTTAKEN,
    viewActivity.INITACTTAKENDATE,
    viewActivity.INITUSERID,
    viewActivity.LASTMODUSRID,
    viewActivity.LASTMODDATE,
    'HAS',
    val_last_day,
    NULL
    FROM
    HASUNI.VW_GDW_ACTIVITY viewActivity,
    HASUNI.HA_OUTREACH outreach
    WHERE
    viewActivity.MEMBERID IN (SELECT viewActivity.MEMBERID FROM HASUNI.VW_GDW_ACTIVITY viewActivity)
    AND outreach.MMBR_ID = viewActivity.MEMBERID
    AND viewActivity.INITACTTAKEN = 'OTHER'
    AND (TRUNC(viewActivity.INITACTDATE) BETWEEN val_first_day AND val_last_day
    OR (TRUNC(viewActivity.INITACTDATE) < val_first_day
    AND TRUNC(viewActivity.LASTMODDATE) BETWEEN val_first_day AND val_last_day));
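    To see exactly which key values collide, one approach is to look up the columns behind the primary key and then group the SELECT feeding the insert by those columns. A sketch of the first step:

        SELECT column_name, position
        FROM all_cons_columns
        WHERE owner = 'HASUNI'
        AND constraint_name = 'THA_OTHER_ACTIVITY_PK'
        ORDER BY position;

    Any combination of those columns that the SELECT produces more than once, or that already exists in THA_OTHER_ACTIVITY, will raise ORA-00001.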

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. We have central to our data monitoring system an oracle database running Oracle Standard One (please don't laugh... I understand it is comical) licensing.
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
    Now, what we are observing about inserts into this table:
    - Inserts are much slower based on a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) are MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand that Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
    - Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10GB of extra RAM per quarter to six months; we're at about 50GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
    Based on our hardware, it's going to be about a million-dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance / help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
    Also, please I'd appreciate it if there were no mocking comments about using standard one up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make due with what we have. And all the credit in the world to oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct path insert /*+ append */ suggested by one of the contributors to this thread. The direct path load will not re-use any free space made by the delete. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index(create time)
    Your delete logic (based on arrival time) will smash your indexes, since you are always deleting the left-hand side of the index; it means you will end up with what we call a right-hand index. In other words, the scattering of the index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
        ALTER INDEX indexname COALESCE;
    This coalesce should be investigated as something to do on a regular basis (maybe after each 80% delete). You seem to have several sourceids per timestamp. If so, you should think about compressing this index:
        CREATE INDEX indexname ON tablename (sourceid, timestamp) COMPRESS;
    or
        ALTER INDEX indexname REBUILD COMPRESS;
    You will do it only once. Your index will have a smaller size and may be more efficient than it is now. The index compression adds extra CPU work during an insert, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri
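    For reference, a minimal sketch of the partitioning scheme the original poster is weighing (hypothetical names throughout; interval partitioning is 11g syntax, on 10g the range partitions would be pre-created, and either way this requires the Partitioning option on Enterprise Edition):

        CREATE TABLE raw_data (
          source_id   NUMBER        NOT NULL,
          ts          TIMESTAMP     NOT NULL,
          create_time TIMESTAMP     NOT NULL,
          info        VARCHAR2(100)
        )
        PARTITION BY RANGE (ts)
        INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
        (PARTITION p_old VALUES LESS THAN (TIMESTAMP '2010-01-01 00:00:00'));

        -- A local index gives each partition its own small B-tree, so the
        -- uniqueness check touches far fewer blocks than one table-wide index.
        CREATE UNIQUE INDEX raw_data_uk ON raw_data (source_id, ts) LOCAL;

    Aged partitions can then be dropped or truncated instead of deleted row by row, which also avoids the right-hand-index decay described above.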

  • Insert in table with  unique index

    Hi
    I created a table to store a factor used in date calculations; the other two columns are the table key:
    CREATE TABLE TMP_FATOR (
      SETID      VARCHAR2(5 BYTE)   NOT NULL,
      COMPANYID  VARCHAR2(15 BYTE)  NOT NULL,
      FATOR      NUMBER
    );
    CREATE UNIQUE INDEX IDX_TMP_FATOR ON TMP_FATOR (SETID, COMPANYID) NOLOGGING;
    I want to insert into the table but skip errors. I tried:
    declare
      i number;
    begin
      i := 1;
      EXECUTE IMMEDIATE 'TRUNCATE TABLE SYSADM.TMP_FATOR';
      BEGIN
        INSERT /*+ APPEND */ INTO SYSADM.TMP_FATOR
          SELECT T1.SETID,
                 T1.COMPANYID,
                 SYSADM.pkg_ajusta_kenan.fnc_fator_dias_desconto(T1.SETID, T1.COMPANYID) fator
            FROM SYSADM.PS_LOC_ITEM_SN T1;
      EXCEPTION
        WHEN DUP_VAL_ON_INDEX THEN
          NULL;
        WHEN OTHERS THEN
          DBMS_OUTPUT.PUT_LINE(SQLERRM);
      END;
      COMMIT;
    end;
    But it did not work. Why?

    The DETERMINISTIC keyword is just part of the declaration, whether you are declaring a standalone function or a packaged function.
    SCOTT @ nx102 Local> create package test_pkg
    2  as
    3    function determin_foo( p_arg in number )
    4      return number
    5      deterministic;
    6  end;
    7  /
    Package created.
    Elapsed: 00:00:00.34
    1  create or replace package body test_pkg
    2  as
    3    function determin_foo( p_arg in number )
    4      return number
    5      deterministic
    6    is
    7    begin
    8      return p_arg - 1;
    9    end;
    0* end;
    SCOTT @ nx102 Local> /
    Package body created.
    Elapsed: 00:00:00.14
    Justin

    Can I have other procedures and functions inside the package?
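    As to why the original block "did not work": DUP_VAL_ON_INDEX is raised once for the whole INSERT ... SELECT, and Oracle rolls back the entire statement, so catching the exception leaves the table empty rather than skipping just the duplicate rows. A row-by-row alternative that really does skip duplicates, sketched against the names from the post, is FORALL ... SAVE EXCEPTIONS:

        DECLARE
          TYPE fator_tab IS TABLE OF SYSADM.TMP_FATOR%ROWTYPE;
          l_rows      fator_tab;
          bulk_errors EXCEPTION;
          PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
        BEGIN
          EXECUTE IMMEDIATE 'TRUNCATE TABLE SYSADM.TMP_FATOR';
          SELECT T1.SETID, T1.COMPANYID,
                 SYSADM.pkg_ajusta_kenan.fnc_fator_dias_desconto(T1.SETID, T1.COMPANYID)
            BULK COLLECT INTO l_rows
            FROM SYSADM.PS_LOC_ITEM_SN T1;
          FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
            INSERT INTO SYSADM.TMP_FATOR VALUES l_rows(i);
          COMMIT;
        EXCEPTION
          WHEN bulk_errors THEN
            COMMIT;  -- the duplicate rows were skipped; everything else is in
        END;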

  • Import of table with proper constraint name

    In my database I have named the constraints. I took an export of the database and tried to import it into another database, but in the new database the user-named constraints are changed to system-named constraints.
    Can anyone let me know how this can happen? For indexes the names remain the same.
    Thanks in advance
    Farzana

    In databaseA (Oracle 9i) I created the primary and foreign keys with names corresponding to the table names.
    When I imported the same dump into databaseB (Oracle 10g), I saw that the constraint names had been changed to system-generated ones.
    Can anyone explain this?

  • Insert into a table with unique columns from another table.

    There are two tables:
    STG_DATA
    ORDER_NO    DIR_CUST_IND
    1002        DNA
    1005        GEN
    1005        NULL
    1008        NULL
    1001        NULL
    1001        NULL
    1006        NULL
    1000        ZZZ
    1001        ZZZ
    FACT_DATA
    ORDER_NO    DIR_CUST_IND
    1005        NULL
    1006        NULL
    1008        NULL
    I need to insert only unique [ORDER_NO] values from STG_DATA into FACT_DATA with the corresponding [DIR_CUST_IND]. Though STG_DATA has multiple rows with the same ORDER_NO, only one of them needs to be inserted, and it can be any record.
    Sarvan

    CREATE TABLE #Level(ORDER_NO INT, DIR_CUST_IND CHAR(3))
    INSERT #Level
    SELECT 1002,'DNA' UNION
    SELECT 1005,'GEN' UNION
    SELECT 1005,NULL UNION
    SELECT 1008,NULL UNION
    SELECT 1001,NULL UNION
    SELECT 1001,NULL UNION
    SELECT 1006,NULL UNION
    SELECT 1000,'ZZZ' UNION
    SELECT 1001,'ZZZ'
    SELECT ORDER_NO,DIR_CUST_IND
    FROM( SELECT ROW_NUMBER()OVER(PARTITION BY ORDER_NO ORDER BY ORDER_NO) RowNum,*
    FROM #Level)A
    WHERE RowNum=1
    I hope this would give you enough idea. All you have to do is just write insert statement.
    Next time please post DDL & DML.
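    Spelled out against the real table names, the insert the reply alludes to might look like this (the NOT EXISTS guard is an addition, on the assumption that ORDER_NOs already present in FACT_DATA should be left alone):

        INSERT INTO FACT_DATA (ORDER_NO, DIR_CUST_IND)
        SELECT ORDER_NO, DIR_CUST_IND
        FROM ( SELECT ORDER_NO, DIR_CUST_IND,
                      ROW_NUMBER() OVER (PARTITION BY ORDER_NO ORDER BY ORDER_NO) AS RowNum
               FROM STG_DATA ) A
        WHERE RowNum = 1
          AND NOT EXISTS (SELECT 1 FROM FACT_DATA f WHERE f.ORDER_NO = A.ORDER_NO);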

  • Unique constraint violation on version enabled table

    hi!
    we're facing a strange problem with a version-enabled table that has a unique constraint on one column. If we rename an object stored in the table (the name attribute of the object is the one with the unique constraint on the respective column) and then rename it back to the old name again, we get an ORA-00001 unique constraint violation on the execution of an update trigger.
    If the constraint were simply applied to the now version-enabled table as before, I would understand why this happens, but shouldn't Workspace Manager take care of something like that when a table with unique constraints is version-enabled? (The documentation also says so.) Taking versioning into account, it's not that we are trying to insert another object with the same name; it's the same object at another point in time getting back its old name.
    We assume this to be a pretty standard scenario when using versioned data.
    Is this some kind of bug, or are we just missing something important here?
    more information:
    - versioning is enabled on all tables with VIEW_WO_OVERWRITE and no valid time support
    - database version is 10.2.0.1.0
    - wm installation output:
    ALLOW_CAPTURE_EVENTS OFF
    ALLOW_MULTI_PARENT_WORKSPACES OFF
    ALLOW_NESTED_TABLE_COLUMNS OFF
    CR_WORKSPACE_MODE OPTIMISTIC_LOCKING
    FIRE_TRIGGERS_FOR_NONDML_EVENTS ON
    NONCR_WORKSPACE_MODE OPTIMISTIC_LOCKING
    NUMBER_OF_COMPRESS_BATCHES 50
    OWM_VERSION 10.2.0.1.0
    UNDO_SPACE UNLIMITED
    USE_TIMESTAMP_TYPE_FOR_HISTORY ON
    - all operations are done on LIVE workspace
    any help is appreciated.
    EDIT: We found out the following: the table we are talking about is the only table where the unique constraint was left in place, so there must have been a problem during version enabling. On another Oracle installation we did everything the same way, the unique constraint wasn't left there, and everything works fine.
    regards,
    Andreas Schilling


  • Unique Constraint on a Table

    I want to have a unique constraint on deptno and emp_name in emp table.
    What I want is:
    Any Dept can have 100 Steves OR 100 Johns but only 1 Robert
    Dept No    Emp Name
    10         Steve
    10         Steve
    10         Robert
    20         Steve
    20         Robert
    20         John
    20         John
    How can we achieve this with a unique constraint?
    Cheers
    Prasad.

    You are talking about a unique constraint, yet you want duplicate records in the constrained column. No, what you want is not possible with a plain unique constraint. When you have a unique constraint on emp_name, how could you have 100 Johns or 100 Steves? Forget about 100; you couldn't even have a second John or Steve.
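    That said, if the real rule is that only certain names (like Robert) must be unique per department, a function-based unique index can express it where a plain unique constraint cannot. A sketch, assuming a table emp(deptno, emp_name):

        CREATE UNIQUE INDEX emp_one_robert ON emp (
          CASE WHEN emp_name = 'Robert' THEN deptno   END,
          CASE WHEN emp_name = 'Robert' THEN emp_name END
        );

    Rows whose emp_name is not 'Robert' produce an all-NULL index key, which Oracle does not store, so any number of Steves and Johns fit while a second Robert in the same department raises ORA-00001.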

  • Insert with unique index slow in 10g

    Hi,
    We are experiencing very slow response when a dup key is inserted into a table with unique index under 10g. the scenario can be demonstrated in sqlplus with 'timing on':
    CREATE TABLE yyy (Col_1 VARCHAR2(5 BYTE) NOT NULL, Col_2 VARCHAR2(10 BYTE) NOT NULL);
    CREATE UNIQUE INDEX yyy on yyy(col_1,col_2);
    insert into yyy values ('1','1');
    insert into yyy values ('1','1');
    the 2nd insert results in "unique constraint" error, but under our 10g the response time is consistently in the range of 00:00:00.64. The 1st insert only took 00:00:00.01. BTW, if no index or non-unique index then you can insert many times and all of them return fast. Under our 9.2 DB the response time is always under 00:00:00.01 with no-, unique- and non-unique index.
    We are on AIX 5.3 & 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production.
    Has anybody seen this scenario?
    Thanks,
    David

    It seems that in 10g Oracle simply is doing something more.
    I used your example and run following script on 9.2 and 10.2. Hardware is the same i.e. these are two instances on the same box.
    begin
      for i in 1..10000 loop
        begin
          insert into yyy values ('1','1');
        exception when others then null;
        end;
      end loop;
    end;
    /
    On 10g it took 01:15.08 and on 9i 00:47.06.
    Running trace showed that in 9i there was difference in plan of following recursive sql:
    9i plan:
    select c.name, u.name
    from
    con$ c, cdef$ cd, user$ u  where c.con# = cd.con# and cd.enabled = :1 and
      c.owner# = u.user#
    call     count       cpu    elapsed       disk      query    current        rows
    Parse    10000      0.43       0.43          0          0          0           0
    Execute  10000      1.09       1.07          0          0          0           0
    Fetch    10000      0.23       0.19          0      20000          0           0
    total    30000      1.76       1.70          0      20000          0           0
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  NESTED LOOPS 
          0   NESTED LOOPS 
          0    TABLE ACCESS BY INDEX ROWID CDEF$
          0     INDEX RANGE SCAN I_CDEF4 (object id 53)
          0    TABLE ACCESS BY INDEX ROWID CON$
          0     INDEX UNIQUE SCAN I_CON2 (object id 49)
          0   TABLE ACCESS CLUSTER USER$
          0    INDEX UNIQUE SCAN I_USER# (object id 11)
    10g plan:
    select c.name, u.name
    from
    con$ c, cdef$ cd, user$ u  where c.con# = cd.con# and cd.enabled = :1 and
      c.owner# = u.user#
    call     count       cpu    elapsed       disk      query    current        rows
    Parse    10000      0.21       0.20          0          0          0           0
    Execute  10000      1.20       1.31          0          0          0           0
    Fetch    10000      2.37       2.59          0      20000          0           0
    total    30000      3.79       4.11          0      20000          0           0
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  HASH JOIN  (cr=2 pr=0 pw=0 time=301 us)
          0   NESTED LOOPS  (cr=2 pr=0 pw=0 time=44 us)
          0    TABLE ACCESS BY INDEX ROWID CDEF$ (cr=2 pr=0 pw=0 time=40 us)
          0     INDEX RANGE SCAN I_CDEF4 (cr=2 pr=0 pw=0 time=27 us)(object id 53)
          0    TABLE ACCESS BY INDEX ROWID CON$ (cr=0 pr=0 pw=0 time=0 us)
          0     INDEX UNIQUE SCAN I_CON2 (cr=0 pr=0 pw=0 time=0 us)(object id 49)
          0   TABLE ACCESS FULL USER$ (cr=0 pr=0 pw=0 time=0 us)
    So in 10g it used a hash join instead of a nested loop join, at least for this particular select. Probably time to gather stats on the SYS tables?
    The difference in time wasn't that big, though (4.11 vs 1.70), so it doesn't explain all the time taken.
    But you can check whether you see a bigger difference.
    Also you can download Tom Kyte's runstats_pkg and run it on both environments to compare whether some stats or latches show a very big difference.
    Gints Plivna
    http://www.gplivna.eu
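    Following up on the stats suggestion: on 10g, dictionary statistics can be gathered as a DBA with the one-liner below; whether it changes this particular recursive plan is untested.

        EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;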

  • ORA-00001: unique constraint: How to discard the insert and print error

    Hi: I have a table with a constraint on a single field. The application that is inserting into this table is a multithreaded application. Sometimes two inserts could come with the same value for this field. Is it possible in oracle to configure it to ignore the request which causes this error instead of throwing this error back to the application?
    Thanks
    Ravi

    What type of application are you developing in which it's OK to ignore a user's request?
    As a user I'd be a little upset if I submitted a request and the proper response was suppressed; I'd go about my day assuming the application did what I told it to do... never knowing that it had just decided to ignore what I'd asked it to do.
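    That said, if the business decision really is to silently drop duplicates, the standard pattern for a single-row insert is to catch the exception in PL/SQL (hypothetical names):

        BEGIN
          INSERT INTO my_table (my_key_col) VALUES (:new_value);
        EXCEPTION
          WHEN DUP_VAL_ON_INDEX THEN
            NULL;  -- deliberately ignore the duplicate
        END;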

  • Peculiar problem in oracle 10g  on AIX 5.3.0 With Check constraints

    Hi everyone,
    I am facing a peculiar problem in Oracle 10.2.0.1.0 on AIX 5.3.0. I created a table with a check constraint like this:
    create table test1 (name nvarchar2(1), check (name in ('Y','N')));
    SQL> create table test1 (name nvarchar2(1), check (name in ('Y','N')));
    Table created.
    SQL> insert into test1 values ('Y');
    1 row created.
    SQL> COMMIT;
    SQL> select * from test1 where name = 'Y';    -- why isn't this statement working?
    no rows selected
    SQL> select * from test1;
    N
    Y
    Another interesting one:
    SQL> select * from test1 where name in ('Y');    -- why isn't this working?
    no rows selected
    SQL> select * from test1 where name in ('Y','Y');    -- it's working
    N
    Y
    SQL> select * from test1 where name in ('','Y');    -- it's working
    N
    Y
    SQL> select * from test1 where name in ('7','Y');    -- it's working
    N
    Y
    Like:
    SQL> select * from test1 where name like 'Y';    -- it's not working
    no rows selected
    I created a table without check constraints
    SQL> create table test2 (name nvarchar2(1));
    Table created.
    SQL> insert into test2 values ('Y');
    1 row created.
    SQL> select * from test2;
    N
    Y
    SQL> select * from test2 where name = 'Y';    -- it's working
    N
    Y
    SQL> select * from test2 where name like 'Y';    -- it's working
    N
    Y
    Database Details
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET WE8MSWIN1252
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    PARAMETER VALUE
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_RDBMS_VERSION 10.2.0.1.0
    Why is this happening? Is the check constraint interfering with the equality, LIKE, and IN operators?
    Wherever we use a single-character column with a check constraint, equality, LIKE, and IN do not work.
    IT WORKS FINE WITHOUT CHECK CONSTRAINTS. WE HAVE TWO AIX MACHINES WITH ORACLE 10G, AND THE SAME PROBLEM OCCURS ON BOTH.
    PLEASE HELP ME.
    THANK YOU,
    WITH REGARDS,
    N.VINODH

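    One thing worth testing here, as a guess rather than a confirmed fix: since the column is NVARCHAR2, compare against national character literals so that no implicit conversion between the database and national character sets is involved:

        SQL> select * from test1 where name = N'Y';
        SQL> select * from test1 where name in (N'Y');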

  • Regexp_like with check constraint

    Hi all,
    My requirement is that the user should not enter data like (½, ¼, ...).
    I have created a table with a check constraint using the following syntax:
    CREATE TABLE mytest (c1 VARCHAR2(20),
    CHECK (REGEXP_LIKE(c1,'^[[:alnum:]+[:digit:]+[!@#]]+$')));
    The intent is that alphanumerics, digits, and keyboard characters should be allowed and everything else rejected, but as written it does not allow any value at all.
    Please help me; is there a mistake in the above syntax?
    Thanks
    Mano

    Hi, Mano,
    user533671 wrote:
    > My requirement is that the user should not enter data like (½, ¼, ...).
    > I have created a table with a check constraint using the following syntax:
    That's a couple of examples of values that are not allowed. Now give some examples of values that are allowed, and the reasons why each is allowed or not.
    For example, is 'GREEN/BLUE', with a slash between letters, allowed? How about '1/' with no number after the slash?
    > CREATE TABLE mytest (c1 VARCHAR2(20),
    > CHECK (REGEXP_LIKE(c1,'^[[:alnum:]+[:digit:]+[!@#]]+$')));
    You're using square brackets inside square brackets. If you really need to do that, then the right ']' must be the first character listed.
    Post 5 or 10 INSERT statements. Tell which INSERTs should work and which ones should fail because of the CHECK constraint, and explain why in each case.
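    For what it's worth, if the intended rule is "only alphanumerics plus the keyboard characters ! @ #", the bracket expression would be written at a single level, something like the sketch below ([:alnum:] already covers digits, so no separate [:digit:] is needed):

        CREATE TABLE mytest (
          c1 VARCHAR2(20),
          CONSTRAINT mytest_c1_chk CHECK (REGEXP_LIKE(c1, '^[[:alnum:]!@#]+$'))
        );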

Maybe you are looking for

  • My apple tv remote play button is not working so i can not access anything...what can i do?

    My remote will not work on Apple TV. The play button does not play so I can not access any content. What can I do?

  • No Microsoft Dynamics CRM User Exists

    Hi We are in the process of deploying UPK 3.5.1 to a number of users in the business. Once they have entered the library location in the Profile Wizard and click next they receive the following Server Error message. "No Microsoft Dynamics CRM User ex

  • Avid Export for FCP X

    What is the best method (or a few methods) to export Avid News Cutter video files for FCP X?

  • Merge PDF

    Does the Preview app in Tiger have the ability to merge PDF files by just dragging&dropping from one to the other. Or is this only in Leopard?

  • Handling contraint violations in a user friendly way

    How do you trap constraint violation errors is a user friendly manner ... I've seen a number of related posts on the forum but have not yet seen a solution that people are happy with. Ideally, Id like to: - avoid displaying the 'raw' oracle error to