Problem on Unique Index

Hi!
Please help me in solving this issue; I am stuck with an insertion query.
My table and its unique index are defined as follows:
create table STG_PLNT_PUR_ORD (
SOURCE_SYSTEM VARCHAR2(30) not null,
TYPE_LOOKUP_CODE VARCHAR2(25),
PO_NUM VARCHAR2(20) not null,
LINE_NUM NUMBER not null,
RELEASE_NUM NUMBER,
SHIPMENT_NUM NUMBER not null,
DISTRIBUTION_NUM NUMBER,
AUTHORIZATION_STATUS VARCHAR2(25),
SUPPLIER_CODE VARCHAR2(30),
SUPPLIER_SITE_ID VARCHAR2(15),
PART_NO VARCHAR2(40),
UNIT_MEAS_LOOKUP_CODE VARCHAR2(25),
QTY_ORDERED NUMBER,
DUE_DATE DATE,
QUANTITY_RECEIVED NUMBER,
QUANTITY_ACCEPTED NUMBER,
QUANTITY_REJECTED NUMBER,
QTY_UI NUMBER,
QUANTITY_CANCELLED NUMBER,
QUANTITY_DELIVERED NUMBER,
QTY_DUE NUMBER,
QUANTITY_BILLED NUMBER,
UNIT_PRICE NUMBER,
CURRENCY VARCHAR2(15),
MIN_ORDER_QUANTITY NUMBER,
PO_AMT NUMBER,
CREATE_DATE DATE,
CREATED_BY VARCHAR2(30),
UPDATE_DATE DATE,
UPDATED_BY VARCHAR2(30),
LAST_UPDATE_DATE DATE,
FULL_NAME VARCHAR2(240),
PLANNER_CODE VARCHAR2(10)
)
tablespace DEV_LM4M_TAB
pctfree 10
pctused 40
initrans 1
maxtrans 255
storage (
initial 4M
next 4M
minextents 1
maxextents unlimited
pctincrease 0
);
-- Create/Recreate indexes
create unique index IDX1_STG_PLNT_PUR_ORD on STG_PLNT_PUR_ORD (SOURCE_SYSTEM,PO_NUM,LINE_NUM,RELEASE_NUM,SHIPMENT_NUM,DISTRIBUTION_NUM)
tablespace DEV_LM4M_TAB
pctfree 10
initrans 2
maxtrans 255
storage (
initial 4M
next 4M
minextents 1
maxextents unlimited
pctincrease 0
);
This is the SELECT query from which I am inserting values into the stg_plnt_pur_ord table, wrapped in a duplicate check on the key columns:
select count(*),
PO_NUM,
SHIPMENT_NUMBER,
LINE_NUMBER,
RELEASE_NUM,
DISTRIBUTION_NUM
from (SELECT /*+ USE HASH */
PH.type_lookup_code "TYPE_LOOKUP_CODE",
nvl(PH.segment1,0) "PO_NUM",
nvl(PLL.shipment_num,0) "SHIPMENT_NUMBER",
nvl(PL.line_num,0) "LINE_NUMBER",
nvl(PLL.po_release_id,0) "RELEASE_NUM",
nvl(PD.distribution_num ,0)"DISTRIBUTION_NUM",
PH.authorization_status "AUTHORIZATION_STATUS",
PL.min_order_quantity "MINIMUM_QUANTITY",
PV.segment1 "SUPPLIER_CODE",
PVSA.vendor_site_id "SUPPLIER_SITE_ID",
MS.segment1 "PART_NO",
PL.unit_meas_lookup_code "UNIT_MEAS_LKUP_CODE",
PPF.full_name "FULL_NAME",
MS.planner_code "PLANNER_CODE",
PLL.quantity "QUANTITY_ORDERED",
DECODE(PLL.need_by_date, NULL, PLL.promised_date, PLL.need_by_date) "DUE_DATE",
PLL.quantity_received "QUANTITY_RECIEVED",
PLL.quantity_accepted "QUANTITY_ACCEPTED",
PLL.quantity_rejected "QUANTITY_REJECTED",
(PLL.quantity_received - PD.quantity_delivered) "QUANTITY_UI",
PLL.quantity_cancelled "QUANTITY_CANCELLED",
PD.quantity_delivered "QUANTITY_DELIVERED",
(PLL.quantity - PLL.quantity_received - PLL.quantity_cancelled) "QUANTITY_DUE",
PLL.quantity_billed "QUANTITY_BILLED",
PL.unit_price "UNIT_PRICE",
PH.currency_code "CURRENCY",
(((PLL.quantity - PLL.quantity_cancelled) * PL.unit_price) *
DECODE(PH.rate, NULL, 1, PH.rate)) "PO_AMT",
PV.last_update_date "LAST_UPDATE_DATE"
FROM po_headers_all PH,
po_lines_all PL,
po_vendors PV,
mtl_system_items MS,
po_line_locations_all PLL,
po_distributions_all PD,
po_vendor_sites_all PVSA,
per_all_people_f PPF --,
-- po_releases_all PR
WHERE PH.type_lookup_code IN ('BLANKET', 'STANDARD') AND
PH.po_header_id = PL.po_header_id AND
PV.vendor_id = PH.vendor_id AND
PVSA.vendor_site_id = PH.vendor_site_id AND
PL.item_id = MS.inventory_item_id AND
PPF.person_id(+) = PH.agent_id AND
PPF.object_version_number =
(SELECT MAX(object_version_number)
FROM per_all_people_f PPF2
WHERE PPF.person_id = PPF2.person_id
GROUP BY person_id) AND PH.po_header_id = PLL.po_header_id AND
PL.po_line_id = PLL.po_line_id
--AND PR.po_release_id(+)=PLL.po_release_id
AND PH.po_header_id = PD.po_header_id AND
PL.po_line_id = PD.po_line_id AND
PLL.line_location_id = PD.line_location_id AND
NVL(PLL.closed_code, 'x') NOT LIKE 'FINALLY%' AND
NVL(PL.closed_code, 'x') NOT LIKE 'FINALLY%' AND
NVL(PH.closed_code, 'x') NOT LIKE 'FINALLY%' AND
NVL(PH.cancel_flag, 'x') != 'Y' AND
PH.ship_to_location_id IN (1, 134) AND MS.organization_id = 1)
group by PO_NUM,
SHIPMENT_NUMBER,
LINE_NUMBER,
RELEASE_NUM,
DISTRIBUTION_NUM
having count(*) > 1;
After executing this query I get 0 rows returned, but while inserting into the table I get a "unique constraint violated" error. I don't understand what the problem is.
Please help me.

The unique index IDX1_STG_PLNT_PUR_ORD on STG_PLNT_PUR_ORD (SOURCE_SYSTEM, PO_NUM, LINE_NUM, RELEASE_NUM, SHIPMENT_NUM, DISTRIBUTION_NUM) tells you that you are trying to insert a row whose values for those six columns already exist in STG_PLNT_PUR_ORD.
Also note that your duplicate check only looks for duplicates within the source query itself; it does not check against rows already present in STG_PLNT_PUR_ORD from an earlier load.
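A quick way to see the clashing row is to query the staging table with the key values the insert below is about to use (a sketch of mine, not from the original thread; substitute the actual l_... values for the bind placeholders):

   SELECT *
   FROM   stg_plnt_pur_ord
   WHERE  source_system    = :l_source
   AND    po_num           = :l_po_num
   AND    line_num         = :l_line_num
   AND    release_num      = :l_release_num
   AND    shipment_num     = :l_shipment_num
   AND    distribution_num = :l_distribution_num;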
INSERT INTO stg_plnt_pur_ord
(source_system,                    <---
  type_lookup_code,
  po_num,                           <---
  line_num,                         <---
  release_num,                      <---
  shipment_num,                     <---
  distribution_num,                 <---
  authorization_status,
  supplier_code,
  supplier_site_id,
  part_no,
  unit_meas_lookup_code,
  qty_ordered,
  due_date,
  quantity_received,
  quantity_accepted,
  quantity_rejected,
  qty_ui,
  quantity_cancelled,
  quantity_delivered,
  qty_due,
  quantity_billed,
  unit_price,
  currency,
  min_order_quantity,
  po_amt,
  create_date,
  created_by,
  update_date,
  updated_by,
  last_update_date,
  full_name,
  planner_code)
VALUES
(l_source,                         <---
  l_type_lookup_code(i),
  l_po_num(i),                      <---
  l_line_num(i),                    <---
  l_release_num(i),                 <---
  l_shipment_num(i),                <---
  l_distribution_num(i),            <---
  l_authorization_status(i),
  l_supplier_code(i),
  l_supplier_site_id(i),
  l_part_no(i),
  l_unit_meas_lookup_code(i),
  l_qty_ordered(i),
  l_due_date(i),
  l_quantity_received(i),
  l_quantity_accepted(i),
  l_quantity_rejected(i),
  l_qty_ui(i),
  l_quantity_cancelled(i),
  l_quantity_delivered(i),
  l_qty_due(i),
  l_quantity_billed(i),
  l_unit_price(i),
  l_currency(i),
  l_min_order_quantity(i),
  l_po_amt(i),
  SYSDATE,
  USER,
  SYSDATE,
  USER,
  l_last_update_date(i),
  l_full_name(i),
l_planner_code(i));

Why not try to capture the rows that violate the constraint? Something like:
   Begin
     Insert Into STG_PLNT_PUR_ORD ( /* same column list as above */ )
     Values ( /* same values as above */ );
   Exception
     When Others Then
       dbms_output.put_line('l_source: '||l_source);
       dbms_output.put_line('l_po_num: '||l_po_num(i));
       dbms_output.put_line('l_line_num: '||to_char(l_line_num(i)));
       dbms_output.put_line('l_release_num: '||to_char(l_release_num(i)));
       dbms_output.put_line('l_shipment_num: '||to_char(l_shipment_num(i)));
       dbms_output.put_line('l_distribution_num: '||to_char(l_distribution_num(i)));
   End;

Or create another table to log the rows in error:
   Begin
     Insert Into STG_PLNT_PUR_ORD ( /* same column list as above */ )
     Values ( /* same values as above */ );
   Exception
     When Others Then
       insert into log_error
       values
       (l_source,
        l_po_num(i),
        l_line_num(i),
        l_release_num(i),
        l_shipment_num(i),
        l_distribution_num(i));
   End;
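Alternatively (a sketch of my own, assuming Oracle 10g Release 2 or later where DML error logging is available; the 'PO load' tag is just an illustration and the column/value lists are the same as in the INSERT above), the database can capture the rejected rows for you:

   -- One-off setup: creates an ERR$_STG_PLNT_PUR_ORD table next to the staging table
   Begin
     dbms_errlog.create_error_log(dml_table_name => 'STG_PLNT_PUR_ORD');
   End;
   /

   -- Then add a LOG ERRORS clause to the existing insert; rows that hit the
   -- unique index go to the error table instead of failing the whole load.
   INSERT INTO stg_plnt_pur_ord ( /* same column list as above */ )
   VALUES ( /* same values as above */ )
   LOG ERRORS INTO err$_stg_plnt_pur_ord ('PO load') REJECT LIMIT UNLIMITED;

   -- Afterwards, inspect what was rejected and why
   SELECT ora_err_number$, ora_err_mesg$, po_num, line_num, shipment_num, distribution_num
   FROM   err$_stg_plnt_pur_ord;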

Similar Messages

  • Access path difference between Primary Key and Unique Index

    Hi All,
    Is there any specific way in which the Oracle optimizer treats a primary key and a unique index differently?
    Oracle Version
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL> Sample test data for Normal Index
    SQL> create table t_test_tab(col1 number, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab start with 1 increment by 1 ;
    Sequence created.
    SQL>  insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
            259  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> create index idx_t_test_tab on t_test_tab(col1);
    Index created.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> Sample test data when using Primary Key
    SQL> create table t_test_tab1(col1 number, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab1 start with 1 increment by 1 ;
    Sequence created.
    SQL> insert into t_test_tab1 select seq_t_test_tab1.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab1;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1727568366
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |             | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB1 | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> alter table t_test_tab1 add constraint pk_t_test_tab1 primary key (col1);
    Table altered.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab1;
    99999 rows selected.
    Execution Plan
    Plan hash value: 2995826579
    | Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |                | 99999 |   488K|    59   (2)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| PK_T_TEST_TAB1 | 99999 |   488K|    59   (2)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6867  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> You can see here that even though statistics were gathered:
         * In the 1st table, T_TEST_TAB, the optimizer still uses a FULL table scan after creation of the index.
         * In the 2nd table, T_TEST_TAB1, the PRIMARY KEY index is used as expected.
    Any comments??
    Regards,
    BPat

    Thanks.
    Yes, I ignored the NOT NULL part: since COL1 was nullable, rows with a NULL COL1 would be missing from the index, so the optimizer could not replace the full table scan with an index scan. I did a test with a NOT NULL column and now it is working as expected:
    SQL>  create table t_test_tab(col1 number not null, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab start with 1 increment by 1 ;
    Sequence created.
    SQL> insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL>  exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL>  set autotrace traceonly
    SQL>  select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6912  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL>  create index idx_t_test_tab on t_test_tab(col1);
    Index created.
    SQL>  exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL>  select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 4115006285
    | Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |                | 99999 |   488K|    63   (2)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| IDX_T_TEST_TAB | 99999 |   488K|    63   (2)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6881  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL>

  • Difference between Unique key and Unique index

    Hi All,
    I've got confused in the difference between unique index & unique key in a table.
    When we create a unique index on a table, it is created simply as a unique index.
    On the other hand, if we create a unique key/constraint on the table, Oracle also creates an index for it, so I can find an object with the same name in all_constraints as well as in all_indexes.
    My question is: if an index is automatically created during the creation of a unique key/constraint, why create a unique key and end up with two objects, when we could create only one object, i.e. a unique index?
    Thanks
    Deepak

    This is only my understanding and is not according to any documentation, but it is as follows.
    A unique key (constraint) needs a unique index to enforce itself.
    Developers and users can make any constraint (unique key, primary key, foreign key, not null, ...) enabled, disabled or deferrable; a unique key can be enabled, disabled or deferred.
    The index, on the other hand, originally exists for performance; a unique index by itself does not carry the concept of a constraint. An index (non-unique or unique) can be rebuilt, made unusable, dropped, etc., but an index cannot be made deferrable.
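    To make the distinction concrete, here is a small sketch of my own (table and constraint names are made up): only the constraint, not a bare unique index, can be disabled or made deferrable, even though both end up enforcing uniqueness through an index.

    create table demo_uk (id number, code varchar2(10));

    -- Just a physical structure: nothing to enable, disable or defer
    create unique index demo_uk_idx on demo_uk (code);
    drop index demo_uk_idx;

    -- A unique constraint: Oracle creates (or reuses) an index to enforce it,
    -- and the rule itself can be deferred or disabled
    alter table demo_uk add constraint demo_uk_uq unique (code)
      deferrable initially immediate;
    alter table demo_uk disable constraint demo_uk_uq;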

  • Difference between unique constraint and unique index

    1. What is the difference between a unique constraint and a unique index, given that a unique constraint is always indexed? Which one is better for performance in this case?
    2. Is a composite index on the 3 columns x, y, z better,
    or are independent/separate indexes on the 3 columns x, y, z better for performance?
    3. It has been very confusing for me to decide which columns to index. I have indexed most foreign key columns; is that a good idea? We do a lot of SELECTs and DML on most of our tables. Is there any query I can run to find out whether the indexes are really being used and whether they are improving performance? I have analyzed and computed my indexes using ANALYZE INDEX index_name VALIDATE STRUCTURE and COMPUTE STATISTICS;

    1. A unique index is part of a unique constraint. Of course you can create a standalone unique index, but there is little point in skipping the logical/business view when it takes the same effort: you create the unique constraint and Oracle creates the unique index for you, and you may specify the index characteristics in the unique constraint.
    2. It depends. You cannot use a composite index when the search condition does not cover the whole indexing key or its leading part; for example, you cannot use the index if you query the table for y = 2 alone (see the small sketch below).
    3. As the old saying in the database arena goes, an index may be good or bad for a table depending on the size of the table, the number of columns in the table, etc. It is very environment-dependent and is, in fact, part of database design/normalization. Statistics are what Oracle uses to determine the execution plan.
    Steve
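    A small sketch of my own illustrating point 2 (table and index names are made up):

    create table t (x number, y number, z number);
    create index t_xyz_idx on t (x, y, z);

    -- Can use the composite index: the predicates cover the leading column(s)
    select * from t where x = 1;
    select * from t where x = 1 and y = 2;

    -- Normally cannot use it for a range scan: y is not the leading column
    -- (recent optimizers may still try an index skip scan)
    select * from t where y = 2;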

  • Difference Between Unique Index vs Unique Constraint

    Can you tell me
    what is the Difference Between Unique Index vs Unique Constraint.
    and
    Difference Between Unique Index and bitmap index.
    Edited by: Nilesh Hole,Pune, India on Aug 22, 2009 10:33 AM

    Nilesh Hole,Pune, India wrote:
    Can you tell me
    what is the Difference Between Unique Index vs Unique Constraint.
    http://www.jlcomp.demon.co.uk/faq/uk_idx_con.html
    and
    Difference Between Unique Index and bitmap index.
    The documentation is your friend:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#CNCPT1157
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm#sthref1008
    Regards,
    Rob.

  • Unique index vs non-unique index

    Hi Gurus,
    I'm getting lots of "TABLE ACCESS FULL" in some queries on columns which have non-unique indexes. So my question is: does the optimizer not pick up non-unique indexes, and only pick up unique indexes for those columns?
    Thanks
    Amitava.

    amitavachatterjee1975 wrote:
    Hi Gurus,
    I'm getting lots of "TABLE ACCESS FULL" in some queries on columns which have non-unique indexes. So my question is: does the optimizer not pick up non-unique indexes, and only pick up unique indexes for those columns?
    Thanks
    Amitava.
    WHY MY INDEX IS NOT BEING USED:
    http://communities.bmc.com/communities/docs/DOC-10031
    http://searchoracle.techtarget.com/tip/Why-isn-t-my-index-getting-used
    http://www.orafaq.com/tuningguide/not%20using%20index.html
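    As a quick illustration of the most common cause covered in those links (a sketch of my own, not taken from the articles): a function wrapped around the indexed column hides the non-unique index from the optimizer unless a function-based index exists, and a predicate that matches most of the table makes a full scan cheaper anyway.

    create table orders (order_no number, status varchar2(10));
    create index orders_status_idx on orders (status);

    -- The function around the column hides the plain index from the optimizer
    select * from orders where upper(status) = 'OPEN';

    -- A function-based index (or a rewritten predicate) makes it usable again
    create index orders_status_fidx on orders (upper(status));
    select * from orders where upper(status) = 'OPEN';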

  • Unique Index vs. Unique Constraint

    Hi All,
    I'm studying for the Oracle SQL Expert Certification. At one point in the book, while talking about indexes, the author says that a unique index is not the same as a unique constraint. However, he doesn't explain why they are two different things.
    Could anyone clarify the difference between the two, please?
    Thanks a lot,
    Valerio

    A constraint has a different meaning to an index. It gives the optimiser more information and allows you to have foreign keys referencing the column, whereas a unique index alone doesn't.
    eg:
    SQL> create table t1 (col1 number, col2 varchar2(20), constraint t1_uq unique (col1));
    Table created.
    SQL> create table t2 (col1 number, col2 varchar2(20));
    Table created.
    SQL> create unique index t2_idx on t2 (col1);
    Index created.
    SQL> create table t3 (col1 number, col2 number, col3 varchar2(20), constraint t3_fk
      2                   foreign key (col2) references t1 (col1));
    Table created.
    SQL> create table t4 (col1 number, col2 number, col3 varchar2(20), constraint t4_fk
      2                   foreign key (col2) references t2 (col1));
                     foreign key (col2) references t2 (col1))
    ERROR at line 2:
    ORA-02270: no matching unique or primary key for this column-list
    It's like saying "What's the difference between a car seat and an armchair? They both allow you to sit down!"

  • Can we change the fields of database unique index in a customised table?

    Hi all..
    I want to know whether we can create, change or delete the database unique index of a customised table.
    In my case, there is a customised table with 4 primary key fields, with all the records maintained through transaction code SM30.
    There is a database unique index maintained for this table which has 2 fields. These 2 fields are among the 4 primary key fields of the table. I hope I have made myself clear!
    Now when I try to insert a record into the table it gives me a short dump (it says duplication of records is not allowed).
    The reason is that the new record I am trying to insert has the same values in the 2 fields covered by the unique index as an already existing record, while the other two fields are different, so overall the combination of the 4 primary key fields is different.
    Please tell me how I should proceed now.
    I also tried to change the unique index, but it asks for some kind of authorization ("You are not authorized to make changes (authorization object S_DEVELOP)"). I am also not sure whether changing the unique index is feasible.
    Thanks.

    Hi,
    I don't think you will be able to keep the index unique without using all the key fields, so include all the primary key fields in the index field selection and then create the index; otherwise duplication of keys can occur. If you cannot include all the primary key fields, go for a non-unique index instead, where you add the client field and any other fields you wish.
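    In plain SQL terms (my own illustration with made-up names, not ABAP), the behaviour described above looks like this: a unique index over only 2 of the 4 key fields rejects a row that differs only in the other 2 key fields.

    create table ztab (k1 number, k2 number, k3 number, k4 number,
                       constraint ztab_pk primary key (k1, k2, k3, k4));
    create unique index ztab_uix on ztab (k1, k2);

    insert into ztab values (1, 1, 1, 1);   -- succeeds
    insert into ztab values (1, 1, 2, 2);   -- fails: (k1, k2) = (1, 1) already exists,
                                            -- even though the full 4-column key is new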

  • Error while creating the Unique Index of the Primary Key of an Item

    Hi all,
    I have deployed a new item (CO_CONTRACTUNIT_PRODUCT) in my publication. The deploy appears to be successfull as the item can be seen in the repository through the Mobile Manager.
    The problem occurs when i sync my local DB to get the item offline. While synchronizing, the following error appears, both in the sync window and the log file ol_sync.log.
    "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI"
    However, the debug file gives this other error regarding this table.
    ALL_INDEX:CREATE UNIQUE INDEX "TPCO_CONTRACTUNIT_PRODUCT_PK" ON CO_CONTRACTUNIT_PRODUCT (CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID) -5130Error at C:\ADE\omeprod_ol103021\olite\db\build\win\ocapi\..\..\..\src\ocapi\allindexes.cpp line:329 rc:-5130
    Build date Mar 29 2010
    okErr=(table or view %s.%s not found)
    mess=(CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID)
    AddLog(-5130 "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI")
    But the index that is being created and is giving the error is the index created automatically with the Primary Key of the table, and so nothing has been modified in that.
    The primary key of the table is created with the three columns that are part of the index that is returning the error.
    As I could not solve the error, I tried to drop and re-create the item in the repository, but no luck. As a last option, I tried to remove the item from the repository to be able to sync properly again (just as before creating the item), but the error still happens.
    Another weird point is that I have tried creating the item in another publication of another database (with almost equal items), and the item was downloaded to my local DB without any problem, which makes this problem even more bizarre.
    What can it be?
    Any help would be great!
    Roshni

    Have you tried uninstalling the client and reinstalling it?
    Schema evolution changes are not always handled correctly; please check this thread:
    Modification of publication item into Mobile Server
    I quote from rekounas' instructions:
    If you are just adding a field, you should only have to run the alter publication item API call.
    Here is an old note on schema evolution on different scenarios. The names for the APIs that they use are now deprecated. Use the ConsolidatorManager class and call the method alterPublicationItem("PUBLICATION_ITEM_NAME", "SELECT STMT") :
    A) Add column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Change the Oracle8i/9i database schema (add column)
    4. Create a Java program to call the Consolidator Admin API AlterPublicationItem()
    5. Start Mobile Server
    6. Execute a sync from the client
    7. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    B) Drop column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Delete column of the base table in the Oracle database schema
    4. Create a Java program to call the Consolidator Admin API DropPublicationItem()
    5. Create a Java program to call the Consolidator Admin API CreatePublicationItem() and AddPublicationItem().
    6. Start Mobile Server
    7. Execute a sync from the client
    8. The dropped column should no longer be seen on the client. Use MSQL to check snapshot definitions.
    C) Change column datatype
    Changing datatypes in a replicated system is not an easy task. You have to follow certain procedures in order to make it work. Use the DropPublicationItem, CreatePublicationItem and AddPublicationItem methods from the Consolidator Admin API. You must stop/start the Mobile Server listener to refresh the cache.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop/create column (do not use conversion procedures) at the base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Call CreatePublicationItem() and AddPublicationItem(). Check if the ErrorQueue and InQueue reflect the new column datatype
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. This should drop the old snapshot and recreate the new snapshot. Use MSQL to check
    snapshot definitions.
    D) Drop table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should drop the old snapshot. Use MSQL to check snapshot definitions.
    E) Add table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Add new base table
    4. Call CreatePublicationItem() and AddPublicationItem() method
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should add the new snapshot. Use MSQL to check snapshot definitions.
    F) Changing Primary Keys
    Changing a PK is a severe operation which must be executed manually. A snapshot must be deleted and recreated to propagate the changes to the clients. This causes a full refresh on this snapshot.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop the snapshot using the DropPublicationItem() method
    4. Alter the base table
    5. Call CreatePublicationItem()and AddPublicationItem() methods for the altered table
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. The old snapshot will be replaced by the new snapshot using a full refresh. Use MSQL to check snapshot definitions.
    G) To Change a Table Weight =>
    Follow the procedure below to change the Table Weight parameter. The table weight is used by the Mobile Server/Synchronization to determine the sequence in which client records are applied to the Oracle database.
    1. Run MGP to apply any changes in the InQueue to the Oracle database
    2. Change table weight using SetTemplateItemMetadata() method
    3. Add/change the constraint on the base table which reflects the change in table weight
    4. Synchronize
    gl m8

  • Unique Index Error while running the ETL process

    Hi,
    I have installed Oracle BI Applications 7.9.4 and Informatica PowerCenter 7.1.4. I have done all the configuration steps as specified in the Oracle BI Applications Installation and Configuration Guide. While running the ETL process from DAC for Execution Plan 'Human Resources Oracle 11.5.10', some tasks go to status Failed.
    When I checked the log files for these tasks, I found the following error
    ANOMALY INFO::: Error while executing : CREATE INDEX:W_PAYROLL_F_ASSG_TMP:W_PRL_F_ASG_TMP_U1
    MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
    W_PRL_F_ASG_TMP_U1
    ON
    W_PAYROLL_F_ASSG_TMP
    INTEGRATION_ID ASC
    ,DATASOURCE_NUM_ID ASC
    ,EFFECTIVE_FROM_DT ASC
    NOLOGGING PARALLEL
    with error java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    EXCEPTION CLASS::: java.lang.Exception
    I found some duplicate rows in the table W_PAYROLL_F_ASSG_TMP for the combination of columns on which it is trying to create the index. Can anyone give me information on the following?
    1. Why is it trying to create a unique index on a combination of columns which may not be unique?
    2. Is it a problem with the data in the source database (i.e. because of duplicate rows in the source system)?
    How do we fix this error? Do we need to delete the duplicate rows from the table in the data warehouse manually and re-run the ETL process, or is there another way to fix the problem?

    This query will identify the duplicate in the Warehouse table preventing the Index from being built:
    select count(*), integration_id, src_eff_from_dt from w_employee_ds group by integration_id, src_eff_from_dt having count(*)>1;
    To get the ETL to finish issue this delete to the W_EMPLOYEE_DS table:
    delete from w_employee_ds where integration_id = '2' and src_eff_from_dt ='04-JAN-91';
    To fix it so this does not happen again on another load you need to find the record in the Vision DB, it is in the PER_ALL_PEOPLE_F table. I have a Vision source and this worked:
    select rowid, person_id , LAST_NAME FROM PER_ALL_PEOPLE_F
    where EFFECTIVE_START_DATE = '04-JAN-91';
    ROWID                PERSON_ID  LAST_NAME
    AAAWXJAAMAAAwl/AAL        6272  Kang
    AAAWXJAAMAAAwmAAAI        6272  Kang
    AAAWXJAAMAAAwmAAA4        6307  Lee
    delete from PER_ALL_PEOPLE_F
    where ROWID = 'AAAWXJAAMAAAwl/AAL';

  • UNIQUE INDEX and PRIMARY KEYS

    Hi Friends,
    I am confused about primary keys.
    What is the purpose of this key again? I know it is used for unique constraints.
    Supposing I have a table with two (2) columns which are each indexed as unique.
    Then they can both be candidates for the primary key, right?
    So why do I need a primary key when I have 2 columns which are uniquely indexed?
    Thanks a lot

    A UNIQUE index creates a constraint such that all values in the index must be distinct. An error occurs if you try to add a new row with a key value that matches an existing row. This constraint does not apply to NULL values except for the BDB storage engine. For other engines, a UNIQUE index allows multiple NULL values for columns that can contain NULL
    The differences between the two are:
    1. Column(s) that make the Primary Key of a table cannot be NULL since by definition; the Primary Key cannot be NULL since it helps uniquely identify the record in the table. The column(s) that make up the unique index can be nullable. A note worth mentioning over here is that different RDBMS treat this differently –> while SQL Server and DB2 do not allow more than one NULL value in a unique index column, Oracle allows multiple NULL values. That is one of the things to look out for when designing/developing/porting applications across RDBMS.
    2. There can be only one Primary Key defined on the table where as you can have many unique indexes defined on the table (if needed).
    3. Also, in the case of SQL Server, if you go with the default options then a Primary Key is created as a clustered index while the unique index (constraint) is created as a non-clustered index. This is just the default behavior though and can be changed at creation time, if needed.
    So, if the unique index is defined on not null column(s), then it is essentially the same as the Primary Key and can be treated as an alternate key meaning it can also serve the purpose of identifying a record uniquely in the table.
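    A small sketch of my own illustrating the NULL-handling point above for Oracle (names are made up): a unique index accepts any number of rows whose indexed column is entirely NULL, while duplicate non-NULL values are rejected; a primary key column could never be NULL at all.

    create table uk_demo (a number);
    create unique index uk_demo_idx on uk_demo (a);

    insert into uk_demo values (NULL);   -- succeeds
    insert into uk_demo values (NULL);   -- also succeeds: all-NULL keys are not stored in the index
    insert into uk_demo values (1);      -- succeeds
    insert into uk_demo values (1);      -- fails with ORA-00001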

  • Unique Index Created Automatically on FK Columns

    Referencing the following document under Forward Engineering, http://www.oracle.com/technetwork/developer-tools/datamodeler/newfeatures30-224506.html
    Unique index is created for One-to-one relationships defined in the Logical model if there is no PK/UK constraint or unique index defined over foreign key columns.
    Why are unique indexes created automatically for foreign key columns? Should "lookups" not be defined in the logical model? I don't understand why it's designed to behave that way.

    Why are unique indexes created automatically for foreign key columns?
    They are created automatically only for 1:1 relationships, in order to support that business rule.
    Should "lookups" not be defined in the logical model?
    Can you elaborate on that?
    Philip
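    In DDL terms the generated index looks roughly like this (a sketch of my own; table and index names are invented): the unique index on the foreign key column is what stops two child rows from pointing at the same parent, which is exactly the 1:1 business rule.

    create table person   (person_id   number primary key);
    create table passport (passport_id number primary key,
                           person_id   number references person (person_id));

    -- Generated because no PK/UK constraint or unique index already covers the FK column
    create unique index passport_person_uix on passport (person_id);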

  • BizTalk 2006 Event Log Warnings - Cannot insert duplicate key row in object 'dta_MessageFieldValues' with unique index 'IX_MessageFieldValues'.

    We have been seeing the following 'warnings' in the event log of our BizTalk machine since upgrading to BTS 2006. They seem to occur randomly 6 or 8 times per day.
    Does anyone know what this means and what needs to be done to clear it up? we have only one BizTalk server which is running on only one machine.
    I am new to BizTalk, so I am unable to find how many tracking host instances running for BizTalk server. Also, can you please let me know that we can configure only one instance for one server/machine?
    Source: BAM EventBus Service
    Event: 5
    Warning Details: Execute batch error. Exception information: TDDS failed to batch execution of streams. SQLServer: bizprod, Database: BizTalkDTADb.
    Cannot insert duplicate key row in object 'dta_MessageFieldValues' with unique index 'IX_MessageFieldValues'. The statement has been terminated.

    Other than ensuring that there exists a separate and single tracking host instance, you're getting an error about duplicate keys.. which implies that you're trying to Create a BAM Activity twice with the same data.
    I suggest you have a in-depth examination of the BAM (TPE or API) associated with the orchestration. In TPE ensure that the first binding you select is the "Instance Id" or "Message Id" before going ahead to map the ports or others.
    Regards.

  • The record has duplicate key values - "Include in Unique Index" checkbox missing

    I have a Silverlight client app that I'm developing in Lightswitch in VS2013 Ultimate Update 1, and have run into a problem when adding records.
    I am working against an external SQL Server 2008 database, and the table in question only has the primary key constraint, meaning that the ID field is an identity field, set as the primary key.
    When I try and add records to the table from the app, I get an error "The Shiur record has duplicate key values." This seems to happen when I add a row that has the same Speaker as an existing one. If I add one with a new Speaker then
    it inserts OK.
    Whilst searching for information on this error, I saw many comments around about there being a problem in earlier releases with the "Include in Unique Index" not working properly. This looked like my problem, except for two points...
    1) This table doesn't have anything in a unique index (as far as I know), as I never set it up to have any. You can see the details here...
    This looks to me like there is only the one index, which is the primary key one.
    2) The "Include in Unique Index" checkbox doesn't appear on my Properties panel at all. If I open the table in the designer, then click in the Speaker row, I don't see "Include in Unique Index" at all...
    Actually, I don't see "Include in Unique Index" at all, irrespective of what I click. It's just not there.
    So, I don't seem to have a unique constraint to violate, and VS doesn't show me the "Include in Unique Index" checkbox to allow me to check this, but I still get the error on insert. Anyone got any ideas what's going on here?
    Thanks
    FREE custom controls for Lightswitch! A collection of useful controls for Lightswitch developers.
    Download from the Visual Studio Gallery.
    If you're really bored, you could read about my experiments with .NET and some of Microsoft's newer technologies at
    http://dotnetwhatnot.pixata.co.uk/

    Hi Alan,
    'Include in Unique Index' is for intrinsic database entity properties.
    Ah, that explains why I can't see it then :)
    Is LightSwitch or SQL throwing the error?
    Ah, you're a genius! Well, not quite, but your question led me to the answer.
    For some odd reason, the primary key fields on my tables seem to have lost their identity setting, so when it tried to insert a new record, the ID wasn't automatically increased, so was going in as zero. This worked fine for the first insert, as there wasn't
    a record with zero ID, but failed for the second.
    Changed the ID fields to be identity, and all works fine now :)
    Thanks very much.

  • Using UNUSABLE on Unique Index gives "initially in unusable state" error

    I am doing the following:
    1. Truncate the table
    2. Alter the PK index disable
    3. Alter the unique index unusable
    4. Insert /*+ APPEND NOLOGGING */ into my table
    My problem is, I get an "ORA-26026: unique index ... initially in unusable state" error when I try to do the insert. If I make the unique index non-unique it isn't a problem. If I drop the index and recreate it, it's not a problem, so uniqueness doesn't appear to be the issue. Can you not use UNUSABLE with a unique index? I couldn't find any reference to this in the 10g docs.
    I am using 10.1.0.4.
    Thanks
    Richard

    26026, 0000, "unique index %s.%s initially in unusable state"
    //* Cause: A unique index is in IU state (a unique index cannot have
    //* index maintenance skipped via SKIP_UNUSABLE_INDEXES).
    //* Action: Either rebuild the index or index partition, or use
    //* SKIP_INDEX_MAINTENANCE if the client is SQL*Loader.
