Using UNUSABLE on Unique Index gives "initially in unusable state" error

I am doing the following:
1. Truncate table
2. Alter PK Index Disable
3. Alter Unique Index Unusable
4. Insert /*+ APPEND NOLOGGING */ into my table
My problem is, I get an "ORA-26026: unique index ... initially in unusable state" error when I try to do the insert. If I make the unique index non-unique it isn't a problem. If I drop the index and recreate it, it's not a problem - so uniqueness itself doesn't appear to be the issue. Can you not use UNUSABLE with a unique index? I couldn't find any reference to this in the 10g docs.
I am using 10.1.0.4.
Thanks
Richard

26026, 0000, "unique index %s.%s initially in unusable state"
//* Cause: A unique index is in IU state (a unique index cannot have
//* index maintenance skipped via SKIP_UNUSABLE_INDEXES).
//* Action: Either rebuild the index or index partition, or use
//* SKIP_INDEX_MAINTENANCE if the client is SQL*Loader.
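For what it's worth, the behaviour is easy to reproduce, and the immediate fix is simply to rebuild the index before the load. A minimal sketch (table and index names are mine, not from the thread):

create table t (id number);
create unique index t_ux on t (id);
alter index t_ux unusable;
insert /*+ APPEND */ into t select level from dual connect by level <= 10;
-- fails: ORA-26026 for direct path (ORA-01502 for conventional DML),
-- because unique indexes are exempt from SKIP_UNUSABLE_INDEXES
alter index t_ux rebuild;
insert /*+ APPEND */ into t select level from dual connect by level <= 10;
-- succeeds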

Similar Messages

  • Primary Key supported by a non-unique index?

    Encountered a weird situation today. A utility we set up, which allows Analysts to restore data into their tables, started failing when it attempted to drop an index. The index was supporting a Primary Key. Makes sense. But our script was supposed to attempt to drop/recreate only non-unique indexes. It turns out the supporting index on the Primary Key was non-unique, and to the best of my knowledge it came about as follows:
    SQL> create table junk (f number(1));
    Table created.
    SQL> create index junk_ix on junk(f);
    Index created.
    SQL> select UNIQUENESS from DBA_INDEXES where index_name = 'JUNK_IX';
    UNIQUENES
    NONUNIQUE
    SQL> alter table forbesc.junk add constraint junk_pk primary key (f) using index junk_ix;
    Table altered.
    SQL> select UNIQUENESS from DBA_INDEXES where index_name = 'JUNK_IX';
    UNIQUENES
    NONUNIQUE
    SQL> insert into junk values (1);
    1 row created.
    SQL> insert into junk values (1);
    insert into junk values (1)
    ERROR at line 1:
    ORA-00001: unique constraint (FORBESC.JUNK_PK) violated
    SQL> select index_name from dba_constraints where constraint_name = 'JUNK_PK';
    INDEX_NAME
    JUNK_IX
    What I can't figure out is how a non-unique index is enforcing uniqueness; I thought the index itself was the key piece in that very process. I thought that perhaps an index named something like 'SYS_123456' was getting created, but I couldn't find one:
    SQL> select object_name, object_type from dba_objects order by created desc;
    OBJECT_NAME   OBJECT_TYPE
    JUNK_IX     INDEX
    JUNK     TABLE
    ...
    How is the uniqueness getting enforced in this case? This is on Oracle 11.1.0.7.
    Thanks,
    --=Chuck

    It has always been that way. Oracle can, and will, use a non-unique index to enforce a PK constraint. The existing index just needs to have the PK column(s) as the leading column(s) of the index:
    SQL> create table t (id number, id1 number, descr varchar2(10));
    Table created.
    SQL> create index t_ids on t(id, id1);
    Index created.
    SQL> select index_name from user_indexes
      2  where table_name = 'T';
    INDEX_NAME
    T_IDS
    SQL> alter table t add constraint t_pk
      2  primary key (id);
    Table altered.
    SQL> select index_name from user_indexes
      2  where table_name = 'T';
    INDEX_NAME
    T_IDS
    SQL> insert into t values (1, 1, 'One');
    1 row created.
    SQL> insert into t values (1, 2, 'Two');
    insert into t values (1, 2, 'Two')
    ERROR at line 1:
    ORA-00001: unique constraint (OPS$ORACLE.T_PK) violated
    John
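    A related detail worth knowing (my addition, based on the documented behaviour of constraints that adopt an existing index): because T_IDS existed before the constraint, dropping the constraint leaves the index in place:
    SQL> alter table t drop constraint t_pk;
    Table altered.
    SQL> select index_name from user_indexes where table_name = 'T';
    INDEX_NAME
    T_IDS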

  • ORA-01502 error in case of unusable unique index and bulk dml

    Hi, all.
    The db is 11.2.0.3 on a linux machine.
    I made a unique index unusable, and issued DML against the table.
    However, Oracle gave me an ORA-01502 error.
    In order to avoid the ORA-01502 error, do I have to drop the unique index, do the bulk DML, and recreate the index?
    Or is there any other solution without re-creating the unique index?
    create table hoho.abcde as
    select level col1 from dual connect by level <= 1000;
    10:09:55 HOHO@PD1MGD>create unique index hoho.abcde_dx1 on hoho.abcde (col1);
    Index created.
    10:10:23 HOHO@PD1MGD>alter index hoho.abcde_dx1 unusable;
    Index altered.
    Elapsed: 00:00:00.03
    10:11:27 HOHO@PD1MGD>delete from hoho.abcde where rownum < 11;
    delete from hoho.abcde where rownum < 11
    ERROR at line 1:
    ORA-01502: index 'HOHO.ABCDE_DX1' or partition of such index is in unusable state
    Thanks in advance.
    Best Regards.

    Hi, all.
    The following is from "http://docs.oracle.com/cd/E14072_01/server.112/e10595/indexes002.htm#CIHJIDJG".
    Is there anyone who can show me a tip to avoid the following without dropping and re-creating a unique index?
    •DML statements terminate with an error if there are any unusable indexes that are used to enforce the UNIQUE constraint.
    Unusable indexes
    An unusable index is ignored by the optimizer and is not maintained by DML. One reason to make an index unusable is if you want to improve the performance of bulk loads. (Bulk loads go more quickly if the database does not need to maintain indexes when inserting rows.) Instead of dropping the index and later recreating it, which requires you to recall the exact parameters of the CREATE INDEX statement, you can make the index unusable, and then just rebuild it. You can create an index in the unusable state, or you can mark an existing index or index partition unusable. The database may mark an index unusable under certain circumstances, such as when there is a failure while building the index. When one partition of a partitioned index is marked unusable, the other partitions of the index remain valid.
    An unusable index or index partition must be rebuilt, or dropped and re-created, before it can be used. Truncating a table makes an unusable index valid.
    Beginning with Oracle Database 11g Release 2, when you make an existing index unusable, its index segment is dropped.
    The functionality of unusable indexes depends on the setting of the SKIP_UNUSABLE_INDEXES initialization parameter.
    When SKIP_UNUSABLE_INDEXES is TRUE (the default), then:
    •DML statements against the table proceed, but unusable indexes are not maintained.
    •DML statements terminate with an error if there are any unusable indexes that are used to enforce the UNIQUE constraint.
    •For non-partitioned indexes, the optimizer does not consider any unusable indexes when creating an access plan for SELECT statements. The only exception is when an index is explicitly specified with the INDEX() hint.
    •For a partitioned index where one or more of the partitions are unusable, the optimizer does not consider the index if it cannot determine at query compilation time if any of the index partitions can be pruned. This is true for both partitioned and non-partitioned tables. The only exception is when an index is explicitly specified with the INDEX() hint.
    When SKIP_UNUSABLE_INDEXES is FALSE, then:
    •If any unusable indexes or index partitions are present, any DML statements that would cause those indexes or index partitions to be updated are terminated with an error.
    •For SELECT statements, if an unusable index or unusable index partition is present but the optimizer does not choose to use it for the access plan, the statement proceeds. However, if the optimizer does choose to use the unusable index or unusable index partition, the statement terminates with an error.
    Thanks in advance.
    Best Regards.
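    One way out, sketched under my own assumptions (the thread itself doesn't give one): enforce the uniqueness with a constraint backed by a non-unique index. A constraint can be disabled around the bulk DML, and a non-unique index can be made unusable like any other:
    drop index hoho.abcde_dx1;
    create index hoho.abcde_dx1 on hoho.abcde (col1);          -- non-unique
    alter table hoho.abcde add constraint abcde_uk unique (col1)
      using index hoho.abcde_dx1;
    alter table hoho.abcde disable constraint abcde_uk keep index;
    alter index hoho.abcde_dx1 unusable;
    delete from hoho.abcde where rownum < 11;                  -- now succeeds
    alter index hoho.abcde_dx1 rebuild;
    alter table hoho.abcde enable constraint abcde_uk;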

  • "ORA-1715 : UNIQUE may not be used with a cluster index" but why?

    "ORA-1715 : UNIQUE may not be used with a cluster index" but why and what "may" means here? Any comments will be welcomed, thank you and best regards;
    show rel
    release 1002000300
    CREATE CLUSTER sc_srvr_id (
    srvr_id NUMBER(10)) SIZE 1024;
    SELECT cluster_name, tablespace_name, hashkeys,
    degree, single_table FROM user_clusters;
    CREATE UNIQUE INDEX idx_sc_srvr_id ON CLUSTER sc_srvr_id;
    ERROR at line 1:
    ORA-01715: UNIQUE may not be used with a cluster index
    CREATE INDEX idx_sc_srvr_id ON CLUSTER sc_srvr_id;
    SELECT index_name, index_type, tablespace_name
    FROM user_indexes where index_name like '%SRVR%' ;
    CREATE TABLE cservers (
    srvr_id    NUMBER(10),
    network_id NUMBER(10),
    status     VARCHAR2(1),
    latitude   FLOAT(20),
    longitude  FLOAT(20),
    netaddress VARCHAR2(15))
    CLUSTER sc_srvr_id (srvr_id);
    ALTER TABLE cservers add constraint pk_srvr_id primary key (srvr_id ) ;
    SELECT index_name, index_type, tablespace_name
    FROM user_indexes where index_name like '%SRVR%' ;
    INDEX_NAME                     INDEX_TYPE
    TABLESPACE_NAME
    IDX_SC_SRVR_ID                 CLUSTER
    USERS
    PK_SRVR_ID                     NORMAL
    USERS
    Do we really need another pkey index here?

    "May" has different meanings, one of which is:
    (used to express opportunity or permission)
    Metalink note 19067.1 says:
    This is not permitted.
    ... which agrees with the above meaning.
    Besides this, it does not make sense to me to create a unique index on a cluster. You can have primary keys in the tables you include in the cluster; that depends on your business requirement. But why would you need a unique index on the cluster itself?

Creating a unique index instead of using the primary key index

    Hi ,
    I heard in a debate that sometimes it's better to create a unique index on a column and use it, instead of using the primary key index, in Oracle. I couldn't understand the reason properly.
    Can anyone please help me with this topic, if it is a valid one?
    Thanks in advance

    Hi,
    They are NOT identical.
    1. A unique key can have NULL values, whereas a primary key can't.
    2. A primary key is fundamentally a key that does not change; updating a primary key is not a good idea.
    SQL> drop table test;
    Table dropped.
    SQL> create table test ( a number(2));
    Table created.
    SQL> ed
    Wrote file afiedt.buf
      1* create unique index test_idx on test(a)
    SQL> /
    Index created.
    SQL> ed
    Wrote file afiedt.buf
      1* insert into test values(NULL)
    SQL> /
    1 row created.
    SQL> drop table test;
    Table dropped.
    SQL>
    SQL> create table test ( a number(2) primary key);
    Table created.
    SQL> insert into test values(NULL);
    insert into test values(NULL)
    ERROR at line 1:
    ORA-01400: cannot insert NULL into ("HR"."TEST"."A")
    SQL>
    Cheers,

  • Can we change the fields of database unique index in a customised table?

    Hi all..
    I want to know whether we can create, change, or delete the database unique index of a customized table.
    In my case, there is a customized table with 4 primary key fields, with all records maintained through transaction code SM30.
    There is a database unique index maintained for this table which has 2 fields. These 2 fields are among the 4 primary key fields of the table. I hope I have made myself clear!
    Now when I am trying to insert a record into the table, it gives me a short dump (it says duplication of records is not allowed).
    The reason is that the new record I am trying to insert has the same values in the 2 fields covered by the unique index as an already existing record, while the other two fields differ, so the overall combination of the 4 primary key fields is different.
    Please tell me how I should proceed now.
    I also tried to change the unique index, but it asks me for some kind of authorization ("You are not authorized to make changes (authorization object S_DEVELOP)"). Also, I am not sure whether changing the unique index is feasible or not.
    Thanks.

    Hi,
    I think you will not be able to make unique indexing work without the primary keys, so include all the primary key fields in the index field selection and then create the index; otherwise duplication of keys can occur. If you are not able to keep all the primary key fields, go for non-unique indexing instead, where you have to add the client field and any other fields of your choice.

  • Error while creating the Unique Index of the Primary Key of an Item

    Hi all,
    I have deployed a new item (CO_CONTRACTUNIT_PRODUCT) in my publication. The deployment appears to be successful, as the item can be seen in the repository through the Mobile Manager.
    The problem occurs when I sync my local DB to get the item offline. While synchronizing, the following error appears, both in the sync window and in the log file ol_sync.log.
    "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI"
    However, the debug file gives this other error regarding this table.
    ALL_INDEX:CREATE UNIQUE INDEX "TPCO_CONTRACTUNIT_PRODUCT_PK" ON CO_CONTRACTUNIT_PRODUCT (CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID) -5130Error at C:\ADE\omeprod_ol103021\olite\db\build\win\ocapi\..\..\..\src\ocapi\allindexes.cpp line:329 rc:-5130
    Build date Mar 29 2010
    okErr=(table or view %s.%s not found)
    mess=(CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID)
    AddLog(-5130 "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI")
    But the index that is being created and is giving the error is the index created automatically with the Primary Key of the table, and so nothing has been modified in that.
    The primary key of the table is created with the three columns that are part of the index that is returning the error.
    As I could not solve the error, I tried to drop and re-create the item in the repository, but no luck. As a last option, I tried to remove the item from the repository to be able to sync properly again (just like before creating the item), but the error still happens.
    Another weird point is that I have tried creating the item in another publication of another database (but with almost identical items), and the item was downloaded to my local DB without any problem, which makes this problem still more bizarre.
    What can it be?
    Any help would be great!
    Roshni

    Have you tried uninstalling the client and reinstalling it?
    Schema evolution changes are not always handled correctly; please check this thread:
    Modification of publication item into Mobile Server
    I quote from rekounas' instructions:
    If you are just adding a field, you should only have to run the alter publication item API call.
    Here is an old note on schema evolution for different scenarios. The names of the APIs that it uses are now deprecated. Use the ConsolidatorManager class and call the method alterPublicationItem("PUBLICATION_ITEM_NAME", "SELECT STMT"):
    A) Add column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Change the Oracle8i/9i database schema (add column)
    4. Create a Java program to call the Consolidator Admin API AlterPublicationItem()
    5. Start Mobile Server
    6. Execute a sync from the client
    7. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    B) Drop column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Delete column of the base table in the Oracle database schema
    4. Create a Java program to call the Consolidator Admin API DropPublicationItem()
    5. Create a Java program to call the Consolidator Admin API CreatePublicationItem() and AddPublicationItem().
    6. Start Mobile Server
    7. Execute a sync from the client
    8. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    C) Change column datatype
    Changing datatypes in a replicated system is not an easy task. You have to follow certain procedures in order to make it work. Use the DropPublicationItem, CreatePublicationItem and AddPublicationItem methods from the Consolidator Admin API. You must stop/start the Mobile Server listener to refresh the cache.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop/create the column (do not use conversion procedures) at the base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Call CreatePublicationItem() and AddPublicationItem(). Check if the ErrorQueue and InQueue reflect the new column datatype
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. This should drop the old snapshot and recreate the new snapshot. Use MSQL to check
    snapshot definitions.
    D) Drop table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should drop the old snapshot. Use MSQL to check snapshot definitions.
    E) Add table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Add new base table
    4. Call CreatePublicationItem() and AddPublicationItem() method
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should add the new snapshot. Use MSQL to check snapshot definitions.
    F) Changing Primary Keys
    Changing the PK is a severe operation which must be executed manually. A snapshot must be deleted and recreated to propagate the changes to the clients. This causes a full refresh on this snapshot.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop the snapshot using DropPublicationItem() method o
    4. Alter the base table
    5. Call CreatePublicationItem()and AddPublicationItem() methods for the altered table
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. The old snapshot will be replaced by the new snapshot using a full refresh. Use MSQL to check snapshot definitions.
    G) To Change a Table Weight =>
    Follow the procedure below to change the Table Weight parameter. The table weight is used by the Mobile Server synchronization to determine the sequence in which client records are applied to the Oracle database.
    1. Run MGP to apply any changes in the InQueue to the Oracle database
    2. Change table weight using SetTemplateItemMetadata() method
    3. Add/change the constraint on the base table which reflects the change in table weight
    4. Synchronize
    gl m8

  • Dropping index gives ORA-29861

    Hello
    We have an application which reads data from MapInfo Spatialware using MapInfo Professional and upload this data into Oracle Spatial tables using MapInfo Easy Loader utility.
    The constraint is that the tables in Oracle Spatial must not be dropped. So we have written a stored procedure which queries the unique and spatial indexes of a feature table, then drops them and truncates the table.
    The Easy Loader then uploads fresh data into these truncated tables and re-creates the spatial and unique indexes.
    This solution is working fine except for one feature table, "ZONE". The stored procedure is unable to drop the indexes, and the error log shows "ORA-29861: domain index is marked LOADING/FAILED/UNUSABLE".
    I looked into discussion forums and they say to drop the indexes specifying the keyword FORCE, but how can I query whether an index is corrupt and must be force-dropped? The index state is all VALID when I check...
    Below is the stored procedure:
    CREATE OR REPLACE
    PROCEDURE Spa_Initializetables
         (TNAME          IN  VARCHAR2,
          SPATIALCOL     IN  VARCHAR2,
          ERROR_CODE     OUT NUMBER,
          ERROR_MESSAGE  OUT VARCHAR2)
    IS
    CURSOR c1 IS
    SELECT * FROM user_sdo_index_info WHERE table_name LIKE UPPER(TNAME) AND column_name LIKE UPPER(SPATIALCOL);
    BEGIN
    --Truncate the given table
    EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || TNAME;
    --Loop through all the indexes found and drop them one by one
    FOR cur_ts IN c1 LOOP
    --DBMS_OUTPUT.PUT_LINE(cur_ts.index_name);
    EXECUTE IMMEDIATE 'DROP INDEX '|| cur_ts.index_name;
    END LOOP;
    ERROR_CODE := 0;
    EXCEPTION
         WHEN OTHERS THEN
         ERROR_CODE          :=     1;
         ERROR_MESSAGE     := SQLERRM;     
    END Spa_Initializetables;
    Please let me know the best way to handle this.
    Regards
    Sam

    Sorry, but I can't simulate this trouble. Look:
    create table a (id number, lmao mdsys.sdo_geometry)
    INSERT INTO mdsys.sdo_geom_metadata_table (SDO_OWNER,SDO_TABLE_NAME,SDO_COLUMN_NAME,SDO_DIMINFO,SDO_SRID)
    VALUES ('GIS','A','LMAO',SDO_DIM_ARRAY(
    SDO_DIM_ELEMENT('X', -180,180, 5e-4),
    SDO_DIM_ELEMENT('Y', -180,180, 5e-4)), NULL);                         
    create index a_g_idx on a(lmao) indextype is mdsys.spatial_index;
    insert into a values (1,SDO_geometry(2003,NULL,NULL,SDO_elem_info_array(1,1003,3),SDO_ordinate_array(1,1, 2,2)));
    commit;
    truncate table a;
    insert into a values (2,SDO_geometry(2003,NULL,NULL,SDO_elem_info_array(1,1003,3),SDO_ordinate_array(1,1, 2,2)));
    drop table a;
    No errors :(
    I have 10.2.0.3
    Maybe I didn't understand you?
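    A possible check, sketched from general experience rather than from this thread: USER_INDEXES exposes the domain-index state separately from the plain STATUS column, so STATUS can say VALID while the domain flags do not:
    select index_name, status, domain_idx_status, domain_idx_opstatus
      from user_indexes
     where ityp_name = 'SPATIAL_INDEX';
    -- If domain_idx_opstatus = 'FAILED', a plain DROP INDEX raises
    -- ORA-29861; dropping with FORCE works regardless:
    -- EXECUTE IMMEDIATE 'DROP INDEX ' || cur_ts.index_name || ' FORCE';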

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. Central to our data monitoring system, we have an Oracle database running Oracle Standard Edition One (please don't laugh... I understand it is comical) licensing.
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes: a unique index on (sourceid, timestamp) and a non-unique index on (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point the secondary index slowed the IOT to a crawl.)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
    Now what we are observing is that, for inserts into this table:
    - Inserts are much slower the "wider" the cardinality of the sourceid values being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) are MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand it Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
    - Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle require roughly 10GB of extra RAM per quarter to 6 months; we're at about 50GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: partitioning this table based on a good logical grouping of sourceid, and then timestamp, will help reduce the work required by Oracle to verify uniqueness of data, reduce the amount of data that must be cached by Oracle, and allow us to handle our "older than 3 months" data at a partition level, greatly reducing table and index fragmentation.
    Based on our hardware, its going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, total oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance / help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10GB-per-quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
    Also, please, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct-path insert /*+ append */ suggested by one of the contributors to this thread. A direct-path load will not re-use any of the free space made by the deletes. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index(create time)
    Your delete logic (based on arrival time) will smash your indexes, as you are always deleting from the left-hand side of the index; it means you will have what we call a right-hand index - in other words, the scattering of the index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
               ALTER INDEX indexname COALESCE;
    This coalesce should be investigated to be done on a regular basis (maybe after each 80% delete); see the sketch at the end of this reply. You seem to have several sourceids for one timestamp. If that is the case, you should think about compressing this index:
        create index indexname on tablename (sourceid, timestamp) compress;
    or
        alter index indexname rebuild compress;
    You will do it only once. Your index will have a smaller size and may be more efficient than it is now. The index compression will add extra CPU work during inserts, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri
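    To decide when that coalesce is due, and which compression prefix to use, something along these lines may help (my suggestion, not from the thread; ANALYZE ... VALIDATE STRUCTURE locks the table, so run it in a quiet window):
    analyze index my_unique_ix validate structure;
    select name, height, lf_rows, del_lf_rows,
           round(100 * del_lf_rows / nullif(lf_rows, 0), 1) pct_deleted,
           opt_cmpr_count, opt_cmpr_pctsave
      from index_stats;
    -- A high pct_deleted suggests a coalesce is overdue; opt_cmpr_count is
    -- the prefix length Oracle itself estimates for COMPRESS, and
    -- opt_cmpr_pctsave the space it would save.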

Problem with a unique index

    Hi!
    Please help me in solving this issue; I am stuck on an insert query.
    My table and unique index are:
    create table STG_PLNT_PUR_ORD (
    SOURCE_SYSTEM VARCHAR2(30) not null,
    TYPE_LOOKUP_CODE VARCHAR2(25),
    PO_NUM VARCHAR2(20) not null,
    LINE_NUM NUMBER not null,
    RELEASE_NUM NUMBER,
    SHIPMENT_NUM NUMBER not null,
    DISTRIBUTION_NUM NUMBER,
    AUTHORIZATION_STATUS VARCHAR2(25),
    SUPPLIER_CODE VARCHAR2(30),
    SUPPLIER_SITE_ID VARCHAR2(15),
    PART_NO VARCHAR2(40),
    UNIT_MEAS_LOOKUP_CODE VARCHAR2(25),
    QTY_ORDERED NUMBER,
    DUE_DATE DATE,
    QUANTITY_RECEIVED NUMBER,
    QUANTITY_ACCEPTED NUMBER,
    QUANTITY_REJECTED NUMBER,
    QTY_UI NUMBER,
    QUANTITY_CANCELLED NUMBER,
    QUANTITY_DELIVERED NUMBER,
    QTY_DUE NUMBER,
    QUANTITY_BILLED NUMBER,
    UNIT_PRICE NUMBER,
    CURRENCY VARCHAR2(15),
    MIN_ORDER_QUANTITY NUMBER,
    PO_AMT NUMBER,
    CREATE_DATE DATE,
    CREATED_BY VARCHAR2(30),
    UPDATE_DATE DATE,
    UPDATED_BY VARCHAR2(30),
    LAST_UPDATE_DATE DATE,
    FULL_NAME VARCHAR2(240),
    PLANNER_CODE VARCHAR2(10)
    )
    tablespace DEV_LM4M_TAB
    pctfree 10
    pctused 40
    initrans 1
    maxtrans 255
    storage (
    initial 4M
    next 4M
    minextents 1
    maxextents unlimited
    pctincrease 0
    );
    -- Create/Recreate indexes
    create unique index IDX1_STG_PLNT_PUR_ORD on STG_PLNT_PUR_ORD (SOURCE_SYSTEM, PO_NUM, LINE_NUM, RELEASE_NUM, SHIPMENT_NUM, DISTRIBUTION_NUM)
    tablespace DEV_LM4M_TAB
    pctfree 10
    initrans 2
    maxtrans 255
    storage (
    initial 4M
    next 4M
    minextents 1
    maxextents unlimited
    pctincrease 0
    );
    This is the duplicate check on the select query from which I am inserting values into the stg_plnt_pur_ord table:
    select count(*),
    PO_NUM,
    SHIPMENT_NUMBER,
    LINE_NUMBER,
    RELEASE_NUM,
    DISTRIBUTION_NUM
    from (SELECT /*+ USE HASH */
    PH.type_lookup_code "TYPE_LOOKUP_CODE",
    nvl(PH.segment1,0) "PO_NUM",
    nvl(PLL.shipment_num,0) "SHIPMENT_NUMBER",
    nvl(PL.line_num,0) "LINE_NUMBER",
    nvl(PLL.po_release_id,0) "RELEASE_NUM",
    nvl(PD.distribution_num ,0)"DISTRIBUTION_NUM",
    PH.authorization_status "AUTHORIZATION_STATUS",
    PL.min_order_quantity "MINIMUM_QUANTITY",
    PV.segment1 "SUPPLIER_CODE",
    PVSA.vendor_site_id "SUPPLIER_SITE_ID",
    MS.segment1 "PART_NO",
    PL.unit_meas_lookup_code "UNIT_MEAS_LKUP_CODE",
    PPF.full_name "FULL_NAME",
    MS.planner_code "PLANNER_CODE",
    PLL.quantity "QUANTITY_ORDERED",
    DECODE(PLL.need_by_date, NULL, PLL.promised_date, PLL.need_by_date) "DUE_DATE",
    PLL.quantity_received "QUANTITY_RECIEVED",
    PLL.quantity_accepted "QUANTITY_ACCEPTED",
    PLL.quantity_rejected "QUANTITY_REJECTED",
    (PLL.quantity_received - PD.quantity_delivered) "QUANTITY_UI",
    PLL.quantity_cancelled "QUANTITY_CANCELLED",
    PD.quantity_delivered "QUANTITY_DELIVERED",
    (PLL.quantity - PLL.quantity_received - PLL.quantity_cancelled) "QUANTITY_DUE",
    PLL.quantity_billed "QUANTITY_BILLED",
    PL.unit_price "UNIT_PRICE",
    PH.currency_code "CURRENCY",
    (((PLL.quantity - PLL.quantity_cancelled) * PL.unit_price) *
    DECODE(PH.rate, NULL, 1, PH.rate)) "PO_AMT",
    PV.last_update_date "LAST_UPDATE_DATE"
    FROM po_headers_all PH,
    po_lines_all PL,
    po_vendors PV,
    mtl_system_items MS,
    po_line_locations_all PLL,
    po_distributions_all PD,
    po_vendor_sites_all PVSA,
    per_all_people_f PPF --,
    -- po_releases_all PR
    WHERE PH.type_lookup_code IN ('BLANKET', 'STANDARD') AND
    PH.po_header_id = PL.po_header_id AND
    PV.vendor_id = PH.vendor_id AND
    PVSA.vendor_site_id = PH.vendor_site_id AND
    PL.item_id = MS.inventory_item_id AND
    PPF.person_id(+) = PH.agent_id AND
    PPF.object_version_number =
    (SELECT MAX(object_version_number)
    FROM per_all_people_f PPF2
    WHERE PPF.person_id = PPF2.person_id
    GROUP BY person_id) AND PH.po_header_id = PLL.po_header_id AND
    PL.po_line_id = PLL.po_line_id
    --AND PR.po_release_id(+)=PLL.po_release_id
    AND PH.po_header_id = PD.po_header_id AND
    PL.po_line_id = PD.po_line_id AND
    PLL.line_location_id = PD.line_location_id AND
    NVL(PLL.closed_code, 'x') NOT LIKE 'FINALLY%' AND
    NVL(PL.closed_code, 'x') NOT LIKE 'FINALLY%' AND
    NVL(PH.closed_code, 'x') NOT LIKE 'FINALLY%' AND
    NVL(PH.cancel_flag, 'x') != 'Y' AND
    PH.ship_to_location_id IN (1, 134) AND MS.organization_id = 1)
    group by PO_NUM,
    SHIPMENT_NUMBER,
    LINE_NUMBER,
    RELEASE_NUM,
    DISTRIBUTION_NUM
    having count(*) > 1;
    After the execution of this query I am getting 0 rows returned,
    but while inserting into the table it gives me a unique constraint violation. I don't know what the problem is.
    Please help me.

    The unique index IDX1_STG_PLNT_PUR_ORD on STG_PLNT_PUR_ORD (SOURCE_SYSTEM, PO_NUM, LINE_NUM, RELEASE_NUM, SHIPMENT_NUM, DISTRIBUTION_NUM) tells you that you are trying to insert a row whose values for the six columns above
    already exist in STG_PLNT_PUR_ORD.
    INSERT INTO stg_plnt_pur_ord
    (source_system,                    <---
      type_lookup_code,
      po_num,                           <---
      line_num,                         <---
      release_num,                      <---
      shipment_num,                     <---
      distribution_num,                 <---
      authorization_status,
      supplier_code,
      supplier_site_id,
      part_no,
      unit_meas_lookup_code,
      qty_ordered,
      due_date,
      quantity_received,
      quantity_accepted,
      quantity_rejected,
      qty_ui,
      quantity_cancelled,
      quantity_delivered,
      qty_due,
      quantity_billed,
      unit_price,
      currency,
      min_order_quantity,
      po_amt,
      create_date,
      created_by,
      update_date,
      updated_by,
      last_update_date,
      full_name,
      planner_code)
    VALUES
    (l_source,                         <---
      l_type_lookup_code(i),
      l_po_num(i),                      <---
      l_line_num(i),                    <---
      l_release_num(i),                 <---
      l_shipment_num(i),                <---
      l_distribution_num(i),            <---
      l_authorization_status(i),
      l_supplier_code(i),
      l_supplier_site_id(i),
      l_part_no(i),
      l_unit_meas_lookup_code(i),
      l_qty_ordered(i),
      l_due_date(i),
      l_quantity_received(i),
      l_quantity_accepted(i),
      l_quantity_rejected(i),
      l_qty_ui(i),
      l_quantity_cancelled(i),
      l_quantity_delivered(i),
      l_qty_due(i),
      l_quantity_billed(i),
      l_unit_price(i),
      l_currency(i),
      l_min_order_quantity(i),
      l_po_amt(i),
      SYSDATE,
      USER,
      SYSDATE,
      USER,
      l_last_update_date(i),
      l_full_name(i),
      l_planner_code(i));
    Why not try to capture the rows that violate the constraint? Something like:
       Begin
         Insert Into STG_PLNT_PUR_ORD (...)   -- same column list as above
         Values (...);                        -- same value list as above
       Exception
         When Others Then
           dbms_output.put_line('l_source: '||l_source);
           dbms_output.put_line('l_line_num: '||to_char(l_line_num(i)));
           dbms_output.put_line('l_release_num: '||to_char(l_release_num(i)));
           dbms_output.put_line('l_shipment_num: '||to_char(l_shipment_num(i)));
           dbms_output.put_line('l_distribution_num: '||to_char(l_distribution_num(i)));
       End;
    or create another table to log the rows in error:
       Begin
         Insert Into STG_PLNT_PUR_ORD (...)   -- same column list as above
         Values (...);                        -- same value list as above
       Exception
         When Others Then
           insert into log_error
           values
           (l_source,
            l_line_num(i),
            l_release_num(i),
            l_shipment_num(i),
            l_distribution_num(i));
       End;
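    Alternatively (my suggestion, assuming Oracle 10gR2 or later): DML error logging captures the offending rows automatically instead of hand-rolled exception blocks. A self-contained sketch with toy names of my own:
    create table t_demo (k number);
    create unique index t_demo_ux on t_demo (k);
    exec dbms_errlog.create_error_log(dml_table_name => 'T_DEMO');
    insert into t_demo
      select mod(level, 5) from dual connect by level <= 10
      log errors into err$_t_demo ('demo load') reject limit unlimited;
    -- 5 distinct rows go in; the 5 duplicates land in ERR$_T_DEMO:
    select ora_err_number$, ora_err_mesg$, k from err$_t_demo;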

  • ERROR ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

    Hi,
    SAPSSRC.log
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: START OF LOG: 20071018195059
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#12 $ SAP
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: version R6.40/V1.4
    Compiled Nov 30 2005 20:41:21
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe -ctf I C:/temp/51030721/EXP2/DATA/SAPSSRC.STR C:\Program Files\sapinst_instdir\NW04\SYSTEM\ABAP\ORA\NUC\DB/DDLORA.TPL C:\Program Files\sapinst_instdir\NW04\SYSTEM\ABAP\ORA\NUC\DB/SAPSSRC.TSK ORA -l C:\Program Files\sapinst_instdir\NW04\SYSTEM\ABAP\ORA\NUC\DB/SAPSSRC.log
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: job completed
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: END OF LOG: 20071018195059
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: START OF LOG: 20071018195133
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#12 $ SAP
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: version R6.40/V1.4
    Compiled Nov 30 2005 20:41:21
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe -dbcodepage 4103 -i C:\Program Files\sapinst_instdir\NW04\SYSTEM\ABAP\ORA\NUC\DB/SAPSSRC.cmd -l C:\Program Files\sapinst_instdir\NW04\SYSTEM\ABAP\ORA\NUC\DB/SAPSSRC.log -stop_on_error
    DbSl Trace: ORA-1403 when accessing table SAPUSER
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): WE8DEC
    (DB) INFO: ABTREE created #20071018195134
    (IMP) INFO: import of ABTREE completed (39 rows) #20071018195134
    (DB) INFO: ABTREE~0 created #20071018195134
    (DB) INFO: AKB_CHKCONF created #20071018195134
    (IMP) INFO: import of AKB_CHKCONF completed (0 rows) #20071018195134
    (DB) INFO: AKB_CHKCONF~0 created #20071018195134
    (DB) INFO: AKB_INDX created #20071018195134
    (IMP) INFO: import of AKB_INDX completed (0 rows) #20071018195134
    (DB) INFO: AKB_INDX~0 created #20071018195134
    (DB) INFO: AKB_USAGE_INFO created #20071018195134
    (IMP) INFO: import of AKB_USAGE_INFO completed (0 rows) #20071018195134
    (DB) INFO: AKB_USAGE_INFO~0 created #20071018195134
    (DB) INFO: AKB_USAGE_INFO2 created #20071018195134
    (IMP) INFO: import of AKB_USAGE_INFO2 completed (0 rows) #20071018195134
    (DB) INFO: AKB_USAGE_INFO2~0 created #20071018195134
    (DB) INFO: APTREE created #20071018195134
    (IMP) INFO: import of APTREE completed (388 rows) #20071018195134
    (DB) INFO: APTREE~0 created #20071018195134
    (DB) INFO: APTREE~001 created #20071018195134
    (DB) INFO: APTREET created #20071018195134
    (IMP) INFO: import of APTREET completed (272 rows) #20071018195134
    DbSl Trace: Error in exec_immediate()
    DbSl Trace: ORA-1452 occurred when executing SQL statement (parse error offset=35)
    (DB) ERROR: DDL statement failed
    (CREATE UNIQUE INDEX "APTREET~0" ON "APTREET" ( "SPRAS", "ID", "NAME" ) TABLESPACE PSAPBW1 STORAGE (INITIAL 44981 NEXT 0000000040K MINEXTENTS 0000000001 MAXEXTENTS 2147483645 PCTINCREASE 0 ) )
    DbSlExecute: rc = 99
    (SQL error 1452)
    error message returned by DbSl:
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    (DB) INFO: disconnected from DB
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: END OF LOG: 20071018195134
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: START OF LOG: 20071018195314
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#12 $ SAP
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: version R6.40/V1.4
    Compiled Nov 30 2005 20:41:21
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe -dbcodepage 4103 -i C:\Program Files\sapinst_instdir\NW04\SYSTEM\ABAP\ORA\NUC\DB/SAPSSRC.cmd -l C:\Program Files\sapinst_instdir\NW04\SYSTEM\ABAP\ORA\NUC\DB/SAPSSRC.log -stop_on_error
    DbSl Trace: ORA-1403 when accessing table SAPUSER
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): WE8DEC
    (DB) ERROR: DDL statement failed
    (DROP INDEX "APTREET~0")
    DbSlExecute: rc = 103
    (SQL error 1418)
    error message returned by DbSl:
    ORA-01418: specified index does not exist
    (IMP) INFO: a failed DROP attempt is not necessarily a problem
    DbSl Trace: Error in exec_immediate()
    DbSl Trace: ORA-1452 occurred when executing SQL statement (parse error offset=35)
    (DB) ERROR: DDL statement failed
    (CREATE UNIQUE INDEX "APTREET~0" ON "APTREET" ( "SPRAS", "ID", "NAME" ) TABLESPACE PSAPBW1 STORAGE (INITIAL 44981 NEXT 0000000040K MINEXTENTS 0000000001 MAXEXTENTS 2147483645 PCTINCREASE 0 ) )
    DbSlExecute: rc = 99
    (SQL error 1452)
    error message returned by DbSl:
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    (DB) INFO: disconnected from DB
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    C:\usr\sap\BW1\SYS\exe\run/R3load.exe: END OF LOG: 20071018195315
    I'm getting this error "duplicate keys found". I finished installing the central instance, and got this error during the database instance installation. I'm installing BW 3.5 on the x64 Windows Server 2003 platform, using the NU kernel 6.40.
    Thanks for your suggestions on how to resolve this error.
    Reward points guaranteed.

    Issue solved by deleting the central and database instances and starting a new build. It finished without an error.
    Thank you.
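    For reference, the standard way to locate the rows behind ORA-01452 is a GROUP BY over the key columns of the failing index (table and columns taken from the log above):
    SELECT "SPRAS", "ID", "NAME", COUNT(*)
      FROM "APTREET"
     GROUP BY "SPRAS", "ID", "NAME"
    HAVING COUNT(*) > 1;
    -- The duplicates would have to be removed before the unique index can
    -- be created; in this case the rebuild achieved the same end.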

  • Cannot insert duplicate key row in object 'dbo.NavNodes' with unique index 'NavNodes_PK' when trying to update quick launch menu sharepoint 2013

    When we try to deploy a wsp to SharePoint containing code to generate the quick launch menu, we get the following error messages when running the last Enable-SPFeature command in PowerShell. The same code works in the development environment, but when we deploy to a test server the following error occurs:
    Add-SPSolution "C:\temp\ImpactSharePoint.wsp"
    Install-SPSolution -Identity impactsharepoint.wsp  -GACDeployment
    Enable-SPFeature impactsharepoint_branding -url http://im-sp1/sites/impact/
    Enable-SPFeature impactsharepoint_pages -url http://im-sp1/sites/impact/
    From UlsViewer.exe:
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database 880i
    High System.Data.SqlClient.SqlException (0x80131904): Cannot insert duplicate key row in object 'dbo.NavNodes' with unique index 'NavNodes_PK'. The duplicate key value is (6323df8a-5c57-4d3e-a477-09aa8b66100a, 7ae114df-9d52-4b08-affa-8c544cbc27b6,
    1000).  The statement has been terminated.     at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)     at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject
    stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)     at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj,
    Boolean& dataReady)     at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()     at System.Data.SqlClient.SqlDataReader.get_MetaData()     at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior
    runBehavior, String resetOptionsString)     at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite)  
      at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)     at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior
    cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)     at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)     at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior
    behavior)     at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, CommandBehavior behavior, SqlQueryData monitoringData, Boolean retryForDeadLock)  ClientConnectionId:2bb4004c-aa75-470e-b11e-dbf1c476aaed
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database 880k
    High at Microsoft.SharePoint.SPSqlClient.ExecuteQueryInternal(Boolean retryfordeadlock)     at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean retryfordeadlock)     at Microsoft.SharePoint.Library.SPRequestInternalClass.AddNavigationNode(String
    bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified)     at Microsoft.SharePoint.Library.SPRequestInternalClass.AddNavigationNode(String
    bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified)     at Microsoft.SharePoint.Library.SPRequest.AddNavigationNode(String
    bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified)     at Microsoft.SharePoint.Navigation.SPNavigationNode.AddInternal(Int32
    iPreviousNodeId, Int32 iParentId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav)     at Microsoft.SharePoint.Navigation.SPNavigationNodeCollection.AddInternal(SPNavigationNode node, Int32 iPreviousNodeId)     at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.<ConfigureQuickLaunchBar>b__0()
        at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass5.<RunWithElevatedPrivileges>b__3()     at Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode)     at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback
    secureCode, Object param)     at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated secureCode)     at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.ConfigureQuickLaunchBar()     at ImpactSharePoint.Features.Pages.PagesEventReceiver.FeatureActivated(SPFeatureReceiverProperties
    properties)     at Microsoft.SharePoint.SPFeature.DoActivationCallout(Boolean fActivate, Boolean fForce)     at Microsoft.SharePoint.SPFeature.Activate(SPSite siteParent, SPWeb webParent, SPFeaturePropertyCollection props, SPFeatureActivateFlags
    activateFlags, Boolean fForce)     at Microsoft.SharePoint.SPFeatureCollection.AddInternal(SPFeatureDefinition featdef, Version version, SPFeaturePropertyCollection properties, SPFeatureActivateFlags activateFlags, Boolean force, Boolean fMarkOnly)
        at Microsoft.SharePoint.SPFeature.ActivateDeactivateFeatureAtWeb(Boolean fActivate, Boolean fEnsure, Guid featid, SPFeatureDefinition featdef, String urlScope, String sProperties, Boolean fForce)     at Microsoft.SharePoint.SPFeature.ActivateDeactivateFeatureAtScope(Boolean
    fActivate, Guid featid, SPFeatureDefinition featdef, String urlScope, Boolean fForce)     at Microsoft.SharePoint.PowerShell.SPCmdletEnableFeature.UpdateDataObject()     at Microsoft.SharePoint.PowerShell.SPCmdlet.ProcessRecord()  
      at System.Management.Automation.CommandProcessor.ProcessRecord()     at System.Management.Automation.CommandProcessorBase.DoExecute()     at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object
    input, Hashtable errorResults, Boolean enumerate)     at System.Management.Automation.PipelineOps.InvokePipeline(Object input, Boolean ignoreInput, CommandParameterInternal[][] pipeElements, CommandBaseAst[] pipeElementAsts, CommandRedirection[][]
    commandRedirections, FunctionContext funcContext)     at System.Management.Automation.Interpreter.ActionCallInstruction`6.Run(InterpretedFrame frame)     at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame
    frame)     at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)     at System.Management.Automation.Interpreter.Interpreter.Run(InterpretedFrame frame)     at System.Management.Automation.Interpreter.LightLambda.RunVoid1[T0](T0
    arg0)     at System.Management.Automation.DlrScriptCommandProcessor.RunClause(Action`1 clause, Object dollarUnderbar, Object inputToProcess)     at System.Management.Automation.CommandProcessorBase.DoComplete()     at System.Management.Automation.Internal.PipelineProcessor.DoCompleteCore(CommandProcessorBase
    commandRequestingUpstreamCommandsToStop)     at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate)     at System.Management.Automation.Runspaces.LocalPipeline.InvokeHelper()
        at System.Management.Automation.Runspaces.LocalPipeline.InvokeThreadProc()     at System.Management.Automation.Runspaces.PipelineThread.WorkerProc()     at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext,
    ContextCallback callback, Object state, Boolean preserveSyncCtx)     at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)     at System.Threading.ExecutionContext.Run(ExecutionContext
    executionContext, ContextCallback callback, Object state)     at System.Threading.ThreadHelper.ThreadStart()
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database 880j
    High SqlError: 'Cannot insert duplicate key row in object 'dbo.NavNodes' with unique index 'NavNodes_PK'. The duplicate key value is (6323df8a-5c57-4d3e-a477-09aa8b66100a, 7ae114df-9d52-4b08-affa-8c544cbc27b6, 1000).'
       Source: '.Net SqlClient Data Provider' Number: 2601 State: 1 Class: 14 Procedure: 'proc_NavStructAddNewNode' LineNumber: 92 Server: 'IMPACTCLUSTER\IMPACTDB'
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database 880j
    High SqlError: 'The statement has been terminated.'    Source: '.Net SqlClient Data Provider' Number: 3621 State: 0 Class: 0 Procedure: 'proc_NavStructAddNewNode' LineNumber: 92 Server: 'IMPACTCLUSTER\IMPACTDB'
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database tzku
    High ConnectionString: 'Data Source=IMPACTCLUSTER\IMPACTDB;Initial Catalog=WSS_Content;Integrated Security=True;Enlist=False;Pooling=True;Min Pool Size=0;Max Pool Size=100;Connect Timeout=15;Application Name=SharePoint[powershell][1][WSS_Content]'
       Partition: 6323df8a-5c57-4d3e-a477-09aa8b66100a ConnectionState: Closed ConnectionTimeout: 15
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database tzkv
    High SqlCommand: 'BEGIN TRAN DECLARE @abort int SET @abort = 0 DECLARE @EidBase int,@EidHome int SET @EidBase = 0 SET @EidHome = NULL IF @abort = 0 BEGIN EXEC @abort = proc_NavStructAllocateEidBlockWebId @wssp0, @wssp1,
    @wssp2, @wssp3, @EidBase OUTPUT SELECT @wssp4 = @EidBase, @wssp5 = @abort END IF @abort = 0 BEGIN EXEC @abort = proc_NavStructAddNewNodeByUrl '6323DF8A-5C57-4D3E-A477-09AA8B66100A','7AE114DF-9D52-4B08-AFFA-8C544CBC27B6',1,2072,-1,0,N'sites/impact/default.aspx',N'PersonSøk',N'PersonSøk',NULL,0,0,0,NULL,@EidBase,@EidHome
    OUTPUT SELECT @wssp6 = @abort END IF @abort = 0 BEGIN EXEC proc_NavStructLogChangesAndUpdateSiteChangedTime @wssp7, @wssp8, NULL END IF @abort <> 0 BEGIN ROLLBACK TRAN END ELSE BEGIN COMMIT TRAN END IF @abort = 0  BEGIN EXEC proc_UpdateDiskUsed
    '6323DF8A-5C57-4D3E-A477-09AA8B66100A' END '     CommandType: Text CommandTimeout: 0     Parameter: '@wssp0' Type: UniqueIdentifier Size: 0 Direction: Input Value: '6323df8a-5c57-4d3e-a477-09aa8b66100a'     Parameter: '@wssp1'
    Type: UniqueIdentifier Size: 0 Direction: Input Value: '7ae114df-9d52-4b08-affa-8c544cbc27b6'     Parameter: '@wssp2' Type: Int Size: 0 Direction: Input Value: '1'     Parameter: '@wssp3' Type: Int Size: 0 Direction: Input Value: '2072'
        Parameter: '@wssp4' Type: Int Size: 0 Direction: Output Value: '2072'     Parameter: '@wssp5' Type: Int Size: 0 Direction: Output Value: '0'     Parameter: '@wssp6' Type: Int Size: 0 Direction: Output Value: '10006'  
      Parameter: '@wssp7' Type: UniqueIdentifier Size: 0 Direction: Input Value: '6323df8a-5c57-4d3e-a477-09aa8b66100a'     Parameter: '@wssp8' Type: UniqueIdentifier Size: 0 Direction: Input Value: '7ae114df-9d52-4b08-affa-8c544cbc27b6'
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database aek90
    High SecurityOnOperationCheck = True
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database d0d6
    High System.Data.SqlClient.SqlException (0x80131904): Cannot insert duplicate key row in object 'dbo.NavNodes' with unique index 'NavNodes_PK'. The duplicate key value is (6323df8a-5c57-4d3e-a477-09aa8b66100a, 7ae114df-9d52-4b08-affa-8c544cbc27b6,
    1000).  The statement has been terminated.     at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)     at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject
    stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)     at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj,
    Boolean& dataReady)     at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()     at System.Data.SqlClient.SqlDataReader.get_MetaData()     at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior
    runBehavior, String resetOptionsString)     at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite)  
      at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)     at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior
    cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)     at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)     at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior
    behavior)     at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, CommandBehavior behavior, SqlQueryData monitoringData, Boolean retryForDeadLock)     at Microsoft.SharePoint.SPSqlClient.ExecuteQueryInternal(Boolean
    retryfordeadlock)     at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean retryfordeadlock)  ClientConnectionId:2bb4004c-aa75-470e-b11e-dbf1c476aaed
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database ad194
    High ExecuteQuery failed with original error 0x80131904
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Database 8z23
    Unexpected Unexpected query execution failure in navigation query, HResult -2146232060. Query text (if available): "BEGIN TRAN DECLARE @abort int SET @abort = 0 DECLARE @EidBase int,@EidHome int SET @EidBase
    = 0 SET @EidHome = NULL IF @abort = 0 BEGIN EXEC @abort = proc_NavStructAllocateEidBlockWebId @wssp0, @wssp1, @wssp2, @wssp3, @EidBase OUTPUT SELECT @wssp4 = @EidBase, @wssp5 = @abort END IF @abort = 0 BEGIN EXEC @abort = proc_NavStructAddNewNodeByUrl '6323DF8A-5C57-4D3E-A477-09AA8B66100A','7AE114DF-9D52-4B08-AFFA-8C544CBC27B6',1,2072,-1,0,N'sites/impact/default.aspx',N'PersonSøk',N'PersonSøk',NULL,0,0,0,NULL,@EidBase,@EidHome
    OUTPUT SELECT @wssp6 = @abort END IF @abort = 0 BEGIN EXEC proc_NavStructLogChangesAndUpdateSiteChangedTime @wssp7, @wssp8, NULL END IF @abort <> 0 BEGIN ROLLBACK TRAN END ELSE BEGIN COMMIT TRAN END IF @abort = 0  BEGIN EXEC proc_UpdateDiskUsed
    '6323DF8A-5C57-4D3E-A477-09AA8B66100A' END "
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    General 8kh7
    High <nativehr>0x8107140d</nativehr><nativestack></nativestack>An unexpected error occurred while manipulating the navigational structure of this Web.
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    General aix9j
    High SPRequest.AddNavigationNode: UserPrincipalName=i:0).w|s-1-5-21-2030300366-1823906440-2562684930-2106, AppPrincipalName= ,bstrUrl=http://im-sp1/sites/impact ,bstrName=PersonSøk ,bstrNameResource=<null> ,bstrNodeUrl=/sites/impact/default.aspx
    ,lType=0 ,lParentId=2072 ,lPreviousSiblingId=-1 ,bAddToQuickLaunch=False ,bAddToSearchNav=False
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    General ai1wu
    Medium System.Runtime.InteropServices.COMException: <nativehr>0x8107140d</nativehr><nativestack></nativestack>An unexpected error occurred while manipulating the navigational structure of this
    Web., StackTrace:    at Microsoft.SharePoint.Navigation.SPNavigationNode.AddInternal(Int32 iPreviousNodeId, Int32 iParentId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav)     at Microsoft.SharePoint.Navigation.SPNavigationNodeCollection.AddInternal(SPNavigationNode
    node, Int32 iPreviousNodeId)     at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.<ConfigureQuickLaunchBar>b__0()     at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass5.<RunWithElevatedPrivileges>b__3()
        at Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode)     at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback secureCode, Object param)     at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated
    secureCode)     at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.ConfigureQuickLaunchBar()     at ImpactSharePoint.Features.Pages.PagesEventReceiver.FeatureActivated(SPFeatureReceiverProperties properties)    
    at Microsoft.SharePoint.SPFeature.DoActivationCallout(Boolean fActivate, Boolean fForce)     at Microsoft.SharePoint.SPFeature.Activate(SPSite siteParent, SPWeb webParent, SPFeaturePropertyCollection props, SPFeatureActivateFlags activateFlags, Boolean
    fForce)     at Microsoft.SharePoint.SPFeatureCollection.AddInternal(SPFeatureDefinition featdef, Version version, SPFeaturePropertyCollection properties, SPFeatureActivateFlags activateFlags, Boolean force, Boolean fMarkOnly)     at Microsoft.SharePoint.SPFeature.ActivateDeactivateFeatureAtWeb(Boolean
    fActivate, Boolean fEnsure, Guid featid, SPFeatureDefinition featdef, String urlScope, String sProperties, Boolean fForce)     at Microsoft.SharePoint.SPFeature.ActivateDeactivateFeatureAtScope(Boolean fActivate, Guid featid, SPFeatureDefinition
    featdef, String urlScope, Boolean fForce)     at Microsoft.SharePoint.PowerShell.SPCmdletEnableFeature.UpdateDataObject()     at Microsoft.SharePoint.PowerShell.SPCmdlet.ProcessRecord()     at System.Management.Automation.CommandProcessor.ProcessRecord()
        at System.Management.Automation.CommandProcessorBase.DoExecute()     at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate)     at System.Management.Automation.PipelineOps.InvokePipeline(Object
    input, Boolean ignoreInput, CommandParameterInternal[][] pipeElements, CommandBaseAst[] pipeElementAsts, CommandRedirection[][] commandRedirections, FunctionContext funcContext)     at System.Management.Automation.Interpreter.ActionCallInstruction`6.Run(InterpretedFrame
    frame)     at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)     at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)    
    at System.Management.Automation.Interpreter.Interpreter.Run(InterpretedFrame frame)     at System.Management.Automation.Interpreter.LightLambda.RunVoid1[T0](T0 arg0)     at System.Management.Automation.DlrScriptCommandProcessor.RunClause(Action`1
    clause, Object dollarUnderbar, Object inputToProcess)     at System.Management.Automation.CommandProcessorBase.DoComplete()     at System.Management.Automation.Internal.PipelineProcessor.DoCompleteCore(CommandProcessorBase commandRequestingUpstreamCommandsToStop)
        at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate)     at System.Management.Automation.Runspaces.LocalPipeline.InvokeHelper()    
    at System.Management.Automation.Runspaces.LocalPipeline.InvokeThreadProc()     at System.Management.Automation.Runspaces.PipelineThread.WorkerProc()     at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext,
    ContextCallback callback, Object state, Boolean preserveSyncCtx)     at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)     at System.Threading.ExecutionContext.Run(ExecutionContext
    executionContext, ContextCallback callback, Object state)     at System.Threading.ThreadHelper.ThreadStart()
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
    0x11F4 SharePoint Foundation
    Feature Infrastructure 88jm
    High Feature receiver assembly 'ImpactSharePoint, Version=1.1.0.0, Culture=neutral, PublicKeyToken=3f4d824fecc0071e', class 'ImpactSharePoint.Features.Pages.PagesEventReceiver', method 'FeatureActivated' for feature
    'd8aabd95-076a-4650-a8a6-0aa5bd8ac8d1' threw an exception: Microsoft.SharePoint.SPException: An unexpected error occurred while manipulating the navigational structure of this Web. ---> System.Runtime.InteropServices.COMException: <nativehr>0x8107140d</nativehr><nativestack></nativestack>An
    unexpected error occurred while manipulating the navigational structure of this Web.     at Microsoft.SharePoint.Library.SPRequestInternalClass.AddNavigationNode(String bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32
    lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified)     at Microsoft.SharePoint.Library.SPRequest.AddNavigationNode(String bstrUrl, String bstrName, String bstrNameResource,
    String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified)     --- End of inner exception stack trace ---     at Microsoft.SharePoint.SPGlobal.HandleComException(COMException
    comEx)     at Microsoft.SharePoint.Library.SPRequest.AddNavigationNode(String bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav,
    String& pbstrDateModified)     at Microsoft.SharePoint.Navigation.SPNavigationNode.AddInternal(Int32 iPreviousNodeId, Int32 iParentId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav)     at Microsoft.SharePoint.Navigation.SPNavigationNodeCollection.AddInternal(SPNavigationNode
    node, Int32 iPreviousNodeId)     at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.<ConfigureQuickLaunchBar>b__0()     at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass5.<RunWithElevatedPrivileges>b__3()
        at Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode)     at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback secureCode, Object param)     at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated
    secureCode)     at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.ConfigureQuickLaunchBar()     at ImpactSharePoint.Features.Pages.PagesEventReceiver.FeatureActivated(SPFeatureReceiverProperties properties)    
    at Microsoft.SharePoint.SPFeature.DoActivationCallout(Boolean fActivate, Boolean fForce)
    5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
    The code:
    using System.Diagnostics;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Navigation;

    namespace ImpactSharePoint.ConfigureSharePointInstance
    {
        public class NavigationConfig
        {
            public void ConfigureQuickLaunchBar(SPWeb web)
            {
                SPSecurity.RunWithElevatedPrivileges(delegate
                {
                    // Delete all existing quick launch links
                    web.AllowUnsafeUpdates = true;
                    for (int i = web.Navigation.QuickLaunch.Count - 1; i > -1; i--)
                    {
                        web.Navigation.QuickLaunch[i].Delete();
                    }
                    web.QuickLaunchEnabled = true;
                    EventLog.WriteEntry("Sharepointfeature", "Starter");

                    // Add the links back
                    SPNavigationNodeCollection nodes = web.Navigation.QuickLaunch;
                    var sokNode = new SPNavigationNode("Søk", null, false);
                    nodes.AddAsFirst(sokNode);
                    sokNode.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Lagt til søk");

                    // Person search (Personsøk)
                    var personSokNode = new SPNavigationNode("Deltakere", "/sites/impact/default.aspx", false); // TODO: remove hard-coded URL
                    sokNode.Children.AddAsFirst(personSokNode);
                    personSokNode.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Lagt til node personsøk");

                    // UDB search (Udb-søk)
                    var udbSokNode = new SPNavigationNode("UDB-Søk", "/sites/impact/udbsok.aspx", false); // TODO: remove hard-coded URL
                    sokNode.Children.AddAsLast(udbSokNode);
                    udbSokNode.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Lagt til node udbsøk");

                    // Municipalities (Kommuner)
                    var kommuneNode = new SPNavigationNode("Kommuner", "/sites/impact/kommunesearch.aspx", false); // TODO: remove hard-coded URL
                    sokNode.Children.AddAsFirst(kommuneNode);
                    kommuneNode.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Lagt til node kommunesøk");

                    // Organisations (Organisasjoner)
                    var virksomhetNode = new SPNavigationNode("Organisasjoner", "/sites/impact/virksomhetsearch.aspx", false); // TODO: remove hard-coded URL
                    sokNode.Children.AddAsFirst(virksomhetNode);
                    virksomhetNode.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Lagt til node virksomhetsøk"); // original logged the kommunesøk message twice; fixed

                    // NIR
                    var nirNode = new SPNavigationNode("Nir", null, false);
                    nodes.AddAsLast(nirNode);
                    nirNode.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Lagt til node Nir");

                    // Grants (Tilskudd)
                    var tilskuddNode = new SPNavigationNode("Tilskudd", "/sites/impact/", false);
                    nodes.AddAsLast(tilskuddNode);
                    tilskuddNode.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Lagt til node Tilskudd");

                    // Run overview (Kjøringsoversikt)
                    var showkjoringerNode = new SPNavigationNode("Kjøringsoversikt", "/sites/dev/kjoringoversikt.aspx", false); // TODO: remove hard-coded URL
                    tilskuddNode.Children.AddAsFirst(showkjoringerNode);
                    showkjoringerNode.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Lagt til node showkjoringerNode");

                    web.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Etter update");

                    // Set the welcome page
                    SPFolder folder = web.RootFolder;
                    folder.WelcomePage = "default.aspx";
                    folder.Update();
                    EventLog.WriteEntry("Sharepointfeature", "Etter homepage");

                    web.AllowUnsafeUpdates = false;
                });
            }
        }
    }

    I think you need to debug your code; it looks like the values you are trying to insert already exist in the database, specifically these two IDs (6323df8a-5c57-4d3e-a477-09aa8b66100a, 7ae114df-9d52-4b08-affa-8c544cbc27b6).
    I would run a SELECT against the content DB:
    SELECT TOP 10 * FROM [DB Name].[dbo].[NavNodes] WHERE id = '6323df8a-5c57-4d3e-a477-09aa8b66100a'
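    Going one step further, a minimal sketch (the SiteId, WebId and Eid column names are an assumption, chosen to match the three-part duplicate key value in the error message):

    -- hypothetical check for the exact conflicting row; column names assumed
    SELECT TOP 10 *
    FROM [DB Name].[dbo].[NavNodes]
    WHERE SiteId = '6323DF8A-5C57-4D3E-A477-09AA8B66100A'
      AND WebId  = '7AE114DF-9D52-4B08-AFFA-8C544CBC27B6'
      AND Eid    = 1000

    If that returns a row, the navigation node already exists and the feature code should skip or update it rather than insert it again.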

  • Access path difference between Primary Key and Unique Index

    Hi All,
    Is there any specific way the Oracle optimizer treats a primary key and a unique index differently?
    Oracle Version
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    Sample test data for a normal index:
    SQL> create table t_test_tab(col1 number, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab start with 1 increment by 1 ;
    Sequence created.
    SQL>  insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
            259  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> create index idx_t_test_tab on t_test_tab(col1);
    Index created.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    Sample test data when using a primary key:
    SQL> create table t_test_tab1(col1 number, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab1 start with 1 increment by 1 ;
    Sequence created.
    SQL> insert into t_test_tab1 select seq_t_test_tab1.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab1;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1727568366
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |             | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB1 | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> alter table t_test_tab1 add constraint pk_t_test_tab1 primary key (col1);
    Table altered.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab1;
    99999 rows selected.
    Execution Plan
    Plan hash value: 2995826579
    | Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |                | 99999 |   488K|    59   (2)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| PK_T_TEST_TAB1 | 99999 |   488K|    59   (2)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6867  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    As you can see, even though statistics were gathered in both cases:
         * In the first table, T_TEST_TAB, the query still uses FULL table access after the index is created.
         * In the second table, T_TEST_TAB1, the query uses the PRIMARY KEY index as expected.
    Any comments?
    Regards,
    BPat

    Thanks.
    Yes, I ignored the NOT NULL part. A single-column b-tree index stores no entries for rows where the key is NULL, so the optimizer cannot use it to return every row of the table unless the column is declared NOT NULL (which a primary key implies). I did a test with the column declared NOT NULL, and now it works as expected:
    SQL>  create table t_test_tab(col1 number not null, col2 number, col3 varchar2(12));
    Table created.
    SQL>
    SQL> create sequence seq_t_test_tab start with 1 increment by 1;
    Sequence created.
    SQL> insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL>  exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL>  set autotrace traceonly
    SQL>  select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6912  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL>  create index idx_t_test_tab on t_test_tab(col1);
    Index created.
    SQL>  exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL>  select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 4115006285
    | Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |                | 99999 |   488K|    63   (2)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| IDX_T_TEST_TAB | 99999 |   488K|    63   (2)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6881  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL>
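    For completeness, a minimal sketch of the other way to get there (assuming the original, nullable version of T_TEST_TAB from the question): an explicit IS NOT NULL predicate gives the optimizer the same guarantee the constraint does, because the index contains no entries for NULL keys:

    -- sketch: run against the nullable T_TEST_TAB, before the NOT NULL rebuild
    select col1 from t_test_tab where col1 is not null;
    -- expected plan: INDEX FAST FULL SCAN on IDX_T_TEST_TAB instead of a full table scan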

  • Difference between Unique key and Unique index

    Hi All,
    I'm confused about the difference between a unique index and a unique key on a table.
    When we create a unique index on a table, it is simply created as a unique index.
    On the other hand, when we create a unique key/constraint on the table, Oracle also creates an index to support it, so the same name shows up in all_constraints as well as in all_indexes.
    My question: if an index is created automatically when the unique key/constraint is created, why create a unique key and end up with two objects, when we could create just one object, i.e. a unique index?
    Thanks
    Deepak

    This is only my understanding, not something taken from the documentation.
    A unique key (constraint) needs a unique index to enforce itself.
    Developers and users can make any constraint (unique key, primary key, foreign key, not null, ...) enabled, disabled, or deferrable; a unique key can be enabled, disabled, and made deferrable.
    An index, on the other hand, originally exists for performance; a unique index by itself does not carry the concept of a constraint. An index (non-unique or unique) can be rebuilt, made unusable, and so on, but an index itself cannot be made deferrable.
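    To make the deferrable point concrete, a minimal sketch (the table, index, and constraint names are made up for illustration): a deferrable unique constraint cannot be policed by a unique index, so Oracle enforces it through a non-unique one:

    create table t_def (x number);
    create index t_def_ix on t_def (x);
    alter table t_def add constraint t_def_uq unique (x)
      deferrable initially immediate
      using index t_def_ix;
    -- t_def_ix remains NONUNIQUE in user_indexes, yet duplicate values are
    -- still rejected whenever the constraint is not deferred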

  • Difference between unique constraint and unique index

    1. What is the difference between a unique constraint and a unique index, given that a unique constraint is always backed by an index? Which one is better for performance?
    2. Is a composite index on the three columns x, y, z better, or are independent/separate indexes on x, y, and z better for performance?
    3. It has been very confusing for me to decide which columns to index. I have indexed most foreign key columns; is that a good idea? We do a lot of SELECTs and DML on most of our tables. Is there any query I can run to find out whether indexes are really being used and whether they actually improve performance? I have analyzed my indexes using ANALYZE INDEX index_name VALIDATE STRUCTURE and COMPUTE STATISTICS.

    1. A unique index is part of a unique constraint. Of course you can create a standalone unique index, but there is little point in skipping the logical business view when the same effort achieves both: you create the unique constraint, Oracle creates the unique index for you, and you may specify the index characteristics in the unique constraint.
    2. It depends. You cannot use a composite index when the search condition does not match the whole index key or a leading part of it; with an index on (x, y, z) you cannot use the index for a query on y = 2 alone.
    3. As the old saying in the database arena goes, an index may be good or bad for a table depending on the size of the table, the number of columns, and so on. It is very environment-dependent, and is really part of database design work. Statistics are what Oracle uses to determine the execution plan.
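    On question 3, a minimal sketch of one built-in way to check whether an index is ever used (index monitoring; the index name is just an example):

    alter index idx_emp_deptno monitoring usage;
    -- ... run the normal workload for a while ...
    select index_name, monitoring, used
    from   v$object_usage
    where  index_name = 'IDX_EMP_DEPTNO';
    alter index idx_emp_deptno nomonitoring usage;

    The USED column flips to YES once the optimizer has picked the index for at least one statement since monitoring started; note that v$object_usage only shows indexes owned by the querying schema.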
    Steve
