Auto increment the unique index node

Hello,
I don't know whether this has been discussed before; searching the forums did not turn up anything relevant.
Sometimes you need relational-like functionality of the kind a hybrid relational/XML DB would give you, where you have some flexibility in establishing integrity constraints. To illustrate what I mean, consider the following document:
<?xml version="1.0"?>
<document>
<id>12345876988</id>
<year>2007</year>
<name> Ana </name>
<email> [email protected] </email>
</document>
So let's suppose I need to store millions of documents like that, but I'd like each one to have a unique ID, and I'd like that ID to be generated in an auto-increment fashion. I could do this by counting all the docs in the container and incrementing by one every time I insert, and then making the ID index unique in case there's any accidental repetition. But I don't believe this is the best solution - is there an alternative?
Also, maybe I'd like to express that not only should the ID be unique, but the combination of ID and YEAR should be unique. Is it possible to express this in BDB XML? Again, this would be easy in a relational DB: I would just use ID and YEAR as fields and index them in combination.
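For comparison, the relational version of what I mean is just a composite unique index, something like this (table and column names are only illustrative):
CREATE UNIQUE INDEX doc_id_year_ux ON documents (id, year);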
Thanks in advance.

Hi,
You can use a Berkeley DB Sequence to assign a unique number. Here is a pointer to the documentation for C++ for example:
http://www.oracle.com/technology/documentation/berkeley-db/db/api_cxx/seq_list.html
You can add the sequence database to the same environment you are using for Berkeley DB XML. That will allow the updates to participate in the same transaction, which can be committed or aborted.
> Also, maybe I'd like to express that not only ID should be unique, but the combination of ID and YEAR should be unique. Is it possible to express this in BDB XML?
For this, a common technique is to use Berkeley DB XML metadata. You can create a metadata attribute which represents the concatenation of ID and YEAR. The metadata is stored with the document, but not as part of the document content. You can also create a unique index on that metadata attribute to enforce uniqueness.
Ron

Similar Messages

  • Error while creating the Unique Index of the Primary Key of an Item

    Hi all,
    I have deployed a new item (CO_CONTRACTUNIT_PRODUCT) in my publication. The deployment appears to be successful, as the item can be seen in the repository through the Mobile Manager.
    The problem occurs when I sync my local DB to get the item offline. While synchronizing, the following error appears, both in the sync window and the log file ol_sync.log:
    "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI"
    However, the debug file gives this other error regarding this table.
    ALL_INDEX:CREATE UNIQUE INDEX "TPCO_CONTRACTUNIT_PRODUCT_PK" ON CO_CONTRACTUNIT_PRODUCT (CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID) -5130Error at C:\ADE\omeprod_ol103021\olite\db\build\win\ocapi\..\..\..\src\ocapi\allindexes.cpp line:329 rc:-5130
    Build date Mar 29 2010
    okErr=(table or view %s.%s not found)
    mess=(CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID)
    AddLog(-5130 "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI")
    But the index being created, which gives the error, is the index created automatically with the primary key of the table, so nothing has been modified in it.
    The primary key of the table is created with the three columns that are part of the index that is returning the error.
    As I could not solve the error, I tried to drop and re-create the item in the repository, but no luck. As a last resort, I tried to remove the item from the repository to be able to sync properly again (just like before creating the item), but the error still happens.
    Another weird point is that I have tried creating the item in another publication of another database (but with almost identical items), and the item was downloaded to my local DB without any problem, which makes this problem even more bizarre.
    What can it be?
    Any help would be great!
    Roshni

    Have you tried uninstalling the client and reinstalling it?
    Schema evolution changes are not always handled correctly; please check this thread:
    Modification of publication item into Mobile Server
    I quote from rekounas' instructions:
    If you are just adding a field, you should only have to run the alter publication item API call.
    Here is an old note on schema evolution for different scenarios. The API names used there are now deprecated; use the ConsolidatorManager class and call the method alterPublicationItem("PUBLICATION_ITEM_NAME", "SELECT STMT"):
    A) Add column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Change the Oracle8i/9i database schema (add column)
    4. Create a Java program to call the Consolidator Admin API AlterPublicationItem()
    5. Start Mobile Server
    6. Execute a sync from the client
    7. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    B) Drop column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Delete column of the base table in the Oracle database schema
    4. Create a Java program to call the Consolidator Admin API DropPublicationItem()
    5. Create a Java program to call the Consolidator Admin API CreatePublicationItem() and AddPublicationItem().
    6. Start Mobile Server
    7. Execute a sync from the client
    8. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    C) Change column datatype
    Changing datatypes in a replicated system is not an easy task. You have to follow certain procedures to make it work. Use the DropPublicationItem, CreatePublicationItem and AddPublicationItem methods from the Consolidator Admin API. You must stop/start the Mobile Server listener to refresh the cache.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop/create the column (do not use conversion procedures) at the base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Call CreatePublicationItem() and AddPublicationItem(). Check if the ErrorQueue and InQueue reflect the new column datatype
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. This should drop the old snapshot and recreate the new snapshot. Use MSQL to check
    snapshot definitions.
    D) Drop table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should drop the old snapshot. Use MSQL to check snapshot definitions.
    E) Add table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Add new base table
    4. Call CreatePublicationItem() and AddPublicationItem() method
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should add the new snapshot. Use MSQL to check snapshot definitions.
    F) Changing Primary Keys
    Changing the PK is a severe operation which must be executed manually. A snapshot must be deleted and recreated to propagate the changes to the clients. This causes a full refresh of the snapshot.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop the snapshot using the DropPublicationItem() method
    4. Alter the base table
    5. Call CreatePublicationItem()and AddPublicationItem() methods for the altered table
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. The old snapshot will be replaced by the new snapshot using a full refresh. Use MSQL to check snapshot definitions.
    G) To change a Table Weight
    Follow the procedure below to change the Table Weight parameter. The table weight is used by Mobile Server synchronization to determine the sequence in which client records are applied to the Oracle database.
    1. Run MGP to apply any changes in the InQueue to the Oracle database
    2. Change the table weight using the SetTemplateItemMetadata() method
    3. Add/change the constraint on the base table which reflects the change in table weight
    4. Synchronize
    gl m8

  • Can we change the fields of database unique index in a customised table?

    Hi all..
    I want to know whether we can create, change, or delete the database unique index of a customized table.
    In my case, there is a customized table with 4 primary key fields, with all records maintained through transaction code SM30.
    There is a database unique index maintained for this table on 2 fields. These 2 fields are among the 4 primary key fields of the table. I hope I have made myself clear!
    Now when I try to insert a record into the table, it gives me a short dump (it says duplication of records is not allowed).
    The reason is that the new record I am trying to insert has the same values in the 2 indexed fields as an already existing record, while the other two fields are different, so the combination of the 4 primary key fields as a whole is different.
    Please tell me how I should proceed.
    I also tried to change the unique index, but it asks for some kind of authorization ("You are not authorized to make changes (authorization object S_DEVELOP)"). I am also not sure whether changing the unique index is feasible at all.
    Thanks.

    Hi,
    I don't think you will be able to create a unique index without the primary key fields, so include all the primary key fields in the index field selection and then create the index; otherwise duplication of keys can occur. If you cannot include all the primary key fields, go for a non-unique index instead, where you add the client field and any other fields you wish.

  • Unique Index Error while running the ETL process

    Hi,
    I have installed Oracle BI Applications 7.9.4 and Informatica PowerCenter 7.1.4. I have done all the configuration steps as specified in the Oracle BI Applications Installation and Configuration Guide. While running the ETL process from DAC for the Execution Plan 'Human Resources Oracle 11.5.10', some tasks go to status Failed.
    When I checked the log files for these tasks, I found the following error:
    ANOMALY INFO::: Error while executing : CREATE INDEX:W_PAYROLL_F_ASSG_TMP:W_PRL_F_ASG_TMP_U1
    MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
    W_PRL_F_ASG_TMP_U1
    ON
    W_PAYROLL_F_ASSG_TMP
    (
    INTEGRATION_ID ASC
    ,DATASOURCE_NUM_ID ASC
    ,EFFECTIVE_FROM_DT ASC
    )
    NOLOGGING PARALLEL
    with error java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    EXCEPTION CLASS::: java.lang.Exception
    I found some duplicate rows in the table W_PAYROLL_F_ASSG_TMP for the combination of columns on which it is trying to create the index. Can anyone help me with the following?
    1. Why is it trying to create a unique index on a combination of columns that may not be unique?
    2. Is it a problem with the data in the source database (i.e., duplicate rows in the source system)?
    How do we fix this error? Do we need to delete the duplicate rows from the warehouse table manually and re-run the ETL process, or is there another way to fix the problem?

    This query will identify the duplicate in the Warehouse table preventing the Index from being built:
    select count(*), integration_id, src_eff_from_dt from w_employee_ds group by integration_id, src_eff_from_dt having count(*)>1;
    To get the ETL to finish, issue this delete against the W_EMPLOYEE_DS table:
    delete from w_employee_ds where integration_id = '2' and src_eff_from_dt ='04-JAN-91';
    To fix it so this does not happen again on another load, you need to find the record in the Vision DB; it is in the PER_ALL_PEOPLE_F table. I have a Vision source and this worked:
    select rowid, person_id , LAST_NAME FROM PER_ALL_PEOPLE_F
    where EFFECTIVE_START_DATE = '04-JAN-91';
    ROWID               PERSON_ID  LAST_NAME
    AAAWXJAAMAAAwl/AAL  6272       Kang
    AAAWXJAAMAAAwmAAAI  6272       Kang
    AAAWXJAAMAAAwmAAA4  6307       Lee
    delete from PER_ALL_PEOPLE_F
    where ROWID = 'AAAWXJAAMAAAwl/AAL';

  • Auto-Increment ID for the Sharepoint 2010 List

    Hi,
    I have a requirement for a SharePoint list with a set of fields. One of the fields should store a custom ID, e.g. "2015-001". The number part should auto-increment, i.e. 001, 002, and so on.
    Can someone help me achieve this requirement out of the box?
    I tried using the [ID] field and concatenating to build the value "2015-001", but it didn't work, and I read that the [ID] field does not behave as expected in a calculated column.
    I am aware that SharePoint Designer workflow, JavaScript, and list event receiver solutions are available; I am trying to find out whether any out-of-the-box solution exists.
    Please share your thoughts/suggestions on this requirement.

    Thanks Alex.
    We used InfoPath and were able to achieve the auto-increment requirement.
    There is another requirement:
    Created By and Created should be stamped together in the list. Ex: Field Name: Audit Trails - XXX, 03/06/16/15, 05:08PM.
    I tried creating a field and used a formula to concatenate the Created By and Created fields, but when I submit the form to the SharePoint list, the Audit Trail field is empty.
    I tried Submit/Receive data etc., but no luck.
    Any suggestions or thoughts?

  • Regarding auto increment

    How can I auto-increment a number in my table while inserting data?

    > Good point. If any error occurs during insertion, the sequence will be broken, because even if your transaction fails, the sequence will still generate a new number. But, as a result of that error, the insertion won't take place, thus breaking your continuous chain.
    It is probably a good point (depending on requirements) regardless of that.
    Sequences ARE NOT for CONTINUOUS number generation.
    Sequences ARE for UNIQUE number generation.
    The continuous chain will also break on any rollback (always) and on DB restart (unless you have specified the sequence as NOCACHE, which is generally a silly idea).
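    A quick illustration of the gap behavior (a sketch with made-up names):
    CREATE SEQUENCE demo_seq START WITH 1 INCREMENT BY 1;
    CREATE TABLE demo_t (id NUMBER PRIMARY KEY);
    INSERT INTO demo_t VALUES (demo_seq.NEXTVAL);  -- inserts id = 1
    ROLLBACK;                                      -- insert undone, but 1 stays consumed
    INSERT INTO demo_t VALUES (demo_seq.NEXTVAL);  -- inserts id = 2: unique, not continuous
    COMMIT;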
    Gints Plivna
    http://www.gplivna.eu

  • Auto increment option

    Is there any option to auto-increment values in tables? For example, if we are inserting 100 records, the first record should get 1, the second 2, and so on.

    Hi,
    Use SY-TABIX between LOOP and ENDLOOP,
    or a counter of the form sy-index = sy-index + 1.
    Check this example:
    TABLES:MARA.
    * Selection Screen
    SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
    RANGES:S_MATNR FOR MARA-MATNR.
    SELECTION-SCREEN END OF BLOCK B1.
    INITIALIZATION.
      S_MATNR-LOW = '000000000000000001'.
      S_MATNR-HIGH = '000000000000001000'.
      S_MATNR-SIGN = 'E'.
      S_MATNR-OPTION = 'BT'.
      APPEND S_MATNR.
      CLEAR S_MATNR.
    * Internal Table
      DATA:BEGIN OF ITAB OCCURS 0,
           MATNR LIKE MARA-MATNR,
           ERSDA LIKE MARA-ERSDA,
           ERNAM LIKE MARA-ERNAM,
           NTGEW LIKE MARA-NTGEW,
           END OF ITAB.
    * Start of Selection
    START-OF-SELECTION.
      SELECT MATNR
             ERSDA
             ERNAM
             NTGEW FROM MARA INTO TABLE ITAB WHERE MATNR IN S_MATNR.
      LOOP AT ITAB.
        WRITE:/ SY-TABIX,ITAB-MATNR.
      ENDLOOP.

  • Auto increment trigger

    I have a table with an index column and two other columns; the first column is called index.
    What I would like to do is auto-increment the index column on every insertion. I have read some of the documentation and can't make sense of what I am trying to do. Please point me to an example.
    WBR

    I tried these 3 statements to create an auto-increment column:
    CREATE TABLE TIME_TABLE (
    RECORD_NUM INTEGER NOT NULL,
    DATE_FIELD DATE,
    DESCRIPTION VARCHAR(1000));
    CREATE SEQUENCE TIME_SEQ START WITH 1 INCREMENT BY 1;
    CREATE OR REPLACE
    TRIGGER TIME_TRIGGER BEFORE INSERT ON TIME_TABLE
    FOR EACH ROW
    BEGIN
    SELECT TIME_SEQ.NEXTVAL INTO :NEW.RECORD_NUM FROM DUAL;
    END
    When I execute an insert statement I get the following error:
    An error was encountered performing the requested operation:
    ORA-04098: the trigger ‘TIME.TIME_TRIGGER’ is invalid and failed re-validation.
    So my question is: what is wrong with the three statements? I would appreciate some help.
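    For what it's worth, ORA-04098 usually just means the trigger body failed to compile (SHOW ERRORS TRIGGER TIME_TRIGGER in SQL*Plus shows why). In the statements above, the PL/SQL block is not terminated: END needs a semicolon, and SQL*Plus needs a slash to run the DDL. A sketch of the corrected trigger:
    CREATE OR REPLACE TRIGGER TIME_TRIGGER
    BEFORE INSERT ON TIME_TABLE
    FOR EACH ROW
    BEGIN
      SELECT TIME_SEQ.NEXTVAL INTO :NEW.RECORD_NUM FROM DUAL;
    END;
    /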

  • Difference between unique constraint and unique index

    1. What is the difference between a unique constraint and a unique index, given that a unique constraint is always backed by an index? Which one is better for performance?
    2. Is a composite index on 3 columns x, y, z better, or are independent/separate indexes on the 3 columns better for performance?
    3. It has been very confusing for me to decide which columns to index. I have indexed most foreign key columns - is that a good idea? We do a lot of SELECTs and DML on most of our tables. Is there any query I can run to find out whether indexes are really being used and whether they improve performance? I have analyzed my indexes using ANALYZE INDEX index_name VALIDATE STRUCTURE and COMPUTE STATISTICS.

    1. A unique index is part of a unique constraint. Of course you can create a standalone unique index, but there is no point in skipping the logical business view when it takes the same effort. You create the unique constraint, and Oracle creates the unique index for you. You may specify index characteristics in the unique constraint.
    2. It depends. You can't use a composite index if the search condition does not cover the whole index key or a leading part of it. With an index on (x, y, z), for example, you can't use the index if you query the table on y = 2 alone.
    3. As the old saying in the database arena goes, an index may be good or bad for a table depending on the size of the table, the number of columns in it, etc. It is very environment-dependent; in fact, it is part of database normalization. Statistics are what Oracle uses to determine the execution plan.
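    To make point 1 concrete, here is a sketch (table and column names are made up) of the two forms:
    -- Standalone unique index
    CREATE UNIQUE INDEX emp_email_ux ON emp (email);
    -- Unique constraint: Oracle creates an index to enforce it, and the
    -- USING INDEX clause lets you specify the index characteristics
    ALTER TABLE emp ADD CONSTRAINT emp_badge_uq UNIQUE (badge_no)
      USING INDEX TABLESPACE users;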
    Steve

  • UNIQUE INDEX and PRIMARY KEYS

    Hi Friends,
    I am confused about primary keys.
    What is the purpose of this key again? I know it is used for unique constraints.
    Suppose I have a table with two (2) columns, each indexed as unique.
    Then both are candidates for the primary key, right?
    So why do I need a primary key when I already have 2 uniquely indexed columns?
    Thanks a lot

    A UNIQUE index creates a constraint such that all values in the index must be distinct. An error occurs if you try to add a new row with a key value that matches an existing row. This constraint does not apply to NULL values, except in the BDB storage engine; for other engines, a UNIQUE index allows multiple NULL values in columns that can contain NULL.
    The differences between the two are:
    1. Column(s) that make up the Primary Key of a table cannot be NULL, since by definition the Primary Key uniquely identifies the record in the table. The column(s) that make up a unique index can be nullable. A note worth mentioning here is that different RDBMSs treat this differently: while SQL Server and DB2 do not allow more than one NULL value in a unique index column, Oracle allows multiple NULL values. That is one of the things to look out for when designing/developing/porting applications across RDBMSs.
    2. There can be only one Primary Key defined on a table, whereas you can have many unique indexes defined on the table (if needed).
    3. Also, in the case of SQL Server, if you go with the default options then a Primary Key is created as a clustered index while the unique index (constraint) is created as a non-clustered index. This is just the default behavior though and can be changed at creation time, if needed.
    So, if a unique index is defined on NOT NULL column(s), it is essentially the same as the Primary Key and can be treated as an alternate key, meaning it can also serve to identify a record uniquely in the table.
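    A small sketch of the Oracle NULL behavior described above (hypothetical table and names):
    CREATE TABLE person (
      person_id NUMBER NOT NULL,
      passport  VARCHAR2(20),          -- optional, but unique when present
      CONSTRAINT person_pk PRIMARY KEY (person_id)
    );
    CREATE UNIQUE INDEX person_passport_ux ON person (passport);
    INSERT INTO person VALUES (1, NULL);
    INSERT INTO person VALUES (2, NULL);  -- OK in Oracle: all-NULL keys are not indexed
    INSERT INTO person VALUES (2, 'X1');  -- fails: duplicate person_id (ORA-00001)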

  • Goldengate expects a column that is not in the unique constraint

    I do not know GoldenGate. I am working with a GoldenGate engineer who doesn't really know Oracle, and I am the DBA supporting this. This is the issue we are having; please bear with me if I have trouble explaining it.
    I am pulling from Oracle and loading into Teradata. I confirmed that the unique index is correct in Teradata (I don't have access; I asked).
    Oracle 10.2.0.5
    GoldenGate: 11.1.1.0.29
    Error (the schema name listed in the error is from Teradata, so TERADATA_SCHEMA represents that):
    Key column my_id is missing from update on table TERADATA_SCHEMA.MYTABLE
    Missing 1 key columns in update for table TERADATA_SCHEMA.MYTABLE
    Below is a CREATE TABLE statement. I have altered the table and column names, but the structure is the same.
    It does NOT have a primary key; it has a unique key. I am not allowed to add a primary key.
    UNIQUE INDEX: UNIQUE_ID
    When we test an update, GoldenGate expects MY_ID to be sent as well, and GoldenGate abends.
    The DDL below includes the partitioning/subpartitioning, the unique index, and the supplemental logging command that GoldenGate runs.
    I have also run the following 2 commands to turn on supplemental logging:
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    ALTER SYSTEM SWITCH LOGFILE;
    CREATE TABLE MYTABLE (
        "UNIQUE_ID"        NUMBER(10,0) NOT NULL ENABLE,
        "MY_ID"            NUMBER(10,0),
        "MYNUMBER"         NUMBER(8,0),
        "TOTALNUMBER"      NUMBER(8,0),
        "USED"             NUMBER(8,0),
        "LOTSUSED"         NUMBER(8,0),
        "LAST_UPDATE_USER" VARCHAR2(30 BYTE),
        "LAST_UPDATE_DATE" DATE,
        "MYDATESTAMP"      DATE,
        "MYTYPE"           NUMBER(2,0) NOT NULL ENABLE,
        "MYTHING"          CHAR(1 BYTE) NOT NULL ENABLE
    )
    PARTITION BY RANGE ("MYTYPE")
    SUBPARTITION BY LIST ("MYTHING")
    SUBPARTITION TEMPLATE (
        SUBPARTITION "MYTHING_X" VALUES ('X'),
        SUBPARTITION "MYTHING_Z" VALUES ('Z')
    ) (
        PARTITION "MYTHING1" VALUES LESS THAN (2),
        PARTITION "MYTHING2" VALUES LESS THAN (3),
        PARTITION "MYTHING3" VALUES LESS THAN (4),
        PARTITION "MYTHING4" VALUES LESS THAN (5),
        PARTITION "MYTHING5" VALUES LESS THAN (6),
        PARTITION "MYTHING6" VALUES LESS THAN (7),
        PARTITION "MYTHING7" VALUES LESS THAN (8),
        PARTITION "MYTHING8" VALUES LESS THAN (9),
        PARTITION "MYTHING_OTHER" VALUES LESS THAN (MAXVALUE)
    );
    ALTER TABLE MYTABLE ADD SUPPLEMENTAL LOG GROUP "MYGROUP_555" ("UNIQUE_ID") ALWAYS;
    CREATE UNIQUE INDEX MY_IND ON MYTABLE ("UNIQUE_ID");

    GoldenGate expects a primary key, a unique key, or a list of key columns.
    The addition of supplemental logging for the table can be done via SQL, but typically, it is done via the GGSCI interface:
    GGSCI 4> dblogin userid <your DB GoldenGate user>, password <your password>
    GGSCI 5> add trandata schema_owner.table_name
    How Oracle GoldenGate determines the kind of row identifier to use
    Unless a KEYCOLS clause is used in the TABLE or MAP statement, Oracle GoldenGate selects a
    row identifier to use in the following order of priority:
    1. Primary key
    2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based
    columns, and no nullable columns
    3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based
    columns, but can include nullable columns
    4. If none of the preceding key types exist (even though there might be other types of keys
    defined on the table) Oracle GoldenGate constructs a pseudo key of all columns that
    the database allows to be used in a unique key, excluding virtual columns, UDTs,
    function-based columns, and any columns that are explicitly excluded from the Oracle
    GoldenGate configuration.
    NOTE If there are other, non-usable keys on a table or if there are no keys at all on the
    table, Oracle GoldenGate logs an appropriate message to the report file.
    Constructing a key from all of the columns impedes the performance of Oracle
    GoldenGate on the source system. On the target, this key causes Replicat to use
    a larger, less efficient WHERE clause.
    How to specify your own key for Oracle GoldenGate to use
    If a table does not have one of the preceding types of row identifiers, or if you prefer those
    identifiers not to be used, you can define a substitute key if the table has columns that
    always contain unique values. You define this substitute key by including a KEYCOLS clause
    within the Extract TABLE parameter and the Replicat MAP parameter. The specified key will
    override any existing primary or unique key that Oracle GoldenGate finds.
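    In this case, since UNIQUE_ID is the unique key, a KEYCOLS clause would look something like this (parameter-file sketch; SRC_SCHEMA is an assumed name):
    -- Extract parameter file (source)
    TABLE SRC_SCHEMA.MYTABLE, KEYCOLS (UNIQUE_ID);
    -- Replicat parameter file (target)
    MAP SRC_SCHEMA.MYTABLE, TARGET TERADATA_SCHEMA.MYTABLE, KEYCOLS (UNIQUE_ID);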
    "I have altered table and column names. but the structure is the same."
    What column name did you alter?
    The source table and target table are either identical, or there must be a source definitions file created on the source and copied over to the target and referenced in the Replicat.
    I don't see why my_id would cause a problem (based on what you posted), unless the tables are different.

  • Using UNUSABLE on Unique Index gives "initially in unusable state" error

    I am doing the following:
    1. Truncate the table
    2. Alter the PK index disable
    3. Alter the unique index unusable
    4. Insert /*+ APPEND NOLOGGING */ into my table
    My problem is, I get an "ORA-26026: unique index ... initially in unusable state" error when I try to do the insert. If I make the unique index non-unique, it isn't a problem. If I drop the index and recreate it, it's not a problem - so uniqueness itself doesn't appear to be the issue. Can you not use UNUSABLE with a unique index? I couldn't find any reference to this in the 10g docs.
    I am using 10.1.0.4.
    Thanks
    Richard

    26026, 0000, "unique index %s.%s initially in unusable state"
    //* Cause: A unique index is in IU state (a unique index cannot have
    //* index maintenance skipped via SKIP_UNUSABLE_INDEXES).
    //* Action: Either rebuild the index or index partition, or use
    //* SKIP_INDEX_MAINTENANCE if the client is SQL*Loader.

  • Using an DB auto-increment feature on a PK

    Hi,
    I work with an Oracle target DB, and I have a trigger which auto-increments the ID field (it's my PK).
    In my interface, I map all the fields and leave my ID field empty so that Oracle fills it in.
    But when I run the interface, I get this error message at the step "insert flow into $ table":
    936 : 42000 : java.sql.SQLException : ORA-00936: missing expression
    If I change my PK, it works, so I think the issue is there. Can I easily solve this?
    Thanks

    Thanks for your answer, but it didn't work any better.
    I solved the issue by using the IKM SQL Control Append in the flow tab and disabling the PK constraint in the check tab, with all other options left at their defaults.
    I don't know if it's the best solution, but it works...

  • ORA-01502 error in case of unusable unique index and bulk dml

    Hi, all.
    The DB is 11.2.0.3 on a Linux machine.
    I made a unique index unusable and then issued DML on the table.
    However, Oracle gave me an ORA-01502 error.
    In order to avoid the ORA-01502 error, do I have to drop the unique index, do the bulk DML, and then recreate the index?
    Or is there any other solution that does not require re-creating the unique index?
    create table hoho.abcde as
    select level col1 from dual connect by level <= 1000;
    10:09:55 HOHO@PD1MGD>create unique index hoho.abcde_dx1 on hoho.abcde (col1);
    Index created.
    10:10:23 HOHO@PD1MGD>alter index hoho.abcde_dx1 unusable;
    Index altered.
    Elapsed: 00:00:00.03
    10:11:27 HOHO@PD1MGD>delete from hoho.abcde where rownum < 11;
    delete from hoho.abcde where rownum < 11
    ERROR at line 1:
    ORA-01502: index 'HOHO.ABCDE_DX1' or partition of such index is in unusable state
    Thanks in advance.
    Best Regards.

    Hi, all.
    The following is from "http://docs.oracle.com/cd/E14072_01/server.112/e10595/indexes002.htm#CIHJIDJG".
    Is there anyone who can show me a tip to avoid the following without dropping and re-creating the unique index?
    •DML statements terminate with an error if there are any unusable indexes that are used to enforce the UNIQUE constraint.
    Unusable indexes
    An unusable index is ignored by the optimizer and is not maintained by DML. One reason to make an index unusable is if you want to improve the performance of bulk loads. (Bulk loads go more quickly if the database does not need to maintain indexes when inserting rows.) Instead of dropping the index and later recreating it, which requires you to recall the exact parameters of the CREATE INDEX statement, you can make the index unusable, and then just rebuild it. You can create an index in the unusable state, or you can mark an existing index or index partition unusable. The database may mark an index unusable under certain circumstances, such as when there is a failure while building the index. When one partition of a partitioned index is marked unusable, the other partitions of the index remain valid.
    An unusable index or index partition must be rebuilt, or dropped and re-created, before it can be used. Truncating a table makes an unusable index valid.
    Beginning with Oracle Database 11g Release 2, when you make an existing index unusable, its index segment is dropped.
    The functionality of unusable indexes depends on the setting of the SKIP_UNUSABLE_INDEXES initialization parameter.
    When SKIP_UNUSABLE_INDEXES is TRUE (the default), then:
    •DML statements against the table proceed, but unusable indexes are not maintained.
    •DML statements terminate with an error if there are any unusable indexes that are used to enforce the UNIQUE constraint.
    •For non-partitioned indexes, the optimizer does not consider any unusable indexes when creating an access plan for SELECT statements. The only exception is when an index is explicitly specified with the INDEX() hint.
    •For a partitioned index where one or more of the partitions are unusable, the optimizer does not consider the index if it cannot determine at query compilation time if any of the index partitions can be pruned. This is true for both partitioned and non-partitioned tables. The only exception is when an index is explicitly specified with the INDEX() hint.
    When SKIP_UNUSABLE_INDEXES is FALSE, then:
    •If any unusable indexes or index partitions are present, any DML statements that would cause those indexes or index partitions to be updated are terminated with an error.
    •For SELECT statements, if an unusable index or unusable index partition is present but the optimizer does not choose to use it for the access plan, the statement proceeds. However, if the optimizer does choose to use the unusable index or unusable index partition, the statement terminates with an error.
    Thanks in advance.
    Best Regards.
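    One tip that avoids dropping and re-creating the unique index (a sketch against the test case above, not tested): enforce uniqueness through a constraint on a non-unique index. A non-unique index may stay unusable during DML, and the constraint can be disabled for the bulk operation:
    CREATE INDEX hoho.abcde_nx1 ON hoho.abcde (col1);   -- non-unique
    ALTER TABLE hoho.abcde ADD CONSTRAINT abcde_uq UNIQUE (col1) USING INDEX hoho.abcde_nx1;
    ALTER TABLE hoho.abcde DISABLE CONSTRAINT abcde_uq KEEP INDEX;
    ALTER INDEX hoho.abcde_nx1 UNUSABLE;
    DELETE FROM hoho.abcde WHERE ROWNUM < 11;           -- no ORA-01502 with SKIP_UNUSABLE_INDEXES=TRUE
    ALTER INDEX hoho.abcde_nx1 REBUILD;
    ALTER TABLE hoho.abcde ENABLE CONSTRAINT abcde_uq;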

  • Auto-Increment a String of data

    OK, I seem to be going round and round with the same problem.
    I have a UDF that contains 12 digits, but the first 6 digits will always be the same.
    The next 5 digits are the set of digits I need to auto-increment.
    The last digit is a check digit that is pre-determined.
    So I need assistance with a formatted search that looks at just those 5 digits and auto-increments them.
    The problem is that once the UPC reaches 12 digits, the search starts updating only the 12th check digit, and I do not want the 12th digit touched at all.
    I started out with 84573410001.
    After I made an update to that number, the final value of the UDF is 845734100011.
    When a user uses the formatted search, the next value it should produce is 84573410002.
    This is the query I got from the forum a few days ago, and it works as long as there are only 11 digits in the UDF; once I add the 12th digit, it starts to auto-increment the 12th digit.
    Thanks,
    Craig

    That was the posting I did the first time around, and the query:
    SELECT str(CAST(MAX(T.U_UPC) as numeric)+1,12)
    FROM RDR1 T
    works great when there are only 11 digits to increment, but I want to auto-increment only a 5-digit string within the value.
    First string (will always be the same): 845734
    Second string (THIS IS THE SET OF DIGITS I WANT TO AUTO-INCREMENT): starts at 10001 and goes up.
    Third string: 1 digit that is updated based on a set of values I built into a user-defined table, added to the first 2 strings based on a comparison.
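    One way to increment only positions 7-11 (a sketch based on the query above; it assumes the fixed prefix 845734 and leaves the check digit to your user-defined table lookup):
    SELECT '845734'
         + RIGHT('00000' + CAST(MAX(CAST(SUBSTRING(T.U_UPC, 7, 5) AS INT)) + 1 AS VARCHAR(5)), 5)
    FROM RDR1 T
    WHERE LEFT(T.U_UPC, 6) = '845734'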
