Hi, I have created a unique index on a table on 3 columns.

Hi, I have created a unique index on a table on 3 columns. I want to know: when I have 2 records which contain the same values in 2 fields and the 3rd field contains a null, will the unique index allow me to insert these records?

Robert Angel wrote:
This must be one time when null = null. ;)
regards,
Robert.

Not really, it is more the case that the non-null columns need to be unique. Your second attempt failed because there was already an index entry with 'a', 'b', and the lack of a value for column c gave Oracle no way to differentiate between the two rows, so they are not unique.
A subtle, but conceptually important difference. :-)
John
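
A quick way to see the behaviour John describes (a minimal sketch with made-up table and column names):

create table t (a varchar2(10), b varchar2(10), c varchar2(10));
create unique index t_ux on t (a, b, c);

insert into t values ('a', 'b', null);    -- succeeds
insert into t values ('a', 'b', null);    -- fails with ORA-00001: the non-null parts ('a','b') collide
insert into t values (null, null, null);  -- succeeds: an entirely null key is not stored in the index at all
insert into t values (null, null, null);  -- succeeds again, for the same reason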

Similar Messages

  • Creating a unique index instead of using a primary key index

    Hi ,
    I heard in a debate that sometimes it's better to create a unique index on a column and use it instead of a primary key index in Oracle. I couldn't properly understand the reason.
    Can anyone please help me with this topic if it is a valid one.
    Thanks in advance

    Hi,
    They are definitely NOT identical.
    1. A unique key can have NULL values, whereas a primary key can't.
    2. A primary key is fundamentally a key that does not change; updating a primary key is not a good idea.
    SQL> drop table test;
    Table dropped.
    SQL> create table test ( a number(2));
    Table created.
    SQL> ed
    Wrote file afiedt.buf
      1* create unique index test_idx on test(a)
    SQL> /
    Index created.
    SQL> ed
    Wrote file afiedt.buf
      1* insert into test values(NULL)
    SQL> /
    1 row created.
    SQL> drop table test;
    Table dropped.
    SQL>
    SQL> create table test ( a number(2) primary key);
    Table created.
    SQL> insert into test values(NULL);
    insert into test values(NULL)
    ERROR at line 1:
    ORA-01400: cannot insert NULL into ("HR"."TEST"."A")
    SQL>
    Cheers,
    Edited by: Avinash Tripathi on Nov 24, 2009 11:17 AM

  • Error while creating the Unique Index of the Primary Key of an Item

    Hi all,
    I have deployed a new item (CO_CONTRACTUNIT_PRODUCT) in my publication. The deploy appears to be successful, as the item can be seen in the repository through the Mobile Manager.
    The problem occurs when I sync my local DB to get the item offline. While synchronizing, the following error appears, both in the sync window and in the log file ol_sync.log.
    "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI"
    However, the debug file gives this other error regarding this table.
    ALL_INDEX:CREATE UNIQUE INDEX "TPCO_CONTRACTUNIT_PRODUCT_PK" ON CO_CONTRACTUNIT_PRODUCT (CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID) -5130Error at C:\ADE\omeprod_ol103021\olite\db\build\win\ocapi\..\..\..\src\ocapi\allindexes.cpp line:329 rc:-5130
    Build date Mar 29 2010
    okErr=(table or view %s.%s not found)
    mess=(CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID)
    AddLog(-5130 "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI")
    But the index that is being created and is giving the error is the index created automatically with the Primary Key of the table, and so nothing has been modified in that.
    The primary key of the table is created with the three columns that are part of the index that is returning the error.
    As I could not solve the error, I tried to drop and re-create the item in the repository, but no luck. As a last option, I tried to remove the item from the repository to be able to sync properly again (just like before creating the item), but the error still happens.
    Another weird point is that I have tried creating the item in another publication of another database (but with almost equal items), and the item was downloaded to my local DB without any problem, which makes this problem even more bizarre.
    What can it be?
    Any help would be great!
    Roshni

    have you tried uninstalling the client and reinstalling it?
    schema evolution changes are not always handled correctly; please check this thread:
    Modification of publication item into Mobile Server
    i quote from rekounas instructions:
    If you are just adding a field, you should only have to run the alter publication item API call.
    Here is an old note on schema evolution on different scenarios. The names for the APIs that they use are now deprecated. Use the ConsolidatorManager class and call the method alterPublicationItem("PUBLICATION_ITEM_NAME", "SELECT STMT") :
    A) Add column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Change the Oracle8i/9i database schema (add column)
    4. Create a Java program to call the Consolidator Admin API AlterPublicationItem()
    5. Start Mobile Server
    6. Execute a sync from the client
    7. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    B) Drop column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Delete column of the base table in the Oracle database schema
    4. Create a Java program to call the Consolidator Admin API DropPublicationItem()
    5. Create a Java program to call the Consolidator Admin API CreatePublicationItem() and AddPublicationItem().
    6. Start Mobile Server
    7. Execute a sync from the client
    8. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    C) Change column datatype
    Changing datatypes in a replicated system is not an easy task. You have to follow certain procedures in order to make it work. Use the DropPublicationItem, CreatePublicationItem and AddPublicationItem methods from the Consolidator Admin API. You must stop/start the Mobile Server listener to refresh the cache.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop/create the column (do not use conversion procedures) at the base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Call CreatePublicationItem() and AddPublicationItem(). Check if the ErrorQueue and InQueue reflect the new column datatype
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. This should drop the old snapshot and recreate the new snapshot. Use MSQL to check
    snapshot definitions.
    D) Drop table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should drop the old snapshot. Use MSQL to check snapshot definitions.
    E) Add table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Add new base table
    4. Call CreatePublicationItem() and AddPublicationItem() method
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should add the new snapshot. Use MSQL to check snapshot definitions.
    F) Changing Primary Keys
    Changing the PK is a severe operation which must be executed manually. A snapshot must be deleted and recreated to propagate the changes to the clients. This causes a full refresh on this snapshot.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop the snapshot using the DropPublicationItem() method
    4. Alter the base table
    5. Call CreatePublicationItem()and AddPublicationItem() methods for the altered table
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. The old snapshot will be replaced by the new snapshot using a full refresh. Use MSQL to check snapshot definitions.
    G) To Change a Table Weight =>
    Follow the procedure below to change the Table Weight parameter. The table weight is used by the Mobile Server/Synchronization to determine the sequence in which client records are applied to the Oracle database.
    1. Run MGP to apply any changes in the InQueue to the Oracle database
    2. Change table weight using SetTemplateItemMetadata() method
    3. Add/change the constraint on the base table which reflects the change in table weight
    4. Synchronize
    gl m8

  • Spatial index on table with object-column (and inheritance)

    Hi!
    Is it possible to create a spatial index on a table with an object-column (and inheritance) like this:
    CREATE OR REPLACE TYPE feature_type AS OBJECT (
    shape MDSYS.SDO_GEOMETRY
    ) NOT FINAL;
    CREATE OR REPLACE TYPE building_type UNDER feature_type (
    name VARCHAR2(50)
    );
    CREATE TABLE features ( no NUMBER PRIMARY KEY, feature feature_type);
    [...] user_sdo_geom_metadata [...]
    Then
    CREATE INDEX features_idx ON features(feature.shape) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    throws:
    ORA-01418: specified index does not exist
    Curious! :)
    If I define feature_type with "NOT FINAL" option but without subtypes, I get (create index):
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01460: unimplemented or unreasonable conversion requested
    So I think that, besides object tables, inheritance also isn't supported with Oracle Spatial!?
    Thanks,
    Michael
    ps:
    We use Oracle9i Enterprise Edition Release 9.0.1.4.0 / Linux.
    Does Oracle9i 9.2 solve these problems?

    Hi
    You'll need to be on 9.2 to do this....
    Dom
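
    For reference, on 9.2 the usual pattern is to register the nested attribute in USER_SDO_GEOM_METADATA using the dotted COLUMN.ATTRIBUTE notation and then create the index; a rough sketch only, where the dimension bounds and tolerance are placeholders:

    insert into user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
    values ('FEATURES', 'FEATURE.SHAPE',
            mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('X', 0, 1000, 0.005),
              mdsys.sdo_dim_element('Y', 0, 1000, 0.005)),
            null);

    create index features_idx on features (feature.shape)
      indextype is mdsys.spatial_index;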

  • HT4236 I have created a unique folder on my windows PC for syncing with my iphone (pics copied into folder).  As part of syncing my iphone with that windows pc folder, will the photos be deleted from my hard drive?

    When I sync iphone 4s with a windows pc folder, will the sync operation delete the photo from the harddrive?

    No. The photos will remain on your PC. They will be copied to the iPad. If you manually delete a photo from the folder on your computer and sync again, that photo will be removed from the iPad.

  • How to create a form on a table with 3 columns for a PK

    Hi All,
    We have a table that has 3 columns that form the primary key, and I would like to create a form based on that table; unfortunately, on the 'Create Form Page' there are only 2 options that identify the first and second PK columns.
    Is there a way I could add the third PK column?
    Kind regards
    Mel

    Maybe this can help:
    http://apex.oracle.com/pls/otn/f?p=31517:157
    using an instead-of trigger on a view (a rough sketch of that approach follows after this message).
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://apex.oracle.com/pls/otn/f?p=31517:1
    -------------------------------------------------------------------
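
    A rough sketch of the instead-of trigger approach (table, view and column names are hypothetical):

    create or replace view my_table_v as
      select key1, key2, key3, some_col
      from   my_table;

    create or replace trigger my_table_v_ioi
      instead of insert on my_table_v
      for each row
    begin
      insert into my_table (key1, key2, key3, some_col)
      values (:new.key1, :new.key2, :new.key3, :new.some_col);
    end;
    /

    The form can then be based on MY_TABLE_V, with the trigger handling the actual DML against the three-column key.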

  • How to create a report with data from a table and some columns as function results?

    Hi,
    How can I create an APEX report region with some columns returned from a table and the other columns being the results of PL/SQL functions?
    for example , I want to create a report like that:
    device last_date error_msg stop/start
    kodak1 06/04/08 null >>
    kodak2 08/03/08 good msg --^--
    kodak3 08/04/08 err msg >>
    3 rows returned
    where the first 3 columns are data returned from the table and the fourth column is the result of a PL/SQL function (returning, for example, false), and based on that I want to display a button of start ( >> in this example ) or stop ( --^-- in this example)
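
    One way is to call the function directly in the report region's source query and render the fourth column accordingly; a sketch with hypothetical names:

    select device,
           last_date,
           error_msg,
           get_device_status(device) as stop_start   -- hypothetical PL/SQL function
    from   devices

    The stop_start column can then be formatted in the report column attributes (for example as a link or image) depending on the returned value.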

    Thomas,
    There is no problem here -- this is a fully supported scenario.
    1. Bind Table dataSource to Customers node.
    2. Bind individual cell editors to any attribute of Customer or of any nested node like Address; say, create a column with InputField as its editor, then for the "value" property select Customer.Address.Street.
    Your nested nodes (like Address) must be non-singleton; set singleton=false on the context designer tab.
    Valery Silaev
    EPAM Systems
    http://www.NetWeaverTeam.com

  • Unique Index Error while running the ETL process

    Hi,
    I have installed Oracle BI Applications 7.9.4 and Informatica PowerCenter 7.1.4. I have done all the configuration steps as specified in the Oracle BI Applications Installation and Configuration Guide. While running the ETL process from DAC for Execution Plan 'Human Resources Oracle 11.5.10', some tasks go to status Failed.
    When I checked the log files for these tasks, I found the following error
    ANOMALY INFO::: Error while executing : CREATE INDEX:W_PAYROLL_F_ASSG_TMP:W_PRL_F_ASG_TMP_U1
    MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
    W_PRL_F_ASG_TMP_U1
    ON
    W_PAYROLL_F_ASSG_TMP
    INTEGRATION_ID ASC
    ,DATASOURCE_NUM_ID ASC
    ,EFFECTIVE_FROM_DT ASC
    NOLOGGING PARALLEL
    with error java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    EXCEPTION CLASS::: java.lang.Exception
    I found some duplicate rows in the table W_PAYROLL_F_ASSG_TMP with the combination of the columns on which it is trying to create the index. Can anyone give me information on the following?
    1. Why is it trying to create the unique index on a combination of columns which may not be unique?
    2. Is it a problem with the data in the source database (i.e. because of duplicate rows in the source system)?
    How do we need to fix this error? Do we need to delete the duplicate rows from the table in the data warehouse manually and re-run the ETL process, or is there any other way to fix the problem?

    This query will identify the duplicate in the Warehouse table preventing the Index from being built:
    select count(*), integration_id, src_eff_from_dt from w_employee_ds group by integration_id, src_eff_from_dt having count(*)>1;
    To get the ETL to finish issue this delete to the W_EMPLOYEE_DS table:
    delete from w_employee_ds where integration_id = '2' and src_eff_from_dt ='04-JAN-91';
    To fix it so this does not happen again on another load you need to find the record in the Vision DB, it is in the PER_ALL_PEOPLE_F table. I have a Vision source and this worked:
    select rowid, person_id , LAST_NAME FROM PER_ALL_PEOPLE_F
    where EFFECTIVE_START_DATE = '04-JAN-91';
    ROWID               PERSON_ID  LAST_NAME
    AAAWXJAAMAAAwl/AAL  6272       Kang
    AAAWXJAAMAAAwmAAAI  6272       Kang
    AAAWXJAAMAAAwmAAA4  6307       Lee
    delete from PER_ALL_PEOPLE_F
    where ROWID = 'AAAWXJAAMAAAwl/AAL';
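
    The same duplicate check can also be pointed at the table named in the error, using the columns of the failing index (a sketch):

    select integration_id, datasource_num_id, effective_from_dt, count(*)
    from   w_payroll_f_assg_tmp
    group  by integration_id, datasource_num_id, effective_from_dt
    having count(*) > 1;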

  • Unique Index Created Automatically on FK Columns

    Referencing the following document under Forward Engineering, http://www.oracle.com/technetwork/developer-tools/datamodeler/newfeatures30-224506.html
    Unique index is created for One-to-one relationships defined in the Logical model if there is no PK/UK constraint or unique index defined over foreign key columns.
    Why are unique indexes created automatically for foreign key columns? Should "lookups" not be defined in the logical model? I don't understand why it's designed to behave that way.

    > Why are unique indexes created automatically for foreign key columns?
    They are created automatically only for 1:1 relationships, in order to support that business rule.
    > Should "lookups" not be defined in the logical model?
    Can you elaborate on that?
    Philip

  • Can a unique index be created on a read-only cache group

    Hi
    Can a unique index be created on a read-only cache group?
    Regards
    Siva Kumar

    No, I do not think so. Creating a unique index could cause autorefresh operations to fail if the data being refreshed contains duplicate values that would not be allowed by the index. You can create regular indexes on a table in a readonly cache group.
    Chris
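
    A minimal sketch of the allowed case (schema, table and column names are made up):

    create index cust_name_ix on myschema.customers (last_name);

    whereas CREATE UNIQUE INDEX against a table in a read-only cache group is the case to avoid, for the autorefresh reason given above.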

  • Create unique index from duplicate rows

    Dear
    I created an index like
    create index tablename_idx on tablename (FIELD1,FIELD2);
    It should be
    create unique index tablename_idx on tablename (FIELD1,FIELD2,FIELD3);
    How can I delete those duplicate records and create the UNIQUE index ?
    Thanks and regards
    Fahmed

    > How can I delete those duplicate records
    [url http://forums.oracle.com/forums/search.jspa?threadID=&q=delete+duplicate+records&objID=f75&dateRange=all&userID=&numResults=15]http://forums.oracle.com/forums/search.jspa?threadID=&q=delete+duplicate+records&objID=f75&dateRange=all&userID=&numResults=15
    Regards,
    Rob.
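
    One common pattern, using the column names from the question (a sketch only; verify which rows should be kept before deleting anything):

    -- keep one row from each duplicate group
    delete from tablename t
    where  t.rowid not in (select min(t2.rowid)
                           from   tablename t2
                           group  by t2.field1, t2.field2, t2.field3);

    -- drop the old two-column index, then build the unique one
    drop index tablename_idx;
    create unique index tablename_idx on tablename (field1, field2, field3);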

  • Difference between Unique key and Unique index

    Hi All,
    I've got confused about the difference between a unique index and a unique key on a table.
    When we create a unique index on a table, it is created as a unique index.
    On the other hand, if we create a unique key/constraint on the table, Oracle also creates an index for it, so I can find the same-named object in all_constraints as well as in all_indexes.
    My question is: if during creation of a unique key/constraint an index is automatically created, why create a unique key and end up with two objects, when we can create only one object, i.e. a unique index?
    Thanks
    Deepak

    This is only my understanding and is not according to any documentation; it is as follows.
    The unique key (constraint) needs a unique index to enforce the constraint itself.
    Developers and users can make any constraint (unique key, primary key, foreign key, not null ...) enabled, disabled or deferrable; a unique key can be enabled, disabled and deferred.
    On the other hand, an index is originally there for performance; a unique index by itself doesn't carry the concept of a constraint. An index (non-unique or unique) can be rebuilt, enabled, disabled etc., but I don't think an index can be made deferrable.
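
    A sketch of that difference (hypothetical table names):

    -- the constraint form can be enabled, disabled or deferred
    create table t1 (a number);
    alter table t1 add constraint t1_uk unique (a) deferrable initially immediate;

    -- the index-only form enforces uniqueness too, but there is no constraint to defer or disable
    create table t2 (a number);
    create unique index t2_ux on t2 (a);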

  • Difference between unique constraint and unique index

    1. What is the difference between a unique constraint and a unique index, when a unique constraint is always indexed? Which one is better in this case for performance?
    2. Is a composite index on 3 columns x, y, z better, or is having independent/separate indexes on the 3 columns x, y, z better for performance?
    3. It has been very confusing for me to decide which columns to index. I have indexed most foreign-key columns; is that a good idea? We do a lot of selects and DML on most of our tables. Is there any query that I can run to find out if indexes are really being used and if they are improving performance? I have analyzed and computed my indexes using ANALYZE INDEX index_name VALIDATE STRUCTURE and COMPUTE STATISTICS.

    1. A unique index is part of a unique constraint. Of course you can create a standalone unique index, but there is no point in skipping the logical business view if the same effort achieves both.
    You create the unique constraint; Oracle creates the unique index for you. You may specify the index characteristics in the unique constraint.
    2. It depends. You can't utilize a composite index if the search condition does not cover the whole key or a leading part of it. For example, you can't utilize your index on (x, y, z) if you query the table for y=2 alone (see the small sketch after this reply).
    3. As the old saying in the database arena goes, an index may be good or bad for a table depending on the size of the table, the number of columns in the table, etc. It is very environment-dependent; in fact, it is part of database normalization. Statistics are what Oracle uses to determine the execution plan.
    Steve
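
    To illustrate point 2 (a sketch; whether the optimizer actually uses the index also depends on statistics, and later releases can sometimes fall back to an index skip scan):

    create table t (x number, y number, z number, payload varchar2(30));
    create index t_xyz_ix on t (x, y, z);

    select * from t where x = 1;            -- can use the index (leading column)
    select * from t where x = 1 and y = 2;  -- can use the index (leading columns)
    select * from t where y = 2;            -- cannot do a normal range scan: leading column x is missing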

  • Auto increment the unique index node

    Hello,
    I don't know if this has already been discussed before - searching the forums did not return anything interesting.
    Sometimes you need to have some relational-like functionalities that you would normally have in a hybrid relational/xml DB, where you could have some flexibility in establishing integrity conditions. To illustrate what I mean, consider the following document:
    <?xml version="1.0"?>
    <document>
    <id>12345876988</id>
    <year>2007</year>
    <name> Ana </name>
    <email> [email protected] </email>
    </document>
    So let's suppose I need to store millions of documents like that - but I'd like them to have a unique ID, and I'd like this ID to be generated in an auto-increment fashion. I can do this by counting all docs in the container and then incrementing by one every time I insert. Then I'd set the ID index to be unique, in case there's any accidental repetition. But I believe this is not the best solution - is there an alternative to this?
    Also, maybe I'd like to express that not only the ID should be unique, but the combination of ID and YEAR should be unique. Is it possible to express this in BDB XML? Again, this would be easy in a relational DB; I would just use ID and YEAR as fields and index them in combination.
    Thanks in advance.

    Hi,
    You can use a Berkeley DB Sequence to assign a unique number. Here is a pointer to the documentation for C++ for example:
    http://www.oracle.com/technology/documentation/berkeley-db/db/api_cxx/seq_list.html
    You can add the sequence database to the same environment you are using for Berkeley DB XML. That will allow the updates to participate in the same transaction, which can be committed or aborted.
    > Also, maybe I'd like to express that not only ID should be unique, but the combination of ID and YEAR should be unique. Is it possible to express this in BDB XML?
    For this, a common technique is to use Berkeley DB XML metadata. You can create a metadata attribute which represents the concatenation of ID and YEAR.
    The metadata is stored with the document, but not as part of the document content.
    You can also create a unique index on that metadata attribute to enforce uniqueness.
    Ron

  • How to create an index for bi-lingual text column

    I have a column that keeps the full file name of the attachment document file of a record. I need to build an index on the column so that we can search the attachment document files. The content of a file will be in English or French, but we don't know which one is in English and which one is in French. How should I build the index so that it will support 'contains' queries and 'about' queries for both English and French documents? What lexer, wordlist and stoplist should I use?
    Here is something I am thinking:
    lexer: basic_lexer
    stoplist: build a custom stoplist that contains the stopwords for English and French.
    wordlist: using the default for US should be OK, since we don't need to support fuzzy and stem search.
    The only problem is that it will only create the theme index properly for the English documents but not the French documents. This is because the NLS_LANG setting for the session is always English, therefore the English knowledge base and lexicon will be used. I cannot use the THEME_LANGUAGE setting to override it because I don't know the language of the document. Is there a way to merge the English knowledge base and lexicon with the French ones so that we can create a correct theme index without knowing the language of the document?
    I am also thinking about using the multi_lexer. However, it won't help since we don't know the language of the document.
    Can anyone give me some help?
    Thanks,
    Wenjie
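
    For illustration, the plan sketched above might be set up roughly like this (a sketch only; the preference, table and column names are made up, and only a couple of stopwords are shown):

    begin
      -- one stoplist holding both English and French stopwords
      ctx_ddl.create_stoplist('bilingual_stoplist', 'BASIC_STOPLIST');
      ctx_ddl.add_stopword('bilingual_stoplist', 'the');
      ctx_ddl.add_stopword('bilingual_stoplist', 'le');
      -- ... and so on for the remaining English and French stopwords
      ctx_ddl.create_preference('bilingual_lexer', 'BASIC_LEXER');
    end;
    /

    create index doc_text_ix on documents (file_name)
      indextype is ctxsys.context
      parameters ('datastore ctxsys.file_datastore
                   lexer     bilingual_lexer
                   stoplist  bilingual_stoplist');

    This only covers the contains-style queries; the theme ('about') question raised above still depends on the knowledge base/lexicon issue.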

    Creating a primary key, by definition, creates a (unique) index. Either inline at table creation, or via an alter table.
    If it's a composite PK, the alter table or inline constraint definition, after all the column declarations, is the only way to do it that I'm aware of.
    create table t1 ( t1_pk number,
       t1_desc varchar2(50),
       constraint t1_name_pk primary key (t1_pk) [ using index tablespace <tblspc> ] );
    -- or the constraint can be declared at the column definition
    create table t1 ( t1_pk number constraint t1_name_pk primary key ...,
    -- or with an alter table. this won't work if there are non-unique entries
    alter table t1 add constraint t1_name_pk primary key ( t1_pk ) [ index clause ];
    The t1_name_pk constraint name is optional, but if you don't include a constraint name the error messages are difficult to interpret, because you get a system-generated constraint name.
    "... constraint <schema.>t1_name_pk violated ..." is much more informative than "... constraint <schema.>SYS_CNNNNNNNN violated ...".
    The created index gets the same name used for the constraint as well.
