Datatype of PRIMARY KEY

Namaskar,
I just read a question from an interview. It asked:
What datatype does Oracle internally use for a PRIMARY KEY?
1. Binary integer
2. Varchar2
3. Number
Is it binary integer?
Thanks in advance!

select owner, DATA_TYPE, count(*)
from dba_constraints join dba_cons_columns
using (owner,table_name,constraint_name)
join dba_tab_columns
using (owner,table_name,column_name)
where constraint_type='P'
group by owner,data_type
order by 1,2;
OWNER      DATA_TYPE         COUNT(*)
DBSNMP     DATE                     2
DBSNMP     NUMBER                   1
DBSNMP     RAW                      9
DBSNMP     VARCHAR2                 2
EXFSYS     NUMBER                   2
EXFSYS     VARCHAR2                22
RMAN       NUMBER                  31
SCOTT      NUMBER                   2
SYS        CHAR                     2
SYS        DATE                     3
SYS        NUMBER                 523
SYS        RAW                     34
SYS        TIMESTAMP(3)             1
SYS        TIMESTAMP(6)            12
SYS        TIMESTAMP(9)             1
SYS        VARCHAR2                92
SYSTEM     NUMBER                 172
SYSTEM     RAW                      8
SYSTEM     ROWID                    1
SYSTEM     VARCHAR2                70

As you can see, Oracle uses lots of different datatypes as primary keys.

Similar Messages

  • Primary Keys and VARCHAR2 vs CHAR datatypes???

    Could someone please tell me if there is documentation concerning the performance implications of creating a Primary Key using a VARCHAR2 datatype vs a CHAR datatype?

    Well, with a CHAR datatype a value of 'abc' equates to 'abc  ' because CHAR values are blank-padded to the declared length.
    So if the value 'abc' already existed in the PK of the table and someone tried to insert another row with a value of 'abc ', it would return a unique constraint violation (the rule from the user being that a value with or without trailing spaces is the same value).
    But if the PK column is defined as VARCHAR2, both 'abc' and 'abc ' could be inserted into the table, because in the nonpadded comparison they are not equal.
    Would defining the column as a VARCHAR2 with a check constraint that RTRIMs the column be a better way to go?
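    A quick way to see the difference is to try it. A minimal sketch (the table and column names are made up for illustration):
    -- CHAR is blank-padded to the declared length, so 'abc' and 'abc ' collide
    CREATE TABLE pk_char_demo (id CHAR(5) PRIMARY KEY);
    INSERT INTO pk_char_demo VALUES ('abc');
    INSERT INTO pk_char_demo VALUES ('abc ');   -- fails with ORA-00001: unique constraint violated
    -- VARCHAR2 preserves trailing spaces, so both rows are accepted
    CREATE TABLE pk_varchar_demo (id VARCHAR2(5) PRIMARY KEY);
    INSERT INTO pk_varchar_demo VALUES ('abc');
    INSERT INTO pk_varchar_demo VALUES ('abc ');  -- succeeds: the nonpadded comparison treats them as different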

  • Case In-Sensitive primary key (NVARCHAR2 datatype) in Oracle 10g

    I have primary keys in my database which are of NVARCHAR2 type. By default, data inserted into these columns is treated case-sensitively, which means insertion of both 'ABCD' and 'Abcd' is allowed.
    Can I change the settings in Oracle so that the primary keys work case-insensitively? I want Oracle to throw an error if someone inserts 'Abcd' when there is already a record with primary key 'ABCD'.
    Note: I am using Entity Framework.

    Some ideas:
    One method would be to place a BEFORE INSERT trigger on the table and upper-case the value.
    You could add another column, populate it via a trigger with UPPER(), then build a unique index on this column (see the sketch after the links below).
    See the following Oracle documentation on case-insensitive searches and comparisons:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch5lingsort.htm#i1008800
    Oracle support documents:
    Develop Global Application: Case-insensitive searches #342960.1
    How to use Contains As Case-Sensitive And -Insensitive On The Same Column #739868.1
    How To Implement Case Insensitive Query in BC4J ? #337163.1
    HTH -- Mark D Powell --
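    The extra-column idea can also be collapsed into a single function-based unique index; a minimal sketch (my_table and pk_col are placeholder names):
    -- placeholder table; the question's columns are NVARCHAR2
    CREATE TABLE my_table (pk_col NVARCHAR2(10) PRIMARY KEY);
    -- a unique index on UPPER() rejects values that differ only in case
    CREATE UNIQUE INDEX my_table_ci_uk ON my_table (UPPER(pk_col));
    INSERT INTO my_table (pk_col) VALUES ('ABCD');
    INSERT INTO my_table (pk_col) VALUES ('Abcd');  -- fails with ORA-00001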

  • What is faster to search an alpha numeric primary key or number datatype

    hi all,
    I am looking for the answer to a question: what provides a faster and more efficient search for a primary key in an RDBMS, an alphanumeric or a number datatype, and why? I would appreciate your replies, please.

    Several years ago an Oracle support analyst posted the results of a test on a single-column PK where he compared the results of using a number datatype versus a character datatype. The analyst's results showed you needed more than 100K accesses to measure any significant difference, where the number datatype showed a small performance advantage. Note that Oracle stores the number datatype internally as a form of scientific notation, and library math is necessary to work with the numbers. Most of the claims for using a number instead of a character go back to the days when native binary words were used as the key. The test only covered single-column keys.
    Use the datatype that matches your natural business keys and call it a day. If you try to substitute an artificial numeric key, you are likely to have to go through a business character value to locate the artificial key, so any performance benefit you think you can get from a numeric key will be imaginary due to the increased overhead of maintaining multiple indexes.
    You can set up and run your own tests; a sketch follows below.
    HTH -- Mark D Powell --
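    A minimal do-it-yourself test along those lines (a sketch only; the table names and row counts are arbitrary):
    -- compare single-column PK lookups on NUMBER vs VARCHAR2
    CREATE TABLE pk_num (id NUMBER       PRIMARY KEY, padding VARCHAR2(100));
    CREATE TABLE pk_chr (id VARCHAR2(10) PRIMARY KEY, padding VARCHAR2(100));
    INSERT INTO pk_num SELECT level,          RPAD('x',100,'x') FROM dual CONNECT BY level <= 100000;
    INSERT INTO pk_chr SELECT TO_CHAR(level), RPAD('x',100,'x') FROM dual CONNECT BY level <= 100000;
    COMMIT;
    -- with SET TIMING ON in SQL*Plus, repeat lookups against each table and compare
    SELECT COUNT(*) FROM pk_num WHERE id = 54321;
    SELECT COUNT(*) FROM pk_chr WHERE id = '54321';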

  • How can we export the Primary key values (along with other data) from an Advantage database?

    One of our customers is moving from our application (which uses Advantage Database Server) to another application (which uses other database technology). They have asked us to help export their data, so that they can migrate it to another database system. So far, we have used the Advantage Data Architect (ARC32) "Export Table Structures as Code" functionality to generate SQL. We used the "Include existing data" option. The SQL contains the necessary code to recreate the tables and indexes. The customer's IT staff will alter the SQL statements as necessary for their new system.
    However, there is an issue with the Primary Keys in these tables. The resulting INSERT statements use AutoInc as the type for the Primary Key in each table. These INSERT statements contain "DEFAULT" for the value of each of these AutoInc fields. The customer would like to output an integer value for each of these Primary Key values in order to maintain referential integrity in their new system.
    So far, I have not found any feature of ARC32 that allows us to export the Primary Key values. We had been using an older version of ARC32, since our application does not use the latest version of ADS. I did download the latest version of ARC32 (11.10), but it does not appear to include any new functionality that would facilitate doing this sort of export.
    Can somebody tell me if there is such a feature in ARC32?
    Or, is there is another Advantage tool to facilitate what we are trying to accomplish?
    If there are no Advantage tools to provide such functionality, what else would you suggest?

    George,
      It sounds like the approach you are using is the correct one. This seems to be the cleanest solution to me especially since the customer is able to modify the generated SQL statements for their new system.
      In order to preserve the AutoInc values I would recommend altering the table and changing the field datatype from AutoInc to Integer. Then export the table as code which will export the actual values. After the tables have been created on the new system they can change the field datatype back to an AutoInc type if necessary.
    Regards,
    Chris Franz

  • Problem with JPA Implementations and SQL BIGINT in primary keys

    I have a general question about the mapping of the SQL datatype BIGINT. I discovered that the behaviour differs depending on the JPA implementation. I tested with TopLink Essentials (working) and with Hibernate (not working).
    Here is the case:
    Table definition:
    /*==============================================================*/
    /* Table: CmdQueueIn */
    /*==============================================================*/
    create table MTRACKER.CmdQueueIn (
    CmdQueueInId bigint not null global autoincrement,
    Type int,
    Cmd varchar(2048),
    CmdState int,
    MLUser bigint not null,
    ExecutionTime timestamp,
    FinishTime timestamp,
    ExecutionServer varchar(64),
    ScheduleString varchar(64),
    RetryCount int,
    ResultMessage varchar(256),
    RecordState int not null default 1,
    CDate timestamp not null default current timestamp,
    MDate timestamp not null default current timestamp,
    constraint PK_CMDQUEUEIN primary key (CmdQueueInId)
    );
    The java class for this table has the following annotation for the primary key field:
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Column(name = "CmdQueueInId", nullable = false)
    private BigInteger cmdQueueInId;
    When using hibernate 3.2.1 as JPA provider I get the following exception:
    javax.persistence.PersistenceException: org.hibernate.id.IdentifierGenerationException: this id generator generates long, integer, short or string
    at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:629)
    at org.hibernate.ejb.AbstractEntityManagerImpl.persist(AbstractEntityManagerImpl.java:218)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:176)
    at $Proxy16.persist(Unknown Source)
    at com.trixpert.dao.CmdQueueInDAO.save(CmdQueueInDAO.java:46)
    at com.trixpert.test.dao.CmdQueueInDAOTest.testCreateNewCmd(CmdQueueInDAOTest.java:50)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at junit.framework.TestCase.runTest(TestCase.java:154)
    at junit.framework.TestCase.runBare(TestCase.java:127)
    at
    Caused by: org.hibernate.id.IdentifierGenerationException: this id generator generates long, integer, short or string
    at org.hibernate.id.IdentifierGeneratorFactory.get(IdentifierGeneratorFactory.java:59)
    at org.hibernate.id.IdentifierGeneratorFactory.getGeneratedIdentity(IdentifierGeneratorFactory.java:35)
    at org.hibernate.id.IdentityGenerator$BasicDelegate.getResult(IdentityGenerator.java:157)
    at org.hibernate.id.insert.AbstractSelectingDelegate.performInsert(AbstractSelectingDelegate.java:57)
    at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2108)
    at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2588)
    at org.hibernate.action.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:48)
    at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:248)
    at org.hibernate.event.def.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:290)
    at org.hibernate.event.def.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:180)
    at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:108)
    at org.hibernate.event.def.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:131)
    at org.hibernate.event.def.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:87)
    at org.hibernate.event.def.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:38)
    at org.hibernate.impl.SessionImpl.firePersist(SessionImpl.java:618)
    at org.hibernate.impl.SessionImpl.persist(SessionImpl.java:592)
    at org.hibernate.impl.SessionImpl.persist(SessionImpl.java:596)
    at org.hibernate.ejb.AbstractEntityManagerImpl.persist(AbstractEntityManagerImpl.java:212)
    ... 34 more
    This means that Hibernate's ID generator does not support java.math.BigInteger as a datatype.
    But the code works if I take TopLink essentials as JPA Provider.
    Looking at the spec shows the following:
    Chapter 2.1.4 says: "If generated primary keys are used, only integral types will be portable." Integral datatypes are byte, short, int, long and char. This would mean that the Hibernate implementation fits the spec, but there seems to be a problem in general with BIGINT datatypes.
    I use a SYBASE database. There it is possible to declare an UNSIGNED BIGINT, so the range of numbers is 0 to 2^64 - 1. Since in Java a long is always signed, its range is from -2^63 to 2^63 - 1. So a mapping of BIGINT to java.lang.Long could result in an overflow.
    The interesting thing is that I used NetBeans to reverse engineer an existing database schema. It generated a java.math.BigInteger for every primary key of type BIGINT, but for other fields (not being keys) it mapped BIGINT to java.lang.Long.
    It looks like there is a problem with either the spec itself or its implementations. While TopLink seems to handle this correctly, Hibernate doesn't; then again, Hibernate seems to fulfill the spec.
    Is anybody familiar with the Spec reading this and can elaborate a little about this situation?
    Many thanks for your input and feedback.
    Tom

    Not sure if I clearly understand your issue, would be good if you can explain it a bit more clearly.
    "I select a value from LOV and this value don't refresh in the view"
    If you mean ViewObject, check if autoSubmit property is set to true.
    Amit

  • Error while creating the Unique Index of the Primary Key of an Item

    Hi all,
    I have deployed a new item (CO_CONTRACTUNIT_PRODUCT) in my publication. The deploy appears to be successful, as the item can be seen in the repository through the Mobile Manager.
    The problem occurs when I sync my local DB to get the item offline. While synchronizing, the following error appears, both in the sync window and the log file ol_sync.log.
    "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI"
    However, the debug file gives this other error regarding this table.
    ALL_INDEX:CREATE UNIQUE INDEX "TPCO_CONTRACTUNIT_PRODUCT_PK" ON CO_CONTRACTUNIT_PRODUCT (CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID) -5130Error at C:\ADE\omeprod_ol103021\olite\db\build\win\ocapi\..\..\..\src\ocapi\allindexes.cpp line:329 rc:-5130
    Build date Mar 29 2010
    okErr=(table or view %s.%s not found)
    mess=(CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID)
    AddLog(-5130 "ERROR",POL-5130,"11/09/2010 11:43:52","table or view %s.%s not found:CO_CONTRACTUNIT_PRODUCT,CO_CONTRACTID,OD_PRODUCTID,CO_CONTRACTUNITID","DB_ROSHNI")
    But the index that is being created and is giving the error is the index created automatically with the Primary Key of the table, and so nothing has been modified in that.
    The primary key of the table is created with the three columns that are part of the index that is returning the error.
    As I could not solve the error, I tried to drop and re-create the item in the repository, but no luck. As a last option, i tried to remove the item from the repository to be able to sync properly again (just like before creating the item), but the error still happens.
    Another weird point is that I have tried creating the item in another publication of another database (but with almost equal items), and the item was downloaded to my local DB without any problem, which makes this problem even more bizarre.
    What can it be?
    Any help would be great!
    Roshni

    Have you tried uninstalling the client and reinstalling it?
    Schema evolution changes are not always handled correctly; please check this thread:
    Modification of publication item into Mobile Server
    I quote from rekounas' instructions:
    If you are just adding a field, you should only have to run the alter publication item API call.
    Here is an old note on schema evolution in different scenarios. The names of the APIs they use are now deprecated. Use the ConsolidatorManager class and call the method alterPublicationItem("PUBLICATION_ITEM_NAME", "SELECT STMT"):
    A) Add column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Change the Oracle8i/9i database schema (add column)
    4. Create a Java program to call the Consolidator Admin API AlterPublicationItem()
    5. Start Mobile Server
    6. Execute a sync from the client
    7. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    B) Drop column
    1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Delete column of the base table in the Oracle database schema
    4. Create a Java program to call the Consolidator Admin API DropPublicationItem()
    5. Create a Java program to call the Consolidator Admin API CreatePublicationItem() and AddPublicationItem().
    6. Start Mobile Server
    7. Execute a sync from the client
    8. The new column should be seen on the client. Use MSQL to check snapshot definitions.
    C) Change column datatype
    Changing datatypes in a replicated system is not an easy task. You have to follow certain procedures in order to make it work. Use the DropPublicationItem, CreatePublicationItem and AddPublicationItem methods from the Consolidator Admin API. You must stop/start the Mobile Server listener to refresh the cache.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop/create the column (do not use conversion procedures) at the base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Call CreatePublicationItem() and AddPublicationItem(). Check if the ErrorQueue and InQueue reflect the new column datatype
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. This should drop the old snapshot and recreate the new snapshot. Use MSQL to check
    snapshot definitions.
    D) Drop table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop base table
    4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should drop the old snapshot. Use MSQL to check snapshot definitions.
    E) Add table
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Add new base table
    4. Call CreatePublicationItem() and AddPublicationItem() method
    5. Start Mobile Server. This automatically resumes application
    6. Client executes sync. This should add the new snapshot. Use MSQL to check snapshot definitions.
    F) Changing Primary Keys
    Changing a PK is a severe operation which must be executed manually. A snapshot must be deleted and recreated to propagate the changes to the clients. This causes a full refresh on this snapshot.
    1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
    2. Stop Mobile Server listener
    3. Drop the snapshot using the DropPublicationItem() method
    4. Alter the base table
    5. Call CreatePublicationItem()and AddPublicationItem() methods for the altered table
    6. Start Mobile Server. This automatically resumes application
    7. Client executes sync. The old snapshot will be replaced by the new snapshot using a full refresh. Use MSQL to check snapshot definitions.
    G) To Change a Table Weight =>
    Follow the procedure below to change the Table Weight parameter. The table weight is used by the Mobile Server synchronization to determine the sequence in which client records are applied to the Oracle database.
    1. Run MGP to apply any changes in the InQueue to the Oracle database
    2. Change table weight using SetTemplateItemMetadata() method
    3. Add/change the constraint on the base table which reflects the change in table weight
    4. Synchronize
    gl m8

  • CMP Entity Bean With Primary Key of int

    Is it valid to have a CMP Entity Bean (v1.1) where the primary key is defined as the primitive 'int' instead of a true Java class?
    Thanks in advance,
    Marc

    Hi,
    Your primary key in an EJB can't be a primitive datatype. If the primary key in the table is an int,
    then you will have java.lang.Integer as the datatype in the entity bean. If you have further queries, write me at [email protected]
    Kesavan.

  • Gettin error while setting column as primary key

    Hi, I am trying to extract data from an XML file
    and create one custom table.
    It works fine when I try the following code:
    CREATE TABLE xx_PER_ALL_VACANCIES
      AS (SELECT x.* FROM hr_api_transactions t, xmltable('//PerAllVacanciesEORow' passing xmltype(transaction_document) columns location_id NUMBER path 'LocationId', organization_id NUMBER path 'OrganizationId', name VARCHAR2(100) path 'Name', creation_date VARCHAR2(15) path 'CreationDate',PeopleGroupId  NUMBER path 'PeopleGroupId',REQUISITION_ID  NUMBER(15)path 'RequisitionId',COMMENTS LONG() path 'Comments')
      x
    WHERE t.transaction_ref_table = 'PER_ALL_VACANCIES');
    But when I try to make the Name column a primary key, it gives an error.
    The following is the code:
    CREATE TABLE xx_PER_ALL_VACANCIES
      AS (SELECT x.* FROM hr_api_transactions t, xmltable('//PerAllVacanciesEORow' passing xmltype(transaction_document) columns location_id NUMBER path 'LocationId', organization_id NUMBER path 'OrganizationId', name NOT NULL VARCHAR2(100) path 'Name', creation_date VARCHAR2(15) path 'CreationDate',PeopleGroupId  NUMBER path 'PeopleGroupId',REQUISITION_ID  NUMBER(15)path 'RequisitionId',COMMENTS LONG() path 'Comments')
      x
    WHERE t.transaction_ref_table = 'PER_ALL_VACANCIES');
    The following is the error:
    Error at Command Line:2 Column:214
    Error report:
    SQL Error: ORA-00902: invalid datatype
    00902. 00000 - "invalid datatype"
    *Cause:   
    *Action:
    It also gives an error when I use the datatype LONG() for the column Comments:
    Error at Command Line:2 Column:392
    Error report:
    SQL Error: ORA-00907: missing right parenthesis
    00907. 00000 - "missing right parenthesis"
    *Cause:   
    *Action:

    LONG is not (and never will be, so don't bother asking) supported by XMLTable. You should use CLOB or BLOB.
    SQL> drop table HR_API_TRANSACTIONS
      2  /
    Table dropped.
    SQL> drop table xx_PER_ALL_VACANCIES
      2  /
    Table dropped.
    SQL> create table HR_API_TRANSACTIONS
      2  (
      3    TRANSACTION_REF_TABLE VARCHAR2(32),
      4    TRANSACTION_DOCUMENT  CLOB
      5  )
      6  /
    Table created.
    SQL> CREATE TABLE xx_PER_ALL_VACANCIES
      2  AS
      3  (
      4    SELECT x.*
      5      FROM hr_api_transactions t,
      6           xmltable
      7           (
      8              '//PerAllVacanciesEORow'
      9              passing xmltype(transaction_document)
    10              columns location_id NUMBER        path 'LocationId',
    11              organization_id     NUMBER        path 'OrganizationId',
    12              name                VARCHAR2(100) path 'Name',
    13              creation_date       VARCHAR2(15)  path 'CreationDate',
    14              PeopleGroupId       NUMBER        path 'PeopleGroupId',
    15              REQUISITION_ID      NUMBER(15)    path 'RequisitionId',
    16              COMMENTS            LONG()        path 'Comments'
    17           )  x
    18     WHERE t.transaction_ref_table = 'PER_ALL_VACANCIES'
    19  )
    20  /
                COMMENTS            LONG()        path 'Comments'
    ERROR at line 16:
    ORA-00907: missing right parenthesis
    SQL> CREATE TABLE xx_PER_ALL_VACANCIES
      2  AS
      3  (
      4    SELECT x.*
      5      FROM hr_api_transactions t,
      6           xmltable
      7           (
      8              '//PerAllVacanciesEORow'
      9              passing xmltype(transaction_document)
    10              columns location_id NUMBER        path 'LocationId',
    11              organization_id     NUMBER        path 'OrganizationId',
    12              name                VARCHAR2(100) path 'Name',
    13              creation_date       VARCHAR2(15)  path 'CreationDate',
    14              PeopleGroupId       NUMBER        path 'PeopleGroupId',
    15              REQUISITION_ID      NUMBER(15)    path 'RequisitionId',
    16              COMMENTS            LONG          path 'Comments'
    17           ) x
    18     WHERE t.transaction_ref_table = 'PER_ALL_VACANCIES'
    19  )
    20  /
    ERROR at line 19:
    ORA-00997: illegal use of LONG datatype
    SQL> CREATE TABLE xx_PER_ALL_VACANCIES
      2  AS
      3  (
      4    SELECT x.*
      5      FROM hr_api_transactions t,
      6           xmltable
      7           (
      8              '//PerAllVacanciesEORow'
      9              passing xmltype(transaction_document)
    10              columns location_id NUMBER        path 'LocationId',
    11              organization_id     NUMBER        path 'OrganizationId',
    12              name                VARCHAR2(100) path 'Name',
    13              creation_date       VARCHAR2(15)  path 'CreationDate',
    14              PeopleGroupId       NUMBER        path 'PeopleGroupId',
    15              REQUISITION_ID      NUMBER(15)    path 'RequisitionId',
    16              COMMENTS            CLOB          path 'Comments'
    17           )  x
    18     WHERE t.transaction_ref_table = 'PER_ALL_VACANCIES'
    19  )
    20  /
    Table created.
    SQL>

  • What index is suitable for a table with no unique columns and no primary key

    alpha   beta   gamma   col1   col2   col3
    100     1      -1      a      b      c
    100     1      -2      d      e      f
    101     1      -2      t      t      y
    102     2      1       j      k      l
    Sample data is above, and below are the datatypes for each column:
    alpha datatype - string
    beta datatype - integer
    gamma datatype - integer
    col1, col2, col3 are all string datatypes.
    Note: the columns are not unique, and we would be using alpha, beta, gamma to uniquely identify a record. As you can see from my sample data, this is in a table which doesn't have an index. I would like to have an index created covering these columns (alpha, beta, gamma). I believe that creating a clustered index with these as covering columns will be better.
    What would you recommend the index type should be in this case? Say the data volume is 1 million records and we always use the alpha, beta, gamma columns when we filter or query records.
    What index is suitable for a table with no unique columns and no primary key?
    Mudassar

    Many thanks for your explanation.
    When I tried the query below on my heap table, SQL Server suggested creating a NONCLUSTERED INDEX on [alpha] including the columns [beta], [gamma], [col1], [col2], [col3]:
    SELECT [alpha]
          ,[beta]
          ,[gamma]
          ,[col1]
          ,[col2]
          ,[col3]
      FROM [TEST].[dbo].[Test]
    where   [alpha]='10100'
    My question is: why didn't it suggest a CLUSTERED index instead of choosing a NONCLUSTERED index?
    Mudassar
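    For reference, the suggested index would be created along these lines (a sketch in T-SQL, using the table and column names from the post; the index name is made up):
    -- nonclustered index keyed on the filter column, covering the selected columns
    CREATE NONCLUSTERED INDEX IX_Test_alpha
    ON [TEST].[dbo].[Test] ([alpha])
    INCLUDE ([beta], [gamma], [col1], [col2], [col3]);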

  • The column in the table do not match an existing primary key

    I've got two tables tbl_Workshop and tbl_Material
    tbl_Workshop has columns workshopID, workshopTitle, materialID
    tbl_Material has materialID, name, workshopTitle
    When I try to create a relationship between the workshopTitle columns of the two tables, it gives me an error that says the column in the table does not match an existing primary key.
    What could be the reason for this error and how do I overcome it?
    P.S. The datatypes and names of both tables' columns match.

    Have you created a primary key on the workshopTitle column in tbl_Workshop?
    You can add a foreign key relationship from tbl_Material.workshopTitle
    to tbl_Workshop.workshopTitle
    only if the latter is a primary key of the table.
    Visakh
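    A minimal sketch of the idea in T-SQL (the constraint names are made up; note a foreign key can reference a unique constraint as well as a primary key):
    -- workshopTitle must be declared unique (or primary key) before it can be referenced
    ALTER TABLE tbl_Workshop ADD CONSTRAINT UQ_Workshop_Title UNIQUE (workshopTitle);
    ALTER TABLE tbl_Material ADD CONSTRAINT FK_Material_Workshop
        FOREIGN KEY (workshopTitle) REFERENCES tbl_Workshop (workshopTitle);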

  • Auto Increment Primary Keys

    I am trying to migrate from SQL Server. I want to set up tables which generate sequential primary keys when records are added. I have a VB application which uses ADO with an ODBC connection. How does one use the Oracle Enterprise Console Manager to set a primary key to increment automatically as new records are added? In SQL Server one just sets an Identity, Identity Seed and Identity Increment. In Access there is a datatype called AutoNumber. Help would be greatly appreciated. I am using 9i with a standalone Enterprise Console Manager. Thanks in advance.

    You first have to create a sequence and then the trigger below.
    Replace the placeholder names (my_schema, my_trigger, my_table, my_sequence, my_column) with the appropriate names for your database:
    CREATE OR REPLACE TRIGGER my_schema.my_trigger
    BEFORE INSERT ON my_table
    FOR EACH ROW
    BEGIN
      -- assign the next sequence value to the key column
      SELECT my_sequence.NEXTVAL INTO :new.my_column FROM dual;
    END;
    /
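    For completeness, the sequence the trigger reads from might be created like this (again, the name is a placeholder):
    -- sequence feeding the trigger above
    CREATE SEQUENCE my_sequence START WITH 1 INCREMENT BY 1;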

  • ADF and Primary Key

    Hi!
    I'm using PostgreSQL as the DB and set as PK an attribute named ID that's of type serial, so it's auto-incremented when the row is inserted.
    I've read in other posts like this one -> Primary key during Insert
    to set the type to DBSequence, but later the compiler gives me an error saying DBSequence doesn't exist.
    How can I get this working? Or how can I put a value in ID in the application's code that is MAX(ID) + 1?
    Thanks

    Your first image - setting the type of the id as a DBSequence - is correct. What you are saying by doing this is that the database is automatically setting the value of your primary key. This means that when the Entity object does DML, it doesn't require the user to supply the value, it doesn't set it itself, and it retrieves the value as soon as the insert is complete. In Oracle, it would use a "RETURNING" clause - I don't know if PostgreSQL does this.
    Given that DBSequence is saying that the DATABASE is setting the value, I'm not sure what you are trying to do in your second image. You seem to have a Java object that has a getter and setter for the column. I'm not sure what the purpose of this object is, especially if PostgreSQL is setting the id, not the application.
    When you create an Entity Object in ADF Business Components, it can optionally write the Java classes that implement the object, so that you can override certain functionality. Choose the "Java" menu option on the Entity Object definition screen and check the appropriate checkboxes to get it to write Java that you can change. If your Entity is named MyEntity, JDeveloper will generate MyEntityImpl.java. If you edit this file, you will see that it already has a getter and a setter for your id. I think that the datatype it will assign is "Number", not "DBSequence". If you really need to change the getter and setter, do it here.
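    (For what it's worth, PostgreSQL does support a RETURNING clause; a minimal sketch with made-up table and column names:)
    -- PostgreSQL: fetch the serial-generated id as part of the insert
    INSERT INTO my_table (name) VALUES ('example') RETURNING id;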

  • FillSchema picks too many primary key columns

    I don't know whether this is an ODP.NET error or a Microsoft error.
    Oracle9i Release 9.2.0.1.0
    ODP.NET 9.2.0.4
    .NET Framework 1.1
    Create a table with one primary key column and one unique column and name the primary key column like the unique column with the suffix "_ID":
    CREATE TABLE t_bib_uebertrag_kap(
      kapazitaet_id NUMBER(8) NOT NULL
        CONSTRAINT pk_kapa PRIMARY KEY,
      kapazitaet VARCHAR2(15) NOT NULL
        CONSTRAINT a_un_kapa UNIQUE,
      geschwindigkeit NUMBER(12) NOT NULL,
      bemerkung VARCHAR2(255)
    );
    After calling FillSchema on this table, the PrimaryKey property of the DataTable contains 2 columns: KAPAZITAET_ID and KAPAZITAET.
    The same happens with other tables and similar column names.
    R. Lüthke

    Tony, I'm gonna find time to try this because I was thinking it probably would be a double check.
    BUT... Oracle is written in C, and I imagine the extra check is a simple if comparison of the new value versus a constant NULL value, kind of like checking for end of string. It's already doing things like checking datatypes as it inserts data, and I don't think this additional check (if it even exists) would add any significant overhead.
    But unless I find a much larger performance hit than I expect, I'm gonna stick with creating the NOT NULL's explicitly. Good data modelling trumps small performance tricks for me.
    My main worry is if/when somebody removes the primary key constraint (such as the example above where they might do a CREATE TABLE x AS SELECT * FROM y). Or if the data model gets updated to where the primary key becomes just a unique key, then the NOT NULL goes away when it shouldn't. I don't like basic table and column properties changing.
    Okay.. test done. Ran about 31,000 records from dba_tables through an insert into two tables, both with primary keys and one with explicit NOT NULLs. No measurable difference in stats detected.
    With NOT NULL and PRIMARY KEY:
    call      count    cpu  elapsed  disk  query  current   rows
    Parse         1   0.35     0.35     0      0        0      0
    Execute       1   1.88     1.83     0  24792    35887  30954
    Fetch         0   0.00     0.00     0      0        0      0
    total         2   2.24     2.18     0  24792    35887  30954
    Now with only PRIMARY KEY:
    call      count    cpu  elapsed  disk  query  current   rows
    Parse         1   0.37     0.36     0      0        0      0
    Execute       1   1.87     1.83     0  24792    35887  30954
    Fetch         0   0.00     0.00     0      0        0      0
    total         2   2.24     2.19     0  24792    35887  30954
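    (The setup behind those numbers was presumably something like the following sketch; the table and constraint names are invented:)
    -- two tables with the same primary key, one with explicit NOT NULLs as well
    CREATE TABLE t_pk_only    (owner VARCHAR2(30), table_name VARCHAR2(30),
                               CONSTRAINT t1_pk PRIMARY KEY (owner, table_name));
    CREATE TABLE t_pk_notnull (owner VARCHAR2(30) NOT NULL, table_name VARCHAR2(30) NOT NULL,
                               CONSTRAINT t2_pk PRIMARY KEY (owner, table_name));
    INSERT INTO t_pk_only    SELECT owner, table_name FROM dba_tables;
    INSERT INTO t_pk_notnull SELECT owner, table_name FROM dba_tables;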

  • Unique or primary key on timestamp with timezone

    Hi,
    I have been experimenting with a date column in a primary key, or actually I tried using a timestamp with time zone in a primary key.
    While researching whether there was a way to avoid ORA-02329, I found the following:
    K15> create table dumdum
      2    (datum date not null
      3    ,naamp varchar2( 30 ) not null);
    Table created.
    K15>
    K15> alter table dumdum
      2    add constraint d_pk
      3        primary key
      4          (datum, naamp)
      5    using index;
    Table altered.
    K15>
    K15> select ind.index_type
      2  from   user_indexes ind
      3  where  ind.index_name = 'D_PK';
    INDEX_TYPE
    NORMAL
    1 row selected.
    K15>
    K15> insert into dumdum
      2    (datum
      3    ,naamp )
      4  select sysdate - (level/1440)
      5  ,      'nomen nescio'
      6  from   dual
      7  connect by level < 1000
      8  ;
    999 rows created.
    K15>
    K15> analyze index d_pk validate structure;
    Index analyzed.
    K15> analyze table dumdum compute statistics;
    Table analyzed.
    K15>
    K15> select naamp
      2  from   dumdum
      3  where  datum > to_date('16-06-2011 15.46.16', 'dd-mm-yyyy hh24.mi.ss' )
      4 
    K15>
    For the last select statement I get the following "explain plan":
    SELECT STATEMENT  CHOOSE
              Cost: 2  Bytes: 247  Cardinality: 13       
         1 INDEX RANGE SCAN UNIQUE D_PK
                    Cost: 3  Bytes: 247  Cardinality: 13
    This behavior lived up to my expectations.
    Then, I tried this:
    K15> create table dumdum
      2    (datum date not null
      3    ,naamp varchar2( 30 ) not null);
    Table created.
    K15>
    K15> alter table dumdum
      2    add constraint d_pk
      3        primary key
      4          (datum, naamp)
      5    using index;
    Table altered.
    K15>
    K15> alter table dumdum
      2        modify datum timestamp(6) with time zone;
    Table altered.
    K15>
    K15> select ind.index_type
      2  from   user_indexes ind
      3  where  ind.index_name = 'D_PK';
    INDEX_TYPE
    NORMAL
    1 row selected.
    K15>
    K15> insert into dumdum
      2    (datum
      3    ,naamp )
      4  select sysdate - (level/1440)
      5  ,      'nomen nescio'
      6  from   dual
      7  connect by level < 1000
      8  ;
    999 rows created.
    K15>
    K15> analyze index d_pk validate structure;
    Index analyzed.
    K15> analyze table dumdum compute statistics;
    Table analyzed.
    K15>
    K15> select naamp
      2  from   dumdum
      3  where  datum > to_date('16-06-2011 15.46.16', 'dd-mm-yyyy hh24.mi.ss' )
      4
    K15>
    So, at first glance, the alter table statement to change the datatype from DATE to TIMESTAMP seems like a way of fooling Oracle. But the explain plan reveals a different story:
    SELECT STATEMENT  CHOOSE
              Cost: 4  Bytes: 1,25  Cardinality: 50       
         1 TABLE ACCESS FULL DUMDUM
                    Cost: 4  Bytes: 1,25  Cardinality: 50
    I was only fooling myself. :-0
    But I wasn't done with my research:
    K15> create table dumdum
      2    (datum timestamp(6) with time zone not null
      3    ,naamp varchar2( 30 ) not null);
    Table created.
    K15>
    K15> create unique index d_ind
      2      on dumdum
      3           (datum, naamp);
    Index created.
    K15>
    K15>
    K15> select ind.index_type
      2  from   user_indexes ind
      3  where  ind.index_name = 'D_IND';
    INDEX_TYPE
    FUNCTION-BASED NORMAL
    1 row selected.
    K15>
    K15> insert into dumdum
      2    (datum
      3    ,naamp )
      4  select systimestamp - (level/1440)
      5  ,      'nomen nescio'
      6  from   dual
      7  connect by level < 1000
      8  ;
    999 rows created.
    K15>
    K15> analyze index d_ind validate structure;
    Index analyzed.
    K15> analyze table dumdum compute statistics;
    Table analyzed.
    K15>
    K15> select naamp
      2  from   dumdum
      3  where  datum > to_date('16-06-2011 15.56.16', 'dd-mm-yyyy hh24.mi.ss' )
      4
    K15>
    Now, my explain plan looks fine:
    SELECT STATEMENT  CHOOSE
              Cost: 2  Bytes: 1,25  Cardinality: 50       
         1 INDEX RANGE SCAN UNIQUE D_IND
          Cost: 3  Bytes: 1,25  Cardinality: 50
    Why is Oracle so adamant about not allowing a timestamp with time zone in a unique key? And, given their position on the matter, where does their tolerance for a unique index come from?
    By the way, if I had a say in it, I would not allow anything that even remotely looks like a date to be part of a primary key, but that's another discussion.
    Thanks,
    Remco
    P.S. All this is on Oracle9i Enterprise Edition Release 9.2.0.8.0. Is it different on 10g or 11g?

    See if this helps. You can create a primary key on a TIMESTAMP WITH LOCAL TIME ZONE datatype.
    SQL>CREATE TABLE Mytimezone(Localtimezone TIMESTAMP WITH LOCAL TIME ZONE primary key, Location varchar2(20) );
    Table created.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch4datetime.htm#i1006169
    TIMESTAMP WITH LOCAL TIME ZONE Datatype
    TIMESTAMP WITH LOCAL TIME ZONE is another variant of TIMESTAMP. It differs from TIMESTAMP WITH TIME ZONE as follows: data stored in the database is normalized to the database time zone, and the time zone offset is not stored as part of the column data. When users retrieve the data, Oracle returns it in the users' local session time zone. The time zone offset is the difference (in hours and minutes) between local time and UTC (Coordinated Universal Time, formerly Greenwich Mean Time).
    Thanks
    http://swervedba.wordpress.com/
