What index is suitable for a table with no unique columns and no primary key?

alpha   beta   gamma   col1   col2   col3
100     1      -1      a      b      c
100     1      -2      d      e      f
101     1      -2      t      t      y
102     2      1       j      k      l
Sample data is shown above; below is the datatype for each column:
alpha datatype: string
beta datatype: integer
gamma datatype: integer
col1, col2, col3 are all string datatypes.
Note: no single column is unique; we use the combination of alpha, beta, and gamma to uniquely identify a record. As you can see from my sample data, this table currently has no index. I would like to have an index created covering these columns (alpha, beta, gamma). I believe that creating a clustered index with covering columns would be better.
What would you recommend the index type should be in this case? Say the data volume is 1 million records, and we always use the alpha, beta, and gamma columns when we filter or query records.
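For concreteness, the two options being weighed would look roughly like this. This is only a sketch: the table name [TEST].[dbo].[Test] is taken from the query further down, and the index names are made up.

-- Option 1: a clustered index on the identifying columns. The table stays
-- a heap until this exists; a clustered index physically orders the rows,
-- and SQL Server adds a hidden uniquifier where key values repeat.
CREATE CLUSTERED INDEX CIX_Test_alpha_beta_gamma
    ON [TEST].[dbo].[Test] ([alpha], [beta], [gamma]);

-- Option 2: a nonclustered index on the same key, made covering by
-- carrying the remaining columns at the leaf level with INCLUDE.
CREATE NONCLUSTERED INDEX IX_Test_alpha_beta_gamma
    ON [TEST].[dbo].[Test] ([alpha], [beta], [gamma])
    INCLUDE ([col1], [col2], [col3]);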
Mudassar

Many thanks for your explanation.
When I tried the below query on my heap table, SQL Server suggested creating a NONCLUSTERED INDEX INCLUDING the columns [beta], [gamma], [col1], [col2], [col3]:
SELECT [alpha]
      ,[beta]
      ,[gamma]
      ,[col1]
      ,[col2]
      ,[col3]
  FROM [TEST].[dbo].[Test]
  WHERE [alpha] = '10100'
My question is: why did it suggest a nonclustered index rather than a clustered index?
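Written out, the suggestion would look roughly like the statement below. The key column was cut off in the suggestion text above, so treating [alpha] as the key is my assumption, based on it being the only column in the WHERE clause.

-- Sketch of the missing-index suggestion; the key column [alpha] is an
-- assumption inferred from the WHERE clause.
CREATE NONCLUSTERED INDEX IX_Test_alpha
    ON [TEST].[dbo].[Test] ([alpha])
    INCLUDE ([beta], [gamma], [col1], [col2], [col3]);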
Mudassar

Similar Messages

  • Conflict resolution for a table with LOB column ...

    Hi,
    I was hoping for some guidance or advice on how to handle conflict resolution for a table with a LOB column.
    Basically, I had intended to handle the conflict resolution using the MAXIMUM prebuilt update conflict handler. I also store
    the 'update' transaction time in the same table and was planning to use this as the resolution column to resolve the conflict.
    I see however these prebuilt conflict handlers do not support LOB columns. I assume therefore I need to code a custom handler
    to do this for me. I'm not sure exactly what my custom handler needs to do though! Any guidance or links to similar examples would
    be very much appreciated.

    Hi,
    I have been unable to make any progress on this issue. I have made use of prebuilt update handlers with no problems
    before but I just don't know how to resolve these conflicts for LOB columns using custom handlers. I have some questions
    which I hope make sense and are relevant:
    1. Does an apply process detect update conflicts on LOB columns?
    2. If I need to create a custom update/error handler to resolve this, should I create a prebuilt update handler for the non-LOB columns in the table and then a separate one for the LOB columns, or is it best just to code a single custom handler for ALL columns?
    3. In my custom handler, I assume I will need to use the resolution column to decide whether or not to resolve the conflict in favour of the LCR, but how do I compare the new value in the LCR with that in the destination database? I mean, how do I access the current value in the destination database from the custom handler?
    4. Finally, if I need to resolve in favour of the LCR, do I need to call something specific for LOB-related columns compared to non-LOB columns?
    Any help with these would be very much appreciated or even if someone can direct me to documentation or other links that would be good too.
    Thanks again.

  • Difference between an XMLType table and a table with an XMLType column?

    Hi all,
    Still trying to get my mind around all this XML stuff.
    Can someone concisely explain the difference between:
    create table this_is_xmltype_tab of xmltype;
    and
    create table this_is_tab_w_xmltpe_col (id number, document xmltype);
    What are the relative advantages and disadvantages of each approach? How do they really differ?
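    To make the comparison concrete, inserting into and selecting from each form looks like this (a sketch; the XML content is made up):

    -- XMLType table: each row IS an XML document; OBJECT_VALUE is the
    -- pseudocolumn that exposes it.
    insert into this_is_xmltype_tab values (xmltype('<doc>a</doc>'));
    select object_value from this_is_xmltype_tab;

    -- Table with an XMLType column: the document is just one column among others.
    insert into this_is_tab_w_xmltpe_col values (1, xmltype('<doc>a</doc>'));
    select t.document from this_is_tab_w_xmltpe_col t;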
    Thanks,
    -Mark

    There is another pointer, Mark, that I realized when I was thinking about the differences...
    If you look up "xdb:annotations" in the manual, you will learn about a method of using an XML Schema to generate your whole design out of the box, in terms of physical layout and/or design principles. In my mind this should be the preferred solution if you are dealing with very complex XML Schema environments. Taking your XML Schema as your single point of design, which during the actual implementation automatically generates and builds all the database objects and physical structures you need, has great advantages for design version management etc., but...
    ...it will automatically create an XMLType table (based on OR, binary XML, or "hybrid" storage principles, i.e. the ones that are XML Schema driven) and not, AFAIK, an XMLType column structure, so in "our" case a table with an id column and an XMLType column.
    In principle you could relate to this relationally as: "I have created an EER diagram and a physical diagram, and I mix the content/info of those two into one diagram. Then I execute it in the database, and the end result is a database user/schema that has all the physical objects I need, the way I want them..."
    ...but it will be in the form of an XMLType table structure...
    xdb:annotations can be used to create things like:
    - enforce database/company naming conventions
    - DOM validation enabled or not
    - automatic IOT or BTree index creation (for instance in OR XMLType storage)
    - sort search order enforced or not
    - default tablenames and owners
    - extra column or table property settings like for partitioning XML data
    - database encoding/mapping used for SQL and binary storage
    - avoid automatic creation of Oracle objects (tables/types/etc), for instance, via xdb:defaultTable="" annotations
    - etc...
    See here for more info: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#ADXDB4519
    and / or for more detailed info:
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030452
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030995
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#CHDCEBAG
    ...

  • How many SECONDARY INDEXES are created for CLUSTER TABLES?

    How many secondary indexes are created for cluster tables?
    Please explain.

    There seems to be some kind of misunderstanding here. You cannot create a secondary index on a cluster table. A cluster table does not exist as a separate physical table in the database; it is part of a "physical cluster". In the case of BSEG for instance, the physical cluster is RFBLG. The only fields of the cluster table that also exist as fields of the physical cluster are the leading fields of the primary key. Taking again BSEG as the example, the primary key includes the fields MANDT, BUKRS, BELNR, GJAHR, BUZEI. If you look at the structure of the RFBLG table, you will see that it has primary key fields MANDT, BUKRS, BELNR, GJAHR, PAGENO. The first four fields are those that all cluster tables inside BSEG have in common. The fifth field, PAGENO, is a "technical" field giving the sequence number of the current record in the series of cluster records sharing the same primary key.
    All the "functional" fields of the cluster table (for BSEG this is field BUZEI and everything beyond that) exist only inside a raw binary object. The database does not know about these fields, it only sees the raw object (the field VARDATA of the physical cluster). Since the field does not exist in the database, it is impossible to create a secondary index on it. If you try to create a secondary index on a cluster table in transaction SE11, you will therefore rightly get the error "Index maintenance only possible for transparent tables".
    Theoretically you could get around this by converting the cluster table to a transparent table. You can do this in the SAP dictionary. However, in practice this is almost never a good solution. The table becomes much larger (clusters are compressed) and you lose the advantage that related records are stored close to each other (the main reason for having cluster tables in the first place). Apart from the performance and disk space hit, converting a big cluster table like BSEG to transparent would take extremely long.
    In cases where "indexing" of fields of a cluster table is worthwhile, SAP has constructed "indexing tables" around the cluster. For example, around BSEG there are transparent tables like BSIS, BSAS, etc. Other clusters normally do not have this, but that simply means there is no reason for having it. I have worked with the SAP dictionary for over 12 years and I have never met a single case where it was necessary to convert a cluster to transparent.
    If you try to select on specific values of a non-transparent field in a cluster without also specifying selections for the primary key, then the database will have to do a serial read of the whole physical cluster (and the ABAP DB interface will have to decompress every single record to extract the fields). The performance of that is monstrous -- maybe that was the reason of your question. However, the solution then is (in the case of BSEG) to query via one of the index tables (where you are free to create secondary indexes since those tables are transparent).
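    For example, instead of scanning the BSEG cluster for documents on a given G/L account, you would read the transparent index table by its leading key fields. A sketch (the company code and account values are illustrative):

    -- Read the transparent "index table" BSIS by its leading key fields
    -- instead of forcing a serial scan of the RFBLG cluster behind BSEG.
    SELECT belnr, gjahr
      FROM bsis
     WHERE bukrs = '1000'
       AND hkont = '0000113100';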

  • What technology is good for an ABAP developer with a little knowledge of Java

    Hi Experts,
    What technology is good for an ABAP developer with a little knowledge of Java, in SAP NetWeaver?
    Can anyone guide me to a good technology for today's market?
    Thanks

    Hi,
    you can choose NetWeaver PI (Process Integration).
    Regards,
    Muralidhar

  • Counting rows for db tables with 500 million+ entries

    Hi
    The SE16 transaction times out in the foreground when showing the number of entries for DB tables with 500 million+ entries. In the background this takes too long.
    I am writing a custom report to get the number of records of a table, using the OPEN CURSOR concept to determine the number of records.
    Is there any other efficient way to read the number of records from such huge tables?
        " u_str_param-p_tabn is the table name from the input and l_tab_cond
        " is a dynamic WHERE condition.
        OPEN CURSOR l_cursor FOR
          SELECT COUNT(*)
            FROM (u_str_param-p_tabn) WHERE (l_tab_cond).
        DO.
          " COUNT(*) returns a single row; PACKAGE SIZE only applies when
          " fetching INTO TABLE, so a plain scalar fetch is used here.
          FETCH NEXT CURSOR l_cursor INTO l_new_count.
          IF sy-subrc NE 0.
            EXIT.
          ELSE.
            " l_tot_cnt holds the number of records at the end of the loop
            l_tot_cnt = l_tot_cnt + l_new_count.
            CLEAR l_new_count.
          ENDIF.
        ENDDO.
        CLOSE CURSOR l_cursor.
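    If an exact, up-to-the-second count is not required, one alternative is to read the optimizer statistics instead of counting. This is only a sketch and assumes an Oracle database underneath; the figure is only as fresh as the last statistics run:

        -- Approximate row count from optimizer statistics (Oracle only);
        -- NUM_ROWS is only as current as LAST_ANALYZED.
        SELECT num_rows, last_analyzed
          FROM all_tables
         WHERE table_name = 'BSEG';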

    Hello,
    That is certainly a huge number of entries!
    Are there any key fields? Keep a variable with the current key position, plus low and high bound variables, and select in ranges inside a DO loop, advancing by 200,000 entries per pass. For example, if the key field is DOCNR, the loop would look roughly like this:
    do.
      low_row  = row_table.
      high_row = row_table + 200000.  " advance the selection by 200,000 per pass
      select docnr from (p_tabname) into table itab
             where docnr > low_row and docnr <= high_row.
      " count the lines of itab and add them to a running total
      describe table itab lines l_cnt.
      l_total = l_total + l_cnt.
      free itab.
      row_table = high_row.
      " stop once the ranges have passed the highest key value in the table
    enddo.
    Good luck.
    Antonis

  • Database adjustment for custom table with 200 entries

    Hi,
    We have a custom table which has 3 fields as a composite primary key. We want to drop the last field from the composite primary key. Would it be required to do a manual database adjustment, or would the database adjust automatically if I make this change and transport it?
    Please advise.
    Regards
    Kasi

    Hi,
    You have to adjust the database manually: in SE11, choose Utilities > Database Utility and click the "Activate and adjust database" button.
    If you have table maintenance defined, delete the existing table maintenance, create a new one, and assign it to a request, so that when the request is transported the corrections to the database are made in quality and production as well.
    Select the "standard recording routine" radio button in the table maintenance generator to move the table contents to quality and production by assigning them to a request.
    Regards
    Amole

  • Importing a table with a BLOB column is taking too long

    I am importing a user schema from a 9i (9.2.0.6) database to a 10g (10.2.1.0) database. One of the large tables (millions of records) with a BLOB column is taking too long to import (more than 24 hours). I have tried all the tricks I know to speed up the import. Here are some of the settings:
    1 - set buffer to 500 Mb
    2 - pre-created the table and turned off logging
    3 - set indexes=N
    4 - set constraints=N
    5 - I have 10 online redo logs with 200 MB each
    6 - Even turned off logging at the database level with disablelogging = true
    It is still taking too long loading the table with the BLOB column. The BLOB field contains PDF files.
    For your info:
    Computer: Sun V490 with 16 CPUs, Solaris 10
    memory: 10 Gigabytes
    SGA: 4 Gigabytes

    Legatti,
    I have feedback=10000. However, by monitoring the import, I know that it is loading an average of 130 records per minute, which is very slow considering that the table contains close to two million records.
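    For scale: at that rate, 2,000,000 rows / 130 rows per minute is about 15,400 minutes, i.e. roughly ten and a half days of wall-clock time.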
    Thanks for your reply.

  • Cartesian of data from two tables with no matching columns

    Hello,
    I was wondering – what’s the best way to create a Cartesian of data from two tables with no matching columns in such a way, so that there will be only a single SQL query generated?
    I am thinking about something like:
    for $COUNTRY in ns0:COUNTRY()
    for $PROD in ns1:PROD()
    return
      <Results>
        <COUNTRY>{ fn:data($COUNTRY/COUNTRY_NAME) }</COUNTRY>
        <PROD>{ fn:data($PROD/PROD_NAME) }</PROD>
      </Results>
    And the expected result is combination of all COUNTRY_NAMEs with all PROD_NAMEs.
    What I’ve noticed when checking query plan is that DSP will execute two queries to have the results – one for COUNTRY_NAME and another one for PROD_NAME. Which in general results in not the best performance ;-)
    What I’ve noticed also is that when I add something like:
    where COUNTRY_NAME != PROD_NAME
    everything is ok and there is only one query created (it's red in the Query plan, but still it's ok from my pov). Still it looks to me more like a workaround, not a real best approach. I may be wrong though...
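    For reference, the single pushed-down statement I have in mind corresponds to a plain SQL cross join (table and column names are assumed from the XQuery above):

    -- Hypothetical relational equivalent of the XQuery Cartesian product.
    SELECT c.country_name, p.prod_name
    FROM country c
    CROSS JOIN prod p;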
    So the question is – what’s the suggested approach for such queries?
    Thanks,
    Leszek

    "Which in general results in not the best performance"
    I disagree. Only for two tables with very few rows would a single SQL statement give better performance.
    Suppose there are 10,000 rows in each table - the cross-product will result in 100 million rows. Sounds like a bad idea. For this reason, DSP will not push a cross-product to a database. It will get the rows from each table in separate sql statements (retrieving only 20,000 rows) and then produce the cross-product itself.
    If you want to execute sql with cross-products, you can create a sql-statement based dataservice. I recommend against doing so.

  • My phone was sitting on the table, with nothing turned on, and all of a sudden it sounds like there is air coming out of my speakers.

    My phone was sitting on the table, with nothing turned on, and all of a sudden it sounds like there is air coming out of my speakers. It does this for 1 minute straight, then it stops. When I pick it up, then set it back down, it does it again.
    HELP!!!

    No, nothing shows up on the screen. If I push the 'home' center button it continues doing it, same with the sleep/wake button.
    I think it may be static out of my speakers, but I don't understand why all of a sudden it is doing this. I have had no problems with this phone until now.

  • How to read an internal table with more than one (2 or 3) key fields

    How do you read an internal table with more than one (2 or 3) key field, in ECC 6.0?

    Hi,
    check this:
    report ztest.
    tables: marc, mard.
    data: begin of itab occurs 0,
            matnr like marc-matnr,
            werks like marc-werks,
            pstat like marc-pstat,
          end of itab.
    data: begin of itab1 occurs 0,
            matnr like mard-matnr,
            werks like mard-werks,
            lgort like mard-lgort,
          end of itab1.
    parameters: p_matnr like marc-matnr.
    select matnr werks pstat
           from marc
           into table itab
           where matnr = p_matnr.
    sort itab by matnr werks.
    select matnr werks lgort
           from mard
           into table itab1
           for all entries in itab
           where matnr = itab-matnr
             and werks = itab-werks.
    sort itab1 by matnr werks.
    loop at itab.
      " read with two key fields; itab1 is sorted, so binary search is safe
      read table itab1 with key matnr = itab-matnr
                                werks = itab-werks
                                binary search.
      if sy-subrc = 0.
        " the header line of itab1 now holds the matching row
      endif.
    endloop.
    regards,
    venkat.

  • Tabular Form based on table with lots of columns - how to avoid scrollbar?

    Hi everybody,
    I'm an old Forms and VERY new APEX user. My problem is the following: I have to migrate a Forms application to APEX.
    The form is based on a table with lots of columns. In Forms you can spread the data over different tab pages.
    How can I realize something similar in APEX? I definitely don't want to use a horizontal scroll bar...
    Thanks in advance
    Hilke

    If the primary key is entered by the users themselves (which is not recommended: you should have a separate, user-visible ID, which would be the varchar2, and keep the primary key as is, since the user really shouldn't ever edit the primary key), then all you need to do is make sure that the table is not populated with a primary key in the wizard, and then make sure that you cannot insert a null into your varchar primary key text field.
    If you're doing it this way, I would create a validation on the page based on a SQL Exists validation, something along the lines of:
    SELECT <primary key column>
    FROM <your table>
    WHERE upper(<primary key column>) = upper(<text field containing user input>);
    and if it already exists, fire the validation stating that the key already exists and that the user should come up with a new one.
    Like I said, you really should have a primary key which the database uses to refer to each individual record, and then an almost pseudo-primary key that the user can use and change. For example, in the table it would look like this:
    TABLE1
    table_id (this is the primary key which you should NOT change)
    user_table_id (this is the pretend primary key which the user can change)
    other_columns
    etc
    etc
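    As DDL, that layout might look like the sketch below (the names come from the outline above; the column types are assumptions):
    CREATE TABLE table1 (
      table_id      NUMBER PRIMARY KEY,            -- surrogate key, never edited
      user_table_id VARCHAR2(50) NOT NULL UNIQUE,  -- user-visible, changeable "key"
      other_column  VARCHAR2(100)
    );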
    hope this helps in some way

  • What are best practices for managing my iphone from both work and home computers?

    What are best practices for managing my iphone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries it is possible to sync with the same logical library from multiple computers. Each library has an internal ID and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same you can go ahead and sync...
    I have my library cloned to a small 1TB USB drive which I can take between home & work. At either location I use SyncToy 2.1 to update the local copy with the external drive. Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos, which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again, Mac users should be able to find similar tools.
    As long as your media is organised within an iTunes Music or iTunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised), each library can access items at the same relative path from the library folder, so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library and I can update my devices whenever I can connect to the same build of iTunes.
    When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed, but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis, just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2

  • Is it possible to search for multiple folders with the same name and...

    Is it possible to search for multiple folders with the same name, then select them all and change the permissions on just those folders? I.e. search for the budget folders in all client folders and lock them down to just the project managers, without having to go to each folder and apply the permissions.

    user11919409 wrote:
    Is it possible to create a clone database with the same name as the source DB using RMAN?
    Yes.
    DB version is 11.2.0.2. Is it possible to clone an 11.2.0.2 database to an 11.2.0.3 home location directly on a new server? If it starts in upgrade mode, that is OK.
    Yes.
    why do you waste time here when you rarely get any answers to your questions?

  • Will there be support for Adobe Ideas with the Adobe Ink and Slide?

    Will there be support for Adobe Ideas with the Adobe Ink and Slide?

    We're working on an update to Ideas to add support for Ink & Slide.  Stay tuned for further information as to timing.
    Kim
    Adobe Ideas Team
