Changes of a base table PK/unique index

After altering a base table's primary key on the server (adding a new column to the PK), recreating the snapshot, and running the MGP, the new unique index definition was not updated in table C$ALL_INDEXES in the client database conscli.odb. After sync, two unique indexes exist for the table in lite.odb: the new one and the old one, which still remains in C$ALL_INDEXES in conscli.odb.
Please help me resolve this problem!

Drop and recreate the publication item using the APIs, then reset the metadata. That should fix the issue.

Similar Messages

  • Tables without unique index

    Hi,
    I am getting the following warnings in DB02:
    Tables without unique index
    STATS_RFC
    STATS_RFC_OLD.
    In SE16 the status shows that tables STATS_RFC_OLD and STATS_RFC are not active in the Dictionary. In SE11 they do not exist.
    Kindly suggest.
    Regards,
    Rahul.

    Hi,
    desc SAPR3P.STATS_RFC
    Name                                      Null?    Type
    STATID                                             VARCHAR2(30)
    TYPE                                               CHAR(1)
    VERSION                                            NUMBER
    FLAGS                                              NUMBER
    C1                                                 VARCHAR2(30)
    C2                                                 VARCHAR2(30)
    C3                                                 VARCHAR2(30)
    C4                                                 VARCHAR2(30)
    C5                                                 VARCHAR2(30)
    N1                                                 NUMBER
    N2                                                 NUMBER
    N3                                                 NUMBER
    N4                                                 NUMBER
    N5                                                 NUMBER
    N6                                                 NUMBER
    N7                                                 NUMBER
    N8                                                 NUMBER
    N9                                                 NUMBER
    N10                                                NUMBER
    N11                                                NUMBER
    N12                                                NUMBER
    D1                                                 DATE
    R1                                                 RAW(32)
    R2                                                 RAW(32)
    CH1                                                VARCHAR2(1000)
    No fields in STATS_RFC_OLD.
    Regards,
    Rahul.
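    To double-check from the database side whether any unique index exists for these tables, a query along these lines can help (a sketch; SAPR3P is the schema owner mentioned in this thread, adjust as needed):
    SELECT table_name, index_name, uniqueness
      FROM dba_indexes
     WHERE table_owner = 'SAPR3P'
       AND table_name IN ('STATS_RFC', 'STATS_RFC_OLD')
     ORDER BY table_name, index_name;
    If this returns no UNIQUE row for a table, the table genuinely has no unique index on the database, which is exactly what the DB02 check reports.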

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. Central to our data monitoring system is an Oracle database running Oracle Standard Edition One licensing (please don't laugh... I understand it is comical).
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collected in chronological order (increasing timestamp) about 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
    Now what we are observing is that for the inserts into this table:
    - Inserts are much slower based on a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand that Oracle must inspect more branches of the index for uniqueness, and more different physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
    - Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10 GB of extra RAM per quarter to six months; we're already at about 50 GB of RAM just for Oracle.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: partitioning this table based on a good logical grouping of sourceid, and then timestamp, will help reduce the work required by Oracle to verify uniqueness of data, reduce the amount of data that must be cached by Oracle, and allow us to handle our "older than 3 months" data at a partition level, greatly reducing table and index fragmentation.
    Based on our hardware, it's going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance / help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10 GB / quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe 1000s per quarter, out of 2 million).
    Also, please, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct path insert /*+ append */ as suggested by one of the contributors to this thread. A direct path load will not re-use any free space made by the deletes. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index (create time)
    Your delete logic (based on arrival time) will smash your indexes, because you are always deleting from the left-hand side of the index; it means you will end up with what we call a right-hand index. In other words, the scattering of the index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
               ALTER INDEX indexname COALESCE;
    This coalesce should be investigated and probably done on a regular basis (maybe after each 80% delete). You seem to have several sourceid values for one timestamp. If that is the case, you should think about compressing this index:
        CREATE INDEX indexname ON tablename (sourceid, timestamp) COMPRESS;
    or
        ALTER INDEX indexname REBUILD COMPRESS;
    You will do this only once. Your index will be smaller and may be more efficient than it currently is. Index compression adds extra CPU work during an insert, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri
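    For reference, the partition-by-timestamp design the original poster is considering could look roughly like the sketch below (all object names are invented, and it assumes the Enterprise Edition partitioning option discussed in the thread). Because the partition key (ts) is part of the unique key, the unique index can be LOCAL, and ageing out old data becomes a partition drop instead of a huge delete:
    CREATE TABLE raw_data_part (
      sourceid    NUMBER        NOT NULL,
      ts          TIMESTAMP     NOT NULL,
      create_time TIMESTAMP     NOT NULL,
      n1          NUMBER,
      n2          NUMBER,
      note        VARCHAR2(100)
    )
    PARTITION BY RANGE (ts) (
      PARTITION p2012q1 VALUES LESS THAN (TIMESTAMP '2012-04-01 00:00:00'),
      PARTITION p2012q2 VALUES LESS THAN (TIMESTAMP '2012-07-01 00:00:00'),
      PARTITION pmax    VALUES LESS THAN (MAXVALUE)
    );
    -- A LOCAL unique index is allowed because ts, the partition key, is part of the key.
    CREATE UNIQUE INDEX raw_data_part_uk ON raw_data_part (sourceid, ts) LOCAL;
    CREATE INDEX raw_data_part_ct ON raw_data_part (create_time) LOCAL;
    -- Ageing out data older than three months becomes a metadata operation:
    -- ALTER TABLE raw_data_part DROP PARTITION p2012q1 UPDATE GLOBAL INDEXES;
    Rows that must be kept longer than three months would need their own home (for example a separate retention table) before the partition is dropped, which is the main behavioural difference from the current delete-based approach.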

  • Insert in table with  unique index

    Hi
    I created a table to save a factor used to calculate dates; the other 2 columns are the table's key:
    CREATE TABLE TMP_FATOR (
      SETID      VARCHAR2(5 BYTE)                   NOT NULL,
      COMPANYID  VARCHAR2(15 BYTE)                  NOT NULL,
      FATOR      NUMBER
    );
    CREATE UNIQUE INDEX IDX_TMP_FATOR ON TMP_FATOR
    (SETID, COMPANYID)
    NOLOGGING;
    I want to insert into the table, but skip errors. I tried with:
    declare
      i  number;
    begin
      i := 1;
      EXECUTE IMMEDIATE 'TRUNCATE TABLE SYSADM.TMP_FATOR';
      BEGIN
        INSERT /*+ APPEND */ INTO SYSADM.TMP_FATOR
          SELECT t1.SETID,
                 t1.COMPANYID,
                 SYSADM.pkg_ajusta_kenan.fnc_fator_dias_desconto(t1.SETID, t1.COMPANYID) fator
            FROM SYSADM.PS_LOC_ITEM_SN t1;
      EXCEPTION
        WHEN DUP_VAL_ON_INDEX THEN
          NULL;
        WHEN OTHERS THEN
          DBMS_OUTPUT.PUT_LINE(SQLERRM);
      END;
      COMMIT;
    end;
    But it did not work.
    Why?
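    For what it's worth, the block above skips nothing because a single INSERT ... SELECT is one atomic statement: the first duplicate raises ORA-00001, the DUP_VAL_ON_INDEX handler catches it once, and the whole statement is rolled back, so no rows at all are kept. A hedged alternative, assuming Oracle 10gR2 or later so that DML error logging is available, is to let the statement continue past the duplicate rows:
    -- Run once as the table owner; creates the default ERR$_TMP_FATOR log table.
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TMP_FATOR');
    END;
    /
    INSERT INTO sysadm.tmp_fator (setid, companyid, fator)
      SELECT t1.setid,
             t1.companyid,
             sysadm.pkg_ajusta_kenan.fnc_fator_dias_desconto(t1.setid, t1.companyid)
        FROM sysadm.ps_loc_item_sn t1
      LOG ERRORS INTO err$_tmp_fator ('dup keys') REJECT LIMIT UNLIMITED;
    The rejected duplicate rows end up in ERR$_TMP_FATOR instead of aborting the insert. Note that this sketch drops the APPEND hint, since direct-path inserts handle constraint errors differently.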

    The deterministic keyword is just part of the declaration, whether you are declaring a standalone function or a packaged function.
    SCOTT @ nx102 Local> create package test_pkg
    2  as
    3    function determin_foo( p_arg in number )
    4      return number
    5      deterministic;
    6  end;
    7  /
    Package created.
    Elapsed: 00:00:00.34
    1  create or replace package body test_pkg
    2  as
    3    function determin_foo( p_arg in number )
    4      return number
    5      deterministic
    6    is
    7    begin
    8      return p_arg - 1;
    9    end;
    10* end;
    SCOTT @ nx102 Local> /
    Package body created.
    Elapsed: 00:00:00.14
    Justin
    Can I have other procedures and functions inside the package?

  • Change an existing base table

    Can I change the base table on which a block is based if the new base table
    has all the same columns?

    At runtime you may use set_block_property with query_data_source_name and dml_data_target_name. These work only when the form status is 'NEW'. Therefore you should use them in the WHEN-NEW-FORM-INSTANCE trigger to set the base table depending on some global variable or some parameter passed to the form. Or, to be sure it works, you may use them immediately after a clear_form.
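    A minimal sketch of that approach (Oracle Forms PL/SQL; the block name, table name, and parameter are invented for illustration), placed in the WHEN-NEW-FORM-INSTANCE trigger:
    IF :PARAMETER.P_USE_NEW_TABLE = 'Y' THEN
      SET_BLOCK_PROPERTY('ORDERS_BLK', QUERY_DATA_SOURCE_NAME, 'ORDERS_NEW');
      SET_BLOCK_PROPERTY('ORDERS_BLK', DML_DATA_TARGET_NAME, 'ORDERS_NEW');
    END IF;
    Since both tables have the same columns, the block items keep working; only the data source behind the block changes.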

  • Index in data base table

    Hi all.
    I need to create a Z-table; it has 2 key fields and 2 index fields.
    What is the use of index fields? How are they different from key fields?
    How can I create an index for my Z-table?
    I will reward all helpful answers.
    JNJ

    Hi,
    Proceed as follows to create a secondary index on a table:
    In the field maintenance screen for the table, choose Goto --> Indexes.
    1. If you went to the field maintenance screen of the table in display mode, only correct the index (and not the table).
    If indexes already exist on the table, a list of these indexes is displayed. Choose Create. A dialog box appears in which you must enter the three-place index identifier. If there are no indexes, go directly to the dialog box.
    2. Enter the index identifier and choose Continue.
    You will go to the maintenance screen for indexes.
    3. Enter an explanatory short text in the field Short text.
    4. Choose TabFields.
    A list of all the fields of the table is displayed.
    5.Select the fields which you want to copy to the index.
    6.Choose Copy.
    The selected fields are copied to the index.
    7. If the values in the index fields already uniquely identify each record of the table, select Unique index.
    A unique index is automatically created on the database during activation because a unique index also has a functional meaning (prevents double entries of the index fields).
    8. If it is not a unique index, leave Non-unique index selected. In this case you can use the corresponding radio buttons to define whether the index should be created automatically on the database for all database systems, for selected database systems or for no database system.
    9. If you chose For selected database systems, you must specify these systems.
    You have two possibilities here:
    List of inclusions: The index is only created automatically during activation for the database systems specified in the list. The index is not created on the database for the other database systems.
    List of exclusions: The index is not created automatically on the database during activation for the specified database systems. The index is automatically created on the database for the other database systems.
    Click on the arrow symbol behind the radio buttons. A dialog box appears in which you can define up to 4 database systems. Use the corresponding radio buttons to decide whether this list should be treated as a list of inclusions or exclusions.
    Activate the index with Index → Activate. The activation log tells you about the flow of the activation. Call it with Utilities → Act. log. If an error occurred when activating the secondary index, you will automatically go to this log.
    The secondary index is automatically created on the database during activation if the corresponding table has already been created there and index creation was not excluded for the database system.
    If possible, check whether the database uses the index you created for selection. For more information see Checking whether an Index is Used.
    Best Regards,
    Rajesh.
    Please reward points if found helpful.

  • Insert with unique index slow in 10g

    Hi,
    We are experiencing very slow response when a dup key is inserted into a table with unique index under 10g. the scenario can be demonstrated in sqlplus with 'timing on':
    CREATE TABLE yyy (Col_1 VARCHAR2(5 BYTE) NOT NULL, Col_2 VARCHAR2(10 BYTE) NOT NULL);
    CREATE UNIQUE INDEX yyy on yyy(col_1,col_2);
    insert into yyy values ('1','1');
    insert into yyy values ('1','1');
    The 2nd insert results in a "unique constraint" error, but under our 10g the response time is consistently in the range of 00:00:00.64. The 1st insert only took 00:00:00.01. BTW, with no index or a non-unique index you can insert many times and all of the inserts return fast. Under our 9.2 DB the response time is always under 00:00:00.01 with no, unique, and non-unique indexes.
    We are on AIX 5.3 & 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production.
    Has anybody seen this scenario?
    Thanks,
    David

    It seems that in 10g Oracle is simply doing something more.
    I used your example and ran the following script on 9.2 and 10.2. The hardware is the same, i.e. these are two instances on the same box.
    begin
      for i in 1..10000 loop
        begin
          insert into yyy values ('1','1');
        exception when others then null;
        end;
      end loop;
    end;
    /
    On 10g it took 01:15.08 and on 9i 00:47.06.
    Running a trace showed that between 9i and 10g there was a difference in the plan of the following recursive SQL:
    9i plan:
    select c.name, u.name
    from
    con$ c, cdef$ cd, user$ u  where c.con# = cd.con# and cd.enabled = :1 and
      c.owner# = u.user#
    call     count       cpu    elapsed       disk      query    current        rows
    Parse    10000      0.43       0.43          0          0          0           0
    Execute  10000      1.09       1.07          0          0          0           0
    Fetch    10000      0.23       0.19          0      20000          0           0
    total    30000      1.76       1.70          0      20000          0           0
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  NESTED LOOPS 
          0   NESTED LOOPS 
          0    TABLE ACCESS BY INDEX ROWID CDEF$
          0     INDEX RANGE SCAN I_CDEF4 (object id 53)
          0    TABLE ACCESS BY INDEX ROWID CON$
          0     INDEX UNIQUE SCAN I_CON2 (object id 49)
          0   TABLE ACCESS CLUSTER USER$
          0    INDEX UNIQUE SCAN I_USER# (object id 11)
    10g plan:
    select c.name, u.name
    from
    con$ c, cdef$ cd, user$ u  where c.con# = cd.con# and cd.enabled = :1 and
      c.owner# = u.user#
    call     count       cpu    elapsed       disk      query    current        rows
    Parse    10000      0.21       0.20          0          0          0           0
    Execute  10000      1.20       1.31          0          0          0           0
    Fetch    10000      2.37       2.59          0      20000          0           0
    total    30000      3.79       4.11          0      20000          0           0
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  HASH JOIN  (cr=2 pr=0 pw=0 time=301 us)
          0   NESTED LOOPS  (cr=2 pr=0 pw=0 time=44 us)
          0    TABLE ACCESS BY INDEX ROWID CDEF$ (cr=2 pr=0 pw=0 time=40 us)
          0     INDEX RANGE SCAN I_CDEF4 (cr=2 pr=0 pw=0 time=27 us)(object id 53)
          0    TABLE ACCESS BY INDEX ROWID CON$ (cr=0 pr=0 pw=0 time=0 us)
          0     INDEX UNIQUE SCAN I_CON2 (cr=0 pr=0 pw=0 time=0 us)(object id 49)
          0   TABLE ACCESS FULL USER$ (cr=0 pr=0 pw=0 time=0 us)
    So in 10g it used a hash join instead of a nested loop join, at least for this particular select. Probably time to gather stats on the SYS tables?
    The difference in time wasn't that big though, 4.11 vs 1.70, so it doesn't explain all of the time taken.
    But you can check whether the difference is bigger on your system.
    You can also download Tom Kyte's runstats_pkg and run it on both environments to compare whether some statistics or latches show a very big difference.
    Gints Plivna
    http://www.gplivna.eu
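    A rough usage sketch of that comparison (assuming runstats_pkg from asktom.oracle.com has been installed on both databases; run the same harness on each and compare the reports):
    exec runStats_pkg.rs_start;
    -- first run of the insert test block
    exec runStats_pkg.rs_middle;
    -- second run of the insert test block
    exec runStats_pkg.rs_stop(500);  -- print statistics/latches that differ by more than 500
    The rs_stop report makes it easy to see which latches or statistics (for example recursive calls) grow disproportionately on the 10g instance.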

  • Can we change the fields of database unique index in a customised table?

    Hi all..
    I want to know whether we can create, change, or delete the database unique index of a customized table.
    In my case, there is a customized table with 4 primary key fields, with all the records maintained through transaction code SM30.
    There is a database unique index maintained for this table which has 2 fields. These 2 fields are among the 4 primary key fields of the table. I hope I have made myself clear!
    Now when I try to insert a record into the table it gives me a short dump (it says duplication of records is not allowed).
    The reason is that in the new record I am trying to insert, the 2 fields for which the unique index is maintained have the same values as an already existing record, while the other two fields are different, so overall the combination of the 4 primary key fields is different.
    Please tell me how I should proceed now.
    I also tried to change the unique index, but it asks for some kind of authorization ("You are not authorized to make changes (authorization object S_DEVELOP)"). Also, I am not sure whether changing the unique index is feasible or not.
    Thanks.

    Hi,
    I think you will not be able to do unique indexing without the help of the primary keys, so include all the primary key fields in the index field selection and then create the index; otherwise duplication of keys can occur. If you are not able to keep the primary keys, then go for a non-unique index, where you have to add the client field and any other fields of your choice.

  • CC&B 2.3.1 - Custom indexes for base tables

    Hi,
    We are seeing a couple of statements in the database whose performance could be improved with new custom indexes on base tables. The questions are:
    - Can we create new indexes on base tables?
    - Are there any recommendations about naming, characteristics and location for these indexes?
    - Is there any additional step to do in CC&B in order to use the index (define metadata or ...)?
    Thanks.
    Regards.

    Hi,
    If it is necessary, you can create a custom index.
    In this situation you should follow the naming convention from the Database Design Standards:
    Indexes
    Index names are composed of the following parts:
    [X][C/M/T]NNN[P/S]
    • X – letter X is used as a leading character of all base index names prior to Version 2.0.0. Now the first character of the product owner flag value should be used instead of letter X. For a client-specific implementation index in Oracle, use CM.
    • C/M/T – The second character can be either C or M or T. C is used for control tables (admin tables). M is for the master tables. T is reserved for the transaction tables.
    • NNN – A three-digit number that uniquely identifies the table on which the index is defined.
    • P/S/C – P indicates that this index is the primary key index. S is used for indexes other than primary keys. Use C to indicate a client-specific implementation index in a DB2 implementation.
    Some examples are:
    • XC001P0
    • XT206S1
    • XT206C2
    • CM206S2
    Warning! Do not use index names in the application, as the names can change due to unforeseeable reasons.
    There is no additional metadata information for indexes in CI_MD* tables - because change of indexes does not influence generated Java code.
    Hope that helps.
    Regards,
    Bartlomiej
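    A hypothetical example following that convention, for a client-specific (CM-prefixed) index on a CC&B base table (the table, columns and tablespace below are illustrative only):
    CREATE INDEX CM206S2 ON CI_BILL (BILL_DT, ACCT_ID)
      TABLESPACE CM_INDEX;
    As noted above, nothing needs to be registered in the CI_MD* metadata tables for the index to be used; the optimizer picks it up like any other index.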

  • Create Unique Index On Flow does not work for table names longer than 23 characters

    I have a "create unique index on flow table" step that is dynamically generated by the IKM.
    The index name that is generated by the IKM is based on the table name, except that the created index name is prefixed with "I$_" and ends with "_idx". Obviously, since Oracle object names cannot exceed 30 characters in length, the index creation step will fail if the base table name exceeds 23 characters.
    I have tried to substring the index name in the generation step of the IKM so that it only uses the first 23 characters of the table name, but have not had any luck with using the "substring" command together with the snpRef.getTable call.
    This is the section of the IKM that I desire to change:
    - <Field name="Txt" type="java.lang.String">
    - <![CDATA[
    create unique index      <%=snpRef.getTable("L","INT_NAME","W")%>_idx
    on          <%=snpRef.getTable("L","INT_NAME","W")%> (<%=snpRef.getColList("", "[COL_NAME]", ", ", "", "UK")%>)
    <%=snpRef.getUserExit("FLOW_TABLE_OPTIONS")%>
    ]]>
    </Field>
    I would like to change the above to something similar to the following (note the only change is the addition of substring(1,23))
    - <Field name="Txt" type="java.lang.String">
    - <![CDATA[
    create unique index <%=snpRef.getTable("L","INT_NAME","W").substring(1,23)%>_idx
    on          <%=snpRef.getTable("L","INT_NAME","W")%> (<%=snpRef.getColList("", "[COL_NAME]", ", ", "", "UK")%>)
    <%=snpRef.getUserExit("FLOW_TABLE_OPTIONS")%>
    ]]>
    </Field>
    Any help greatly appreciated. Thanks.

    As the index is temporary, just like the I$ table, the easiest way is to replace the table name with some unique identifier like the session id:
    I$_<%=odiRef.getSession("SESS_NO")%>_idx
    If for some reason that is not unique enough, add the NNO:
    I$_<%=odiRef.getSession("SESS_NO")%><%=odiRef.getSession("NNO")%>_idx
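    Putting that suggestion back into the IKM step quoted in the question, the generated text would look roughly like this (a sketch that only recombines the odiRef/snpRef calls already shown in this thread):
    - <Field name="Txt" type="java.lang.String">
    - <![CDATA[
    create unique index I$_<%=odiRef.getSession("SESS_NO")%>_idx
    on          <%=snpRef.getTable("L","INT_NAME","W")%> (<%=snpRef.getColList("", "[COL_NAME]", ", ", "", "UK")%>)
    <%=snpRef.getUserExit("FLOW_TABLE_OPTIONS")%>
    ]]>
    </Field>
    The index name now depends only on the session number, so it stays well under the 30-character limit regardless of the table name.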

  • Create unique index on flow table

    Hi
    I always get the error "Create unique index on flow table ERROR" while implementing SCD2. Please help me. I have made an update key which is a combination of 4 columns and should cause a new row to be inserted if there is a change in the data. One of those columns is the EMPLOYEE ID.
    My surrogate key is the table's primary key, but I have not defined it as a key in the mapping, and I have also set the primary key constraint option in the Controls panel to NO.
    Where am I wrong? Also, please tell me what to take care of when defining the natural key.

    There are two solutions:
    1. Only use uppercase in the table name
    2. Go to Topology > Physical Architecture and edit the technology, then click on
    the "Language" tab and set "Object Delimiter" to empty.
    Thanks,
    Sutirtha

  • Base table changes

    I have a procedure which contains an external table with 100 columns.
    Data from this table is transferred to a temp table.
    Any errors are caught here, and if there are no errors then the base table is merged with the temp table's data.
    My problem is that columns get added / removed and even data types get changed many times for the base table.
    What would be an alternative to this?
    While inserting, I have selected columns from the external table and then inserted them column-wise into the temp table.
    Regards,
    Avinash

    I agree with Blu. It makes no sense to hack permanent or base tables like that.
    Just how is the existing software (client software, queries, PL/SQL code, etc) to know how to deal with these new columns and data type changes? What about data integrity? Constraints? Foreign keys? Indexes? Etc. Etc.
    Relational design is not just a good idea. IT IS THE FUNDAMENTAL PRINCIPLE OF RELATIONAL DATABASES.

  • Indexed views using indexes on base table

    Hi all,
    CREATE VIEW Sales.vOrders
    WITH SCHEMABINDING
    AS
    SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Revenue,
    OrderDate, ProductID, COUNT_BIG(*) AS COUNT
    FROM Sales.SalesOrderDetail AS od, Sales.SalesOrderHeader AS o
    WHERE od.SalesOrderID = o.SalesOrderID
    GROUP BY OrderDate, ProductID;
    GO
    --Create an index on the view.
    CREATE UNIQUE CLUSTERED INDEX IDX_V1
    ON Sales.vOrders (OrderDate, ProductID);
    GO
    --This query can use the indexed view even though the view is
    --not specified in the FROM clause.
    SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Rev,
    OrderDate, ProductID
    FROM Sales.SalesOrderDetail AS od
    JOIN Sales.SalesOrderHeader AS o ON od.SalesOrderID=o.SalesOrderID
    AND ProductID BETWEEN 700 and 800
    AND OrderDate >= CONVERT(datetime,'05/01/2002',101)
    GROUP BY OrderDate, ProductID
    ORDER BY Rev DESC;
    In the above code block, Sales.SalesOrderDetail and Sales.SalesOrderHeader are base tables.
    Suppose there are some indexes on some of the columns of these base tables. Are these indexes used when we write a query in which the indexed view is mentioned
    in the FROM clause?
    Thanks, Srikar

    Since it is an indexed view, it won't use the indexes on the base tables when you use it in a query, because an indexed view is persisted and exists as a physical object. So it doesn't require the view definition to be substituted and the data to be retrieved from the base objects.
    The base-table indexes will come in handy while populating the indexed view.
    Visakh
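    One hedged caveat to add to the reply above: automatic matching of indexed views is an Enterprise Edition optimizer feature, so on other editions even a query that names the view in the FROM clause may be expanded back to the base tables unless NOEXPAND is specified. The usual way to make sure the view's clustered index is read directly is:
    SELECT Revenue, OrderDate, ProductID
    FROM Sales.vOrders WITH (NOEXPAND)
    WHERE ProductID BETWEEN 700 AND 800;
    With NOEXPAND the base-table indexes are not touched at query time; without it (and outside Enterprise Edition) the query may be answered from Sales.SalesOrderDetail and Sales.SalesOrderHeader and their indexes.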

  • Index not using the base table

    Hi,
    In which scenario will a query use only the index and not the base table? Please give me an example.
    Thanks,
    Santhosh
    Edited by: Santhosh on Oct 23, 2012 2:45 AM

    Chancal,
    not always,
    SQL> desc temp;
    Name                                                                                                      Null?    Type
    EMPNO                                                                                                              NUMBER(4)
    ENAME                                                                                                              VARCHAR2(10)
    JOB                                                                                                                VARCHAR2(9)
    MGR                                                                                                                NUMBER(4)
    HIREDATE                                                                                                           DATE
    SAL                                                                                                                NUMBER(7,2)
    COMM                                                                                                               NUMBER(7,2)
    DEPTNO                                                                                                             NUMBER(2)
    SQL> select empno from temp;
         EMPNO
          7369
          7499
          7521
          7566
          7654
          7698
          7782
          7788
          7839
          7844
          7876
          7900
          7902
          7934
          1057
    15 rows selected.
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  3qt0w20pqj162, child number 0
    select empno from temp
    Plan hash value: 3800668828
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
    |   1 |  TABLE ACCESS FULL| TEMP |    15 |    60 |     2   (0)| 00:00:01 |
    13 rows selected.
    SQL> alter table temp modify(empno not null);
    Table altered.
    SQL> select empno from temp;
         EMPNO
          1057
          7369
          7499
          7521
          7566
          7654
          7698
          7782
          7788
          7839
          7844
          7876
          7900
          7902
          7934
    15 rows selected.
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  3qt0w20pqj162, child number 0
    select empno from temp
    Plan hash value: 472861760
    | Id  | Operation        | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |          |       |       |     1 (100)|          |
    |   1 |  INDEX FULL SCAN | IDX_TEMP |    15 |    60 |     1   (0)| 00:00:01 |
    13 rows selected.
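    For completeness, the INDEX FULL SCAN above presumes an index on EMPNO already exists (its creation is not shown in the post), for example:
    CREATE INDEX idx_temp ON temp (empno);
    The point of the demo is that the optimizer can answer the query from the index alone only after EMPNO is declared NOT NULL: rows whose only indexed column is NULL are not stored in a single-column B-tree index, so until then the index cannot be guaranteed to contain every row and a full table scan is used.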

  • Hi, I have created a unique index on a table on 3 columns

    Hi, I have created a unique index on a table on 3 columns. I want to know: when I have 2 records
    which contain the same values in 2 fields and the 3rd field contains a NULL,
    will the unique index allow me to insert these records?

    Robert Angel wrote:
    This must be one time when null = null. ;)
    regards,
    Robert.
    Not really, it is more the case that the non-null columns need to be unique. Your second attempt failed because there was already an index entry with 'a', 'b', and the lack of a value for column c gave Oracle no way to differentiate between the two rows, so they are not unique.
    A subtle, but conceptually important difference. :-)
    John
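    A quick demonstration of the behaviour John describes (table and index names invented):
    CREATE TABLE t3 (a VARCHAR2(1), b VARCHAR2(1), c VARCHAR2(1));
    CREATE UNIQUE INDEX t3_uk ON t3 (a, b, c);
    INSERT INTO t3 VALUES ('a', 'b', 'c');    -- succeeds
    INSERT INTO t3 VALUES ('a', 'b', NULL);   -- succeeds: key differs in its non-null part
    INSERT INTO t3 VALUES ('a', 'b', NULL);   -- fails with ORA-00001: the non-null parts are identical
    INSERT INTO t3 VALUES (NULL, NULL, NULL); -- succeeds: an all-null key is not indexed at all
    So the unique index allows NULLs, but two rows whose non-null column values match are still rejected.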
