Indexes which are bigger than their tables

hi,
is it possible to find indexes which are bigger than their tables using a single query?
I do not want to use a cursor.

user8954613 wrote:
is it possible to find indexes which are bigger than their tables using a single query?
I do not want to use a cursor.

Then it is not possible.
All SQL statements are parsed, stored and executed as cursors.
If you do not want to use cursors, you cannot, by implication, use SQL.
What you likely meant is that you do not want to use PL/SQL to parse and execute the SQL cursor. In which case it is important to get the definitions and terminology right.
Refer to Oracle® Database Concepts:
"When an application issues a SQL statement, the application makes a parse call to the database to prepare the statement for execution. The parse call opens or creates a cursor, which is a handle for the session-specific private SQL area that holds a parsed SQL statement and other processing information."
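
As for the question itself: yes, a single query against the data dictionary can do it. A sketch (covering non-partitioned segments only; partitioned objects appear in dba_segments as TABLE PARTITION / INDEX PARTITION and would need extra handling):

-- indexes whose segment is larger than their base table's segment
select i.table_owner, i.table_name, i.index_name,
       ts.bytes as table_bytes, xs.bytes as index_bytes
from   dba_indexes i
       join dba_segments ts
         on  ts.owner        = i.table_owner
         and ts.segment_name = i.table_name
         and ts.segment_type = 'TABLE'
       join dba_segments xs
         on  xs.owner        = i.owner
         and xs.segment_name = i.index_name
         and xs.segment_type = 'INDEX'
where  xs.bytes > ts.bytes
order by xs.bytes - ts.bytes desc;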

Similar Messages

  • Why is my index bigger than my table?

    Hi
    I am using locally managed tablespaces, with system allocation on 9.2.0.8
    I have a table with some 32 columns and a 3 column index (unique key).
    The 3 columns indexed are not null and all number(12) datatype.
    The index is bigger than the table by a factor of almost 10.
    I am not using compression of any kind.
    Any ideas ?
    Cheers

    If the table has insert or delete activity on it then the index is being maintained, and it is this maintenance that can lead to an index growing larger than the base table when the length of the indexed columns is small compared to the row length.
    If one of the 3 key columns is a sequence number or generally follows an ever-increasing value pattern, and the delete process does not delete all values within the leaf blocks, then the index will have a tendency to grow large relative to the table.
    It is worse if any of the three key values are allowed to be updated after insertion: that then becomes a delete and an insert in the index.
    If the tablespace was originally a dictionary managed tablespace and was converted to locally managed auto-allocate via sys.dbms_space_admin.tablespace_migrate_to_local, then the storage behavior is different from if the tablespace was created auto-allocate from day one.
    Calculate the expected size of the index. Rebuild or re-create it as free space dictates, and then compare the resulting size to the expected size.
    HTH -- Mark D Powell --
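
    For the "calculate the expected size" step, on 10g and later DBMS_SPACE.CREATE_INDEX_COST can do the arithmetic from the table's statistics (a sketch; the DDL string and names are placeholders, and the table needs fresh statistics):

    set serveroutput on
    declare
      l_used_bytes  number;
      l_alloc_bytes number;
    begin
      -- estimate the space for the 3-column unique key described above
      dbms_space.create_index_cost(
        ddl         => 'create unique index my_uk on my_table (c1, c2, c3)',
        used_bytes  => l_used_bytes,    -- bytes of actual index data
        alloc_bytes => l_alloc_bytes);  -- bytes that would be allocated
      dbms_output.put_line('used  = ' || l_used_bytes);
      dbms_output.put_line('alloc = ' || l_alloc_bytes);
    end;
    /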

  • Index size (num_rows) is bigger than the table's rows

    Hi everyone,
    I'm encountering some strange problems with the CBO in Oracle 10.2.0.3 - it's telling me that I have more rows in the indexes than there are rows in the table.
    I've tried all combinations of dbms_stats and analyse and cannot understand how the CBO comes up with such numbers. I've even done a "delete statistics" and re-analysed the table and indexes, but it doesn't help.
    The command I used is variations of the following:
    exec DBMS_STATS.GATHER_TABLE_STATS(ownname=>'MBS', tabname=>'READINGTOU', estimate_percent=>dbms_stats.auto_sample_size, method_opt=>'FOR COLUMNS PROCESSSTATUS', degree=>2);
    I even tried:
    exec sys.dbms_utility.analyze_schema('MBS','ESTIMATE', estimate_percent => 15);
    I've even used an estimate_percent of 50 and still get lower numbers for the table.
    Initially I was afraid that since the index is larger than the table, the index would never be used. So the question is: does it really matter that the indexes' num_rows is bigger than the table's num_rows? What is the consequence of this? And how do I get the optimizer to correct the differences in the stats? The table is 30G in size and growing, so a COMPUTE is out of the question.
    I have the same problem in dev, and I did the COMPUTE in dev and get the same thing: more rows in the indexes than there are rows in the table.

    Is your issue that you are having problems with the execution plans of queries referencing these objects? Or is your problem that you are observing more num_rows in the index than in the table when you query the data dictionary?
    If it's the latter then there's really no concern (unless the estimates are insanely inconsistent). The statistics are estimates and as such, will not be 100% accurate, though they should do a reasonable job of representing the data in your system (when they don't, then you have an issue, but we've seen nothing to indicate that as of yet).
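
    A quick way to see how far apart the estimates actually are (a sketch against the user_* views; run as the schema owner):

    select t.table_name,
           t.num_rows  as table_rows,
           i.index_name,
           i.num_rows  as index_rows,
           round(i.num_rows / nullif(t.num_rows, 0), 2) as ratio,
           i.last_analyzed
    from   user_tables t
           join user_indexes i on i.table_name = t.table_name
    where  i.num_rows > t.num_rows
    order by ratio desc;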

  • Index size bigger than table size? Why?

    I have a table student_enrollwment_item_tbl with primary key "pk_stu_enroll_item" - STU_ENROLL_ID, TASK_ID, PART_ID, ITEM_ID.
    Table structure is as following:
    Name Null? Type
    STU_ENROLL_ID NOT NULL NUMBER
    ITEM_ID NOT NULL VARCHAR2(15)
    PART_ID NOT NULL NUMBER(2)
    TASK_ID NOT NULL VARCHAR2(10)
    QUESTION_NO NOT NULL VARCHAR2(25)
    FLASH_NO NOT NULL NUMBER(3)
    ITEM_NO NUMBER(3)
    The table is 1856 MB in size, while the index is 2730 MB in size. I am surprised, since the size of the index > the size of the table. Why does this happen?

    1) As seen from the result of the following sql, the PCT_FREE is 10. It's not bad.
    select index_name, table_name, ini_trans, max_trans, initial_extent,
           min_extents, max_extents, freelists, freelist_groups,
           pct_free, leaf_blocks
    from   all_indexes
    where  table_name = 'STUDENT_ENROLLMENT_ITEM_TBL';
    INDEX_NAME TABLE_NAME INI_TRANS MAX_TRANS INITIAL_EXTENT MIN_EXTENTS MAX_EXTENTS FREELISTS FREELIST_GROUPS PCT_FREE LEAF_BLOCKS
    pk_stu_enroll_item STUDENT_ENROLLMENT_ITEM_TBL 2 255 379125760 1 2147483645 1 1 10 323428
    2) The pattern is like this:
    I regard it as not sequential, but with a lot of distinct values.
    STU_ENROLL_ID ITEM_ID PART_ID TASK_ID QUESTION_NO FLASH_NO ITEM_NO
    10005085 C31001008 1 C310010 8 9 8
    10005085 C31001009 1 C310010 9 10 9
    10005085 C31001010 1 C310010 10 11 10
    10005086 0 0 C310010 0 0 0
    10005086 0 1 C310010 0 1 0
    10005086 C31001001 1 C310010 1 2 1
    10005086 C31001002 1 C310010 2 3 2
    10005086 C31001003 1 C310010 3 4 3
    10005086 C31001004 1 C310010 4 5 4
    10005086 C31001005 1 C310010 5 6 5
    10005086 C31001006 1 C310010 6 7 6
    10005086 C31001007 1 C310010 7 8 7
    10005086 C31001008 1 C310010 8 9 8
    10005086 C31001009 1 C310010 9 10 9
    10005086 C31001010 1 C310010 10 11 10
    10005055 C31001005 1 C310010 5 6 5
    10005055 C31001006 1 C310010 6 7 6
    10005055 C31001007 1 C310010 7 8 7
    10005055 C31001008 1 C310010 8 9 8
    3) Not many deletes have been run on the table, as far as I know.
    I still cannot figure out the reason. Please help. Thanks.
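
    One way to check whether the index is genuinely bloated, rather than just wide, is to validate it and read INDEX_STATS (a sketch; note that VALIDATE STRUCTURE locks the table while it runs, so do this off-hours):

    analyze index pk_stu_enroll_item validate structure;

    -- index_stats holds one row, for the current session's last validate
    select lf_rows, del_lf_rows, used_space, btree_space, pct_used
    from   index_stats;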

  • Order by on an indexed column results in a full table scan

    I have a query on table A, which has an index on column B. In the query I am using ORDER BY column B. The explain plan shows that a full table scan is performed on table A without picking up the index on B. How can I ensure that the index is picked up in the explain plan? Please help, it's urgent.
    Pandu

    "Please help its urgent." Contact Oracle Support in that case.
    By making that remark you have just made Blushadow go out to lunch (again)... ;)
    Depending on your query a full scan could be 100% appropriate here.
    But since you didn't post your DB version, optimizer settings, execution plan etc., there's not much more to say, really, besides:
    "Full scans aren't always evil".
    See:
    [When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=3299435]
    [How to post a SQLstatement Tuning Request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0]
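
    Worth adding: a plain B-tree index on B cannot be used to avoid the sort unless Oracle knows B contains no NULLs, because entirely-NULL keys are not stored in a single-column B-tree index. A sketch (a_b_idx is a hypothetical index name):

    -- either declare B as NOT NULL, or exclude NULLs in the predicate:
    select *
    from   a
    where  b is not null
    order by b;

    -- or force the index path just to compare the two plans (not a fix):
    select /*+ index(a a_b_idx) */ *
    from   a
    order by b;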

  • Why is index size bigger than table size?

    Dear All,
    I found that the tables in my database come to around 30TB in total, and the index size for the same is 60TB. This is a data warehousing environment.
    How do the index size and table size come to differ?
    Why do they differ? Why is the index size bigger than the table size?
    How do I manage the size?
    Please give me a clear explanation and the required information on the above.
    Regards
    Suresh

    There are many reasons why the total space allocated to indexes could be larger than the total space allocated to tables. Sometimes it's a mark of good design, sometimes it indicates a problem. In your position your first move is to spend as little time as possible deciding whether your high-level summary is indicative of a problem, so you need to look at a little more detail.
    As someone else pointed out: are you looking at the sizes because you are running out of space, or because you have a perceived performance problem? If neither, then your question is one of curiosity.
    If it's about performance then you should be looking for code (either through statspack/AWR or sql_trace) that is performing badly, and use the analysis of that code to help you identify suspect indexes.
    If it's about space, then you need to do some simple investigations aimed at finding a few indexes that can be "shrunk" or dropped. Pointers for this are:
    select
            table_owner, table_name, count(*)
    from
            dba_indexes
    group by
            table_owner, table_name
    having
            count(*) > 2   -- adjust to keep the output short
    order by
            count(*) desc;
    This tells you which tables have the most indexes - check the sizes of the tables and indexes, and then check the index definitions for the larger tables with lots of indexes.
    Second quick check - join dba_tables to dba_indexes by table_name, and report the table blocks and index leaf blocks in descending order of leaf block count (a sketch follows below). Look for indexes which are very big, and also bigger than their underlying tables. There are special cases (and bugs) that can cause indexes to be much bigger than they need to be ... this report may identify a couple of anomalies that could benefit from an emergency fix followed (possibly) by a strategic fix.
    Regards
    Jonathan Lewis
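
    A sketch of that second check (assuming non-partitioned objects; adjust the owner filter to suit):

    select t.owner, t.table_name, t.blocks as table_blocks,
           i.index_name, i.leaf_blocks
    from   dba_tables t
           join dba_indexes i
             on  i.table_owner = t.owner
             and i.table_name  = t.table_name
    where  t.owner not in ('SYS', 'SYSTEM')
    order by i.leaf_blocks desc;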

  • Index size increases beyond table size

    Hi All,
    Let me know the possible reasons for an index size greater than the table size, and in some cases an index size smaller than the table size.
    Thanks in advance
    sherief

    hi,
    The size of an index depends on how inserts and deletes occur.
    With sequential indexes, when records are deleted randomly the space will not be reused, as all inserts are in the leading leaf block.
    When all the records in a leaf block have been deleted, the leaf block is freed (put on the index freelist) for reuse, reducing the overall percentage of free space.
    This means that if you are deleting aged sequence records at the same rate as you are inserting, then the number of leaf blocks will stay approximately constant with a constant low percentage of free space. In this case it is probably hardly ever worth rebuilding the index.
    With records being deleted randomly, the inefficiency of the index depends on how the index is used.
    If numerous full index (or range) scans are being done, then it should be rebuilt to reduce the leaf blocks read. This should be done before it significantly affects the performance of the system.
    If individual index accesses are being done, then it only needs to be rebuilt to stop the branch depth increasing or to recover the unused space.
    Here is an example of how index size can become larger than table size:
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as admin
    SQL> create table rich as select rownum c1,'Verde' c2 from all_objects;
    Table created
    SQL> create index rich_i on rich(c1);
    Index created
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> delete from rich where mod(c1,2)=0;
    29475 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> insert into rich select rownum+100000, 'qq' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1703936 208 13
    INDEX 2097152 256 16
    SQL> insert into rich select rownum+200000, 'aa' from all_objects;
    58952 rows inserted
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> delete from rich where mod(c1,2)=0;
    58952 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> insert into rich select rownum+300000, 'hh' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 4063232 496 31
    SQL> alter index rich_i rebuild;
    Index altered
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 2752512 336 21
    SQL>

  • Bitmap index or composite index: which is better on a huge table?

    Hi All,
    I got a question regarding the Bitmap index and Composite Index.
    I got a table which has only two columns: CUSTOMER(group_no NUMBER, order_no NUMBER).
    This is a 100-million+ record table with 100K group_nos and 100 million unique order numbers, i.e. each group should have 1000 order numbers.
    I tested by creating a GLOBAL bitmap index on this huge table (more than 1.5 GB in size); the GLOBAL bitmap index that got created is under 50 MB, and when I query for a group number, say SELECT * FROM CUSTOMER WHERE group_no=67677; --> 0.5 seconds to retrieve all the 1000 rows. I checked for different groups and it is the same.
    Now I dropped the bitmap index and re-created a composite index on (group_no, order_no). The index is bigger than the table, around 2 GB in size, and when I query using the same select statement SELECT * FROM CUSTOMER WHERE group_no=67677; --> 0.5 seconds to retrieve all the 1000 rows.
    My question is which one is BETTER, B-tree or bitmap index, and WHY?
    Appreciate your valuable inputs on this one.
    Regards,
    Madhu K.

    Dear Madhu,
    First of all, bitmap indexes are not recommended for write-intensive OLTP applications due to the locking threat they can produce in such applications.
    You told us that this table is never updated; I suppose it is never deleted from either.
    Second, bitmap indexes are suitable for columns having low cardinality. The question is how we define "low cardinality"; you said that you have 100,000 distinct group_nos in a table of 100,000,000 rows.
    You have a cardinality of 100,000/100,000,000 = 0.001. The group_no column might be a good candidate for a bitmap index.
    You said that order_no is unique, so you have a very high cardinality on this column and it might not be a candidate for your bitmap index.
    Third, your query's where clause involves only the group_no column, so why are you including both columns when testing the bitmap and the B-tree index?
    Are you designing such an index in order to avoid visiting the table? But in your case the table is made up of only those two columns, so why not follow Hemant's advice and use an index-organized table?
    Finally, you can find more details about bitmap indexes in the following Richard Foote blog article:
    http://richardfoote.wordpress.com/2008/02/01/bitmap-indexes-with-many-distinct-column-values-wotsuh-the-deal/
    Best Regards
    Mohamed Houri
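
    For reference, a minimal sketch of the index-organized table suggested above (column names taken from the post; storage details omitted):

    create table customer (
      group_no  number not null,
      order_no  number not null,
      constraint customer_pk primary key (group_no, order_no)
    )
    organization index;

    Since every column is part of the key, the IOT keeps all the data in the primary key B-tree and no separate table segment is needed.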

  • Why is a full index scan faster than a full table scan?

    Hi friends,
    In the where clause of a query, if we reference a column that has an index on it, then Oracle uses the index to search the data rather than a TABLE ACCESS FULL operation.
    Why is index searching faster?

    Sometimes it is faster to use an index and sometimes it is faster to use a full table scan. If your statistics are up to date, Oracle is far more likely to get it right.
    If the query can be satisfied entirely from the index, then an index scan will almost always be faster, as there are fewer blocks to read in the index than there would be if the table itself were scanned.
    However, if the query must extract data from the table when that data is not in the index, then the index scan will be faster only if a small percentage of the rows are to be returned. Consider the case of an index where 40% of the rows are returned. Assume the index values are distributed evenly among the data blocks, and that 10 rows fit in each data block, so 4 of the 10 rows in each block match the condition. Then the average data block will be fetched about 4 times, since most of the time adjacent index entries will not be in the same block, and the number of single-block fetches will be about 4 times the number of data blocks. Compare this to a full table scan that does multiblock reads: far fewer reads are required to read the entire table.
    Though it depends on the number of rows per block, a general rule is that any query returning more than about 10% of a table is faster NOT using an index.
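
    The "adjacent index entries are not in the same block" effect described above is what Oracle's clustering factor statistic measures. A quick way to inspect it (a sketch; MY_TABLE is a placeholder):

    select i.index_name,
           i.clustering_factor,
           t.blocks   as table_blocks,
           t.num_rows
    from   user_indexes i
           join user_tables t on t.table_name = i.table_name
    where  i.table_name = 'MY_TABLE';
    -- clustering_factor near table_blocks: rows stored in index order (cheap range scans)
    -- clustering_factor near num_rows: rows scattered, each entry hits a different block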

  • How can I join more than 20 tables which contain more than 5 lakh (500,000) records?

    How can I join more than 20 tables which contain more than 5 lakh (500,000) records?

    If you're trying to join 20 tables I would check:
    - Are all the joins necessary? It's easy sometimes to just join to another table because you're unsure as to whether it's required.
    - What sort of application is it? 20 joins seems a lot to me. Are you trying to achieve too much with one query? Is it possible to break the problem down?
    - If it is necessary to join so many tables, then force the use of hash joins in the query (see the sketch below), especially if you're processing a lot of data and want the best throughput. If you want a quicker first response, this will not be appropriate.
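
    A sketch of forcing hash joins with hints (t1/t2/t3 and the join columns are placeholders; in practice you would name every table alias in the query):

    select /*+ leading(t1) use_hash(t2 t3) */
           t1.col_a, t2.col_b, t3.col_c
    from   t1
           join t2 on t2.t1_id = t1.id
           join t3 on t3.t2_id = t2.id;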

  • Why are 'saved as' .psd files so much bigger than original raw nef files?

    I was under the impression that original raw files were the biggest possible. I appear to be very wrong. Why are 'saved as' .psd files so much bigger than original raw nef files?
    I'm beginning to think that saving them as psd is a bad idea.
    Yes, though I've heard all the arguments for keeping the original raw files (for example: did you throw away the negatives when you were using film?), I see no purpose in keeping them. Once I've made the initial adjustments--cropping, color correction etc.--I don't feel a need to ever go back, and never do. Most of my work is done in Photoshop and I like it that way--but suddenly finding myself with such huge files doesn't appeal to me at all--and other formats like tif...well, never mind for now.

    Good point made, c.pfaffenbichler. However, my thinking is this: there is time spent on the raw file, and then there is much more time spent on the file (usually a psd) once in Photoshop. For me to then go back to the original raw file, after having worked on it in PS, would mean getting rid of all the work (the larger amount of work, time-wise and artistic-wise) done in PS, which seems pointless. Although the psd file does show your layers and stuff, it only shows the end result of each layer. It does not show from where to where you moved your brush, from what point to what point you changed the color of part of an image, etc. Anyhow, I understand why most people keep their raw files, but this is the main reason why I do not. It would mean hours of work on an image you already worked on (and usually were satisfied with) to perhaps make some minor alteration. Also please note that though I was once a pro photog, now I do it mostly for fun. Getting the exact red in my Coca Cola can has never been of importance. On the other hand, if there were a way of working on a raw file within Photoshop and keeping it (saving it as) a raw file equivalent, then I would absolutely do so.

  • CBO picking incorrect indexes or doing Full Scans ( table/indexes)

    Database version - 10.2.0.4
    OS : Solaris 5.8
    Storage: SAN
    Application : PeopleSoft Financials
    DB size : 450 gb
    DB server : 12 CPU ( 900 Mghz each ), 36 GB RAM
    ASMM - sga_target_size = 5 gb
    Locally managed tablespaces - MANUAL
    - db_file_multiblock_read_count - not set explicitly in spfile ( implicitly defaulted to 128 )
    - other optimizer related parameters are set to default
    - system_statistics - CPUSPEEDNW=456.282722513089, IOSEEKTIM =10, IOTFRSPEED=4096     
    - dictionary object system stats were last gathered in Nov 09
    - stats on schema objs are gathered every night using custom script, but I have to say there are some histograms on some tables which were gathered by PS admins
    begin
      dbms_stats.gather_schema_stats(
        ownname          => 'SCHEMANM',
        cascade          => DBMS_STATS.AUTO_CASCADE,
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        degree           => 10,
        no_invalidate    => DBMS_STATS.AUTO_INVALIDATE,
        granularity      => 'AUTO',
        method_opt       => 'FOR ALL COLUMNS SIZE 1',
        options          => 'GATHER STALE');
    end;
    Details :
    We are experiencing erratic database performance. It was upgraded from 9i to 10g along with a PS software upgrade. A job that runs in 12 hrs one day (which is high in itself) would take more than 24 hrs another day.
    We have Test and Staging envs on other servers which do not have performance issues of production magnitude. The test, staging dbs are clones of prod, created by ftp'ing files from one location to another, not sure if that makes any difference but just pointing out.
    On Prod db some symptoms which baffle me are :
    1) A sql executes for over 40 minutes; I check the plan and it is using an "incorrect" index, idx1. Hint the sql with the "correct" index, idx2, and it runs in a few seconds. Result set < 400 rows. The cost with the idx2 hint is HIGHER. This scenario is still understandable, as the CBO is likely to pick the lower cost.
    - But why is there so much discrepancy in cost and response time?
    - What is influencing the CBO to pick an index which is obviously not the fastest response time?
    2) A sql plan shows a FTS and the execution runs forever. But a hint with an index in this case shows the cost is LOWER and the response time is in seconds, so why is the CBO not even evaluating this index path? Because, as I understand the CBO, it will always pick the lower-cost plan.
    What is influencing the CBO? Is it system stats? Any other database parameter? The hardware? The large SGA? Histograms? Stats gathering? Is there any known issue with DBMS_STATS.AUTO_SAMPLE_SIZE and cardinality?
    Where should I put my focus?
    What am I looking for ?
    I do have Jonathan Lewis's Cost-Based Oracle Fundamentals which I have just started so perhaps I will be enlightened but while I read it, it would be of immense help to get some advice from forum gurus.
    At this time I am not posting any exec plan or code, but I can do so if required.
    Thanks,
    JC

    I would like to re-enter this in the Database - General forum as I have not received any response yet. I did not find an option to move it, so I marked this as answered and will re-enter it in the other forum. Hope that's ok.
    Thanks,
    JC
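
    For questions 1) and 2) above, a first step is to capture both costed plans side by side with DBMS_XPLAN (a sketch; table, predicate and index names are placeholders, idx2 standing in for the "correct" index from the post):

    -- the plan the CBO chooses on its own:
    explain plan for
      select * from my_table where my_col = :b1;
    select * from table(dbms_xplan.display);

    -- the plan with the "correct" index forced:
    explain plan for
      select /*+ index(t idx2) */ * from my_table t where my_col = :b1;
    select * from table(dbms_xplan.display);

    Comparing the cost, cardinality and predicate sections of the two plans usually shows which estimate the CBO is getting wrong.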

  • LOB segment size is 2 times bigger than the real data

    here's an interesting test:
    1. I created a tablespace called "smallblock" with 2K blocksize
    2. I created a table with a CLOB type field and specified the smallblock tablespace as a storage for the LOB segment:
    SCOTT@andrkydb> create table t1 (i int, b clob) lob (b) store as
    t1_lob (chunk 2K disable storage in row tablespace smallblock);
    3. I insert data into the table, using a bit less than 2K of data for the clob type column:
    SCOTT@andrkydb> begin
    2 for i in 1..1000 loop
    3 insert into t1 values (mod(i,5), rpad('*',2000,'*'));
    4 end loop;
    5 end;
    6 /
    4. Now I can see that I have an average of 2000 bytes for each lob item:
    SCOTT@andrkydb> select avg(dbms_lob.getlength(b)) from t1;
    AVG(DBMS_LOB.GETLENGTH(B))
    2000
    and that all together they take up:
    SCOTT@andrkydb> select sum(dbms_lob.getlength(b)) from t1;
    SUM(DBMS_LOB.GETLENGTH(B))
    2000000
    But when I take a look at how much the LOB segment is actually taking, I get a result which is a total mystery to me:
    SCOTT@andrkydb> select bytes from dba_segments where segment_name = 'T1_LOB';
    BYTES
    5242880
    What am I missing? Why is the LOB segment ~2 times bigger than the data requires?
    I am on 10.2.0.3 EE, Solaris 5.10 sparc 64bit.

    Thanks for the link, it is good to know such a thing is possible, although I don't really see how it can help me.
    But you know, you were right regarding the smaller data amounts. I have tested with 1800 bytes of data and in this case it does fit just right.
    But this means that there are 248 bytes wasted (from my point of view as a developer) per block! If there is such an overhead, then I must be able to estimate it when designing the data structures, and I don't see a single word in the docs about such a thing.
    Moreover, if you use the NCLOB type, then only 990 bytes fit into a single 2K chunk. So the overhead might become really huge when you get up to gigabyte amounts...
    I have a LOB segment for an nclob-type field in a production database which is 5GB large and contains only 2.2GB of real data. There are no "deleted" rows in it; I know because I have rebuilt it. So this looks like a total waste of disk space... I must say, I'm quite disappointed with this.
    - Andrei
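
    Two quick checks make the overhead visible (a sketch): confirm the chunk size in force, and remember that every out-of-row LOB value is rounded up to whole chunks.

    select table_name, column_name, chunk, in_row
    from   user_lobs
    where  table_name = 'T1';

    -- rough expected segment size for out-of-row LOBs:
    --   num_lobs * ceil(avg_lob_bytes / usable_bytes_per_chunk) * chunk_size
    -- where usable_bytes_per_chunk is the chunk size minus per-block overhead,
    -- which is why a 2000-byte value no longer fits in a single 2K chunk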

  • How to delete duplicate records connected to one or more tables in SQL 2008?

    Hi
    Can anyone please help me with the SQL query? I have a table called People with columns personno, lastname, firstname and so on. The personno column has duplicate records, so I have prefixed all the duplicate numbers with "double".
    I tried deleting these duplicate records, but they are linked to one or more other tables. I have to find all the tables blocking the deletion of a duplicate person, and then create select statements which generate update statements
    that replace the current id of the duplicate person with a substitute id. (The personno is used as an id in the database.)
    Thanks

    You should not prefix "double" to the personno: once you change it, you will no longer be able to join or relate to the other tables. Keep the id as it is and use another field (STATUS) to mark the row as a duplicate. You will also require another field (PRIMARYID) against
    those duplicate rows, i.e. the main or primary personno.
    SELECT * FROM OtherTable a INNER JOIN
    (SELECT personno, status, primaryid FROM PEOPLE WHERE status = 'Duplicate') b
    ON a.personno = b.personno;
    -- update through the alias, otherwise the aliased FROM clause is ambiguous:
    UPDATE a SET personno = b.primaryid
    FROM OtherTable a INNER JOIN
    (SELECT personno, status, primaryid FROM PEOPLE WHERE status = 'Duplicate') b
    ON a.personno = b.personno;
    NOTE: Please take a backup before applying the query. This is not tested.
    Regards, RSingh

  • How can I get the "Open" and "Save As" dialog boxes to open at larger than their default size?

    How can I get the "Open" and "Save As" dialog boxes to open at larger than their default size? I would like them to open at the size to which they were previously resized, like they used to in previous operating systems. They currently open at a very small size, and the first column is only a few letters wide, necessitating a resize practically every time one wants to use it. Any help would be appreciated.

