Should DML transaction be so slow?

We are running Oracle 10.2.0.1.0 on powerful enough Xeon machines (1-4 Xeons, 8-16 GB RAM) and 64-bit Windows 2003. The application is strictly data warehousing, which means no primary/foreign keys on the fact tables and no small transactions. Fact tables contain 10-20 million rows.
Now the problem: if I delete 1-2 million rows from a 20-million-row table, it takes about an hour. Should it take that long? Updates and inserts are not as slow; inserting 100K rows takes about 1 minute. Isn't that too slow? Are there settings we can adjust to optimize the database for data warehousing rather than for OLTP?

> Should DML transaction be so slow?
Probably not, but it's hard to say. Deleting 2 million rows in an hour sounds slow. You need to get execution plans for your deletes to see what is happening.
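A minimal sketch of capturing such a plan (the table name and predicate here are assumed, not from the original post):

EXPLAIN PLAN FOR
  DELETE FROM facts WHERE load_date < DATE '2006-01-01';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);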
It sounds like you're deleting across partitions in the same statement. You could try deleting rows for each partition with individual statements to see if that's faster.
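A minimal sketch of the per-partition variant (the partition names and predicate are assumed):

DELETE FROM facts PARTITION (p_2006_01) WHERE load_date < DATE '2006-02-01';
COMMIT;
DELETE FROM facts PARTITION (p_2006_02) WHERE load_date < DATE '2006-03-01';
COMMIT;

Committing after each partition also keeps the undo required by any single statement smaller.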

Similar Messages

  • DML Transactions

    Hi,
    I came across the following statement:
    " If the exception is handled in the calling procedure, all DML statements in the calling procedure and in the called procedure remain as part of the transaction. If the exception is unhandled in the calling procedure, the calling procedure terminates and the exception propagates to the calling environment. All the DML statements in the calling procedure and the called procedure are rolled back along with any changes to any host variables. The host environment determines the outcome for the unhandled exceptions."
    My question is this:
    if the exception is handled in the called procedure, do the DML statements belong to different transactions?
    thanks,
    Muthu Kumar

    No. If the exception is handled by either the caller or the callee, there is no transactional impact (assuming the exception handler does not roll back the entire transaction, or rolls back only to a savepoint).
    Justin
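    A minimal PL/SQL sketch of Justin's point (the procedure and table names are made up): if the caller handles the callee's exception, the callee's earlier DML simply stays part of the single open transaction.

    CREATE OR REPLACE PROCEDURE callee AS
    BEGIN
      INSERT INTO t2 VALUES (2);                        -- stays in the transaction
      RAISE_APPLICATION_ERROR(-20001, 'simulated error');
    END;
    /
    CREATE OR REPLACE PROCEDURE caller AS
    BEGIN
      INSERT INTO t1 VALUES (1);
      callee;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;  -- handled here: both inserts remain uncommitted parts of one transaction
    END;
    /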

  • Transactional replication very slow with indexes on Subscriber table

    I have set up transactional replication for one of our databases, where one table with about 5 million records is replicated to a Subscriber database. With every replication, about 500,000-600,000 changed records are sent to the Subscriber.
    For the past month I have seen very strange behaviour when I add about 10 indexes to the Subscriber table. As soon as I have added the indexes, replication becomes extremely slow (almost 3 hours for 600k records). As soon as I remove the indexes, replication is very fast again: about 3 minutes for the same number of records.
    I've searched a lot on the internet but can't find any explanation for this strange behaviour after adding the indexes. As far as I know, it shouldn't be a problem to add indexes to a Subscriber table, and it hasn't been a problem on another replication configuration we use.
    Some information from the Replication Log:
    With indexes on the Subscriber table
    Total Run Time (ms) : 9589938 Total Work Time : 9586782
    Total Num Trans : 3 Num Trans/Sec : 0.00
    Total Num Cmds : 616245 Num Cmds/Sec : 64.28
    Total Idle Time : 0 
    Writer Thread Stats
    Total Number of Retries : 0 
    Time Spent on Exec : 9580752 
    Time Spent on Commits (ms): 2687 Commits/Sec : 0.00
    Time to Apply Cmds (ms) : 9586782 Cmds/Sec : 64.28
    Time Cmd Queue Empty (ms) : 5499 Empty Q Waits > 10ms: 172
    Total Time Request Blk(ms): 5499 
    P2P Work Time (ms) : 0 P2P Cmds Skipped : 0
    Reader Thread Stats
    Calls to Retrieve Cmds : 2 
    Time to Retrieve Cmds (ms): 10378 Cmds/Sec : 59379.94
    Time Cmd Queue Full (ms) : 9577919 Full Q Waits > 10ms : 6072
    Without indexes on the Subscriber table
    Total Run Time (ms) : 89282 Total Work Time : 88891
    Total Num Trans : 3 Num Trans/Sec : 0.03
    Total Num Cmds : 437324 Num Cmds/Sec : 4919.78
    Total Idle Time : 0 
    Writer Thread Stats
    Total Number of Retries : 0 
    Time Spent on Exec : 86298 
    Time Spent on Commits (ms): 282 Commits/Sec : 0.03
    Time to Apply Cmds (ms) : 88891 Cmds/Sec : 4919.78
    Time Cmd Queue Empty (ms) : 1827 Empty Q Waits > 10ms: 113
    Total Time Request Blk(ms): 1827 
    P2P Work Time (ms) : 0 P2P Cmds Skipped : 0
    Reader Thread Stats
    Calls to Retrieve Cmds : 2 
    Time to Retrieve Cmds (ms): 2812 Cmds/Sec : 155520.63
    Time Cmd Queue Full (ms) : 86032 Full Q Waits > 10ms : 4026
    Can someone please help me with this issue? Any ideas? 
    Pim 

    Hi Megens:
    An insert statement can be slow not only because of indexes but because of a few other things too:
    0) SQL Server blocking during the inserts
    1) Insert triggers, if any exist
    2) Constraints, if any
    3) Index fragmentation
    4) Page splits / fill factor
    Without indexes, inserts are fast. With indexes, each time a new row is inserted, SQL Server will:
    1) check for room in the page; if there is no room, a page split happens so the record can be placed in the right spot
    2) update every index once the record is written
    3) and all this extra work makes the insert statement slower
    It's better to run index maintenance jobs frequently to avoid fragmentation.
    If everything is clear on the SQL Server side, look at disk I/O, network latency between the servers, and so on.
    Thanks, Satish Kumar. Please mark this post as answered if my answer helps resolve your issue :)
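    A minimal sketch for checking fragmentation on the Subscriber-side indexes (the table name dbo.SubscriberTable is assumed):

    SELECT i.name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SubscriberTable'),
                                          NULL, NULL, 'LIMITED') ips
    JOIN   sys.indexes i
      ON   i.object_id = ips.object_id AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;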

  • SM58 Transaction Recorded Very Slow

    Dear Experts,
    I have many tRFCs in the PI system in SM58 with the status "transaction recorded". This seems to be because one interface (file to IDoc) suddenly sent more data than usual. It happened at midnight, and since then we have had a queue in SM58 with many recorded transactions (until tonight).
    The strange thing is that when I execute the LUW (F6), the transaction is processed successfully, even though sometimes after pressing F6 the processing can take a while (marked by the loading cursor). When the processing takes long, I just stop the transaction and re-open SM58; the transaction that was executed before is then in "executing" status, and when I execute the LUW for it again, it never takes long.
    Trying to execute LUWs with the "recorded" status ticked ends up executing no transactions at all.
    Checking SMQS for the destination, the actual connections rarely reach the maximum. The qRFC resource status in SMQS shows OK.
    Going to SM51, then Server Name > Information > Queue Information, there are no waiting requests.
    The transactions do get processed; they are just processed very slowly, and this impacts the business.
    What can I do when this happens? How do I really re-process those recorded transactions?
    Could this be because of the receiver's resources? How can I check this?

    Dear Experts,
    According to this link,
    http://wiki.scn.sap.com/wiki/pages/viewpage.action?original_fqdn=wiki.sdn.sap.com&pageId=145719978
    "Transaction recorded" usually happens when A. processing idocs or B. BW loads.
    A.If it occurs when processing idocs you will see function module "IDOC_INBOUND_ASYNCH" mentioned in SM58.
    Check also that the idocs are being processed in the background and not in the foreground.
    Ensure that background processing is used for ALE communications.
    Report RSEOUT00 (outbound)can be configured to run very specifically for the high volume message types on their system. Schedule regular runs of report RESOUT00 it can be run for
    IDoc Type and\or Partner etc..
    To set to background processing for Outbound idocs do the following:
    -> go to transaction WE20 -> Select Partner Select Outbound Message Type and change the processing method from
    "Transfer IDoc Immedi." to "Collect IDocs".
    Reading that explanations, should the setting of IDoc processing to background (Collect IDocs) is done in PI or the receiver?
    If the IDocs is processed collectively, will it make the sending IDoc process faster from PI to the receiver system? What is the explanation that if the IDoc is processed collectively, it would make the IDoc sending to be faster?
    Why should we use RSOUT00 report when we already have SM58, and we can execute LUWs in SM58?
    Thank you,
    Suwandi C.

  • DML,Transactions and index updates

    Hi,
    It's known that adding indexes slows down DML on a table, i.e. every time the table data changes, the index has to be recalculated. What I am trying to understand is whether the index is recalculated as soon as Oracle sees the change.
    To elaborate, let's say I have a table abc with 4 columns: column1, column2, column3 and column4. I have two indexes: a unique index on column1 and a non-unique index on column2.
    So when I update column4, which is not indexed, will any transactional data be generated for this operation? Will it be generated if I update column2 (with the non-unique index)?
    What I am interested to know is how transaction boundaries impact the recalculation of the index. Will Oracle always generate transactional entries and recalculate affected indexes even before the transaction is committed and the data change is made permanent?

    user9356129 wrote:
    > Its known adding indexes slows down the DML on the table. i.e. every time table data changes, the index has to be recalculated.
    Yes, but only when involved (i.e. indexed) columns are changed. And indexes are not "recalculated": assuming the index is of type B-tree (by far the most commonly used type), the B-tree is maintained. How that is done can be found in elementary computer science materials, which you can probably find using Google.
    > So when i am trying to update column4, which is not indexed, will there be any transactional data generated for this operation?
    You'll need to clarify what you mean by "transactional data". But in this case the block(s) holding the table row(s) in which you updated column4 will be changed, in memory, to reflect your update. And as column4 is not involved in any index, no index blocks will be changed.
    > Will it be generated if i am updating column2 ( with non-unique index)?
    In this case not only table blocks will be changed to reflect your update, but also index blocks (holding B-tree information) will be changed (in memory).
    > Will oracle always generate transactional entries and recalculate affected indexes even before the transaction is committed and the data change is made permanent?
    Yes (to the part following 'and' in that sentence; I don't know what you mean by "transactional entries").
    Toon
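    A minimal sketch to observe Toon's point for yourself (using table abc from the question): the session statistic "redo size" grows as soon as each statement executes, long before any COMMIT, and grows more when index blocks are maintained too.

    SELECT sn.name, ms.value
    FROM   v$mystat ms
    JOIN   v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name = 'redo size';

    UPDATE abc SET column4 = 'x' WHERE column1 = 1;  -- table blocks only
    -- re-run the v$mystat query and note the delta, then:
    UPDATE abc SET column2 = 'y' WHERE column1 = 1;  -- table blocks + index blocks
    -- the second delta is larger because index leaf blocks were maintained as well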

  • Flashback and transaction query very slow

    Hello. I was wondering if anyone else has seen flashback transaction queries run really slowly, and whether there is anything I can do to speed them up? Here is my situation:
    I have a database with about 50 tables. We need to allow the user to go back to a point in time and "undo" what they have done. I can't use flashback table because multiple users can be making changes to the same table (different records) and I can't undo what the other users have done. So I must use the finer granularity of undoing each transaction.
    I have not had a problem with the queries themselves. I basically get a cursor over all the transactions in each of the tables and order them backwards (since all the business rules must be observed). However, getting this cursor takes forever. From that cursor I can execute the undo_sql. In fact, I once had a cursor that did a "union all" across every table, and even if the user had only modified 1 table it took way too long. So now I do a quick count based on the ROWSCN (running 10g, and the tables have ROWDEPENDENCIES) falling in the relevant time window to find out whether a table has been touched, and build the cursor only for the tables that have been touched. This helps, but it is still slow, especially compared to any other query I have. And if the user did touch a lot of tables, it is still way too slow.
    Here is an example of part of a query that is used on each table:
    select xid, commit_scn, logon_user, undo_change#, operation, table_name, undo_sql
    from flashback_transaction_query
    where operation IN ('INSERT', 'UPDATE', 'DELETE')
      and xid IN (select versions_xid
                  from TABLE1 versions between SCN p_scn and current_scn
                  where system_id = p_system_id)
      and table_name = UPPER('TABLE1')
    Any help is greatly appreciated.
    -Carmine

    Anyone?
    Thanks,
    -Carmine

  • DVD Burn Speed - should I go Fast or Slow?

    Please help! I am getting conflicting advice regarding the best speed to burn a DVD Video (in DVDSP, iDVD, Disk Utility etc) that will be used as a duplication master. I have read online that you should avoid slowing down the burn too much (I think because the disc dyes etc are 'optimised' for faster speeds) and that you will run into problems if you burn an 8x disc at 1 or 2x. However, a technician assures me that you should slow the burn right down and that professional duplicators and production houses never burn a DVD at its rated speed! Who is correct? Any help or links appreciated. Thanks in advance.

    Burning too slow is often just as bad as burning too fast.
    It's possible this applies to the latest and newest DVD burners on the market, but not necessarily to burners that have been around for some time, like the Pioneer 103 for example. I own three separate burners, all Pioneer (103, 107, and 110U). The newest S-Drive I own is a Pioneer 110U in an external FireWire enclosure. Originally my G4 733 came with a Pioneer 103. It still works, but it now lives in an older G4 PowerMac as a backup... still works great (at SLOWER BURN SPEEDS)!
    So slower is better on that particular early model S-Drive, and the same applies to the Pioneer 107. However, it does not apply to the latest Pioneer burners like the 110U, 111, or 112.
    As pointed out in the article you mentioned, the media can and often does influence the actual write speed the drive will default to when burning a DVD. Very good article, by the way, and thanks for bringing it to our attention.
    Contrary to what is written in that article, though, "slower is better", provided your burner is also an earlier model (and perhaps not if you have one of the latest or newer burners).

  • Transaction EA60 is slow

    Hi,
    Transaction EA60 (spooling of bills) is taking much more time than before. It is selecting 100,000 records from table ERDK. The indexed fields are as follows:
    1. Print date [DRUCKDAT]
    2. Contract account [VKONT]
    3. Reconciliation key for General Ledger
    4. Reason for creating print document [ERGRD]
    5. Line items are archived [ITEMS_ARCHIVED], number of print document [OPBEL]
    6. Posting date in the document [BUDAT]
    7. Business partner number [PARTNER]
    8. Number of the substitute FI-CA document [ABWBL]
    If there is an issue with an index, should I create a new one? If yes, on which field(s)?

    Hi,
    I guess this is a standard transaction. Clean up all the older spools, and archive old data from the tables.

  • Is parallel DML transactionally equivalent to non parellel DML?

    I've got a whole bunch of insert-into-select statements with parallel hints on the inserts.
    Say I have parent object A and child object B.
    I fetch all the A's I want to move and use FORALL (with a FETCH LIMIT, i.e. in batches) to insert the A's into the parent table. I also store the PKs of each batch of A's so that I can use them in a join to identify the B's for that batch of A's.
    INSERT /*+ parallel(arch,4) */ INTO ARCHIVED_A arch
    SELECT *
    FROM   NONARCHIVED_A non_arch
    WHERE  EXISTS (
             SELECT 1
             FROM   ids i
             WHERE  non_arch.ot_id = i.ot_id);
    Something like that.
    I am finding that this works fine when I don't use parallel DML.
    Whenever I use parallel DML, I end up with the A's archived but no B's.

    From: http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/usingpe.htm#i1006876
    "A session that is enabled for parallel DML may put transactions in the session in a special mode: If any DML statement in a transaction modifies a table in parallel, no subsequent serial or parallel query or DML statement can access the same table again in that transaction. This means that the results of parallel modifications cannot be seen during the transaction. Serial or parallel statements that attempt to access a table that has already been modified in parallel within the same transaction are rejected with an error message."
    So it's not quite "transactionally equivalent", in the sense that you cannot read the changes made by a parallel DML statement: you must commit them first.
    Could your code be losing the error message (which you will get if you try to read your changes) in some WHEN OTHERS handler?
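    A minimal sketch of that restriction (table names follow the question; the error raised is ORA-12838):

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ parallel(arch,4) */ INTO ARCHIVED_A arch
    SELECT * FROM NONARCHIVED_A;

    SELECT COUNT(*) FROM ARCHIVED_A;  -- fails with ORA-12838
    -- any follow-up DML that reads ARCHIVED_A to find the B's fails the same way

    COMMIT;
    SELECT COUNT(*) FROM ARCHIVED_A;  -- works after the commit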

  • Should my laptop be so slow?

    I have a Compaq 615 with these specs: http://pastebin.com/iHbLzsZU
    I have the MATE DE installed, but my computer is still very slow. For example, when I have Firefox and a terminal open, the computer becomes sluggish even when just switching windows. Is that normal?

    giwrg98 wrote: My laptop does indeed get quite hot, and how can I tell if my fan is working?
    The same way you know a vacuum cleaner is working: use your ears. Also look in the BIOS; most of them report fan RPMs. But this is a serious thing: 95 °C on the CPU is not good, not normal, and can't be left as is. It can potentially damage your hardware and will definitely shorten its lifespan.
    giwrg98 wrote: EDIT: I don't know if this helps, but I have installed GNOME 3; startx gives an error, and GDM tells me that it failed to load the GNOME session.
    I don't know what exactly you are doing and what you have done already. What kind of error? People on the forum are not mind-readers. startx by default should put you into bare twm (3 very old-school windows with greenish borders on a black background). What you want to do is add "gdm" to the DAEMONS array in /etc/rc.conf and log in using the GDM login manager. I really suggest going through the Beginners' Guide* more carefully; everything you need to know and do is there, explained easily and step by step.
    *and other wiki pages like GNOME, nvidia/ati, etc. Don't be intimidated; beginnings are not always smooth, but if you are the right kind of computer user, Arch will open a whole new world for you and will be your best companion.
    Last edited by masteryod (2012-06-14 21:45:37)

  • Should KSLD be very very slow?

    I have been trying to use KSLD, but I find that it is so slow that it is close to being unusable.
    Does this mean that I have configured something incorrectly, or is this what I should expect?
    I would like to use this tool if it can be a little faster.
    Thank you

    Google Chrome browser is very very slow to open up.
    No problems here. Worked fast on 9841 and now on 9860.
    I am using the stable channel with version: 38.0.2125.104
    -- SvenC

  • Should Migration Assistant be this slow?

    Hey gang-
    Just got a brand new late-2013 iMac to replace my mid-2007 iMac.
    Migration Assistant is currently in progress, but I'm amazed at how slowly it is going. It started about 30 minutes ago and is showing 36 hours 45 minutes remaining! The source computer is a mid-2007 iMac and the new machine is a 27" late-2013 iMac Core i7. The drive in the old iMac has a 650 GB capacity with about 600 GB of data. The new iMac has the 3 TB Fusion Drive.
    The 2007 iMac is in Target Disk mode and connected via its Firewire 800 port to the new iMac's Thunderbolt port using an Apple Thunderbolt to Firewire adapter.
    I've done many such migrations over the years and never seen one this slow.   Any thoughts?
    Thanks!
    Dave

    Thanks everyone!
    As it turned out, the Migration actually took just over 6 hours.   For the first two hours, the 35+ hour estimate persisted.   Then suddenly it changed to about 3 hours.   Then back to 30+ hours!   Finally the estimate seemed to fall in line with the progress bar for about the last hour.   Not too bad I suppose for about 600GB of data and the move from Lion to Mavericks.
    All in all the migration seems to have worked very well.  I guess the bottom line is don't be too concerned by the time counter unless the progress bar seems stalled for a long time as well.
    Thanks again!
    Dave

  • Slow slow slow...   should Creator 2 be that slow?

    Can someone tell if what I'm experiencing is typical?
    I'm experiencing very long delays in doing practically anything using Sun Java Studio Creator 2. I had no problems with the previous version - it was awesome. Right now when the application is up, and I press "Create New Project" from the Welcome screen, it will take at least 4 minutes before I get the screen (sometimes much longer). Using Window's Task Manager, I notice that the CPU usage is hovering around 0, 1 and 2% with little spikes now and then. I got lots of Mem to play with....
    Win 2K operating system
    P4 dual CPU at 3.2GHz
    and 2 G of RAM
    I'm seriously considering going back to the previous release...
    Any thoughts or feedback?

    Sorry.... couldn't get back in, and forgot my password.
    I have uninstalled Creator 2. Re-installed it, and I'm getting the same problems. I have cleaned up the registry, and deleted the original directories both user/.creator and /sun/creator2... Please note that with the previous version (prior to Creator 2) I had no problems and things were running quite smoothly.
    As for the processors, they are basically idle for most of my wait. From the display I still have lots of memory to play (MEM Usage was around 500000, so I still have 1.5 G to play with...)
    So...... what did change between the Creator 1 and 2 to make things so slow??? I did create a thread dump.... Not familiar with how to read it. I did notice a few "Inactive RequestProcessor thread..." If someone wants to look at it, let me know.
    BTW, this is a corporate desktop environment, so I can't do much about the operating system. It is also a desktop, so no laptop battery problems here!
    Thanks for all the comments and help suggestions!

  • PutAll() with Transactions is slow

    Hi Guys,
    I am looking into using transactions, but they seem to be really slow - perhaps I am doing something silly. I've written a simple test to measure the time taken by putAll() with and without transactions and putAll() with transactions takes nearly 4-5 times longer than without transactions.
    Results:
    Average time taken to insert without transactions 210ms
    Average time taken to insert WITH transactions 1210ms
    Test code:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.TransactionMap;

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class TestCoherenceTransactions {
        private static final int MAP_SIZE = 5000;
        private static final int LIST_SIZE = 15;
        private static final int NO_OF_ITERATIONS = 50;

        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("dist-cache");

            // build the test data: MAP_SIZE entries, each a list of LIST_SIZE strings
            Map dataToInsert = new HashMap();
            for (int i = 0; i < MAP_SIZE; i++) {
                List value = new ArrayList();
                for (int j = 0; j < LIST_SIZE; j++) {
                    value.add(j + "QWERTYU");
                }
                dataToInsert.put(i, value);
            }

            // putAll() without transactions
            long timeTaken = 0;
            long startTime = 0;
            for (int i = 0; i < NO_OF_ITERATIONS; i++) {
                cache.clear();
                startTime = System.currentTimeMillis();
                cache.putAll(dataToInsert);
                timeTaken += System.currentTimeMillis() - startTime;
            }
            System.out.println("Average time taken to insert without transactions " + timeTaken / NO_OF_ITERATIONS);

            // putAll() inside a local transaction
            timeTaken = 0;
            for (int i = 0; i < NO_OF_ITERATIONS; i++) {
                cache.clear();
                startTime = System.currentTimeMillis();
                TransactionMap mapTx = CacheFactory.getLocalTransaction(cache);
                mapTx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                mapTx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
                Collection txnCollection = Collections.singleton(mapTx);
                mapTx.begin();
                mapTx.putAll(dataToInsert);
                CacheFactory.commitTransactionCollection(txnCollection, 1);
                timeTaken += System.currentTimeMillis() - startTime;
            }
            System.out.println("Average time taken to insert WITH transactions " + timeTaken / NO_OF_ITERATIONS);
            System.out.println("cache size " + cache.size());
        }
    }
    Am I missing something obvious? I can't understand why the transactions are so slow. Any pointers would be very much appreciated.
    Thanks
    K

    Hi,
    TransactionMap is this slow because, with CONCUR_PESSIMISTIC or CONCUR_OPTIMISTIC, it locks and unlocks every entry you modify one by one (there is no lock-many functionality), and even every entry you read if you use at least the REPEATABLE_GET isolation level, for the lifetime of the transaction. Since each lock and unlock operation needs a network call (and that call has to be backed up from the primary node to the backup node in a distributed cache), the more entries you need to lock and unlock, the more latency is introduced by this locking and unlocking.
    Best regards,
    Robert

  • Custom Transaction running slow on R/3 4.7 ,Oracle 10.2.0.4

    Hi,
    We have a custom transaction for checking the roles related to a transaction.
    We have the same copy of the program in DEE, QAS and PRD (obviously).
    The catch is that in DEE the transaction runs very slowly, whereas in QAS and PRD it runs normally.
    In the debugger I found that in DEE, accessing AGR_1016 takes a long time.
    I checked the indexes, tables and extents and noticed that in DEE the extents and the physical table size are larger than in PRD, whereas PRD has more records and DEE fewer.
    Please guide me on what checks I can perform to find out where exactly the problem is.
    Regards,
    Siddhartha.

    Hi,
    Thanks for the reply.
    The details are given below.
    Please let me know if you need any more information.
    /* THE ACCESS PATH */
    SELECT STATEMENT ( Estimated Costs = 2 , Estimated #Rows = 1 )
      5 7 FILTER
            Filter Predicates
        5 6 NESTED LOOPS
              ( Estim. Costs = 1 , Estim. #Rows = 1 )
              Estim. CPU-Costs = 14,882  Estim. IO-Costs = 1
          5 4 NESTED LOOPS
                ( Estim. Costs = 1 , Estim. #Rows = 1 )
                Estim. CPU-Costs = 14,660  Estim. IO-Costs = 1
              1 INDEX RANGE SCAN UST12~0
                  ( Estim. Costs = 1 , Estim. #Rows = 1 )
                  Search Columns: 4
                  Estim. CPU-Costs = 9,728  Estim. IO-Costs = 1
                  Access Predicates  Filter Predicates
            5 3 TABLE ACCESS BY INDEX ROWID AGR_1016
                  ( Estim. Costs = 1 , Estim. #Rows = 4 )
                  Estim. CPU-Costs = 4,932  Estim. IO-Costs = 1
                2 INDEX RANGE SCAN AGR_1016^0
                    Search Columns: 2
                    Estim. CPU-Costs = 3,466  Estim. IO-Costs = 0
                    Access Predicates  Filter Predicates
          5 INDEX UNIQUE SCAN UST10S~0
              Search Columns: 5
              Estim. CPU-Costs = 221  Estim. IO-Costs = 0
              Access Predicates  Filter Predicates
    The details of the index UST12~0:

    UNIQUE Index UST12~0
    Column Name                  #Distinct
    MANDT                                7
    OBJCT                            1,339
    AUTH                             6,274
    AKTPS                                2
    FIELD                              762
    VON                              6,112
    BIS                                246
    Last statistics date         02.08.2008
    Analyze Method               Sample 80,739 Rows
    Levels of B-Tree             2
    Number of leaf blocks        9,588
    Number of distinct keys      722,803
    Average leaf blocks per key  1
    Average data blocks per key  1
    Clustering factor            42,765

    Table AGR_1016
    Last statistics date         24.09.2008
    Analyze Method               Sample 5,102 Rows
    Number of rows               5,102
    Number of blocks allocated   51
    Number of empty blocks       0
    Average space                2,612
    Chain count                  0
    Average row length           48
    Partitioned                  NO

    NONUNIQUE Index AGR_1016~001
    Column Name                  #Distinct
    MANDT                                4
    PROFILE                          4,848
    COUNTER                             10
    Last statistics date         24.09.2008
    Analyze Method               Sample 5,102 Rows
    Levels of B-Tree             1
    Number of leaf blocks        24
    Number of distinct keys      5,101
    Average leaf blocks per key  1
    Average data blocks per key  1
    Clustering factor            1,449

    UNIQUE Index AGR_1016^0
    Column Name                  #Distinct
    MANDT                                4
    AGR_NAME                         4,725
    COUNTER                             10
    Last statistics date         24.09.2008
    Analyze Method               Sample 5,102 Rows
    Levels of B-Tree             1
    Number of leaf blocks        30
    Number of distinct keys      5,102
    Average leaf blocks per key  1
    Average data blocks per key  1
    Clustering factor            1,410
    If any more info is required, please tell me.
    Siddhartha
