Index stats have no data

Hi,
I have the following settings in a database.
How do I interpret this info regarding the index?
SQL> select * from index_stats
no rows selected
statistics_level                     string      TYPICAL
timed_os_statistics                  integer     0
timed_statistics                     boolean     TRUE
db_block_size                        integer     8192
10g Enterprise Edition Release 10.2.0.2.0 - 64bit
Is the index good / fragmented / or ..?
Thanks a ton

Hi,
Shall I rebuild this index or not?
When done properly, rebuilding or coalescing an index is 100% safe.
Have you considered an index coalesce instead?
But first, what is your motive?
- To reclaim disk space after a major delete?
- To improve performance?
- Some other reason?
Please read this carefully, it explains the issues:
http://www.dba-oracle.com/t_index_rebuilding_issues.htm
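For reference, here is the basic syntax for both options (a minimal sketch; myindex is a hypothetical index name):
-- Coalesce: merges adjacent, sparsely filled leaf blocks in place; no extra disk space needed
alter index myindex coalesce;
-- Rebuild: recreates the index; needs room for both the old and new copies while it runs
alter index myindex rebuild;
-- Rebuilding online reduces locking on a busy table (an Enterprise Edition feature)
alter index myindex rebuild online;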
The article is talking about index_stats.
Yes, index_stats collects additional details about an index when you use the validate index command.
This is used mostly for justifying an index rebuild, and it can be dangerous because it consumes significant resources and may cause locking issues . . .
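That is also why your query above returned no rows: index_stats is a session-level view that stays empty until you validate an index in the same session. A minimal sketch (myindex is again a hypothetical index name):
-- populates index_stats for the current session only; it locks the index, so run it off-peak
analyze index myindex validate structure;
-- the view now holds one row describing that index
select height, lf_rows, del_lf_rows, pct_used
  from index_stats;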
BTW, that link is just part one of a 4 part series, make sure to read the other parts:
http://jonathanlewis.wordpress.com/2009/09/15/index-explosion-4/#comment-34512
Hope this helps . . .
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: The Definitive Reference"
http://www.rampant-books.com/t_oracle_tuning_book.htm
"Time flies like an arrow; Fruit flies like a banana".

Similar Messages

  • What is the use of the "DATA var-name LIKE SY-INDEX" statement

    Hi to all,
    Is there any use of the "DATA <var-name> LIKE SY-INDEX" statement in ABAP? Will the variable <var-name> be changed along with SY-INDEX when we declare it like this?
    Could you give me a fast response.
    Thank you,
    Srinivasa Rao K.

    Hi, check this example:
    data: v_index type sy-index value 10.
    do 15 times.
      " sy-index holds the current loop pass
      if sy-index = v_index.
        write: / 'the system index is 10'.
        exit.
      endif.
    enddo.
    One common use of the index is in HR programs.
    If the user wants the employee's current salary and the previous month's salary, you need to use the index:
    if the table has 10 records, you pick the record at index 1 (current month) and index 2 (previous month).
    select pernr ansal from pa0008 into table it_pa0008
      where pernr in s_pernr and begda in s_begda.
    loop at it_pa0008.
      case sy-tabix.  " sy-tabix = index of the current internal table row
        when 1.
          it_final-cursal  = it_pa0008-ansal.
        when 2.
          it_final-prevsal = it_pa0008-ansal.
      endcase.
    endloop.
    regards,
    venkat.

  • Simple index (How to view the data (all columns) of an index in Toad)

    Hi All,
    I am training myself on SQL tuning, and over the years I have seen people create many indexes. Today I am trying to learn about the various types of indexes, and I am curious to see the physical data in an index, but I am not able to do so in Toad. Say I have an index definition like:
    create index employees_employee_id on employees(employee_id)
    My index would hold two columns of information: the rowid and the employee_ids in sorted order, right? If my understanding is not correct, could anyone please correct me here.
    PS: The main problem is how to see the physical index data in Toad; please help me with it.
    Many thanks
    Rahul

    There is no SELECT query that can be written directly against an index. But a SELECT query on a table can be written in such a way that Oracle scans only the index and gives you the information.
    Here is an example.
    SQL> create table t
      2  (
      3    no integer
      4  );
    Table created.
    SQL> create index t_idx on t(no);
    Index created.
    SQL> insert into t       
      2  select level
      3    from dual connect by level <= 10;
    10 rows created.
    SQL> commit;
    Commit complete.
    SQL> alter table t modify no not null;
    Table altered.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true)
    PL/SQL procedure successfully completed.
    SQL> select rowid, no from t;
    ROWID                      NO
    AAFrZaABMAAAiKKAAA          1
    AAFrZaABMAAAiKKAAB          2
    AAFrZaABMAAAiKKAAC          3
    AAFrZaABMAAAiKKAAD          4
    AAFrZaABMAAAiKKAAE          5
    AAFrZaABMAAAiKKAAF          6
    AAFrZaABMAAAiKKAAG          7
    AAFrZaABMAAAiKKAAH          8
    AAFrZaABMAAAiKKAAI          9
    AAFrZaABMAAAiKKAAJ         10
    10 rows selected.
    SQL> select * from table(dbms_xplan.display_cursor);
    PLAN_TABLE_OUTPUT
    SQL_ID  35mwb1b3fpfrh, child number 0
    select rowid, no from t
    Plan hash value: 3354442786
    | Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |       |       |       |     1 (100)|          |
    |   1 |  INDEX FULL SCAN | T_IDX |    10 |    30 |     1   (0)| 00:00:01 |
    13 rows selected.
    Actually, in this case I have queried the index.

  • Keeping stats up to date for partitioned tables

    Hi,
    Oracle version 10.2.0.4
    I have a partitioned table. I would like to keep its stats up to date.
    Can I just run a single command to update table stats, indexes and partitions, please?
    exec dbms_stats.gather_table_stats(user, 'TABLE', cascade=>true)
    or do I also need to run
    exec dbms_stats.gather_table_stats(user, 'TABLE', granularity=>'PARTITION')
    thanks,
    Ashok

    thanks
    Yes, there were many indexes on the original non-partitioned table. I have created another, partitioned table and am now populating it with the data from the original table. The new table is partitioned on a date-range column: one partition for all years before 2012, then one each for 2012, 2013 and so forth.
    The indexes are all created locally, bar a unique index (as per the original table) created globally to enforce uniqueness across the table itself. The search will always look year-to-date, say 1st Jan 2012 till today, for risk analysis. The partitioning is on that date column, and there is also a local index on that date column to avoid a full table scan (tested by disabling that index; predictably it did a full table scan and was less efficient).
    In a DW environment I don't see much value in having a global index, bar for the primary key/unique constraint. I do realise that if a query crosses more than one partition, say 2 partitions, there will be two b-tree local index scans rather than one, but that would be rare (from the way they query the table).
    Therefore my plan is to gather full table stats with cascade=>true, measure the time it takes, and do the same in future if the maintenance window allows it. A sketch of that is below.
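    A minimal sketch of the two variants ('TABLE' is a placeholder; the granularity values are those documented for 10.2 dbms_stats):
    -- one call gathers table, partition and index stats (granularity defaults to 'AUTO')
    exec dbms_stats.gather_table_stats(user, 'TABLE', cascade=>true)
    -- or be explicit about wanting stats at every level, global and per partition
    exec dbms_stats.gather_table_stats(user, 'TABLE', granularity=>'ALL', cascade=>true)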
    thanks again for your help

  • Tables & Index not showing any data

    Hi All,
    I have done the DB2 upgrade from 9.1 to 9.7 with fix pack 4. I am unable to see table sizes after the upgrade. I have checked that all the SAP collector jobs are running fine.
    DB02 > History > Tables & Indexes is not showing any data.
    Regards,
    Mnai

    Please make sure RSCOL00 is running, or execute it manually.
    Did you try to run the stats in DB13?

  • Content index state: Failed

    Hi.
    I have been restoring the entire Windows installation after a bigger problem in the past. However, the Exchange server data was on a completely different disk, which was not affected.
    But now I cannot load OWA, and when you look in the ECP it says the following under databases: "Content index state: Failed".
    I have tried to repair it, but without any success.
    The operation couldn't be performed because object '0747478411\MAIL-SERVER01' couldn't be found on 'server'.
        + CategoryInfo          : NotSpecified: (:) [Update-MailboxDatabaseCopy], ManagementObjectNotFoundException
        + FullyQualifiedErrorId : DD4CF704,Microsoft.Exchange.Management.SystemConfigurationTasks.UpdateDatabaseCopy
        + PSComputerName        : mail-server01
    Update-MailboxDatabaseCopy "0747478411\MAIL-SERVER01" -CatalogOnly
    All I have in OWA now is
    something went wrong
    Sorry, we can't get that information right now. Please try again later. If the problem continues, contact your helpdesk.

    Hi Jesper,
    Thank you for your question.
    By my understanding, we should not simply recover the Windows system; we should perform a disaster recovery for Exchange as described in the following link:
    https://technet.microsoft.com/en-us/library/dd876880(v=exchg.150).aspx
    If there are any questions regarding this issue, please feel free to let me know.
    Best Regards,
    Jim
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
    Jim Xu
    TechNet Community Support

  • Still have unlimited data plan?

    If you have been a responsible user like me and have not abused the "unlimited" data plan, and are now given the choice of buying full-price phones or getting the 2GB downgrade, I have a question for you.  Since you are being treated like you are part of the problem, why not become part of it?  I didn't, until now, stream videos or movies, but since I have unlimited data I started.  I don't even care what the movie is as long as it is HD so it hogs more data.  I will do it until they kick me out as a customer.
    My data usage is normally less than 2GB but going up.  I didn't anticipate ever using over 5 or 6GB in the next year or so, but that is dependent on new applications.  Garmin now has Smartlink for live traffic updates that uses data; I don't know how much it may use, and before I didn't care.  I have had unlimited data since it was first offered.  I didn't do the videos and movies because I thought that if I burned through 60 to 100GB a month it would surely come to an end.  Stupid on my part!  How dare I think Verizon would give a **** about me for trying to keep my data usage down but still paying for the unlimited just in case I went over.  So my reward is getting told they are doing me a favor by simplifying the plan by limiting my data to 2GB.  Or I pay full retail for a phone and keep unlimited data.  Or I keep the same phone and keep unlimited data.
    So now I have a new plan: stream HD movies anytime it is possible.  Buy old Razrs on eBay, keep my unlimited data, and burn all the data I can.  I am writing to the Public Service Commission of Nebraska regarding this and the generally poor customer service that I just quit complaining about in the past.  If they get enough complaints, the Legislature will reintroduce the bill to give the State regulatory authority over the wireless carriers.  Will it fix anything?  Probably not, but it will be another thorn in Verizon's side.  I will say that anytime the State has gotten involved, even though they have no authority, Verizon has fixed the issue because they DON'T WANT OVERSIGHT!
    If you feel you have been slighted on the unlimited data plan, write to your State's Public Service Commission.  If they get enough complaints, the wireless companies will have to deal with regulatory authority.  What are they going to do, raise the bill some more?  If so, Walmart offers a cheap flip phone from Straight Talk.  Not the best deal, but still better than giving your money monthly to a company that hates its customers and looks for every opportunity to fleece them for every dime they can.
    >>Profanity and vulgar language removed to comply with the Verizon Wireless Terms of Service <<
    Message was edited by: Verizon Moderator

    yes; just make sure it's a Verizon-branded phone with a clean ESN

  • How to get the previous state of my data after issuing the commit method

    How can I get the previous state of some data after issuing the commit method in an entity bean? (It should not use any offline storage.)

    > Is there any way to get the state apart from using offline storage?
    As I said, the caller keeps a copy in memory.
    Naturally, if it is no longer in memory then that is a problem.
    > And also, what do you mean by audit log?
    You keep track of every change to the database by keeping the old data. There are three ways:
    1. Each table has a version number/delete flag for each record. A record is never updated nor deleted. Instead a new record is created with a new version number and with the new data.
    2. Each table has a duplicate table which has all of the same columns. When the first table is modified the old data is moved to the duplicate table.
    3. A single table is used which has columns for 'table', 'field', 'data' and 'activity' (update, delete). When a change is made in any table then this table is updated. This is generally of limited usability due to the difficulty in recovering the data.
    All of the above can have a user id, timestamp, and/or additional information which is relevant to the data being changed.
    Note that ALL of this is persisted storage.
    I am not sure what this really has to do with "offline storage" unless you are using that term to refer to backed up data which is not readily available.
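    As a minimal sketch of option 1 above, with entirely hypothetical table and column names, the version-number approach looks something like this:
    create table orders_hist (
      order_id     integer     not null,
      version_no   integer     not null,          -- increases with every change
      deleted_flag char(1)     default 'N',       -- 'Y' marks a logical delete
      amount       numeric(10,2),
      changed_by   varchar(30),
      changed_at   timestamp   default current_timestamp,
      primary key (order_id, version_no)
    );
    -- an "update" inserts a new version instead of touching the old row
    insert into orders_hist (order_id, version_no, amount, changed_by)
    values (1001, 2, 250.00, 'JSMITH');
    -- the current state is the highest non-deleted version per key
    select o.*
      from orders_hist o
     where o.deleted_flag = 'N'
       and o.version_no = (select max(h.version_no)
                             from orders_hist h
                            where h.order_id = o.order_id);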

  • Balance Sheet and Income Statement for Plan data

    Hi Everyone,
    in our company we maintain a plan in COPA for next year, i.e. for 2009. We have finished all the planning activities, and now the users want to have a Balance Sheet and Income Statement for the data we have in the plan version. I am not sure what they are talking about; please help me.
    Where can I set up that requirement? And what actually is a Balance Sheet and Income Statement for plan data?
    Thx
    Niki

    Is the issue resolved?
    What did you do?
    Thanks
    Naveen

  • PSA Request stuck in yellow state if no data is available

    If there is no data available in the source system, then the PSA request in the DataSource is not updated as successful; rather it is stuck in yellow state.
    However, if there is data available then there is no issue.
    We are running a delta load through a process chain, and daily we have to manually set the status to success for the chain to continue its execution.
    Any help would be appreciated

    Timeout time (TOT)
    This setting is useful when we have huge data volumes and the load will take more time. In general a default timeout time is maintained (e.g. 7 hours).
    When that time is crossed, we get a timeout error for the InfoPackage. In that case we need to increase the wait time for the InfoPackage.
    Treatment for warnings (TFW) - if we receive any warning during the load, by default the request will be red. Generally warnings are allowed; once you check what the warning is, you can accordingly set the request to green.

  • Why do I have Calendar Date Dimension in my Logical Layer?

    If I look in my logical layer and then look at the models, each of my fact tables has its corresponding dates such as Order Date, Date Updated, Date Shipped etc.  This is OK. But in each single fact I see a Calendar Date which is a duplicate of another date dimension.  What I mean is, if I look in Fact A, I will have Date Transaction, Date Shipped, Date Ordered and Date Calendar.  Date Calendar has the same join relationship as Date Shipped.  In another fact table, Date Calendar has the same relationship as Date Transaction.
    Is this a common modelling practice? Is it only used for ease of use for business end users?  Do you know why one would do this and what the benefit is?
    Thank you!

    Thanks for your replies,  I understand the Alias concept.  Let me write it a different way.
    Dim_Date = Physical Table
    W_DATE_ORDERED_D, W_DATE_SHIPPED_D, W_DATE_TRANSACTION_D and W_DATE_UPDATED_D are all aliases that come from my DIM_DATE physical table.   Each of these aliases is joined to my fact tables.
    FACTTABLE1.ORDERDATEID joined to W_DATE_ORDERED_D.DATEID AND FACTTABLE1.SHIPPEDDATEID joined to W_DATE_SHIPPED_D.DATEID
    FACTTABLE2.TRANSACTIONDATEID joined to W_DATE_TRANSACTION_D.DATEID AND FACTTABLE2.UPDATEDDATEID joined to W_DATE_UPDATED_D.DATEID
    The above part I understand fully,  what I also have is this:
    FACTTABLE1.ORDERDATEID joined to W_DATE_CALENDAR_D.DATEID
    and
    FACTTABLE2.TRANSACTIONDATEID joined to W_DATE_CALENDAR_D.DATEID
    The documentation states that calendar dates are used more for business-logic reasons for the end users, and that while the calendar dates are joined to the fact date ids, the role-specific date aliases are the more commonly used dates for reporting.
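    In plain SQL the role-playing pattern is just the same physical date table joined more than once under different aliases; here is a sketch using the hypothetical names above (calendar_date stands in for whatever the date column is actually called):
    select f.orderdateid,
           o.calendar_date as order_date,
           s.calendar_date as shipped_date
      from facttable1 f
      join dim_date o on f.orderdateid   = o.dateid   -- plays the W_DATE_ORDERED_D role
      join dim_date s on f.shippeddateid = s.dateid;  -- plays the W_DATE_SHIPPED_D role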
    Does that make sense?
    Thank you

  • Database passive content index state stays FailedAndSuspended even after reseeding

    I have a mailbox database with two copies in a database availability group. The active copy is healthy and its content index state reports Healthy as well; the passive copy is healthy, but its content index state reports FailedAndSuspended.
    I tried Update-MailboxDatabaseCopy <DBName>\<ServerName> -CatalogOnly more than once; it completes (says 100KB written), but the state never changes.
    I even tried to manually stop the service, delete the CI, and start the service again. The CI folder was recreated (about 1 GB), but the state is still FailedAndSuspended.
    Please Help! 

    I found the following warning in the event log:
    Event ID: 1009
    The indexing of mailbox database Main-EX02-DB1 encountered an unexpected exception. Error details: Microsoft.Exchange.Search.Core.Abstraction.OperationFailedException: The component operation has failed. ---> Microsoft.Exchange.Search.Core.Abstraction.CatalogReseedException:
    The component operation has failed.
       at Microsoft.Exchange.Search.Engine.SearchFeedingController.DetermineFeederStateAndStartFeeders()
       at Microsoft.Exchange.Search.Engine.SearchFeedingController.InternalExecutionStart()
       at Microsoft.Exchange.Search.Core.Common.Executable.InternalExecutionStart(Object state)
       --- End of inner exception stack trace ---
       at Microsoft.Exchange.Search.Core.Common.Executable.EndExecute(IAsyncResult asyncResult)
       at Microsoft.Exchange.Search.Engine.SearchRootController.ExecuteComplete(IAsyncResult asyncResult)
    I checked online and found one website talking about the "ContentSubmitters" group, which I created in the Exchange Security Groups OU without any members but with the stated permissions:
    https://abdullrhmanfarram.wordpress.com/2013/06/16/event-id-1009-content-index-status-of-the-mailbox-databases-failed/

  • Vdbench 5.04 slave host reports have no data

    I am running Vdbench 5.04 in a master/slave environment. I have one master running Windows 2008 R2 and 1-6 slaves running Windows 2008 R2. When I run a job to more than one host, the job runs and completes with no errors, but the host reports have no data. They say "host summary", have a link to the histogram, a link to the slave summary report, a link to the run definition report, and then state the job end time, starting RD info, I/O rate, etc., but the detailed interval data is not there for any of the slave hosts, except sometimes one. It is not always the same one with data. The Summary.html file has a summary, I am guessing averaged across all the slaves? How do I get the interval data for all the slaves? I will need this detailed data for my reporting.
    Any help would be VERY much appreciated.
    DW

    Thanks for your help. Here is the content of the files you requested.
    DW
    parmfile.html
    * Contents of parameter file: C:\vdbench\example5-multi.txt
    * Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.
    * Author: Henk Vandenbergh.
    * Example 5: Simple multi-host parameter file.
    * This test does a three second 4k read test from two hosts against the same file.
    * The 'vdbench=' parameter is only needed when Vdbench resides in a different directory on the remote system.
    hd=default,vdbench=C:\vdbench,user=user
    hd=one,system=192.203.3.1,shell=vdbench
    hd=two,system=192.203.3.2,shell=vdbench
    hd=three,system=192.203.4.1,shell=vdbench
    hd=four,system=192.203.4.2,shell=vdbench
    hd=five,system=192.203.5.1,shell=vdbench
    hd=six,system=192.203.5.2,shell=vdbench
    hd=seven,system=192.203.9.1,shell=vdbench
    hd=eight,system=192.203.9.2,shell=vdbench
    hd=nine,system=192.203.10.1,shell=vdbench
    hd=ten,system=192.203.10.2,shell=vdbench
    hd=eleven,system=192.203.11.1,shell=vdbench
    hd=twelve,system=192.203.11.2,shell=vdbench
    sd=sd1,host=*,lun=\\.\PhysicalDrive1,threads=1
    wd=wd1,sd=sd1,xfersize=8192,rdpct=75,seekpct=100
    rd=run1,wd=wd1,iorate=25,elapsed=300,interval=60
    *rd=rd1,wd=wd1,el=3,in=1,io=10
    logfile.html
    13:19:06.819 Vdbench distribution: vdbench50402
    13:19:06.819
    13:19:06.819 input argument scanned: '-fexample5-multi.txt'
    13:19:06.835 input argument scanned: '-o./testout'
    13:19:06.835 java.vendor Oracle Corporation
    13:19:06.835 java.home C:\Program Files (x86)\Java\jre7
    13:19:06.835 java.vm.specification.version 1.7
    13:19:06.835 java.vm.version 24.60-b09
    13:19:06.835 java.vm.vendor Oracle Corporation
    13:19:06.835 java.specification.version 1.7
    13:19:06.835 java.class.version 51.0
    13:19:06.835 user.name Administrator
    13:19:06.835 user.dir C:\vdbench
    13:19:06.835 java.class.path C:\vdbench\;C:\vdbench\classes;C:\vdbench\vdbench.jar
    13:19:06.835 os.name Windows Server 2008 R2
    13:19:06.835 os.arch x86
    13:19:06.835 os.version 6.1
    13:19:06.835 sun.arch.data.model 32
    13:19:07.100 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.9.2-17-141222-13.19.06.679 -l eight-0 -p 5570 
    13:19:07.116 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.11.1-20-141222-13.19.06.679 -l eleven-0 -p 5570 
    13:19:07.131 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.5.1-14-141222-13.19.06.679 -l five-0 -p 5570 
    13:19:07.162 Successfully connected to the Vdbench rsh daemon on host 192.203.5.1
    13:19:07.162 RSH Connection to 192.203.5.1 using port 5560 successful
    13:19:07.162 Successfully connected to the Vdbench rsh daemon on host 192.203.11.1
    13:19:07.162 RSH Connection to 192.203.11.1 using port 5560 successful
    13:19:07.162 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.4.2-13-141222-13.19.06.679 -l four-0 -p 5570 
    13:19:07.194 Successfully connected to the Vdbench rsh daemon on host 192.203.9.2
    13:19:07.194 RSH Connection to 192.203.9.2 using port 5560 successful
    13:19:07.209 Successfully connected to the Vdbench rsh daemon on host 192.203.4.2
    13:19:07.209 RSH Connection to 192.203.4.2 using port 5560 successful
    13:19:07.209 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.10.1-18-141222-13.19.06.679 -l nine-0 -p 5570 
    13:19:07.225 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.3.1-10-141222-13.19.06.679 -l one-0 -p 5570 
    13:19:07.256 Successfully connected to the Vdbench rsh daemon on host 192.203.3.1
    13:19:07.256 RSH Connection to 192.203.3.1 using port 5560 successful
    13:19:07.256 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.9.1-16-141222-13.19.06.679 -l seven-0 -p 5570 
    13:19:07.272 Successfully connected to the Vdbench rsh daemon on host 192.203.10.1
    13:19:07.272 RSH Connection to 192.203.10.1 using port 5560 successful
    13:19:07.287 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.5.2-15-141222-13.19.06.679 -l six-0 -p 5570 
    13:19:07.318 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.10.2-19-141222-13.19.06.679 -l ten-0 -p 5570 
    13:19:07.318 Successfully connected to the Vdbench rsh daemon on host 192.203.9.1
    13:19:07.318 RSH Connection to 192.203.9.1 using port 5560 successful
    13:19:07.350 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.4.1-12-141222-13.19.06.679 -l three-0 -p 5570 
    13:19:07.365 Successfully connected to the Vdbench rsh daemon on host 192.203.4.1
    13:19:07.365 RSH Connection to 192.203.4.1 using port 5560 successful
    13:19:07.381 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.11.2-21-141222-13.19.06.679 -l twelve-0 -p 5570 
    13:19:07.396 Successfully connected to the Vdbench rsh daemon on host 192.203.10.2
    13:19:07.396 RSH Connection to 192.203.10.2 using port 5560 successful
    13:19:07.412 Starting slave: C:\vdbench\vdbench SlaveJvm -m 192.203.2.3 -n 192.203.3.2-11-141222-13.19.06.679 -l two-0 -p 5570 
    13:19:07.428 Successfully connected to the Vdbench rsh daemon on host 192.203.11.2
    13:19:07.428 RSH Connection to 192.203.11.2 using port 5560 successful
    13:19:07.443 Successfully connected to the Vdbench rsh daemon on host 192.203.3.2
    13:19:07.443 RSH Connection to 192.203.3.2 using port 5560 successful
    13:19:07.459 Successfully connected to the Vdbench rsh daemon on host 192.203.5.2
    13:19:07.459 RSH Connection to 192.203.5.2 using port 5560 successful
    13:19:08.832 Slow getMessage: signon 0 133718 SEND_SIGNON_INFO_TO_MASTER
    13:19:08.878 Slow getMessage: rsh_to_client 0 133233 RSH_STDERR_OUTPUT 
    13:19:08.894 Slow getMessage: rsh_to_client 1 133576 RSH_STDERR_OUTPUT 
    13:19:08.894 Slow getMessage: rsh_to_client 2 133576 RSH_STDERR_OUTPUT 
    13:19:08.894 Slow getMessage: rsh_to_client 3 133576 RSH_STDERR_OUTPUT 
    13:19:08.910 Slow getMessage: rsh_to_client 4 133592 RSH_STDERR_OUTPUT 
    13:19:08.910 Slow getMessage: rsh_to_client 5 133592 RSH_STDOUT_OUTPUT 
    13:19:08.925 Slow getMessage: rsh_to_client 6 133607 RSH_STDOUT_OUTPUT 
    13:19:09.034 Slow getMessage: rsh_to_client 0 133639 RSH_STDERR_OUTPUT 
    13:19:09.050 Slow getMessage: rsh_to_client 1 133624 RSH_STDERR_OUTPUT 
    13:19:09.050 Slow getMessage: rsh_to_client 2 133624 RSH_STDERR_OUTPUT 
    13:19:09.066 Slow getMessage: rsh_to_client 3 133640 RSH_STDERR_OUTPUT 
    13:19:09.066 Slow getMessage: rsh_to_client 4 133640 RSH_STDERR_OUTPUT 
    13:19:09.066 Slow getMessage: rsh_to_client 5 133640 RSH_STDOUT_OUTPUT 
    13:19:09.081 Slow getMessage: rsh_to_client 6 133655 RSH_STDOUT_OUTPUT 
    13:19:09.097 Slow getMessage: four-0 1 134045 SEND_SIGNON_SUCCESSFUL 
    13:19:09.097 Slave four-0 connected
    13:19:09.128 Slave five-0 connected
    13:19:09.144 Slow getMessage: signon 0 133780 SEND_SIGNON_INFO_TO_MASTER
    13:19:09.268 Slave three-0 connected
    13:19:09.300 Slow getMessage: signon 0 133838 SEND_SIGNON_INFO_TO_MASTER
    13:19:09.331 Slow getMessage: rsh_to_client 0 133569 RSH_STDERR_OUTPUT 
    13:19:09.346 Slow getMessage: rsh_to_client 1 133568 RSH_STDERR_OUTPUT 
    13:19:09.346 Slow getMessage: rsh_to_client 2 133568 RSH_STDERR_OUTPUT 
    13:19:09.346 Slow getMessage: rsh_to_client 3 133568 RSH_STDERR_OUTPUT 
    13:19:09.346 Slow getMessage: rsh_to_client 4 133568 RSH_STDERR_OUTPUT 
    13:19:09.362 Slow getMessage: rsh_to_client 5 133569 RSH_STDOUT_OUTPUT 
    13:19:09.362 Slow getMessage: rsh_to_client 6 133569 RSH_STDOUT_OUTPUT 
    13:19:09.409 Slow getMessage: six-0 1 133774 SEND_SIGNON_SUCCESSFUL 
    13:19:09.409 Slave six-0 connected
    13:19:09.424 Slave one-0 connected
    13:19:09.440 Slow getMessage: rsh_to_client 0 133617 RSH_STDERR_OUTPUT 
    13:19:09.440 Slow getMessage: rsh_to_client 1 133617 RSH_STDERR_OUTPUT 
    13:19:09.456 Slow getMessage: rsh_to_client 2 133633 RSH_STDERR_OUTPUT 
    13:19:09.456 Slow getMessage: rsh_to_client 3 133633 RSH_STDERR_OUTPUT 
    13:19:09.456 Slow getMessage: rsh_to_client 4 133633 RSH_STDERR_OUTPUT 
    13:19:09.456 Slow getMessage: rsh_to_client 5 133602 RSH_STDOUT_OUTPUT 
    13:19:09.471 Slow getMessage: rsh_to_client 6 133617 RSH_STDOUT_OUTPUT 
    13:19:09.471 Slow getMessage: rsh_to_client 0 133589 RSH_STDERR_OUTPUT 
    13:19:09.471 Slow getMessage: rsh_to_client 1 133573 RSH_STDERR_OUTPUT 
    13:19:09.471 Slow getMessage: rsh_to_client 2 133573 RSH_STDERR_OUTPUT 
    13:19:09.487 Slow getMessage: rsh_to_client 3 133589 RSH_STDERR_OUTPUT 
    13:19:09.487 Slow getMessage: rsh_to_client 4 133589 RSH_STDERR_OUTPUT 
    13:19:09.487 Slow getMessage: rsh_to_client 5 133558 RSH_STDOUT_OUTPUT 
    13:19:09.487 Slow getMessage: rsh_to_client 6 133558 RSH_STDOUT_OUTPUT 
    13:19:09.565 Slow getMessage: two-0 1 133838 SEND_SIGNON_SUCCESSFUL 
    13:19:09.565 Slave two-0 connected
    13:19:09.752 Slow getMessage: signon 0 133756 SEND_SIGNON_INFO_TO_MASTER
    13:19:09.861 Slow getMessage: signon 0 133804 SEND_SIGNON_INFO_TO_MASTER
    13:19:09.892 Slow getMessage: signon 0 133760 SEND_SIGNON_INFO_TO_MASTER
    13:19:10.048 Slave seven-0 connected
    13:19:10.064 Slave eleven-0 connected
    13:19:10.173 Slave nine-0 connected
    13:19:10.189 Slow getMessage: eight-0 1 133756 SEND_SIGNON_SUCCESSFUL 
    13:19:10.189 Slave eight-0 connected
    13:19:10.282 Slow getMessage: ten-0 1 133788 SEND_SIGNON_SUCCESSFUL 
    13:19:10.282 Slave ten-0 connected
    13:19:10.314 Slow getMessage: twelve-0 1 133745 SEND_SIGNON_SUCCESSFUL 
    13:19:10.314 Slave twelve-0 connected
    13:19:10.329 All slaves are now connected
    13:19:10.626 Slow getMessage: two-0 2 133821 HEARTBEAT_MESSAGE 
    13:19:10.641 Slow getMessage: six-0 2 133788 HEARTBEAT_MESSAGE 
    13:19:10.641 Slow getMessage: four-0 2 134068 HEARTBEAT_MESSAGE 
    13:19:10.782 Slow getMessage: eight-0 2 133741 HEARTBEAT_MESSAGE 
    13:19:10.782 Slow getMessage: twelve-0 2 133745 HEARTBEAT_MESSAGE 
    13:19:10.782 Slow getMessage: ten-0 2 133789 HEARTBEAT_MESSAGE 
    13:19:10.984 sd=sd1,lun=\\.\PhysicalDrive1 lun size: 193273528320 bytes; 180.0000 GB (1024**3); 193.2735 GB (1000**3)
    Link to Run Definitions:  run1 For loops: None
    13:19:11.140
    13:19:11.140 SlaveList.printWorkForSlaves() for rd=run1 (w)
    13:19:11.140 slv=one-0 wd=wd1 sd=sd1 rd= 75 sk=100 skw=100.00 rh= 0 th=1
    13:19:11.140
    13:19:11.140 slave=one-0 received work for 1 threads
    13:19:11.140
    13:19:11.140 host=one received work for 1 threads
    13:19:11.140 host=two received work for 0 threads
    13:19:11.140 host=three received work for 0 threads
    13:19:11.140 host=four received work for 0 threads
    13:19:11.140 host=five received work for 0 threads
    13:19:11.140 host=six received work for 0 threads
    13:19:11.140 host=seven received work for 0 threads
    13:19:11.140 host=eight received work for 0 threads
    13:19:11.140 host=nine received work for 0 threads
    13:19:11.140 host=ten received work for 0 threads
    13:19:11.140 host=eleven received work for 0 threads
    13:19:11.140 host=twelve received work for 0 threads
    13:19:11.140 Total amount of work received: 1 threads
    13:19:11.140
    13:19:11.140 Waiting for synchronization of all slaves
    13:19:11.905 Synchronization of all slaves complete
    13:19:12.001 Starting RD=run1; I/O rate: 25; elapsed=300; For loops: None
    13:19:12.002 Starting RD=run1; I/O rate: 25; elapsed=300; For loops: None
    13:19:55.635 Slow getMessage: six-0 3 133781 HEARTBEAT_MESSAGE 
    13:19:55.635 Slow getMessage: two-0 3 133830 HEARTBEAT_MESSAGE 
    13:19:55.635 Slow getMessage: four-0 3 134062 HEARTBEAT_MESSAGE 
    13:19:55.776 Slow getMessage: twelve-0 3 133749 HEARTBEAT_MESSAGE 
    13:19:55.776 Slow getMessage: eight-0 3 133744 HEARTBEAT_MESSAGE 
    13:19:55.791 Slow getMessage: ten-0 3 133792 HEARTBEAT_MESSAGE 
    Dec 22, 2014 interval i/o MB/sec bytes read resp read write resp resp queue cpu% cpu%
      rate 1024**2 i/o pct time resp resp max stddev depth sys+u sys
    13:20:12.346 1 24.85 0.19 8192 75.12 0.691 0.202 2.170 33.645 1.474 0.0 1.2 0.8
    13:20:40.628 Slow getMessage: two-0 4 133822 HEARTBEAT_MESSAGE 
    13:20:40.644 Slow getMessage: six-0 4 133775 HEARTBEAT_MESSAGE 
    13:20:40.644 Slow getMessage: four-0 4 134055 HEARTBEAT_MESSAGE 
    13:20:40.800 Slow getMessage: ten-0 4 133795 HEARTBEAT_MESSAGE 
    13:20:40.800 Slow getMessage: twelve-0 4 133751 HEARTBEAT_MESSAGE 
    13:20:40.800 Slow getMessage: eight-0 4 133747 HEARTBEAT_MESSAGE 
    13:21:12.267 2 25.82 0.20 8192 74.37 0.716 0.227 2.133 36.170 1.565 0.0 1.2 0.8
    13:21:25.652 Slow getMessage: six-0 5 133783 HEARTBEAT_MESSAGE 
    13:21:25.652 Slow getMessage: four-0 5 134048 HEARTBEAT_MESSAGE 
    13:21:25.652 Slow getMessage: two-0 5 133831 HEARTBEAT_MESSAGE 
    13:21:25.793 Slow getMessage: eight-0 5 133733 HEARTBEAT_MESSAGE 
    13:21:25.808 Slow getMessage: ten-0 5 133797 HEARTBEAT_MESSAGE 
    13:21:25.808 Slow getMessage: twelve-0 5 133753 HEARTBEAT_MESSAGE 
    13:22:10.658 Slow getMessage: two-0 6 133821 HEARTBEAT_MESSAGE 
    13:22:10.658 Slow getMessage: four-0 6 134069 HEARTBEAT_MESSAGE 
    13:22:10.674 Slow getMessage: six-0 6 133773 HEARTBEAT_MESSAGE 
    13:22:10.814 Slow getMessage: ten-0 6 133797 HEARTBEAT_MESSAGE 
    13:22:10.830 Slow getMessage: eight-0 6 133748 HEARTBEAT_MESSAGE 
    13:22:10.830 Slow getMessage: twelve-0 6 133753 HEARTBEAT_MESSAGE 
    13:22:12.267 3 24.85 0.19 8192 73.44 0.805 0.343 2.081 38.852 2.445 0.0 0.8 0.4
    13:22:55.667 Slow getMessage: two-0 7 133830 HEARTBEAT_MESSAGE 
    13:22:55.667 Slow getMessage: six-0 7 133766 HEARTBEAT_MESSAGE 
    13:22:55.667 Slow getMessage: four-0 7 134063 HEARTBEAT_MESSAGE 
    13:22:55.807 Slow getMessage: eight-0 7 133735 HEARTBEAT_MESSAGE 
    13:22:55.807 Slow getMessage: twelve-0 7 133740 HEARTBEAT_MESSAGE 
    13:22:55.807 Slow getMessage: ten-0 7 133784 HEARTBEAT_MESSAGE 
    13:23:12.268 4 24.80 0.19 8192 75.47 0.770 0.324 2.145 38.519 2.049 0.0 0.6 0.4
    13:23:40.660 Slow getMessage: two-0 8 133823 HEARTBEAT_MESSAGE 
    13:23:40.675 Slow getMessage: six-0 8 133774 HEARTBEAT_MESSAGE 
    13:23:40.675 Slow getMessage: four-0 8 134055 HEARTBEAT_MESSAGE 
    13:23:40.816 Slow getMessage: eight-0 8 133738 HEARTBEAT_MESSAGE 
    13:23:40.816 Slow getMessage: ten-0 8 133787 HEARTBEAT_MESSAGE 
    13:23:40.831 Slow getMessage: twelve-0 8 133742 HEARTBEAT_MESSAGE 
    13:24:12.267 5 25.28 0.20 8192 75.28 0.821 0.420 2.043 44.044 2.770 0.0 0.4 0.3
    13:24:12.377 avg_2-5 25.19 0.20 8192 74.64 0.778 0.328 2.100 44.044 2.250 0.0 0.7 0.5
    13:24:12.408
    13:24:12.408 Total i/o for slave=one-0 : reads: 5632 writes: 1904 total: 7536 skew: 100.00%
    13:24:12.408 Total i/o done: reads: 5632 writes: 1904 total: 7536
    13:24:12.845
    13:24:12.845 Counts reported below are for non-warmup intervals (4).
    13:24:12.845 Note that for an Uncontrolled MAX run skew is irrelevant.
    13:24:12.845 I/O count for all WDs: 6045
    13:24:12.845 Calculated skew for wd=wd1 : 6045 1511/sec (100.00%) Expected skew: 100.00%
    13:24:12.845 Flushing all reports
    13:24:12.845 Memory total Java heap: 61.875 MB; Free: 44.098 MB; Used: 17.777 MB;
    13:24:12.923 Ending Reporter
    13:24:13.437 Slow getMessage: four-0 9 134069 CLEAN_SHUTDOWN_COMPLETE 
    13:24:13.437 Slow getMessage: rsh_to_client 7 134069 RSH_STDOUT_OUTPUT 
    13:24:13.453 Slow getMessage: rsh_to_client 7 133788 RSH_STDOUT_OUTPUT 
    13:24:13.469 Slow getMessage: rsh_to_client 7 133837 RSH_STDOUT_OUTPUT 
    13:24:13.469 Slow getMessage: six-0 9 133788 CLEAN_SHUTDOWN_COMPLETE 
    13:24:13.469 Slow getMessage: two-0 9 133837 CLEAN_SHUTDOWN_COMPLETE 
    13:24:13.562 Slow getMessage: rsh_to_client 8 133928 RSH_STDOUT_OUTPUT 
    13:24:13.562 Slow getMessage: rsh_to_client 9 133928 RSH_STDERR_OUTPUT 
    13:24:13.562 Slow getMessage: rsh_to_client 10 133850 RSH_COMMAND 
    13:24:13.578 Slow getMessage: rsh_to_client 8 133696 RSH_STDOUT_OUTPUT 
    13:24:13.578 Slow getMessage: rsh_to_client 8 133648 RSH_STDOUT_OUTPUT 
    13:24:13.578 Slow getMessage: rsh_to_client 9 133696 RSH_STDERR_OUTPUT 
    13:24:13.593 Slow getMessage: rsh_to_client 7 133790 RSH_STDOUT_OUTPUT 
    13:24:13.593 Slow getMessage: rsh_to_client 7 133746 RSH_STDOUT_OUTPUT 
    13:24:13.593 Slow getMessage: rsh_to_client 7 133742 RSH_STDOUT_OUTPUT 
    13:24:13.593 Slow getMessage: rsh_to_client 9 133663 RSH_STDERR_OUTPUT 
    13:24:13.609 Slow getMessage: twelve-0 9 133747 CLEAN_SHUTDOWN_COMPLETE 
    13:24:13.609 Slow getMessage: ten-0 9 133790 CLEAN_SHUTDOWN_COMPLETE 
    13:24:13.609 Slow getMessage: eight-0 9 133742 CLEAN_SHUTDOWN_COMPLETE 
    13:24:13.625 Slow getMessage: rsh_to_client 10 133633 RSH_COMMAND 
    13:24:13.625 Slow getMessage: rsh_to_client 10 133616 RSH_COMMAND 
    13:24:13.671 Slow getMessage: rsh_to_client 8 133606 RSH_STDOUT_OUTPUT 
    13:24:13.671 Slow getMessage: rsh_to_client 9 133606 RSH_STDERR_OUTPUT 
    13:24:13.671 Slow getMessage: rsh_to_client 8 133601 RSH_STDOUT_OUTPUT 
    13:24:13.671 Slow getMessage: rsh_to_client 10 133559 RSH_COMMAND 
    13:24:13.687 Slow getMessage: rsh_to_client 9 133601 RSH_STDERR_OUTPUT 
    13:24:13.687 Slow getMessage: rsh_to_client 10 133570 RSH_COMMAND 
    13:24:13.687 Slow getMessage: rsh_to_client 8 133650 RSH_STDOUT_OUTPUT 
    13:24:13.703 Slow getMessage: rsh_to_client 9 133666 RSH_STDERR_OUTPUT 
    13:24:13.703 Slow getMessage: rsh_to_client 10 133619 RSH_COMMAND 
    13:24:13.718 Vdbench execution completed successfully. Output directory: C:\vdbench\.\testout
    13:24:13.718 Memory total Java heap: 62.062 MB; Free: 43.601 MB; Used: 18.462 MB;

  • Content Index State Crawling

    We have an Exchange database that seems to always have a Content Index State of Crawling.  Some end users have complained about slow searches and indexing issues (which is expected).  We have stopped the search services and renamed the catalog directory in an effort to rebuild the search catalog, but it just goes right back to Crawling.  The database is only about 300GB, so I don't think size is an issue.
    Could it be that there is some corruption in the database that is causing issues with the index catalog?  We have removed the database from the DAG and tried it as a standalone database, with the same results.
    Any ideas would be appreciated.

    Did you find anything in the Event Viewer (app/sys logs)?
    You may consider restarting the Search Indexer service and then checking whether it gets healthy or not.
    If it still doesn't say healthy, then you might have to consider resetting the search index for the problematic database, or for all databases if it is a problem with all of them.
    Refer to the link below to get info about how to reset Search index for a database.
    Exchange 2010 Database copy on server has content index catalog files in the following state: Failed
    Establishing Exchange Content Index
    Rebuild Baselines – Part 1
    How To Troubleshoot Issues With Exchange 2010 Search
    Reply back with the outcome!
    Pavan Maganti ~ (Exchange | 2003/2007/2010/E15(2013)) ~~ Please remember to click "Vote As Helpful" if it really helps and "Mark as Answer" if it answers your question, "Unmark as Answer" if a marked post does not actually answer your question. ~~ This information is provided "AS IS" and confers NO rights!!

  • How can I make indexes on the master data table

    Hi Gurus,
    I have a query in which I have an InfoObject with many values, say invoice number, and I filter on and need a lot of navigational attributes of this IO. For this reason the query performance is really bad, but I don't know what to do. A friend advised me: "So I would suggest some indexes on the master data table of your IO for the navigational attributes you want to use."
    What else can you tell me? Should I put this IO in a line item dimension, or just flag it with high cardinality? Help, guys.

    Hi Jorge,
    Look: line item dimension and high cardinality are related to each other. A characteristic which has high cardinality we would use as a line item dimension. But a dimension marked as a line item cannot subsequently include additional characteristics; this is only possible with normal dimensions. If you are very sure, then you can go for a line item dimension, no issues.
    You can also try to create an index, for example along the lines of the sketch below.
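    A minimal sketch of what that could look like at the database level, with entirely hypothetical table and column names (in practice you would create the index in transaction SE11 so the ABAP dictionary knows about it):
    -- hypothetical: /BIC/PZINVOICE is the master data (P) table of the InfoObject,
    -- ZNAVATTR the navigational attribute column you filter on
    create index "/BIC/PZINVOICE~Z01" on "/BIC/PZINVOICE" ("ZNAVATTR");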
    Check this:
    Re: Indexes on Master Data tables
    Regards,
    Debjani
