V$segstat statistics meaning

Hi,
I am working on a project to find the growth patterns of all segments in the database, and for that I am using V$SEGSTAT to get the stats about segments.
In my environment, for one LOB object, the stats in V$SEGSTAT look like:
space used     ==> 953767766114
space allocated     ==> 67108864
And I want to know what is the meaning of 'space used' here? How and when can it be more than the actual space allocated?
I know when it can go negative (from shrink and reorg), but in order to get my project done I need a clear definition of the 'space used' and 'space allocated' stats.
The actual LOB size here is only 25 GB, so how can 'space used' be 953 GB here?
I know that V$SEGSTAT contains cumulative stats since instance startup, but what metrics is it using to calculate these values?
The only thing I can think of here is that it's counting every block being used/accessed as 'space used'; that's the only way it can get that high.
The Oracle documentation just has the definition of the view, and I couldn't find a clear definition of these stats.
Oracle Version ==> 11.2.0.3
OS ==> AIX
Thanks
Daljit Singh

In the answer I gave to the thread you linked to I said:
I do know that the mechanism is a little more sophisticated than a simple "if it's not in the buffer cache it's not in v$segstat"
v$segment_statistics is simply v$segstat with a couple of joins to add names.
Regards
Jonathan Lewis
@jloracle
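For anyone who wants to look at the raw numbers themselves, here is a minimal sketch (the owner and segment names are placeholders, not from this thread) that pulls the cumulative 'space used' and 'space allocated' values from v$segment_statistics and compares them with the segment's current allocation in dba_segments:
select statistic_name, value
  from v$segment_statistics
 where owner = 'MYSCHEMA'                 -- placeholder owner
   and object_name = 'MY_LOB_SEGMENT'     -- placeholder segment name
   and statistic_name in ('space used', 'space allocated');
select bytes
  from dba_segments
 where owner = 'MYSCHEMA'
   and segment_name = 'MY_LOB_SEGMENT';
Bear in mind that the v$ values reset at instance startup, while dba_segments reflects the current allocation, so the two are not directly comparable across a long uptime.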

Similar Messages

  • What does the term "statistics" mean in a database?

    hi,
    thanks in advance
    regards vinay

    vinayraj wrote:
    yeah, exactly ...
    so does statistics mean the calculations of the query plan?
    Did you search in the documentation for the same? Since you didn't bother to mention the version, assuming 12.1, have a read of the following link,
    Optimizer Statistics Concepts
    Quoting from it,
    Oracle Database optimizer statistics describe details about the database and its objects
    Aman....
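    To make that quote concrete, here is a minimal sketch (the table name EMP_DEMO is a placeholder) of gathering optimizer statistics for one table and then looking at where they land:
    SQL> exec dbms_stats.gather_table_stats( user, 'EMP_DEMO' );
    SQL> select num_rows, blocks, avg_row_len, last_analyzed
      2    from user_tab_statistics
      3   where table_name = 'EMP_DEMO';
    These num_rows/blocks figures are the "details about the database and its objects" that the optimizer uses to estimate cardinalities and choose a plan.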

  • Can Signal Express prompt the user for the next ascii file name?

    I am using the following to collect data from thermocouples and strain gauges in our plant. It allows me to plot data every second, while recording only every 3 seconds to cut down the file size.
    Big Loop- 
    Small Loop- 
          Conditional repeat...
          DAQmx Acquire...
          Statistics (mean) Temp...
          Statistics (mean) Press...
          Current Iteration...
    End Small Loop-
    Save to ASCII/LVM
    The problem is that I have to configure the save step as "overwrite once, then append" so I get a single file each time the project runs. How can I get Signal Express to either prompt the user for a new file name with each run, or save the ASCII file into the log directory for that run? As it stands now, the file gets overwritten with each new project run.
    Thank you.
    new user

    Hi crawlejg,
    You can set Signal Express to increment the file being created each time. But if you are looking for one new file each time the project runs, you will have to use LabVIEW, as this advanced functionality is not available in Signal Express. If you need help getting this to work in LabVIEW, or have any other questions, please feel free to ask!
    Sincerely,
    Jason Daming
    Applications Engineer
    National Instruments
    http://www.ni.com/support

  • Document number

    Hi SAP guru,
    Could you please advise how to get the document number statistics for a particular year? By statistics I mean the type of document with its number range and the last document entered in that document type.
    thanks & regards
    rajesh thakur

    Hi,
    To this question I would first like to ask one thing: is your document type linked to an individual number range, or are there document types that share common number ranges?
    If the answer is that you have unique number ranges, then you can get the number range information from table NRIV, or else via transaction code SNRO.
    Secondly, you also have to bear in mind whether the defined document types are used for reversal purposes or not.
    In SNRO you will see the current document number that has been posted (from NRIV); for a clearer picture you will also have to make use of tables BKPF or BSEG.
    Hope this suffices.
    Regards ,
    Dewang
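    A minimal sketch of the NRIV lookup described above (RF_BELEG is the number range object for accounting documents; the company code and fiscal year values are assumptions to adjust for your system):
    select nrrangenr, fromnumber, tonumber, nrlevel
      from nriv
     where object    = 'RF_BELEG'
       and subobject = '1000'      -- placeholder company code
       and toyear    = '2008';     -- fiscal year of interest
    NRLEVEL is the last number actually taken from the range, which corresponds to the "last document entered" figure asked about.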

  • Can insert performance be improved playing with env parameters?

    Below are the environment configuration and the results of my bulk-load insert experiments. The results are from the two scenarios described below; the values for the two scenarios are separated by whitespace.
    Environment Configuration:
    setTxn     N
    DeferredWrite Y     
    Sec Bulk Load     Y
    Post Build SecIndex Y
    Sync Y
    Column 1 values are for the scenario:
    Two databases
    a. Database with 2,500,000 records
    b. Database with 2,500,000 records
    Column 2 values are for the scenario:
    Two databases
    a. Database with 25,000,000 records
    b. Database with 25,000,000 records
    1. Is there good documentation describing what the environment statistics mean?
    2. Looking at the statistics below, can you make any suggestions for performance improvement?
    The statistics are:
    Statistic                    2.5M run         25M run
    Eviction Stats
    nEvictPasses                 3929             146066
    nNodesSelected               309219           17351997
    nNodesScanned                3150809          176816544
    nNodesExplicitlyEvicted      152897           8723271
    nBINsStripped                156322           8628726
    requiredEvictBytes           524323           530566
    CheckPoint Stats
    nCheckpoints                 55               1448
    lastCheckpointID             55               1448
    nFullINFlush                 54               1024
    nFullBINFlush                26               494
    nDeltaINFlush                116              2661
    lastCheckpointStart          0x6f/0x2334f8    0xb6a/0x82fd83
    lastCheckpointEnd            0x6f/0x33c2d6    0xb6a/0x8c4a6b
    endOfLog                     0xb/0x6f22e      0x6f/0x75a843    0xb6a/0x23d8f
    Cache Stats
    nNotResident                 4591918          57477898
    nCacheMiss                   4583077          57469807
    nLogBuffers                  3                3
    bufferBytes                  3145728          3145728
    (MB)                         3.00             3.00
    cacheDataBytes               563450470        370211966
    (MB)                         537.35           353.06
    adminBytes                   29880            16346272
    lockBytes                    1113             1113
    cacheTotalBytes              566596198        373357694
    (MB)                         540.35           356.06
    Logging Stats
    nFSyncs                      59               1452
    nFSyncRequest                59               1452
    nFSyncTimeouts               0                0
    nRepeatFaultReads            31513            6525958
    nTempBufferForWrite          0                0
    nRepeatIteratorReads         0                0
    totalLogSize                 1117658932       29226945317
    (MB)                         1065.88          27872.99
    lockBytes                    1113             1113

    Hello Linda,
    I am inserting 25,000,000 records of the type:
    Database 1
    Key --> Data
    [long,String,long] --> [{long,long}, {String}]
    The secondary keys are on {long,long} and {String}
    Database 2
    Key --> Data
    [long,Integer,long] --> [{long,long}, {Integer}]
    The secondary keys are on {long,long} and {Integer}
    I set the env parameters to non-transactional with setDeferredWrite(true),
    used setSecondaryBulkLoad(true), and then built the two secondary indexes on {long,long} and {String} of the data portion:
    private void buildSecondaryIndex(DataAccessLayer dataAccessLayer) {
        try {
            // With setSecondaryBulkLoad(true), the secondary index is not
            // maintained during primary writes; opening it here creates and
            // populates it from the existing primary records.
            SecondaryIndex<TDetailSecondaryKey, TDetailStringKey, TDetailStringRecord>
                    secondaryIndex = store.getSecondaryIndex(
                            dataAccessLayer.getPrimaryIndex(),
                            TDetailSecondaryKey.class,
                            SECONDARY_KEY_NAME);
        } catch (DatabaseException e) {
            throw new RuntimeException(e);
        }
    }
    We are inserting into 2 databases as mentioned above.
    NumRecs        250,000 x2     2,500,000 x2     25,000,000 x2
    TotalTime(ms)  16877          673623           30225781
    PutTime(ms)    7684           76636            1065030
    BuildSec(ms)   4952           590207           29125773
    Sync(ms)       4241           6780             34978
    Why does building the secondary index (2 secondary databases in this case) take so much longer than inserting into the primary database - 27 times longer!!!
    It's hard to believe that building the tree for the secondary databases takes so much longer.
    Why doesn't building the tree for the primary database take as long? The data in the primary database contains the same values as the secondary keys, so those values are searchable there as well.
    Hence it is surprising it takes so long.
    The cache stats mentioned above relate to these runs.
    Can you try explaining this? We are trying to figure out whether it is worth building the secondary indexes later when bulk loading.

  • What is the column value in v$sesstat & v$sysstat

    hi guys,
    I don't understand what this VALUE is, even after reading the documentation.
    What is it referring to when it mentions the statistic value?

    flaskvacuum wrote:
    SID USERNAME        STATISTIC                            VALUE
     94 ORACLE PROC     physical read total bytes         15110144
     94 ORACLE PROC     physical reads                         420
     94 ORACLE PROC     physical reads cache                   420
     94 ORACLE PROC     physical reads cache prefetch           12
     94 ORACLE PROC     physical write total IO requests       936
     94 ORACLE PROC     physical write total bytes        14712832
     94 ORACLE PROC     pinned cursors current                   8
    What does it mean? On the value...
    These statistics mean that the session has read ~15 MB (15,110,144 bytes) from disk in 420 goes, and written just under 15 MB (14,712,832 bytes) to disk in 936 requests.
    It has also pinned 8 cursors.
    HtH
    Johan
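    For reference, a minimal sketch (the SID is a placeholder taken from the listing above) of how such a report is typically produced, joining v$sesstat to v$statname to resolve the statistic names:
    select sn.name statistic, ss.value
      from v$sesstat ss, v$statname sn
     where sn.statistic# = ss.statistic#
       and ss.sid = 94          -- placeholder SID
       and ss.value > 0
     order by sn.name;
    VALUE is simply the cumulative count for that named statistic (bytes, reads, requests, ...) accumulated since the session started.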

  • Usage meter counts while charging

    This is my second iPhone. My first one would not reach full charge, and the usage meter never showed any statistics after a week of use.
    This phone has been great, with a few things wrong; one continues to be the battery meter. When charging, sometimes the battery icon on the screen will be full, sometimes it isn't; there is no real pattern. But the plug icon always shows up and the usage meter does reset. However, I have noticed one thing.
    On the times that the battery icon is not full but there is a plug icon indicating full charge, when I unplug the iPhone and check the usage it will read usage and standby times that are often several hours, but both are equal. This morning I got up and it read 3 hours 23 minutes for both usage and standby time.
    On the few times that the battery icon actually gets full, when I pick it up the usage meter reads 0 for both, so it is correctly reset.
    Has anyone else experienced this, or seen their usage meter start counting while the iPhone is charging? It is very strange and confuses me.

    Nathan, I'm afraid I don't understand your answer and hope you can clarify it.
    According to the iPhone user manual (pp. 95-96), the numbers in the usage statistics mean the following:
    ∙ Amount of time iPhone has been unlocked and in use since the last full charge
    ∙ Amount of time iPhone has been in standby mode—locked but turned on—since the last full charge
    If the phone becomes fully charged during the night and then just sits there in sleep mode, then the first number (unlocked and in use) should be zero. During that time, even if you argue that it's being "used" by watching for calls, it's not unlocked. This is not happening, however, for myself or the original poster. Rather, both meters appear to be showing standby time since fully charged with the wall charger.
    When I then connect it to my computer in the morning and it again gets fully charged, which only takes a few minutes, the usage stats stay at 0 minutes for both numbers until I disconnect the iPhone from the computer. Even if I leave it there for an hour or more, it stays at 0/0 until I disconnect it.
    Simply put, my iPhone behaves differently with respect to these usage statistics depending on whether I charge it with the wall charger or with my computer. This is a benign and easily-worked-around bug, but it is a bug nonetheless.

  • Histogram and saturation in Photoshop

    1. In the Photoshop Histogram panel, does the mean statistic show the average color intensity of the selected areas and layers for a color channel?
    2. In the Photoshop Histogram panel, does the mean for the Luminosity channel show the average brightness of the image?
    Does that mean value represent a point on the brightness scale from 0 to 255?
    3. To obtain the current saturation figure for an image, which method, menu, or panel should I use?
    I would appreciate a response.
    Mail: [email protected]

    Not sure I follow all the details you mention.
    Hope this helps:
    Photoshop Help | Viewing histograms and pixel values

  • SM51 - Details Analysis

    I have a job that I am having performance issues with.  Right now it has shown no movement in SM51 for over an hour. 
    What do some of these statistics in SM51 mean?
    Memory (sum private) ?
    RSQL
    thx, J

    I'd look more at the database statistics (the number of direct and sequential reads, etc.). If these keep going up, it means the program is reading from the database. If they don't, the program is stuck somewhere processing what it has read.
    Rob

  • When tables should be analyzed

    hi all,
    I want to know on what basis, or how, a person would be able to know when a table should be analyzed.
    Generally we have tables that were last analyzed on 06/05/2008 15:50:15. The query that I have used is:
    select owner,
           table_name,
           to_char(last_analyzed,'MM/DD/YYYY HH24:MI:SS') last_analyzed
      from dba_tab_columns
     where owner = 'name';

    burleson wrote:
    Reanalyze ONLY when you want your SQL execution plans to change . . .
    If you are satisfied with your plans, why change them?
    Hi Don
    Ummm, but if you don't update your statistics, your execution plans can change as well ...
    It's incorrect to suggest that not changing statistics means your plans won't change. Two things of course continue to change even if you don't update the statistics: the data, and the data being selected.
    By following your advice, you might not have changed stats since 31 Dec 2005. Therefore Oracle thinks the max date value in your data is still 31 Dec 2005. But of course you now have data as of, say, Jan 3 2009. When you query the data of interest, but the range of data you want is between, say, 1 Jan 2008 and 31 Dec 2008, how will the CBO determine the correct cardinality if it still thinks the max date is 31 Dec 2005?
    The actual answer is version dependent, but it will get the cardinality way wrong and will likely generate a poor execution plan.
    So perhaps you might want to update your statistics precisely because you're satisfied with your plans and don't want them to change and suddenly go "wrong" as your statistics grow stale.
    Hope this helps.
    Cheers
    Richard Foote
    http://richardfoote.wordpress.com/
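    As a minimal sketch (the schema name is a placeholder) of one common alternative to watching LAST_ANALYZED by hand, you can let the database flag staleness itself:
    select table_name, last_analyzed, stale_stats
      from dba_tab_statistics
     where owner = 'MYSCHEMA'        -- placeholder schema
       and stale_stats = 'YES';
    STALE_STATS flips to YES once roughly 10% of a table's rows have changed since the last gather (the exact threshold is version and configuration dependent, and it relies on table monitoring, which is on by default from 10g onwards).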

  • Choosing the right Macbook for me?

    Ok, so I'm thinking about buying a Macbook in the next couple of months. I'm a high school student, and I'm going to primarily use it for schoolwork, music, taking photos, and watching movies. I have a lot of music and quite a few movies, plus a good number of pictures. I'm trying to decide whether to get the cheapest one--($1,099) or the second-cheapest one ($1,299.) I'd obviously prefer to get the cheaper one, but do you think that I should invest in the more expensive one? The differences are:
    The cheap one has a 2.0 GHz Intel Core 2 Duo; the more expensive has a 2.2 GHz Intel Core 2 Duo.
    The cheap one has an 80 GB hard drive; the more expensive has a 120 GB hard drive.
    The cheap one has a Combo Drive, the more expensive a SuperDrive.
    Honestly, I'm not exactly sure what some of those statistics mean. But advice would definitely be appreciated.
    Thanks so much!

    Everything else aside, go with the one with the SuperDrive. I have seen many posts on here from people who, a few months after buying the MacBook with a Combo Drive, were trying to change it for a SuperDrive. They never thought they would want to burn a DVD, but they realized the limitations of only burning CDs. It is not cheap to install a SuperDrive after the fact.

  • What is the meaning of partition by statistics

    Hi All
    What is the meaning of "partition by statistics"?
    I could not grasp its meaning.
    I have a fact table that has about 2.8 million records and around 8 dimensions.
    I am currently partitioning it by time, as each time period has an equal number of records.

    Murtuza:
    Certain processes generate postings above and beyond the entered information, e.g. cash discounts or rounding. These internally generated items are posting-relevant but were not entered by the user/transaction. In this way the flag identifies the cause of the posting.
    regards,
    bill.

  • How oracle decide whetehr to use index or full scan (statistics)

    Hi Guys,
    Let's say I have an index on a column.
    The table and index statistics have been gathered (without histograms).
    Let's say I perform a select * from table where a=5;
    Oracle will perform a full scan.
    But from which statistics will it be able to know that most of the column values = 5 (histograms not used)?
    After analyzing, we get the below:
    Table Statistics :
    (NUM_ROWS)
    (BLOCKS)
    (EMPTY_BLOCKS)
    (AVG_SPACE)
    (CHAIN_COUNT)
    (AVG_ROW_LEN)
    Index Statistics :
    (BLEVEL)
    (LEAF_BLOCKS)
    (DISTINCT_KEYS)
    (AVG_LEAF_BLOCKS_PER_KEY)
    (AVG_DATA_BLOCKS_PER_KEY)
    (CLUSTERING_FACTOR)
    thanks
    Index Column (A)
    ======
    1
    1
    2
    2
    5
    5
    5
    5
    5
    5

    I had prepared some explanation and had not noticed that the topic had been marked as answered.
    This sentence of mine is not completely true.
    A column "without histograms" means that the column has only one bucket. More correctly: even without histograms there are data in dba_tab_histograms which we can consider as one bucket for the whole column. In fact these data are retrieved from hist_head$, not from histgrm$ as usual buckets are.
    Technically there are no buckets at all without gathered histograms.
    Let's create a table with skewed data distribution.
    SQL> create table t as
      2  select least(rownum,3) as val, '*' as pad
      3    from dual
      4  connect by level <= 1000000;
    Table created
    SQL> create index idx on t(val);
    Index created
    SQL> select val, count(*)
      2    from t
      3   group by val;
           VAL   COUNT(*)
             1          1
             2          1
             3     999998
    So, we have a table with very skewed data distribution.
    Let's gather statistics without histograms.
    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for all columns size 1', cascade => true);
    PL/SQL procedure successfully completed
    SQL> select blocks, num_rows  from dba_tab_statistics
      2   where table_name = 'T';
        BLOCKS   NUM_ROWS
          3106    1000000
    SQL> select blevel, leaf_blocks, clustering_factor
      2    from dba_ind_statistics t
      3   where table_name = 'T'
      4     and index_name = 'IDX';
        BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
             2        4017              3107
    SQL> select column_name,
      2         num_distinct,
      3         density,
      4         num_nulls,
      5         low_value,
      6         high_value
      7    from dba_tab_col_statistics
      8   where table_name = 'T'
      9     and column_name = 'VAL';
    COLUMN_NAME  NUM_DISTINCT    DENSITY  NUM_NULLS      LOW_VALUE      HIGH_VALUE
    VAL                     3 0.33333333          0           C102            C104
    So, Oracle suggests that values between 1 and 3 (raw C102 and C104) are distributed uniformly and that the density of the distribution is 0.33.
    Let's try to explain plan
    SQL> explain plan for
      2  select --+ no_cpu_costing
      3         *
      4    from t
      5   where val = 1
      6  ;
    Explained
    SQL> @plan
    | Id  | Operation         | Name | Rows  | Cost  |
    |   0 | SELECT STATEMENT  |      |   333K|   300 |
    |*  1 |  TABLE ACCESS FULL| T    |   333K|   300 |
    Predicate Information (identified by operation id):
       1 - filter("VAL"=1)
    Note
       - cpu costing is off (consider enabling it)
    Below is an excerpt from the 10053 trace:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 0.33333 Min: 1 Max: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 333333  Computed: 333333.33  Non Adjusted: 333333.33
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 2377.00  resc_cpu: 0
        ix_sel: 0.33333  ix_sel_with_filters: 0.33333
        Cost: 2377.00  Resp: 2377.00  Degree: 1
      Best:: AccessPath: TableScan
           Cost: 300.00  Degree: 1  Resp: 300.00  Card: 333333.33  Bytes: 0
    The cost of the FTS here is 300 and the cost of the Index Range Scan here is 2377.
    I have disabled cpu costing, so selectivity does not affect the cost of FTS.
    The cost of the Index Range Scan is calculated as
    blevel + (leaf_blocks * selectivity + clustering_factor * selectivity) = 2 + (4017*0.33333 + 3107*0.33333) = 2377.
    Oracle considers that it has to read 2 root/branch blocks of the index, 1339 leaf blocks of the index and 1036 blocks of the table.
    Pay attention that selectivity is the major component of the cost of the Index Range Scan.
    Let's try to gather histograms:
    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for columns val size 3', cascade => true);
    PL/SQL procedure successfully completed
    If you look at dba_tab_histograms you will see the following:
    SQL> select endpoint_value,
      2         endpoint_number
      3    from dba_tab_histograms
      4   where table_name = 'T'
      5     and column_name = 'VAL'
      6  ;
    ENDPOINT_VALUE ENDPOINT_NUMBER
                 1               1
                 2               2
                 3         1000000
    ENDPOINT_VALUE is the column value (as a number, for any data type) and ENDPOINT_NUMBER is the cumulative number of rows.
    The number of rows for any ENDPOINT_VALUE = ENDPOINT_NUMBER for that ENDPOINT_VALUE - ENDPOINT_NUMBER for the previous ENDPOINT_VALUE. For example, for VAL = 3 this gives 1000000 - 2 = 999998 rows.
    explain plan and 10053 trace of the same query:
    | Id  | Operation                   | Name | Rows  | Cost  |
    |   0 | SELECT STATEMENT            |      |     1 |     4 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T    |     1 |     4 |
    |*  2 |   INDEX RANGE SCAN          | IDX  |     1 |     3 |
    Predicate Information (identified by operation id):
       2 - access("VAL"=1)
    Note
       - cpu costing is off (consider enabling it)
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 5.0000e-07 Min: 1 Max: 3
        Histogram: Freq  #Bkts: 3  UncompBkts: 1000000  EndPtVals: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 1  Computed: 1.00  Non Adjusted: 1.00
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 4.00  resc_cpu: 0
        ix_sel: 1.0000e-06  ix_sel_with_filters: 1.0000e-06
        Cost: 4.00  Resp: 4.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: IDX
           Cost: 4.00  Degree: 1  Resp: 4.00  Card: 1.00  Bytes: 0
    Pay attention to the selectivity: ix_sel: 1.0000e-06.
    Cost of the FTS is still the same = 300,
    but cost of the Index Range Scan is 4 now: 2 root/branch blocks + 1 leaf block + 1 table block.
    Thus, the conclusion: histograms allow the optimizer to calculate selectivity more accurately. The aim is to have more efficient execution plans.
    Alexander Anokhin
    http://alexanderanokhin.wordpress.com/

  • UCCE 8.5.3/8.5.4 call volume statistics not matching in interval tables

    hello,
    We have just migrated a call center to UCCE 8.5.3 that runs roggers and ICM call flow scripts that contain very basic flows. In each flow there is basically a one-to-one ratio of call type elements to select skill group elements.
    Generally (as our scripts are written) you would expect almost the same call volume in the call type interval table and the skill group interval table, with allowable differences because of RONAs, etc. But basically somewhat close.
    In general this does work, but after a reboot of the roggers (A and B sides; logger down first, then router, and reversed to start), the skill group interval data lags greatly behind the call type interval data, showing maybe 10% of the call type interval volume. We have learned that completely shutting down both the A and B sides and then bringing everything back up in order from scratch seems to alleviate the problem (until the next restart). We cannot just leave the servers alone, though, due to company security patching policies.
    Cisco TAC had recommended patching to UCCE 8.5.4, which we have recently done, but the problem persists.
    I was wondering if this data discrepancy has ever been seen by anyone else, or if possibly some system config issue on our side might be self-defeating? Rebooting the roggers leaves the phone system itself working just fine for the call centers, but the recording of statistics to the skill group interval table is greatly impacted, with almost no volume recorded relative to the call type interval (and, for comparison, the call type skill group interval table as well).
    We would generally not worry about it, but the workforce management vendor that takes its feed from UCCE uses only the skill group interval, which is reporting almost no volume in the mentioned scenarios.
    If anyone can provide any information it would be most appreciated.
    Thanks.
    Greg

    Thank you for the response. The time source check is a great idea. We ran into problems before when internal web service servers did not match the PGs (CTIOS service) and the CTI-provided stats did not match.
    We will continue to work through TAC, but I was just wondering if anyone else had seen this (as 8.5.4 did not fix it), and if maybe it could have been something self-defeating in our system configuration or scripting. We did not immediately know this was happening until our 3rd-party workforce management vendor made us aware.
    Thanks,
    Greg

  • How to remove the ALV Statistics option from Accounts user's userids

    Dear Sir/Madam,
    In our Accounts department an extra page comes out with every printout. When the ALV Statistics option is unchecked, it functions normally, that is, a single page comes out.
    Is there any way to make this option permanently unchecked? If possible, please advise, and also say whether there would be any unwanted implications.
    Please advise.
    Thanks and Regards,
    Pranab

    Option 1 - Use SAP report RSPRIPARADMIN; you can set this for all users.
    Option 2 - User parameter SP01_FROM = 0000000002; this means the print job will start from page 2.
    Option 3 - List -> Print -> click the Properties button in the print window -> expand Cover Sheets -> double-click ALV Statistics, then uncheck the ALV Statistics checkbox. Now click the Settings button and then the Copy Settings button.
