Problem doing WLSE radio scan

Hi,
Just did a radio scan of a building with 18 APs. It all went well but one AP was removed from the scan due to the following error.
ERROR: Measurement Request NAK received - removing interface 00-40-96-a1-0d-7c from AP Radio Scan participation
I've not had any luck finding an explanation for this error... where can I look it up?
....thx....J

Is your WDS properly linked to your WLSE? It should say "Setting up registry keys" under Wireless Services > WDS in the Wireless Network Manager Information.
I had the same problem; the WNM info then said "authenticated", which is a confusing state. I thought it was OK at that point... but it was not.
Good luck
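If you want to sanity-check the WDS relationship from the CLI as well, commands along these lines can help (Aironet IOS; exact output varies by release):

```
! On the WDS AP: show the WDS state and the APs registered with it
show wlccp wds
! On an infrastructure AP: show which WDS it is registered to
show wlccp ap
```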

Similar Messages

  • WLSE Radio Scan Problems

I have recently set up WDS at a remote campus. I scheduled a radio scan for 4:00am so as not to interfere with active users. The scan completed, but not successfully: I got an error message regarding the WDS and I cannot determine the reason. I have attached the Job Log from the WLSE.
All the APs are registered with the WDS AP. The WDS AP is dedicated, with its radio turned off. The WDS is in an authenticated state with the WNM. I'm not sure what the problem could be.

This has been successfully resolved with the help of TAC. The issue is actually a bug in ACS v3.3.3; the bug ID is CSCsb47726. Once I applied the patch (a replacement of a specific DLL), the WDS, ACS, and WLSE all fully authenticated to one another.

  • WLSE AP Radio Scan Failure

    I finally got the WLSE talking with all of my access points. The WLSE is able to discover all of the access points, so I believe that means that SNMP is working properly.
    However, now I'm trying to perform an AP Radio Scan. When I do, I get this error message:
    ERROR: AP not SNMP accessible - removing interface xx-xx-xx-xx-xx-xx from AP Radio Scan participation
ERROR: No APs are available to participate - ending AP Scan.
    I get this for all of the access points.
    Why would this be happening?

    Other possible causes:
- AP not SNMP reachable from the WLSE. To verify, use the SNMP Reachable Test in the WLSE Administration screen. If the AP has multiple radio interfaces and any one of them is down, that will also cause problems during the scan.
- AP does not have the ISO view tied to the RW community string.
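If the second cause applies, the fix on the AP side usually looks something like this (community string hypothetical; exact syntax varies by IOS release):

```
! Define an ISO view and tie the RW community string to it
snmp-server view iso iso included
snmp-server community myRWstring view iso RW
```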

  • Select statement in a function does Full Table Scan

    All,
I have been coding a stored procedure that writes 38K rows in less than a minute. If I add another column that requires a call to a package and 4 functions within that package, it runs for about 4 hours. I have confirmed that, due to problems in one of the functions, the code does full table scans. The package and all of its functions were written by other contractors who are long gone.
    Please note that case_number_in (VARCHAR2) and effective_date_in (DATE) are parameters sent to the problem function and I have verified through TOAD’s debugger that their values are correct.
The table named ps2_benefit_register has over 40 million rows, but case_number is indexed on that table.
The table named ps1_case_fs has more than 20 million rows and also has an index on case_number.
Select #1 – causes a full table scan; it runs and writes the 38K rows in a couple of hours.
{code}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = case_number_in and
    a1.case_number = a2.case_number and
    a2.application_date <= effective_date_in and
    a1.DOCUMENT_TYPE = 'F';
{code}
Select #2 – runs fast – hard-coding the values makes the code write the same 38K rows in a few minutes.
{code}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = 'A006438' and
    a1.case_number = a2.case_number and
    a2.application_date <= '01-Apr-2009' and
    a1.DOCUMENT_TYPE = 'F';
{code}
Why does using the passed parameter values in the first select statement cause a full table scan?
    Thank you for your help,
    Seyed
    Edited by: user11117178 on Jul 30, 2009 6:22 AM
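When the literal and parameterised versions of a statement behave this differently, it helps to compare the plan the optimizer actually used at run time against what EXPLAIN PLAN predicts. A sketch (available on 10g; run in the same session right after executing the slow statement):

```sql
-- Show the plan of the last statement executed in this session,
-- with runtime row counts (requires STATISTICS_LEVEL = ALL or the
-- /*+ gather_plan_statistics */ hint on the statement):
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```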

    Hello Dan,
Thank you for your input. The function is not deterministic; therefore, I am providing you with the explain plan. By version number, if you are referring to the database version, we are running 10g.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2132048964
    | Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT              |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |*  1 |  HASH JOIN                    |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |   2 |   BITMAP CONVERSION TO ROWIDS |                         |     3 |     9 |     1   (0)| 00:00:01 |       |       |
    |*  3 |    BITMAP INDEX FAST FULL SCAN| IDX_PS2_ACTION_TYPES    |       |       |            |          |       |       |
    |   4 |   PARTITION RANGE ITERATOR    |                         |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    |   5 |    TABLE ACCESS FULL          | PS2_FS_TRANSACTION_FACT |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    Predicate Information (identified by operation id):
       1 - access("AL1"."ACTION_TYPE_ID"="AL2"."ACTION_TYPE_ID")
       3 - filter("AL2"."ACTION_TYPE"='1' OR "AL2"."ACTION_TYPE"='2' OR "AL2"."ACTION_TYPE"='S')
    Thank you very much,
Seyed

  • Query is doing full table scan

    Hi All,
The query below is doing a full table scan. Many threads from the application trigger this query, and each does a full table scan. Can you please tell me how to improve the performance of this query?
The environment is 11.2.0.3 RAC (4 nodes). There is a unique index on (VZ_ID, LOGGED_IN). The table row count is 2,501,103.
    Query is :-
    select ccagentsta0_.LOGGED_IN as LOGGED1_404_, ccagentsta0_.VZ_ID as VZ2_404_, ccagentsta0_.ACTIVE as ACTIVE404_, ccagentsta0_.AGENT_STATE as AGENT4_404_,
    ccagentsta0_.APPLICATION_CODE as APPLICAT5_404_, ccagentsta0_.CREATED_ON as CREATED6_404_, ccagentsta0_.CURRENT_ORDER as CURRENT7_404_,
    ccagentsta0_.CURRENT_TASK as CURRENT8_404_, ccagentsta0_.HELM_ID as HELM9_404_, ccagentsta0_.LAST_UPDATED as LAST10_404_, ccagentsta0_.LOCATION as LOCATION404_,
    ccagentsta0_.LOGGED_OUT as LOGGED12_404_, ccagentsta0_.SUPERVISOR_VZID as SUPERVISOR13_404_, ccagentsta0_.VENDOR_NAME as VENDOR14_404_
    from AGENT_STATE ccagentsta0_ where ccagentsta0_.VZ_ID='v790531'  and ccagentsta0_.ACTIVE='Y';
Table Scan    AGENT_STATE    2.366666667
Table Scan    AGENT_STATE    0.3666666667
Table Scan    AGENT_STATE    1.633333333
Table Scan    AGENT_STATE    0.75
Table Scan    AGENT_STATE    1.866666667
Table Scan    AGENT_STATE    2.533333333
Table Scan    AGENT_STATE    0.5333333333
Table Scan    AGENT_STATE    1.95
Table Scan    AGENT_STATE    0.8
Table Scan    AGENT_STATE    0.2833333333
Table Scan    AGENT_STATE    1.983333333
Table Scan    AGENT_STATE    2.5
Table Scan    AGENT_STATE    1.866666667
Table Scan    AGENT_STATE    1.883333333
Table Scan    AGENT_STATE    0.9
Table Scan    AGENT_STATE    2.366666667
But the explain plan shows the query using the index.
    Explain plan output:-
PLAN_TABLE_OUTPUT
Plan hash value: 1946142815
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |                 |     1 |   106 |   244   (0)| 00:00:03 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| AGENT_STATE     |     1 |   106 |   244   (0)| 00:00:03 |
|*  2 |   INDEX RANGE SCAN          | AGENT_STATE_IDX |   229 |       |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("CCAGENTSTA0_"."ACTIVE"='Y')
       2 - access("CCAGENTSTA0_"."VZ_ID"='v790531')
The VZ_ID value shown is a dummy value picked from the table; I don't get the actual values, since the query comes in with bind variables. Please let me know your suggestions on this.
    Thanks,
    Mani
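If the predicate pair really is selective, one thing sometimes worth testing (index name hypothetical, and only after confirming how the existing unique index is used) is a composite index covering both filter columns:

```sql
-- Hypothetical index; covers both predicates in the query above
CREATE INDEX agent_state_vz_active_ix
    ON agent_state (vz_id, active);
```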

    Hi,
But I am not getting what the issue is... it's a simple select query, and there is an index on the leading column (VZ_ID, the PK). The explain plan says it's using the index, and the query selects only a fraction of the rows in the table. So why is it doing an FTS? Why does the optimizer treat it like a query that warrants an FTS?
The rule-based optimizer would have picked the plan with the index. The cost-based optimizer, however, picks the plan with the lowest cost. Apparently, the lowest-cost plan is the one with the full table scan, and the optimizer isn't necessarily wrong about this.
    Reading data from a table via index probes is only efficient when selecting a relatively small percentage of rows. For larger percentages, a full table scan is generally better.
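That trade-off can be put into a rough back-of-the-envelope model (the cost formulas and numbers here are made up purely for illustration, not Oracle's actual costing):

```python
def index_cost(rows_selected, probe_depth=3):
    """Toy model: one root-to-leaf index descent, plus (worst case)
    one table-block read per selected row."""
    return probe_depth + rows_selected

def full_scan_cost(total_rows, rows_per_block=100):
    """Toy model: sequential read of every block of the table."""
    return total_rows // rows_per_block

total = 2_500_000  # roughly the row count from the question

# Selecting a handful of rows: the index wins easily.
assert index_cost(800) < full_scan_cost(total)

# Selecting a large fraction of the table: the full scan wins.
assert index_cost(1_000_000) > full_scan_cost(total)
```

The crossover point depends on clustering and caching, but the shape of the argument is always the same: per-row random reads versus bulk sequential reads.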
    Consider a simple example: a query that selects from a table with biographies for all people on the planet. Suppose you are interested in all people from a certain country.
    select * from all_people where country='Vatican'
would return only about 800 rows (Vatican City is an extremely small country, with a population of just around 800 people). For this case, obviously, using an index would be very efficient.
    Now if we run this query:
select * from all_people where country = 'India'
we'd be getting over a billion rows. For this case, a full table scan would be several thousand times faster.
    Now consider the third case:
    select * from all_people where country = :b1
What plan should the optimizer choose? The value of the :b1 bind variable is generally not known at parse time; it will be passed by the user at run time, after the query has already been parsed.
In this case, one of two scenarios takes place: either the optimizer relies on some built-in default selectivities (basically, it takes a wild guess), or it postpones the final decision until the first time the query is run, 'peeks' at the value of the bind, and optimizes the query for that case.
It means that if the query was called with :b1 = 'India' the first time it was parsed, a plan with a full table scan will be generated and cached for subsequent use. And until the cursor is aged out of the library cache or invalidated for some reason, this will remain the plan for this query.
    If the first time it was called with :b1='Vatican', then an index-based plan will be picked.
Either way, bind peeking only gives good results if subsequent usage of the query is of the same kind as the first usage: the first case is efficient if the query is always run for countries with big populations, and the second if it is always run for countries with small populations.
This mechanism is called 'bind peeking', and it's one of the most common causes of performance problems. In 11g there are more sophisticated mechanisms, such as cardinality feedback, but they don't always work as expected.
    This mechanism is the most likely explanation for your issue. However, without proper diagnostic information we cannot be 100% sure.
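The caching behaviour described above can be sketched in a few lines (a toy model of the mechanism, not how Oracle is implemented):

```python
# Toy model of bind peeking: the plan chosen for the FIRST bind value
# is cached and reused for every later execution of the same SQL text.
population = {'Vatican': 800, 'India': 1_300_000_000}
TOTAL = sum(population.values())
plan_cache = {}

def choose_plan(country):
    # Peek at the bind: index if the predicate is selective, else full scan.
    return 'INDEX RANGE SCAN' if population[country] / TOTAL < 0.01 else 'FULL TABLE SCAN'

def execute(sql, bind):
    if sql not in plan_cache:          # hard parse: peek at the bind value
        plan_cache[sql] = choose_plan(bind)
    return plan_cache[sql]             # soft parse: reuse the cached plan

sql = "select * from all_people where country = :b1"
assert execute(sql, 'India') == 'FULL TABLE SCAN'    # first call fixes the plan
assert execute(sql, 'Vatican') == 'FULL TABLE SCAN'  # stuck with it until invalidation
```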
    Best regards,
      Nikolay

  • Why is DBXML doing a table scan on this query?

    I loaded a database with about 610 documents, each contains about 5000 elements of the form:
    <locations><location><id>100</id> ... </location> <location><id>200</id> ... </location> ... </locations>
The size of my dbxml file is about 16G. I created this with all default settings, except that I turned auto-indexing off and added 3 indexes, listed here:
    dbxml> listIndexes
    Index: unique-edge-element-equality-string for node {}:id
    Index: edge-element-presence-none for node {}:location
    Index: node-element-presence-none for node {}:locations
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    4 indexes found.
    I am performing the following query:
    dbxml> query 'for $location in (collection("CitySearch.dbxml")/locations/location[id = 41400]) return $location'
    This has the following query plan:
    dbxml> queryPlan 'for $location in (collection("CitySearch.dbxml")/locations/location[id = 41400]) return $location'
    <XQuery>
    <Return>
    <ForTuple uri="" name="location">
    <ContextTuple/>
    <QueryPlanToAST>
    <ParentOfChildJoinQP>
    <ValueFilterQP comparison="eq" general="true">
    <PresenceQP container="CitySearch.dbxml" index="unique-edge-element-equality-string" operation="prefix" child="id"/>
    <NumericLiteral value="4.140E4" typeuri="http://www.w3.org/2001/XMLSchema" typename="integer"/>
    </ValueFilterQP>
    <ChildJoinQP>
    <NodePredicateFilterQP uri="" name="#tmp0">
    <PresenceQP container="CitySearch.dbxml" index="node-element-presence-none" operation="eq" child="locations"/>
    <LevelFilterQP>
    <VariableQP name="#tmp0"/>
    </LevelFilterQP>
    </NodePredicateFilterQP>
    <PresenceQP container="CitySearch.dbxml" index="edge-element-presence-none" operation="eq" parent="locations" child="location"/>
    </ChildJoinQP>
    </ParentOfChildJoinQP>
    </QueryPlanToAST>
    </ForTuple>
    <QueryPlanToAST>
    <VariableQP name="location"/>
    </QueryPlanToAST>
    </Return>
    </XQuery>
When I run the query, it is very clearly performing a table scan: the query takes about 10 minutes to run (argh!!) and the disk is being read for the length of the query. Why is this doing a table scan, and what can I do to make this a simple, direct node access?
    Andrew
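One thing that stands out in the query plan is the NumericLiteral being compared against a string-typed equality index (note the `operation="prefix"` rather than `eq` on that PresenceQP). With untyped documents, a numeric general comparison can prevent direct use of a string equality index, so it may be worth trying the comparison as a string (sketch only; whether this helps depends on your data and BDB XML version):

```
dbxml> query 'for $location in (collection("CitySearch.dbxml")/locations/location[id = "41400"]) return $location'
```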

    Hi George,
I took a subset of my data set and left auto-indexing on to see what the query plan would be, then duplicated the index being used in my larger data set with auto-indexing off. The problem with leaving auto-indexing on for the entire data set was the resulting size of the file: with just the single index, the file was about 17G; with auto-indexing on (40 indices — I didn't include all of the tags in my original post), it was climbing past 30G when I killed it. Further data loading was also taking forever; it is much faster to load with auto-indexing off and then add the single index.

  • Optimiser difference : a max() sometimes doing a backward scan and sometimes forward scan

I am interested in why we get 2 different query plans on Sybase 15.5 and 15.7 for these similar queries:
    declare @MID int
    select @MID = 25717
    select MID , max(Date) from CB where MID = @MID group by MID
    select @MID, max(Date) from CB where MID = @MID
    The index is on (MID, Date).
    The first does forward scan and the second does a backward scan.
With our data, the first query does 8000 page reads (at 4ms per SAN read, that's 32s)
and the second query does 4 page reads (at 4ms per SAN read, that's 16ms).
    1)
    |   |   |GROUP SORTED Operator (VA = 1)
    |   |   |  Evaluate Grouped MAXIMUM AGGREGATE.
    |   |   |
    |   |   |   |SCAN Operator (VA = 0)
    |   |   |   |  FROM TABLE
    |   |   |   |  CB
    |   |   |   |  Index : IDX1
    |   |   |   |  Forward Scan.
    |   |   |   |  Positioning by key.
    |   |   |   |  Index contains all needed columns. Base table will not be read.
    |   |   |   |  Keys are:
    |   |   |   |    MID ASC
    |   |   |   |  Using I/O Size 16 Kbytes for index leaf pages.
    |   |   |   |  With LRU Buffer Replacement Strategy for index leaf pages.
    2)
    |   |  Evaluate Ungrouped MAXIMUM AGGREGATE.
    |   |  Scanning only up to the first qualifying row.
    |   |
    |   |   |SCAN Operator (VA = 0)
    |   |   |  FROM TABLE
    |   |   |  CB
    |   |   |  Index : IDX1
    |   |   |  Backward scan.
    |   |   |  Positioning by key.
    |   |   |  Index contains all needed columns. Base table will not be read.
    |   |   |  Keys are:
    |   |   |    MID ASC
    |   |   |  Using I/O Size 2 Kbytes for index leaf pages.
    |   |   |  With LRU Buffer Replacement Strategy for index leaf pages.
    Why does the optimiser generate different plans ?
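As a practical aside: since MID is fixed by the equality predicate, the GROUP BY adds nothing to the result here, so if the plan difference persists, one workaround is simply the ungrouped form (as the second query already shows), which lets the optimizer stop at the first qualifying row of the backward scan:

```sql
-- MID is a constant under "MID = @MID", so grouping by it is redundant;
-- the ungrouped max() gets the cheap backward index scan.
select @MID, max(Date) from CB where MID = @MID
```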

Okay - so this is one reason I like to see plancost vs. just showplan... you can see that the one without the group by clause does a ScalarAgg vs. a GroupSorted, ala:
    (without group by clause)
    ==================== Lava Operator Tree ====================
                            Emit                               
                            (VA = 2)                           
                            r:1 er:1                           
                            cpu: 0                             
                ScalarAgg                                      
                  Max                                          
                (VA = 1)                                       
                r:1 er:1                                       
                cpu: 0                                         
    IndexScan                                                  
    IDX1                                                       
    (VA = 0)                                                   
    r:1 er:1                                                   
    l:3 el:3                                                   
    p:3 ep:3                                                   
    ============================================================
    (with group by clause)
    ==================== Lava Operator Tree ====================
                                        Emit                    
                                        (VA = 3)                
                                        r:1 er:8                
                                        cpu: 300                
                            Restrict                            
                            (0)(0)(0)(4)(0)                     
                            (VA = 2)                            
                            r:1 er:8                            
                GroupSorted                                     
                Grouping                                        
                (VA = 1)                                        
                r:1 er:8                                        
    IndexScan                                                   
    IDX1                                                        
    (VA = 0)                                                    
    r:353666 er:158986                                          
    l:863 el:386                                                
    p:176 ep:78                                                 
    ============================================================
    Now the key to the problem is in the GroupSorted function.
But let's step back a bit. Query optimization is just one of the many (and nearly final) steps prior to query execution. Before query optimization we have the TDS receive buffers, SQL parsing, SQL normalization and pre-processing. In the pre-processing, we normally try to construct what is called the LOP (Logical Operator) tree. Note that nowhere in that sequence is any execution - I mention this because once before you commented on someone else's posting about a purported optimization problem with
    select * from table where column=<value> or 1=2
....the notion of evaluating 1=2 wouldn't happen until the execution phase - so optimization will not consider it - in this case it looks at it as an expression to be evaluated and skips it. If you want to see what I mean, do a showplan and statistics io on
    select * from sysobjects where 1=2
and you will see that, due to lack of sargs, the optimizer chooses a tablescan - but it is during query execution that we short circuit, hence the 0 LIOs. I point that out because there are different phases involved, and in this case it is the pre-processor that selects the overall LOP tree to start with... but then the optimizer has to consider which LOPs to use underneath that, such as GroupSorted (using an index) vs. GroupSorting (a sort operation), etc.
    Soooo....in the GROUP BY example, it starts with:
    The Lop tree:
    ( project
            ( group
                    ( scan CB
    vs. your other example (without the Group By):
    The Lop tree:
    ( project
            ( scalar
                    ( scan CB
....so you can see the difference right off the bat in what the optimizer received from the pre-processor. Like I pointed out earlier, the problem *started* the second you put the group by clause in - before the optimizer even got to look at it.
    You then get 2 optimization blocks....the innermost for the (scan CB) and then the outer for (group (inner block)) ala:
    OptBlock1
            The Lop tree:
            ( scan CB
            Generic Tables: ( Gtt1( CB ) Gti2( IDX1 ) )
            Generic Columns: (  Gc0(CB ,Rid) Gc1(CB ,MID) Gc2(CB ,ValuationDate))  Virtual Generic Columns: ( )
            Predicates: ( { CB.MID Gtg0.Gc4 }  = 0 tc:{1} )
            Transitive Closures: ( Tc0 = {  Gc0(CB ,Rid)} Tc1 = {  Gc1(CB ,MID) Gc4(Gtg0 ,_gcelement_4)} Tc2 = {  Gc2(CB ,ValuationDate)} )
    OptBlock0
            The Lop tree:
            ( pseudoscan
            Generic Tables: ( Gtg0 )
            Generic Columns: (  Gc3(Gtg0 ,_gcelement_3) Gc4(Gtg0 ,_gcelement_4) Gc5(Gtg0 ,_virtualagg) Gc6(Gtg0 ,_virtualagg))  Virtual Generic Columns: ( )
            Predicates: ( { CB.MID Gtg0.Gc4 }  = 0 tc:{1} )
            Transitive Closures: ( Tc3 = {  Gc3(Gtg0 ,_gcelement_3) Gc5(Gtg0 ,_virtualagg) Gc6(Gtg0 ,_virtualagg)} Tc1 = {  Gc1(CB ,MID) Gc4(Gtg0 ,_gcelement_4)} )
The question is: why did we estimate 8 rows returned from the group by? And would that change the behavior?
    That comes from this section:
            ( PopGroupSorted
                    proj: {{ CB.MID Gtg0.Gc4 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } }
                    pred: [Tc{} Pe{{ CB.MID Gtg0.Gc4 }  = 0}]
                    subs: {CB.MID ,Gtg0._virtualagg }
                    cost: 66305.12
                    I/O estimate : [
                            rowcount=7.388771
                            averagewidth=19
                            pages=1
                            prefetchpages=1
                            lio=0 pio=0 cpu=158985.8
                            total lio=385.3457 total pio=77.60534 total cpu=635943
                            tempdb=0
                    Cache Strategy: [
                            prefetch=YES
                            iosize=131072 Bytes
                            bufreplace=LRU
                    order: none
                    ( PopIndScan index: Gti2( IDX1 )
                                    table: Gtt1( CB )
                            proj: {{ CB.MID Gtg0.Gc4 } ,{ CB.ValuationDate } }
                            pred: [Tc{} Pe{{ CB.MID Gtg0.Gc4 }  = 0}]
                            subs: {CB.MID ,CB.ValuationDate }
                            cost: 50406.55
                            I/O estimate : [
                                    rowcount=158985.8
                                    averagewidth=14
                                    pages=383.3457
                                    prefetchpages=383.3457
                                    lio=385.3457 pio=77.60534 cpu=476957.2
                                    scanlio=385.3457 scanpio=77.60534 scancpu=476957.2
                                    tempdb=0
                            Cache Strategy: [
                                    prefetch=YES
                                    iosize=131072 Bytes
                                    bufreplace=LRU
                            order: none )
    Now to understand what is happening, let's take the bad example:
    select MID , max(Date) from CB where MID = @MID group by MID
    Now, let's make it a bit more generic:
    select type, count(*), max(crdate)
    from sysobjects
    where type='P'
    group by type
    If run in master on my machine, I get:
    type                                            
    P             20             Oct  2 2014 12:23PM
    ...as a result, with an initial LOP of:
    The Lop tree:
    ( project
    ( group
      ( scan sysobjects
Which should be no surprise: with no index on type (and given the quantity of rows in sysobjects), we simply go for a tablescan. But we have our (group) LOP... and if we look at the plancost:
    ==================== Lava Operator Tree ====================
                                        Emit                   
                                        (VA = 3)               
                                        r:1 er:2               
                                        cpu: 0                 
                            Restrict                           
                            (0)(0)(0)(4)(0)                    
                            (VA = 2)                           
                            r:1 er:2                           
                GroupSorted                                    
                Grouping                                       
                (VA = 1)                                       
                r:1 er:2                                       
    TableScan                                                  
    sysobjects                                                 
    (VA = 0)                                                   
    r:20 er:19                                                 
    l:8 el:8                                                   
    p:0 ep:2                                                   
    ============================================================
    We see sort of the same problem....we are estimating 2 rows to be returned from the group by. 
Stepping back a bit: remember, if we don't have any stats on a column, we assume a selectivity of 0.1 for =, 0.25 for a bounded range (between), and 0.3 for an unbounded range. In my case, with no stats on type in sysobjects in master, we can see this clearly in the index selectivity area:
    The table (Datarows) has 188 rows, 6 pages,
    Data Page Cluster Ratio 0.9999900
        type = 'P'
       Estimated selectivity for type,
            selectivity = 0.1,
        Search argument selectivity is 0.1.
        using table prefetch (size 32K I/O)
        Large IO selected: The number of leaf pages qualified is > MIN_PREFETCH pages
        in data cache 'default data cache' (cacheid 0) with LRU replacement
The 0.1 shows up in the index selectivity because the magic values are being used.   In the group by costing, we see:
    ( PopGroupSorted
      proj: {{ sysobjects.type Gtg0.Gc5 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } ,{ Gtg0.Gc4 Gtg0._virtualagg Gtg0._virtualagg } }
      pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
      subs: {sysobjects.type ,Gtg0._virtualagg ,Gtg0._virtualagg }
      cost: 65.44175
      I/O estimate : [
       rowcount=1.88
       averagewidth=21
       pages=1
       prefetchpages=1
       lio=0 pio=0 cpu=18.8
       total lio=8 total pio=1.00007 total cpu=244.4
      Cache Strategy: [
       prefetch=YES
       iosize=4096 Bytes
       bufreplace=LRU
      order: none
      ( PopTabScan table: Gtt1( sysobjects )
       proj: {{ sysobjects.type Gtg0.Gc5 } ,{ sysobjects.crdate } }
       pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
       subs: {sysobjects.type ,sysobjects.crdate }
       cost: 63.56175
       I/O estimate : [
       rowcount=18.8
        averagewidth=12
        pages=8
        prefetchpages=8
        lio=8 pio=1.00007 cpu=225.6
        scanlio=8 scanpio=1.00007 scancpu=225.6
       Cache Strategy: [
        prefetch=YES
        iosize=32768 Bytes
        bufreplace=LRU
       order: none )
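To sanity-check the arithmetic, here is a small Python sketch. It is purely illustrative: the 188-row count and the 0.1 magic value come from the output above, and the group by step is my reading of how the 0.1 gets reapplied as a density when no stats exist.

```python
# ASE "magic" selectivity defaults used when a column has no statistics
MAGIC_EQ = 0.1        # equality (col = value)
MAGIC_BETWEEN = 0.25  # bounded range (between)
MAGIC_OPEN = 0.33     # unbounded range (<, >)

table_rows = 188  # rows in sysobjects, per the showplan output

# Scan estimate for "type = 'P'" with no stats on type
scan_rows = table_rows * MAGIC_EQ
print(round(scan_rows, 2))   # 18.8 -> the rowcount=18.8 in PopTabScan (er:19)

# Reading 0.1 as a density (~10 distinct values, one of them selected),
# the group by collapses those rows by the same factor
group_rows = scan_rows * MAGIC_EQ
print(round(group_rows, 2))  # 1.88 -> the rowcount=1.88 in PopGroupSorted
```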
Now....to go from the row count to the estimated rows for a group by, we don't use the column selectivity - we use the column density stats.  The real reason behind the 0.1 is that we are saying there are likely only 10 distinct values.   For example, if I run update statistics sysobjects (type) on my system, the output changes to:
    ==================== Lava Operator Tree ====================
                                        Emit                   
                                        (VA = 3)               
                                        r:1 er:1               
                                        cpu: 0                 
                            Restrict                           
                            (0)(0)(0)(4)(0)                    
                            (VA = 2)                           
                            r:1 er:1                           
                GroupSorted                                    
                Grouping                                       
                (VA = 1)                                       
                r:1 er:1                                       
    TableScan                                                  
    sysobjects                                                 
    (VA = 0)                                                   
    r:20 er:21                                                 
    l:8 el:8                                                   
    p:0 ep:2                                                   
    ============================================================
    Which is based on:
    ( PopGroupSorted
      proj: {{ sysobjects.type Gtg0.Gc5 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } ,{ Gtg0.Gc4 Gtg0._virtualagg Gtg0._virtualagg } }
      pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
      subs: {sysobjects.type ,Gtg0._virtualagg ,Gtg0._virtualagg }
      cost: 65.80175
      I/O estimate : [
       rowcount=1
       averagewidth=21
       pages=1
       prefetchpages=1
       lio=0 pio=0 cpu=20
       total lio=8 total pio=1.00007 total cpu=248
      Cache Strategy: [
       prefetch=YES
       iosize=4096 Bytes
       bufreplace=LRU
      order: none
      ( PopTabScan table: Gtt1( sysobjects )
       proj: {{ sysobjects.type Gtg0.Gc5 } ,{ sysobjects.crdate } }
       pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
       subs: {sysobjects.type ,sysobjects.crdate }
       cost: 63.80175
       I/O estimate : [
        rowcount=20
        averagewidth=12
        pages=8
        prefetchpages=8
        lio=8 pio=1.00007 cpu=228
        scanlio=8 scanpio=1.00007 scancpu=228
Note that my overall rowcount went up a bit....but the group by estimate went to 1 row.   If we look at the column stats via optdiag:
    Statistics for column:                                                          "type"
    Last update of column statistics:                                               Feb 15 2015  9:18:32:850PM
         Range cell density:                                                        0.0053191489361702
     Total density:                                                             0.4216274332277049
         Range selectivity:                                                         default used (0.33)
         In between selectivity:                                                    default used (0.25)
         Unique range values:                                                       0.0053191489361702
         Unique total values:                                                       0.2000000000000000
         Average column width:                                                      default used (2.00)
         Rows scanned:                                                              188.0000000000000000
         Statistics version:                                                        4
    Histogram for column:                                                           "type"
    Column datatype:                                                                char(2)
    Requested step count:                                                           20
    Actual step count:                                                              9
    Sampling Percent:                                                               0
    Tuning Factor:                                                                  20
    Out of range Histogram Adjustment is DEFAULT.                                  
    Low Domain Hashing.                                                            
         Step     Weight                    Value
            1     0.00000000       <=       "EJ"
            2     0.00531915        <       "P "
            3     0.10638298        =       "P "
            4     0.00000000        <       "S "
            5     0.30319148        =       "S "
            6     0.00000000        <       "U "
            7     0.56382978        =       "U "
            8     0.00000000        <       "V "
            9     0.02127660        =       "V "
    ....so the 20 rows are due to the slightly higher 0.10638 vs. 0.1...
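That arithmetic is easy to verify with the numbers from the optdiag output (a quick Python check; nothing here beyond the stats shown above):

```python
rows_scanned = 188        # "Rows scanned" from optdiag
weight_eq_P = 0.10638298  # histogram weight for the step where type = "P "

# Scan estimate once real stats exist: histogram weight * rows scanned
scan_rows = rows_scanned * weight_eq_P
print(round(scan_rows))   # 20 -> the rowcount=20 in PopTabScan

# vs. the magic-value estimate when no stats exist at all
print(round(rows_scanned * 0.1, 1))  # 18.8
```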
Now....if we know the value in the where clause and have stats on the column, I notice we always estimate 1 row to be returned from the group by.   However, if we don't have stats - or if we don't know the value - we estimate more than one row.   For example, if I use a variable for type, as in:
    declare @type char(2)
    select @type='P'
    select type, max(crdate)
    from sysobjects
    where type=@type
    group by type
    go
    I then get:
    ( PopGroupSorted
      proj: {{ sysobjects.type Gtg0.Gc4 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } }
      pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc4 }  = '}]
      subs: {sysobjects.type ,Gtg0._virtualagg }
      cost: 83.58154
      I/O estimate : [
      rowcount=4.17777
       averagewidth=17
       pages=1
       prefetchpages=1
       lio=0 pio=0 cpu=79.26596
       total lio=8 total pio=1.00007 total cpu=425.7979
...if we have stats on type, vs. the 1.88 when we don't.   This points out that the uniqueness and selectivity stats are likely considered in the final cost of the group by.   In your case, the estimate of:
                    I/O estimate : [
                            rowcount=7.388771
                            averagewidth=19
                            pages=1
                            prefetchpages=1
                            lio=0 pio=0 cpu=158985.8
                            total lio=385.3457 total pio=77.60534 total cpu=635943
                            tempdb=0
...is likely due to the column uniqueness and selectivity stats based on the @MID variable.
Note that even though we now know the GROUP BY is only going to return a single row, we still don't change the plan....we still hit the index in ascending fashion.   The reason is that we don't assume (or should I say, we don't make a special case) that the only aggregate is a max().   It could be count(), avg(), min() or a combination thereof - any of which (other than strictly min()) could require reading every index row.   Could we do so??  Possibly....however, arguably, the select max() without the group by is the more correct SQL to use.   We do (on the other hand) have special processing for scalaragg with max(): it does a backwards scan if max() is the only aggregate.   If you do select min(), max() - then it does a forward scan even with scalaragg.   If you have just min(), we do a forward scan and short circuit after the first row (as if select top 1).
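That last paragraph can be condensed into a toy decision function (Python; the function name and return strings are mine, purely a sketch of the behavior described above, not ASE internals):

```python
def scalaragg_scan_strategy(aggregates):
    """Toy model of the scan choice for a scalar aggregate over an
    ascending index, per the behavior described above."""
    aggs = set(aggregates)
    if aggs == {"max"}:
        # max() as the only aggregate: backwards scan, first row wins
        return "backward scan, stop after first row"
    if aggs == {"min"}:
        # min() alone: forward scan, short-circuit after the first row
        return "forward scan, stop after first row"
    # min()+max(), count(), avg(), or any mix: a forward scan that may
    # have to read every qualifying index row
    return "forward scan, read all qualifying rows"

print(scalaragg_scan_strategy(["max"]))
print(scalaragg_scan_strategy(["min"]))
print(scalaragg_scan_strategy(["min", "max"]))
```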

  • HT5429 How long after you report a problem does it take for the fix to be picked up in maps?

How long after you report a problem does it take for the fix to be picked up in maps? The street I live on is misspelled. It's shown as one word, but it should be two words. Maps cannot find the address when it is spelled correctly, so I have purposely misspelled it as one word in my contacts, which helps for the most part; however, it still tries to place my home on an entirely different street altogether. Fortunately this new random street is at least close to where I live. When I used to spell my street correctly with two words, maps would try to send me to the next town over. So it kind of works, as long as I misspell my street, and ignore the fact that it is showing where I live to be a couple of streets down from where I actually live. I live in Connecticut - not in the most populated of areas, but not in the middle of nowhere either. Amazingly enough, Google Maps will correctly find my address whether it's spelled as one word or two. Google Maps also shows my apartment complex, which Apple Maps does not. I really wish Apple would just do a quick pass of the area to fix these issues. I’ve reported the problem several times, months ago. How long will it take for there to be a fix? I keep trying to use the Apple ecosystem, but Google is clearly the better solution for me.

    Apple does not do the GIS data for maps. That comes from 3rd party vendors like Tom Tom. I have read it takes some time to update map data. The only thing you can do is report it.
    Not to belittle your complaint, but on my end it is Apple that is clearly better. When looking at my house in Google, the satellite photo is more than 4 years old, and the Apple one is much newer. I can tell because of the condition of my home and the neighbor's. They had an above-ground pool which was removed 4 years ago and it still shows on the Google map. I had remodeling and roof work done to my home that started 3 years ago, and Apple's satellite view shows this work done, which took over a year to complete. I live in the middle of a block in a rural town. Both Apple and Google split the block into 100 parts and put my home close to the beginning of the block instead of where it actually sits. I've reported it to both Google and Apple and neither has changed it. Just one of those things. But keep your chin up, it will get corrected eventually.

  • How does one update Flash? I currently cannot watch videos on You Tube, it says I need to upgrade, I have no problem doing this on a PC.

    How does one update Flash? I currently cannot watch videos on You Tube, it says I need to upgrade, I have no problem doing this on a PC.

    If you're not using chrome you can just go to adobe's website (http://get.adobe.com/flashplayer/) and download the newest flash player.  You run the .dmg and then open the application, which updates flash.  With the latest version of flash there should be a preference pane within system preferences to allow you to change settings and check for updates.

  • When I enable imatch on my iPhone 4s it takes approximately 30 minutes before other data fills 13.2gb of usable data. This problem does not occur when I manually sync music to my phone. Is this a common bug, and if so; is there a fix?

    When I enable iTunes Match on my iPhone 4s, it takes approximately 30 minutes before "other" data fills 13.2 GB of usable data on the phone. This problem does not occur when I manually sync music to my phone, only when I use iTunes Match. Is this a common bug, and if so, is there a fix?

    Yes, it is. You can sign out of your iTunes account, then sign back in. Use http://support.apple.com/kb/ht1311 to sign out.

  • I have a 17" Macbook pro with flickering red and cyan(blue) lines across the screen. The issue disappears temporarily when I tap on the computer, and the problem does not occur when I use external display or try to screen capture the problem.

    I purchased my Macbook (17") through a certified Apple technician in August 2012; it was refurbished and the motherboard was completely replaced. I do a lot of photo editing, but I have been unable to do so because of the red vibrating lines that interrupt my screen. The issue disappears temporarily when I tap on the computer, and the problem does not occur when I use an external display or try to screen capture the problem. I brought the computer back to the technician I purchased it from and he said that it was a problem with my fan, so I have two new fans, but the issue is still occurring. He says he doesn't know what's wrong. Does anyone have any information on this issue?
    Here is an image of the issue
    http://www.flickr.com/photos/67839707@N08/8884847081/

    I recommend having your Mac serviced by someone competent. A force sufficient to "blow apart" the fans was clearly excessive and may have damaged the display cable, as well as any number of other problems.
    Dust is properly cleaned with a vacuum, preferably one designed for computer service, and they are not cheap.
    Compressed air should never be used. It just blows dust everywhere, often into places where it can no longer be removed.

  • There's a problem with the drive scan drive message appears when plugging in ipod into a new windows 8 computer

    I have a new Windows 8 laptop. I copied all of my music onto it. I then plug in my ipod and get "there's a problem with this drive, scan and fix now". When I click scan and fix it tells me "can't perform the check because Windows can't access the disk.".
    I uninstalled iTunes and reinstalled it - but same problem.

    bump, I have same issue. thanks, MM

  • "The upload has failed. There was a problem running a virus scan for the file."  any ideas???

    "The upload has failed.
    There was a problem running a virus scan for the file. "
    This is the message I get when trying to update.
    any ideas?

    Error: "svr.VirusScanExecutionError"
    An intermittent problem with acrobat.com's underlying virus scan component causes this issue. This issue happens occasionally on a small number of server instances.
    The solution is to update the article again. Trying again typically routes you to a different host in the server array.

  • On my new iMac, mail has no sound after waking from sleep.  If I select do not wake for network access in general preferences, the problem does not occur.  Anyone have an idea on how to fix this?

    On my new iMac, mail has no sound after waking from sleep.  If I select do not wake for network access in general preferences, the problem does not occur.  Anyone have an idea on how to fix this? Thanks --

    Same here on an older iMac since Lion. Thanks for the uncheck wake for network access tip though, it doesn't do it either in that case, I will leave it that way.

  • Does the radio accessory work with other iPods or with iPhone?

    Does the radio accessory work for the ipod nano with other iPods or with iPhone?

    Are you talking about the built in Radio on the Nano? If so, the answer is no.
