WinRT bug report - ListView with large dataset being clipped

I think I've found a bug in the Windows Store (WinRT 8.1) version of the ListView (I've not tried it on Windows Phone).
When I create a page with a ListView and bind it to an ItemsSource with many items, the rendered ListViewItems suddenly disappear as I scroll, as if they were hidden behind another control.
I have created a simple example to demonstrate the issue. Create a blank Universal app and replace the contents of MainPage.xaml and MainPage.xaml.cs in the Win8 project with the code below. Build and run it, then use the mouse to drag the scrollbar handle down to item 41351: every item after that point is not displayed.
This looks like a bug to me, but what does everyone else think?
Does anyone know of a workaround? One possibility is sketched after the repro below.
MainPage.xaml
<Page
    x:Class="WinRTListViewClipping.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:WinRTListViewClipping"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    x:Name="Root">
    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <ListView ItemsSource="{Binding Items}"/>
    </Grid>
</Page>
MainPage.xaml.cs
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.InteropServices.WindowsRuntime;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;
// The Blank Page item template is documented at http://go.microsoft.com/fwlink/?LinkId=234238
namespace WinRTListViewClipping
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        public static readonly DependencyProperty ItemsProperty =
            DependencyProperty.Register(
                "Items",
                typeof(List<string>),
                typeof(MainPage),
                new PropertyMetadata(new List<string>()));

        public List<string> Items
        {
            get { return (List<string>)GetValue(ItemsProperty); }
            set { SetValue(ItemsProperty, value); }
        }

        public MainPage()
        {
            DataContext = this;
            for (int idx = 0; idx < 100000; idx++)
            {
                Items.Add("Item: " + idx.ToString());
            }
            this.InitializeComponent();
        }
    }
}
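A possible workaround (untested against this exact repro, so treat it as a sketch): load the list incrementally via ISupportIncrementalLoading instead of handing the ListView all 100,000 items up front. If the clipping is tied to the total scrollable extent this may only postpone the problem, but it keeps the extent small until the user has actually scrolled that far. The IncrementalItems class below is hypothetical, not from the original post; ISupportIncrementalLoading, LoadMoreItemsResult, and AsAsyncOperation are real WinRT/.NET APIs.

// Sketch of an incrementally loaded source. The ListView calls
// LoadMoreItemsAsync as the user scrolls instead of receiving
// all 100,000 items at once.
using System.Collections.ObjectModel;
using System.Runtime.InteropServices.WindowsRuntime;
using System.Threading.Tasks;
using Windows.Foundation;
using Windows.UI.Xaml.Data;

public class IncrementalItems : ObservableCollection<string>, ISupportIncrementalLoading
{
    private int _next;                 // next index to materialize
    private const int Total = 100000;  // same item count as the repro

    public bool HasMoreItems
    {
        get { return _next < Total; }
    }

    public IAsyncOperation<LoadMoreItemsResult> LoadMoreItemsAsync(uint count)
    {
        // The ListView invokes this on the UI thread, so Add is safe here.
        uint added = 0;
        while (added < count && _next < Total)
        {
            Add("Item: " + _next++);
            added++;
        }
        return Task.FromResult(new LoadMoreItemsResult { Count = added })
                   .AsAsyncOperation();
    }
}

With this in place, the Items property could expose an IncrementalItems instance instead of a List<string>, and the existing {Binding Items} markup keeps working.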

Similar Messages

  • Is anyone working with large datasets (> 200M) in LabVIEW?

    I am working with external Bioinformatics databases and find the datasets to be quite large (2 files easily come out at 50M or more). Is anyone working with large datasets like these? What is your experience with performance?

    Colby, it all depends on how much memory you have in your system. You could be okay doing all that with 1 GB of memory, but you still have to take care not to make copies of your data in your program. That said, I would not be surprised if your code could be rewritten to work on a machine with much less RAM by using efficient algorithms.
    I am not a statistician, but I know that averages and standard deviations can be calculated using a few bytes (even on arbitrary-length data sets). Can't the ANOVA be performed using the standard deviations and means (and other information like the degrees of freedom, etc.)? Potentially, you could calculate all the various bits that are necessary and do the F-test with that information, without ever needing the entire data set in memory at one time.
    The tricky part for your application may be getting the desired data at the necessary times from all those different sources. I usually work with files on disk where I grab x samples at a time, perform the statistics, dump the samples and get the next set, repeating as necessary. I can calculate the average of an arbitrary-length data set easily by loading only one sample at a time from disk (though it's still more efficient to work in small batches, because the disk I/O overhead builds up).
    Let me use the calculation of the mean as an example (hopefully the notation makes sense): see the attached JPG. What this means in plain English is that the mean can be calculated solely as a function of the current data point, the previous mean, and the sample number. For instance, given the data set [1 2 3 4 5], sum it and divide by 5, and you get 3. Or take it a point at a time: the average of [1] = 1, [2 + 1*1]/2 = 1.5, [3 + 1.5*2]/3 = 2, [4 + 2*3]/4 = 2.5, [5 + 2.5*4]/5 = 3. This second method requires more multiplications and divisions, but it only ever needs to remember the previous mean and the sample number, in addition to the new data point. Using this technique, I can find the average of gigs of data without ever needing more than three doubles and an int32 in memory. A similar derivation can be done for the variance, but it's easier to look it up (I can provide it if you have trouble finding it). Also, I think this functionality is built into the LabVIEW point-by-point statistics functions. (A compact sketch of this running update follows this message.)
    I think you can probably get the data you need from those db's through some carefully crafted queries, but it's hard to say more without knowing a lot more about your application.
    Hope this helps!
    Chris
    Attachments:
    Mean Derivation.JPG ‏20 KB
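    A compact sketch of the running mean (plus Welford's running variance) that Chris describes, written in C# to match the code at the top of this page; the RunningStats class and its Compute method are illustrative names, not from the thread:

    using System;
    using System.Collections.Generic;

    static class RunningStats
    {
        // One pass over the stream; memory use is a few doubles and a
        // counter no matter how many samples flow through.
        public static void Compute(IEnumerable<double> samples,
                                   out double mean, out double variance)
        {
            long n = 0;
            double m = 0.0, m2 = 0.0;
            foreach (double x in samples)
            {
                n++;
                double delta = x - m;
                m += delta / n;          // running mean: m += (x - m) / n
                m2 += delta * (x - m);   // running sum of squared deviations
            }
            mean = m;
            variance = n > 1 ? m2 / (n - 1) : 0.0;  // sample variance
        }
    }

    For the [1 2 3 4 5] example above, Compute returns mean = 3, matching the point-at-a-time walkthrough.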

  • BUG: Last Image of Large Number Being Moved fails

    This has happened several times in organizing some folders. Moving over 100 images at a time, it seems that one image near the end fails - I get the screen that Lightroom can't move the image right now. It's always just one image. I can move it on its own just a second later and it works just fine.
    While the Move operation is being fixed, consider that it could go way faster than it does now if the screen didn't have to be refreshed after each file has been moved.  I can see the value of the refresh if it's just a few images being moved, but for a large number, the refresh isn't helpful anyhow.
    Paul Wasserman

    I posted on this last week, and apparently a number of people have experienced this.
    http://forums.adobe.com/thread/690900
    Please report it on this bug report site so that it gets to the developers' attention sooner:
    https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform
    Bob

  • BUG REPORT: Layers with 100% height in timeline knock bottommost layer(s) out of scrollable area

    I have tried to post this twice to the official bug report page, but (somewhat hilariously) the bug report page bounces it back with an internal error about an unknown email address, so I'm posting here too just in case. As I have no idea how to report a bug about the bug report page, I decided I might as well just post the bug here.
    CS5.5/Windows 7 - Verified on 6 different machines, some 64 bit, some 32 bit
    If I have a layer in the timeline that has been expanded to 200% or 300% height (via Layer Properties), the bottom-most layer gets pushed outside of the viewable area. The scrollbars simply don't scroll that far. I double-checked and this is new since CS4--which still works correctly--so never let it be said that the engineers aren't working on the code! :-/
    Basically, I use this for audio layers, as it is nice to expand them to 200% so it's easier to see the waveform and guess where certain words are ending, etc when doing timeline syncing.
    It can be worked around by expanding the timeline to be tall enough to bring everything into view, so I suspect the bug lies in the scrollable area calculation. However, since space is at a premium, usually I only have about 5-6 layers worth of space allocated to the timeline panel, so this bug hits me quite frequently (since I have all my old projects set up with 200% audio tracks).

    sam wrote:
    I have the same soundcard (or at least the same modules loaded) and I'm getting good volume. 
    Things to check:
    If you have the PCM and main volume all the way up in alsamixer, that should be it. You may have to adjust the volume separately if you use esd, arts, pulse, or jack (I don't use them).
    Another thing to test is to make sure you're using ALSA, not OSS; you may also want to raise the OSS level and see what happens.
    The final thing is to check which sound device the audio programs are using (usually found in the preferences).
    If none of that works, then you're out of luck. It would help if you post what sound programs you are using and what sound daemons you are using.
    Here is my Daemons line from /etc/rc.conf
    DAEMONS=(@syslog-ng @network @netfs @crond @alsa @hal @fam)
    So I would think I'm using ALSA, not OSS. But how would I go about raising the OSS level to make sure? There is no "oss" option in alsamixer. I have tried just about every audio program there is, and even Firefox; all of them produce about half of the maximum volume level.

  • Charts with large datasets?

    I'm writing an application that requires graphing of multi-series SQL statements over moderately large datasets. Think 45-90k data points or so.
    I've noticed that with datasets larger than 5k or so, the Flash charts eat a lot of CPU time when rendering, and I haven't gotten Flash charts to display 15k data points appropriately. I get a message about Flash having a long-running script.
    Does anybody have suggestions for how to display 50,000+ data point charts in APEX? Is there a recommended tool to integrate with APEX that would create the chart server-side as a graphic and then push the graphic to the client? Also, if possible, I would like to call this tool directly from DB jobs to push graphs out to people via email on a recurring basis.
    Any suggestions would be very much appreciated.
    Thanks,
    Matt

    Thanks Mike.
    I originally worked exclusively in Mac. It was the only game in town at one time. I have been working on higher end Windows-based workstations for the past 10 years or so (I also do some video production). Apple products are 'kool,' just not cost-effective. I am currently running Win7 on a dual-quad core with 8GB Ram and 3+TB high-speed storage.
    I used DeltaGraph for several years but their PostScript was version 1.0. I had a lot of problems with the files, such as not ungrouping grouped objects, font problems and difficulties applying newer effects -- even after re-saving to current version AI. At version 5, I queried Red Rock regarding upgrade of PS support but they said it was not in their plans. I also found that setting up some plots was terrifically complicated in DG. It was quicker to set up simple geometry in layered plots in Illustrator. I have not looked at DG 6 but will check on their PS status.
    I have not looked at importing Excel via PDF. I often do test plots from the source worksheets for reference in Excel but have never considered the results to be workable or usable in a published format. I will take another look at Excel per your suggestion.
    It sure would be great if AI charting were a bit more robust and reliable.

  • Barcode CODE 128 with large number (being rounded?) (BI / XML Publisher 5.6.3)

    After by applying Patch 9440398 as per Oracle's Doc ID 1072226.1, I have successfully created a CODE 128 barcode.
    But I am having an issue when creating a barcode whose value is a large number: specifically, a number longer than around 16 digits.
    Here's my situation...
    In my RTF template I am encoding a barcode for the number 420917229102808239800004365998 as follows:
    <?format-barcode:420917229102808239800004365998;'code128c'?>
    I then run the report and a PDF is generated with the barcode. Everything looks great so far.
    But when I scan the barcode, this is the value I am reading (tried it with several different scanner types):
    420917229102808300000000000000
    So:
         Value I was expecting:     420917229102808239800004365998
         Value I actually got:         420917229102808300000000000000
    It seems as if the number is getting rounded at the 16th digit (or so; it varies depending on the value I use).
    I have tried several examples and all seem to do the same. But anything with 15 digits or fewer seems to work perfectly.
    Any ideas?
    Manny
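    A cutoff at roughly 16 significant digits matches IEEE 754 double precision, so one plausible (unconfirmed) explanation is that the value passes through a 64-bit double somewhere in the encoding path. A small C# illustration of that kind of precision loss, using the value from the post:

    using System;

    class BarcodePrecision
    {
        static void Main()
        {
            // The 30-digit value from the post, forced through a double.
            // A double keeps only ~15-17 significant decimal digits.
            double d = double.Parse("420917229102808239800004365998");
            Console.WriteLine(d.ToString("F0"));
            // The exact digits printed depend on the runtime's formatting,
            // but the output agrees with the original value only in its
            // leading ~16 digits -- much like the scanned barcode.
        }
    }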

    Yes, I have.
    But I have found the cause now.
    When working with parameters coming in from the concurrent manager, all the parameters define in the concurrent program in EBS need to be in the same case (upper, lower) as they have been defined in the data template.
    Once I changed all to be the same case, it worked.
    thanks for the effort.
    regards
    Ronny

  • NI Reports crashes with large amounts of data

    I'm using LabVIEW 6.02 and some of the NI Reports tools to generate a report. I create what can be a large data array, which is then sent to the "append table to report" VI and printed. The program works fine, although it is slow, as long as the data that I send to the "append table to report" VI is less than about 30 kB. If the amount of data is too large, LabVIEW terminates execution with no error code or message displayed. Has anyone else had a similar problem? Does anyone know what is going on, or better yet, how to fix it?

    Hello,
    I was able to print a 100x100 element array of 5-character strings (~50 kB of data) without receiving a crash or error message. However, it did take a LONG time...about 15 minutes for the VI to run, and another 10 minutes for the printer to start printing. This makes sense, because 100x100 elements is a gigantic amount of data to send into the NI-Reports ActiveX object that is used for printing. You may want to consider breaking up your data into smaller arrays and printing those individually, instead of trying to print the giant array at once. (A small batching sketch follows this message.)
    I hope these suggestions help you out. Good luck with your application, and have a pleasant day.
    Sincerely,
    Darren N.
    NI Applications Engineer
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman
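    In LabVIEW this batching would be done with Array Subset in a loop; expressed in C# for consistency with the other sketches on this page (ReportBatching and InBatches are illustrative names), the idea Darren suggests looks like this:

    using System;
    using System.Collections.Generic;

    static class ReportBatching
    {
        // Yields the table in fixed-size row batches so no single
        // append has to handle the entire array at once.
        public static IEnumerable<string[][]> InBatches(string[][] rows, int batchSize)
        {
            for (int i = 0; i < rows.Length; i += batchSize)
            {
                int len = Math.Min(batchSize, rows.Length - i);
                var batch = new string[len][];
                Array.Copy(rows, i, batch, 0, len);
                yield return batch;  // e.g. append this batch to the report
            }
        }
    }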

  • Bug report? WITH statement in classic report LOV

    I've defined a "select list with query based lov" in a classic report column, and if I attempt a with clause
    with data as (Select 4 qty from dual)
    select round(100/qty*(level-1)) perc, round(100/qty*(level-1)) c
    from data
    connect by level <= qty + 1
    I receive this error:
    WWV_FLOW_UTILITIES.ERR_LOV
    ORA-06550: line 1, column 45: PLS-00428: an INTO clause is expected in this SELECT statement
    ORA-06512: at "SYS.DBMS_SYS_SQL", line 1249
    ORA-06512: at "SYS.WWV_DBMS_SQL", line 930
    ORA-06512: at "SYS.WWV_DBMS_SQL", line 999
    ORA-06512: at "APEX_040200.WWV_FLOW_DYNAMIC_EXEC", line 695
    ORA-06512: at "APEX_040200.WWV_FLOW_UTILITIES", line 927
    It's fine when I remove the WITH and place my value inside the rest of the query.
    Expected?
    Application Express 4.2.1.00.08

    Query is fine... worked on it elsewhere first ;-) (see https://forums.oracle.com/forums/message.jspa?messageID=10902901#10902901)
    with data as (Select 4 qty from dual)
    select round(100/qty*(level-1)) perc, round(100/qty*(level-1)) c
    from data
    connect by level <= qty +1
    PERC                   C                     
    0                      0                     
    25                     25                    
    50                     50                    
    75                     75                    
    100                    100

  • Using DataSet with large datasets

    I have a product, like a shirt, that comes in 800 colors.
    I've created an xml file with all the color id's, names and RGB
    codes (5 attributes in all) and this xml file is 5,603 lines long.
    It takes a noticeably long time to load. I'm using the auto-suggest
    widget to then show subsets of this list based on ID or color name.
    Is there an example of a way to connect to a php-driven
    datasource, so I can query a database and return the matches to the
    auto-suggest widget?
    Thanks, Scott

    In my Googling I came across this ColdFusion example:
    http://www.brucephillips.name/blog/index.cfm/2007/3/31/Use-Sprys-New-Auto-Suggest-Widget-To-Handle-Large-Numbers-of-Suggestions

  • Pivot - Performance Issue with large dataset

    Hello,
    Database version : Oracle 10.2.0.4 - Linux
    I'm using a function that returns a pivot query depending on an input "RUN_ID" value.
    For example, consider two different "RUN_ID" values (e.g. 119 and 120) with exactly the same dataset.
    I have a performance issue when I run the resulting query with "RUN_ID" = 120.
    Pivot:
    SELECT   MAX (a.plate_index), MAX (a.plate_name), MAX (a.int_well_id),
             MAX (a.row_index_alpha), MAX (a.column_index), MAX (a.is_valid),
             MAX (a.well_type_id), MAX (a.read_index), MAX (a.run_id),
             MAX (DECODE (a.value_type || a.value_index, 'CALC190', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC304050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC306050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC30050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC3011050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC104050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC106050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC10050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC1011050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC204050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC206050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC20050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC2011050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC80050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'CALC70050301', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'RAW0', a.this_value, NULL)),
             MAX (DECODE (a.value_type || a.value_index, 'RAW5030', a.this_value, NULL)),
             MAX (a.dose), MAX (a.unit), MAX (a.int_plate_id), MAX (a.run_name)
        FROM vw_well_data a
       WHERE a.run_id = :app_run_id
    GROUP BY a.int_well_id, a.read_index
    Run the query :
    SELECT Sql_FullText,(cpu_time/100000) "Cpu Time (s)",
                    (elapsed_time/1000000) "Elapsed time (s)",
                    fetches,buffer_gets,disk_reads,executions
    FROM v$sqlarea
    WHERE Parsing_Schema_Name ='SCHEMA';
    With results :
    SQL_FULLTEXT     Cpu Time (s)     Elapsed time (s)     FETCHES     BUFFER_GETS     DISK_READS     EXECUTIONS
    query1 (RUN_ID=119)      22.15857     3.589822     1     2216     354     1
    query2 (RUN_ID=120)      1885.16959     321.974332     3     7685410     368     3
    Explain Plan for RUN_ID 119
    PLAN_TABLE_OUTPUT
    Plan hash value: 3979963427
    | Id  | Operation                          | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                      |   261 | 98397 |   434   (2)| 00:00:06 |
    |   1 |  HASH GROUP BY                     |                      |   261 | 98397 |   434   (2)| 00:00:06 |
    |   2 |   VIEW                             | VW_WELL_DATA         |   261 | 98397 |   433   (2)| 00:00:06 |
    |   3 |    UNION-ALL                       |                      |       |       |            |          |
    |*  4 |     HASH JOIN                      |                      |   252 | 21168 |   312   (2)| 00:00:04 |
    |   5 |      NESTED LOOPS                  |                      |   249 | 15687 |   112   (2)| 00:00:02 |
    |*  6 |       HASH JOIN                    |                      |   249 | 14442 |   112   (2)| 00:00:02 |
    |   7 |        TABLE ACCESS BY INDEX ROWID | PLATE                |    29 |   464 |     2   (0)| 00:00:01 |
    |*  8 |         INDEX RANGE SCAN           | IDX_PLATE_RUN_ID     |    29 |       |     1   (0)| 00:00:01 |
    |   9 |        NESTED LOOPS                |                      | 13286 |   544K|   109   (1)| 00:00:02 |
    |  10 |         TABLE ACCESS BY INDEX ROWID| RUN                  |     1 |    11 |     1   (0)| 00:00:01 |
    |* 11 |          INDEX UNIQUE SCAN         | PK_RUN               |     1 |       |     0   (0)| 00:00:01 |
    |  12 |         TABLE ACCESS BY INDEX ROWID| WELL                 | 13286 |   402K|   108   (1)| 00:00:02 |
    |* 13 |          INDEX RANGE SCAN          | IDX_WELL_RUN_ID      | 13286 |       |    46   (0)| 00:00:01 |
    |* 14 |       INDEX UNIQUE SCAN            | PK_WELL_TYPE         |     1 |     5 |     0   (0)| 00:00:01 |
    |  15 |      TABLE ACCESS BY INDEX ROWID   | WELL_RAW_DATA        | 26361 |   540K|   199   (2)| 00:00:03 |
    |* 16 |       INDEX RANGE SCAN             | IDX_WELL_RAW_RUN_ID  | 26361 |       |    92   (2)| 00:00:02 |
    |  17 |     NESTED LOOPS                   |                      |     9 |   891 |   121   (2)| 00:00:02 |
    |* 18 |      HASH JOIN                     |                      |     9 |   846 |   121   (2)| 00:00:02 |
    |* 19 |       HASH JOIN                    |                      |   249 | 14442 |   112   (2)| 00:00:02 |
    |  20 |        TABLE ACCESS BY INDEX ROWID | PLATE                |    29 |   464 |     2   (0)| 00:00:01 |
    |* 21 |         INDEX RANGE SCAN           | IDX_PLATE_RUN_ID     |    29 |       |     1   (0)| 00:00:01 |
    |  22 |        NESTED LOOPS                |                      | 13286 |   544K|   109   (1)| 00:00:02 |
    |  23 |         TABLE ACCESS BY INDEX ROWID| RUN                  |     1 |    11 |     1   (0)| 00:00:01 |
    |* 24 |          INDEX UNIQUE SCAN         | PK_RUN               |     1 |       |     0   (0)| 00:00:01 |
    |  25 |         TABLE ACCESS BY INDEX ROWID| WELL                 | 13286 |   402K|   108   (1)| 00:00:02 |
    |* 26 |          INDEX RANGE SCAN          | IDX_WELL_RUN_ID      | 13286 |       |    46   (0)| 00:00:01 |
    |  27 |       TABLE ACCESS BY INDEX ROWID  | WELL_CALC_DATA       |   490 | 17640 |     9   (0)| 00:00:01 |
    |* 28 |        INDEX RANGE SCAN            | IDX_WELL_CALC_RUN_ID |   490 |       |     4   (0)| 00:00:01 |
    |* 29 |      INDEX UNIQUE SCAN             | PK_WELL_TYPE         |     1 |     5 |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("WELL_RAW_DATA"."RUN_ID"="WELL"."RUN_ID" AND
                  "WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
       6 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
       8 - access("PLATE"."RUN_ID"=119)
      11 - access("RUN"."RUN_ID"=119)
      13 - access("WELL"."RUN_ID"=119)
      14 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
      16 - access("WELL_RAW_DATA"."RUN_ID"=119)
      18 - access("WELL"."RUN_ID"="WELL_CALC_DATA"."RUN_ID" AND
                  "WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
      19 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
      21 - access("PLATE"."RUN_ID"=119)
      24 - access("RUN"."RUN_ID"=119)
      26 - access("WELL"."RUN_ID"=119)
      28 - access("WELL_CALC_DATA"."RUN_ID"=119)
      29 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
    Explain Plan for RUN_ID 120
    PLAN_TABLE_OUTPUT
    Plan hash value: 599334230
    | Id  | Operation                           | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                           |     2 |   754 |    24   (5)| 00:00:01 |
    |   1 |  HASH GROUP BY                      |                           |     2 |   754 |    24   (5)| 00:00:01 |
    |   2 |   VIEW                              | VW_WELL_DATA              |     2 |   754 |    23   (0)| 00:00:01 |
    |   3 |    UNION-ALL                        |                           |       |       |            |          |
    |*  4 |     TABLE ACCESS BY INDEX ROWID     | WELL_RAW_DATA             |     1 |    21 |     3   (0)| 00:00:01 |
    |   5 |      NESTED LOOPS                   |                           |     1 |    84 |     9   (0)| 00:00:01 |
    |   6 |       NESTED LOOPS                  |                           |     1 |    63 |     6   (0)| 00:00:01 |
    |   7 |        NESTED LOOPS                 |                           |     1 |    58 |     6   (0)| 00:00:01 |
    |   8 |         NESTED LOOPS                |                           |     1 |    27 |     3   (0)| 00:00:01 |
    |   9 |          TABLE ACCESS BY INDEX ROWID| RUN                       |     1 |    11 |     1   (0)| 00:00:01 |
    |* 10 |           INDEX UNIQUE SCAN         | PK_RUN                    |     1 |       |     0   (0)| 00:00:01 |
    |  11 |          TABLE ACCESS BY INDEX ROWID| PLATE                     |     1 |    16 |     2   (0)| 00:00:01 |
    |* 12 |           INDEX RANGE SCAN          | IDX_PLATE_RUN_ID          |     1 |       |     1   (0)| 00:00:01 |
    |* 13 |         TABLE ACCESS BY INDEX ROWID | WELL                      |     1 |    31 |     3   (0)| 00:00:01 |
    |* 14 |          INDEX RANGE SCAN           | IDX_WELL_RUN_ID           |    59 |       |     2   (0)| 00:00:01 |
    |* 15 |        INDEX UNIQUE SCAN            | PK_WELL_TYPE              |     1 |     5 |     0   (0)| 00:00:01 |
    |* 16 |       INDEX RANGE SCAN              | IDX_WELL_RAW_DATA_WELL_ID |     2 |       |     2   (0)| 00:00:01 |
    |* 17 |     TABLE ACCESS BY INDEX ROWID     | WELL_CALC_DATA            |     1 |    36 |     8   (0)| 00:00:01 |
    |  18 |      NESTED LOOPS                   |                           |     1 |    99 |    14   (0)| 00:00:01 |
    |  19 |       NESTED LOOPS                  |                           |     1 |    63 |     6   (0)| 00:00:01 |
    |  20 |        NESTED LOOPS                 |                           |     1 |    58 |     6   (0)| 00:00:01 |
    |  21 |         NESTED LOOPS                |                           |     1 |    27 |     3   (0)| 00:00:01 |
    |  22 |          TABLE ACCESS BY INDEX ROWID| RUN                       |     1 |    11 |     1   (0)| 00:00:01 |
    |* 23 |           INDEX UNIQUE SCAN         | PK_RUN                    |     1 |       |     0   (0)| 00:00:01 |
    |  24 |          TABLE ACCESS BY INDEX ROWID| PLATE                     |     1 |    16 |     2   (0)| 00:00:01 |
    |* 25 |           INDEX RANGE SCAN          | IDX_PLATE_RUN_ID          |     1 |       |     1   (0)| 00:00:01 |
    |* 26 |         TABLE ACCESS BY INDEX ROWID | WELL                      |     1 |    31 |     3   (0)| 00:00:01 |
    |* 27 |          INDEX RANGE SCAN           | IDX_WELL_RUN_ID           |    59 |       |     2   (0)| 00:00:01 |
    |* 28 |        INDEX UNIQUE SCAN            | PK_WELL_TYPE              |     1 |     5 |     0   (0)| 00:00:01 |
    |* 29 |       INDEX RANGE SCAN              | IDX_WELL_CALC_RUN_ID      |   486 |       |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("WELL_RAW_DATA"."RUN_ID"=120)
      10 - access("RUN"."RUN_ID"=120)
      12 - access("PLATE"."RUN_ID"=120)
      13 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
      14 - access("WELL"."RUN_ID"=120)
      15 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
      16 - access("WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
      17 - filter("WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
      23 - access("RUN"."RUN_ID"=120)
      25 - access("PLATE"."RUN_ID"=120)
      26 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
      27 - access("WELL"."RUN_ID"=120)
      28 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
      29 - access("WELL_CALC_DATA"."RUN_ID"=120)I need some advice to understand the issue and to improve the performance.
    Thanks,
    Grégory

    Hello,
    Thanks for your response.
    Stats were computed recently with the DBMS_STATS package (case 2), and we have a histogram on the 'RUN_ID' columns.
    I tried the deprecated "analyze" method (case 1) and obtained better results!
    DECLARE
       -- Get tables used in the view vw_well_data --
       CURSOR c1
       IS
          SELECT table_name, last_analyzed
            FROM user_tables
           WHERE table_name LIKE 'WELL%';
    BEGIN
       FOR r1 IN c1
       LOOP
          -- Case 1 : Analyze method : Perf is good --
          EXECUTE IMMEDIATE    'analyze table '
                              || r1.table_name
                              || ' compute statistics ';
          -- Case 2 : DBMS_STATS --
          DBMS_STATS.gather_table_stats ('SCHEMA', r1.table_name);
       END LOOP;
    END;
    The explain plans are the same as before.
    Any explanations, suggestions ?
    Thanks,
    Gregory

  • Were do we report problems with wrong patches being recommended by UM?

    I guess for the most part is bad metadata that these patches make it down to us.
    But what's the best procedure to get these to be looked at ?
    Also, if we have a support contract, should we do it via those channels ?
    TIA...

    If you have a support contract and are being recommended incorrect patches, please raise a case and we will have the patch metadata looked at.

  • To update large dataset in columnar database (Sybase IQ)

    Hi,
    I want to update a column with random values in Sybase IQ. The number of rows is very large (approx 2 crore, i.e. 20 million).
    I have created a procedure using a cursor.
    It works fine with a small dataset but has a performance issue with a large dataset.
    Is there a workaround for this issue?
    regards,
    Neha Khetan

    Hi Eugene,
    "Is it possible to implement this in BDB JE somehow?" Yes, you can create a new separate database for storing the sets of integers. Each record in this database would be one partition (e.g., 1001-2000) for one record in the "main" database.
    The key to this database would be a two part key:
    - the key to the "main" database, followed by
    - the beginning partition value (e.g., 1001)
    For example:
    Main Database:
      Key     Data
       X      string/integer parameters for X
       Y      string/integer parameters for Y
    Integer Partition Database:
      Key     Data
      X,1     Set of integers in range 1-1000 for X
      X,1001  Set of integers in range 1001-2000 for X
      Y,1     Set of integers in range 1-1000 for Y
      Y,1001  Set of integers in range 1001-2000 for Y
       ...
    Two-part keys are easy to implement with a tuple binding. You simply read/write the two fields for the record key, one after another, in the same way that you read/write multiple fields in the record data. (An in-memory sketch of this key layout follows this message.)
    Mark
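    For illustration only: BDB JE itself is Java, but this in-memory C# sketch (hypothetical class and method names) mirrors the (main key, partition start) key layout Mark describes:

    using System;
    using System.Collections.Generic;

    class PartitionedIntSets
    {
        private const int PartitionSize = 1000;

        // (main key, partition start) -> set of integers in that partition.
        // SortedDictionary keeps all partitions for the same main key
        // adjacent, like records in the partition database.
        private readonly SortedDictionary<Tuple<string, int>, HashSet<int>> _parts =
            new SortedDictionary<Tuple<string, int>, HashSet<int>>();

        private static int PartitionStart(int value)
        {
            // 1, 1001, 2001, ... as in the example above
            return ((value - 1) / PartitionSize) * PartitionSize + 1;
        }

        public void Add(string mainKey, int value)
        {
            var key = Tuple.Create(mainKey, PartitionStart(value));
            HashSet<int> set;
            if (!_parts.TryGetValue(key, out set))
            {
                _parts[key] = set = new HashSet<int>();
            }
            set.Add(value);
        }

        public bool Contains(string mainKey, int value)
        {
            HashSet<int> set;
            return _parts.TryGetValue(Tuple.Create(mainKey, PartitionStart(value)), out set)
                   && set.Contains(value);
        }
    }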

  • Bug report - Finder crashes in Cover Flow mode with large files

    I just came back from the Apple store and was told that I have discovered another bug in Leopard. When in Cover Flow view in Finder and trying to browse directories with large (multiple GB) files, Finder continually crashes and relaunches, oftentimes with 2 new Finder windows.
    I created a new user on my MBP to remove the effect of any preferences and the problem repeated itself.
    Come on, Apple... get on top of these bugs and issue some patches...
    Not the kind of new OS I expected from a top notch company like Apple.

    Ah... that'll be it then; they are 512 x 512. Well, I guess that bug's been hanging around for quite some time now. Anyone got any ideas whether anyone's trying to fix it? I guess making 128 x 128 icons wouldn't be the end of the world, but it does seem like a step backwards.
    One thing that still confuses me... an icon file contains icons of all sizes, so why does Cover Flow not select the right size icon to display?
    thanks for that info V.K., much obliged.
    regards, Pablo.

  • Bug report: Mail sends messages with empty bodies

    Over the last year, I have experienced a particularly irritating bug in Mail.app at least a dozen times. I finally have a good idea as to what causes it.
    The problem involves long email messages (often with attachments) that end up being sent with blank bodies (and no attachments). Even the copy in the "Sent" folder ends up blank, and several minutes or hours of work vanishes into thin air, not to be seen ever again.
    I finally realized that this bug only occurs when sending mail through our work SMTP server while outside the work firewall, and only as a result of a certain sequence of events. Here is what happens:
    When we connect to our work SMTP server from outside the local network and without going through the VPN, the SMTP server requires password authentication. If the current SMTP selection in Mail.app is the one that does not require authentication, the SMTP server rejects the message. At that point, Mail.app opens the email I am trying to send and brings up a modal dialog that says "Cannot send message using the server xxx.xxx -- The server response was: xxx@xxx relaying prohibited. You should authenticate first." The dialog also presents a drop-down list of SMTP server choices. I choose the password-authenticated version of the server and then click on "Use Selected Server" to send the message.
    This works almost all the time, but on occasion it ends up sending a blank message! If I have a long email, particularly with attachments such as PDFs that are rendered in the body of the message, it takes a few seconds for the mail message to be rendered underneath the modal dialog box. Since I am used to this SMTP rejection behavior, sometimes I am too fast to choose another SMTP server from the list and click on "Use Selected Server" before the mail message is rendered on screen! The result, invariably, is a blank email message that gets sent.
    I guess what is happening is that when the SMTP server rejects the message and hands it back to Mail.app, the message gets copied into a buffer in order to be displayed on screen. Selecting another server and resending it immediately (before the message is copied into the buffer completely) causes the message body to get trashed.
    I hope that this description is adequate for Apple QA folks to replicate and isolate the problem (and hopefully fix it). One solution (although not the most elegant one) would be to disable the "Use Selected Server" action until the message is copied into the buffer and rendered on screen.

    This could be related to another bug reported here recently:
    E-mail looses all images if mail server doesn't accept outgoing email...
    You cannot count on Apple looking into this or even noticing it if you report it here, so I suggest you the same I suggested in the other thread, i.e. report it in one of the following places:
    http://www.apple.com/macosx/feedback/
    http://developer.apple.com/bugreporter/

  • Attribute TotalLines bug when used with ListView

    After populating a listView with 50 nodes, MyListView.TotalLines will equal
    51.
    MyListView.GetViewNodes().Items gives the correct answer. Use that instead.

    In reply to all fellow Apple fans: it is great that someone found a work-around to the iTunes 11.4 no-syncing problem. I think that is fine for the short term, and I am really glad that we customers have a forum to share information and experiences regarding Apple's products. However, as someone here said, we are all customers; and posting a problem here "IN HOPES THAT APPLE READS THESE POSTS" is NOT a way to resolve this issue or any issue.
    PLEASE CONTACT APPLE: Apple Service Center: 1-800-275-2273   - Once you get a ticket number then and only then is this documented and then Apple can follow-through and alert the engineers who will fix this.
    The "CWA2" work-around band-aid does several things in addition to reverting back to iTunes 11.3.1. :
    1) It ensures that Apple continues to know nothing of the problem.
    2) It ensures that the faulty code that is within 11.4 remains there and is perpetuated into future versions.
    3) It ensures that YOU can never upgrade to future versions. (If the resident faulty code remains and perpetuates)
    So once you have reverted back to 11.3.1, the problem APPEARS to have gone away (FOR YOU) but you have not solved the problem.
    Again, respectfully,
    1) Sending feedback to Apple (using the "Report Bugs To Apple" menu item in Safari) will get the information to someone, somewhere, and add it to the pile of issues, complaints, suggestions, etc.
    2) CALLING the Apple service/support Center at  1-800-275-2273 will put you in touch with a real APPLE person who will begin the process of documenting and forwarding this issue on to those who will fix it.
    and
    3) emailing Apple directly will make sure that they get the information.
    Let's get this fixed!
