Using DataSet with large datasets

I have a product, like a shirt, that comes in 800 colors.
I've created an XML file with all the color IDs, names, and RGB
codes (5 attributes in all), and this XML file is 5,603 lines long.
It takes a noticeably long time to load. I'm using the auto-suggest
widget to show subsets of this list based on ID or color name.
Is there an example of a way to connect to a PHP-driven
data source, so I can query a database and return only the matches to the
auto-suggest widget?
Thanks, Scott

In my Googling I came across this ColdFusion example:
http://www.brucephillips.name/blog/index.cfm/2007/3/31/Use-Sprys-New-Auto-Suggest-Widget-To-Handle-Large-Numbers-of-Suggestions
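The gist of that ColdFusion example is server-side filtering: the widget requests a URL with the typed prefix and gets back only the matching rows as XML, instead of loading all 800 colors up front. As a rough, untested sketch of the server half (the thread contains no PHP code, so this is shown in Java; the servlet, table, and parameter names are hypothetical):

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical endpoint: returns only the colors whose name starts with
    // the typed prefix, so the widget never loads the whole 800-color list.
    public class ColorSuggestServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String prefix = req.getParameter("q");
            if (prefix == null) prefix = "";
            resp.setContentType("text/xml");
            PrintWriter out = resp.getWriter();
            out.println("<colors>");
            try (Connection con = DriverManager.getConnection("jdbc:...");  // connection string elided
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT id, name, rgb FROM colors WHERE name LIKE ? LIMIT 20")) {
                ps.setString(1, prefix + "%");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // Real code should XML-escape these values.
                        out.println("<color id=\"" + rs.getString("id")
                            + "\" name=\"" + rs.getString("name")
                            + "\" rgb=\"" + rs.getString("rgb") + "\"/>");
                    }
                }
            } catch (java.sql.SQLException e) {
                throw new IOException(e);
            }
            out.println("</colors>");
        }
    }

A PHP script would follow the same shape: read the query parameter, run a parameterized LIKE query, and echo the matches as the XML the Spry dataset expects.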

Similar Messages

  • Using Spry with large amounts of data

I was wondering if it is plausible to use Spry's auto-suggest
feature with queries that return around 18,000 rows of data.
What are the limitations of using Spry with this kind of
overhead?
My current situation:
I currently have a CFC that returns a query, which is then
converted to XML using a toXML function and then loaded into Spry.
At this time the XML that is created only contains around 500
rows, but when this is all hooked up to Spry's auto-suggest feature
I am experiencing very slow load times.
What am I doing wrong? My end goal is a lookup across over
18,000 employees.

In your text box, include an onChange attribute (I think it's
supported) like this:
onChange="yourCustomCall(yourformid)"
Then each time the text box is changed, a new Spry call
will be made to the server.
Be warned though: this, like all other AJAX calls, can lead
to very high server load. Each time you make a change in that text
box, you are making a CF call. Be sure your environment can scale
to handle that load if it is a high-use site.

  • Using workspaces with large tables

    Hello
I've got a few large tables (6-10GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch to a near-real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent IDs, and a process will consume those requests throughout the day rather than going through the whole list in one go.
I need to provide views of the data as of a point in time, i.e. what the content of the tables was at close of business yesterday, and for this I am considering using workspaces.
I need to keep at least 10 days' worth of data, and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would do with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    I've just spotted the workspace manager forum, I'll post there. :-)

  • Problem using JTextPane with large styled docs

I have an environment where we need to parse out large (5MB) log files, and I'm displaying them with colorized attributes such as logging level, timestamp, etc. The files are read in once and never changed, only displayed - the text is not editable, so we don't care about future updates to the doc (only highlighting text for find/copy). However, we do need the entire document 'available' after load in order to do string searches and certain operations linked to the entry a piece of text is in.
Parsing and building the styled document are relatively fast operations (a couple of seconds), but when I set the document on the JTextPane (which is inside a JScrollPane), it can take an enormous amount of time to display, and sometimes it eats up every resource on the machine, eventually stopping dead without displaying at all (out of memory, maybe?).
I've read similar posts on the forum, but no one has provided a clear-cut solution to this problem. Is there a simple way to make my document display quickly? Alternatively, is there another way to implement this without using JTextPane?
    Any help is much appreciated.
    Thanks,
    lex

    In the past I tried to optimize ParagraphView and SectionView of the kit. Don't know whether it will help in your case. Hope so.
    import javax.swing.text.*;
    import java.awt.Rectangle;
    import javax.swing.SizeRequirements;
    import java.awt.Shape;

    public class OptimizedParagraphView extends ParagraphView {
        public OptimizedParagraphView(Element elem) {
            super(elem);
        }

        protected void layoutMajorAxis(int targetSpan, int axis, int[] offsets, int[] spans) {
            // optimized: just stack the rows at their preferred spans
            int preferred = 0;
            int n = getViewCount();
            for (int i = 0; i < n; i++) {
                View v = getView(i);
                spans[i] = (int) v.getPreferredSpan(axis);
                offsets[i] = preferred;
                preferred += spans[i];
            }
        }

        protected void layoutMinorAxis(int targetSpan, int axis, int[] offsets, int[] spans) {
            // optimized: stretch every row to the target span
            int n = getViewCount();
            for (int i = 0; i < n; i++) {
                View v = getView(i);
                int min = (int) v.getMinimumSpan(axis);
                offsets[i] = 0;
                spans[i] = Math.max(min, targetSpan);
            }
        }

        public int getResizeWeight(int axis) {
            // optimized
            return 0;
        }

        public float getAlignment(int axis) {
            // optimized
            return 0;
        }

        protected View createRow() {
            // optimized
            return new OptimizedRow(getElement());
        }

        class OptimizedRow extends BoxView {
            SizeRequirements minorRequirements;

            OptimizedRow(Element elem) {
                super(elem, View.X_AXIS);
            }

            protected void loadChildren(ViewFactory f) {
                // rows are filled in during layout, not from the element structure
            }

            public AttributeSet getAttributes() {
                View p = getParent();
                return (p != null) ? p.getAttributes() : getElement().getAttributes();
            }

            public float getAlignment(int axis) {
                if (axis == View.X_AXIS) {
                    switch (StyleConstants.getAlignment(getAttributes())) {
                        case StyleConstants.ALIGN_LEFT:
                            return 0;
                        case StyleConstants.ALIGN_RIGHT:
                            return 1;
                        case StyleConstants.ALIGN_CENTER:
                        case StyleConstants.ALIGN_JUSTIFIED:
                            return 0.5f;
                    }
                }
                return 0;
            }

            public Shape modelToView(int pos, Shape a, Position.Bias b) throws BadLocationException {
                Rectangle r = a.getBounds();
                View v = getViewAtPosition(pos, r);
                if ((v != null) && (!v.getElement().isLeaf())) {
                    // Don't adjust the height if the view represents a branch.
                    return super.modelToView(pos, a, b);
                }
                r = a.getBounds();
                int height = r.height;
                int y = r.y;
                Shape loc = super.modelToView(pos, a, b);
                r = loc.getBounds();
                r.height = height;
                r.y = y;
                return r;
            }

            public int getStartOffset() {
                int offs = Integer.MAX_VALUE;
                int n = getViewCount();
                for (int i = 0; i < n; i++) {
                    View v = getView(i);
                    offs = Math.min(offs, v.getStartOffset());
                }
                return offs;
            }

            public int getEndOffset() {
                int offs = 0;
                int n = getViewCount();
                for (int i = 0; i < n; i++) {
                    View v = getView(i);
                    offs = Math.max(offs, v.getEndOffset());
                }
                return offs;
            }

            protected void layoutMinorAxis(int targetSpan, int axis, int[] offsets, int[] spans) {
                baselineLayout(targetSpan, axis, offsets, spans);
            }

            protected SizeRequirements calculateMinorAxisRequirements(int axis, SizeRequirements r) {
                minorRequirements = baselineRequirements(axis, r);
                return minorRequirements;
            }

            protected SizeRequirements baselineRequirements(int axis, SizeRequirements r) {
                if (r == null) {
                    r = new SizeRequirements();
                }
                // use the max of the children's preferred spans for
                // minimum, preferred, and maximum size
                int n = getViewCount();
                int span = 0;
                for (int i = 0; i < n; i++) {
                    View v = getView(i);
                    span = Math.max((int) v.getPreferredSpan(axis), span);
                }
                r.preferred = span;
                r.maximum = span;
                r.minimum = span;
                return r;
            }

            protected int getViewIndexAtPosition(int pos) {
                // This is expensive, but our views are not necessarily laid
                // out in model order.
                if (pos < getStartOffset() || pos >= getEndOffset()) {
                    return -1;
                }
                for (int counter = getViewCount() - 1; counter >= 0; counter--) {
                    View v = getView(counter);
                    if (pos >= v.getStartOffset() && pos < v.getEndOffset()) {
                        return counter;
                    }
                }
                return -1;
            }

            protected short getLeftInset() {
                View parentView;
                int adjustment = 0;
                if ((parentView = getParent()) != null) { // use firstLineIndent for the first row
                    if (this == parentView.getView(0)) {
                        adjustment = firstLineIndent;
                    }
                }
                return (short) (super.getLeftInset() + adjustment);
            }

            protected short getBottomInset() {
                // assumes calculateMinorAxisRequirements has already run
                float lineSpacing = StyleConstants.getLineSpacing(getAttributes());
                return (short) (super.getBottomInset() + minorRequirements.preferred * lineSpacing);
            }

            protected void layoutMajorAxis(int targetSpan, int axis, int[] offsets, int[] spans) {
                // optimized
                int preferred = 0;
                int n = getViewCount();
                for (int i = 0; i < n; i++) {
                    View v = getView(i);
                    spans[i] = (int) v.getPreferredSpan(axis);
                    offsets[i] = preferred;
                    preferred += spans[i];
                }
            }

            public int getResizeWeight(int axis) {
                // optimized
                return 0;
            }
        }
    }

    import javax.swing.text.*;

    public class OptimizedSectionView extends BoxView {
        public OptimizedSectionView(Element elem) {
            super(elem, View.Y_AXIS);
        }

        protected void layoutMajorAxis(int targetSpan, int axis, int[] offsets, int[] spans) {
            // optimized
            int preferred = 0;
            int n = getViewCount();
            for (int i = 0; i < n; i++) {
                View v = getView(i);
                spans[i] = (int) v.getPreferredSpan(axis);
                offsets[i] = preferred;
                preferred += spans[i];
            }
        }

        public int getResizeWeight(int axis) {
            // optimized
            return 0;
        }

        public float getAlignment(int axis) {
            // optimized
            return 0;
        }
    }

    class MyViewFactory implements ViewFactory {
        /**
         * Creates a view for the specified element.
         * @param elem parent element
         * @return created view instance.
         */
        public View create(Element elem) {
            String kind = elem.getName();
            if (kind != null) {
                if (kind.equals(AbstractDocument.ContentElementName)) {
                    return new LabelView(elem);
                } else if (kind.equals(AbstractDocument.ParagraphElementName)) {
                    return new OptimizedParagraphView(elem);
                    // return new ParagraphView(elem);
                } else if (kind.equals(AbstractDocument.SectionElementName)) {
                    return new OptimizedSectionView(elem);
                    // return new BoxView(elem, View.Y_AXIS);
                } else if (kind.equals(StyleConstants.ComponentElementName)) {
                    return new ComponentView(elem);
                } else if (kind.equals(StyleConstants.IconElementName)) {
                    return new IconView(elem);
                }
            }
            // default to text display
            return new LabelView(elem);
        }
    }
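    One way to hook the factory up (a minimal sketch; the kit class name below is mine, not from the original post) is to return it from a StyledEditorKit subclass and install that kit on the JTextPane before setting the document:

    import javax.swing.JTextPane;
    import javax.swing.text.*;

    // Hypothetical wiring class: hands MyViewFactory to the text pane.
    class OptimizedEditorKit extends StyledEditorKit {
        private final ViewFactory factory = new MyViewFactory();

        public ViewFactory getViewFactory() {
            return factory;
        }
    }

    // Usage:
    // JTextPane pane = new JTextPane();
    // pane.setEditorKit(new OptimizedEditorKit());
    // pane.setDocument(largeStyledDocument);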
    regards,
    Stas

  • Hyper-v Live Migration not completing when using VM with large RAM

Hi,
I have a two-node Server 2012 R2 Hyper-V cluster which uses a 100GB CSV and has 128GB RAM across 2 physical CPUs (approx 7.1GB used when the VM is not booted), and 1 VM running Windows 7 which has 64GB RAM assigned. The VHD size is around 21GB and the BIN file is 64GB (by the way, do we have to have that, or can we get rid of the BIN file?).
NUMA is enabled on both servers. When I attempt to live migrate I get event 1155 in the cluster events; the LM starts and gets to 60-something % but then fails. The event details are "The pending move for the role 'New Virtual Machine' did not complete."
However, when I lower the amount of RAM assigned to the VM to around 56GB (56+7 = 63GB) the LM seems to work, and any amount of RAM below this allows LM to succeed. But it seems that if the total used RAM on the physical server (including that used for the VMs) is 64GB or above, the LM fails... coincidence, since the server has 64GB per CPU?
Why would this be?
    many thanks
    Steve

Hi,
I turned NUMA spanning off on both servers in the cluster. I assigned 62GB, 64GB and 88GB, and each time the VM started up with no problems. With 62GB LM completed, but I can't get LM to complete with 64GB+.
My server is an HP DL380 G8 with the latest BIOS (I just updated it today as it was a couple of months behind). I can't see any settings in the BIOS relating to NUMA, so I'm guessing it is enabled and can't be changed.
If I run the cmdlet as admin I get ProcessorsAvailability : {0, 0, 0, 0...}
If I run it as a standard user I get ProcessorsAvailability
My memory and CPU config are as follows. Hyper-threading is enabled for the CPU, but I don't think that would make a difference?
    Processor 1 1 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 4 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 9 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 12 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 1 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 4 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 9 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 12 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
Processor 1
Processor Name: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
Processor Status: OK
Processor Speed: 2400 MHz
Execution Technology: 12/12 cores; 24 threads
Memory Technology: 64-bit Capable
Internal L1 cache: 384 KB
Internal L2 cache: 3072 KB
Internal L3 cache: 30720 KB
Processor 2
Processor Name: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
Processor Status: OK
Processor Speed: 2400 MHz
Execution Technology: 12/12 cores; 24 threads
Memory Technology: 64-bit Capable
Internal L1 cache: 384 KB
Internal L2 cache: 3072 KB
Internal L3 cache: 30720 KB
    thanks
    Steve

  • Use oraDav with large files (1GB+)?

My customer has many terabytes of images, many of which are over 1GB in size. Using interMedia w/oraDav looks very interesting. For example, the users could access the images from a client/server application via WebFolders from their Windows workstations. A concern is that the users could not begin browsing an image without downloading the entire image. Does oraDav respect HTTP byte-range requests (the Range/Content-Range headers)? This would allow the client application to randomly access portions of an image file.

    I believe that the answer is yes. Have you tried it?
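One quick way to check it yourself (a minimal sketch; the URL is a placeholder) is to send a request with a Range header and see whether the server answers 206 Partial Content with a Content-Range header, rather than 200 with the whole file:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RangeProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; point it at a file served through oraDav.
            URL url = new URL("http://server/dav/images/big.tif");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Ask for just the first 1 KB of the file.
            conn.setRequestProperty("Range", "bytes=0-1023");
            // 206 (Partial Content) means byte ranges are honored;
            // 200 means the server ignored the Range header.
            System.out.println(conn.getResponseCode());
            System.out.println(conn.getHeaderField("Content-Range"));
        }
    }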

  • Is anyone working with large datasets ( 200M) in LabVIEW?

I am working with external Bioinformatics databases and find the datasets to be quite large (2 files easily come out at 50M or more). Is anyone working with large datasets like these? What is your experience with performance?

Colby, it all depends on how much memory you have in your system. You could be okay doing all that with 1GB of memory, but you still have to take care not to make copies of your data in your program. That said, I would not be surprised if your code could be written so that it would work on a machine with much less RAM by using efficient algorithms. I am not a statistician, but I know that averages and standard deviations can be calculated using a few bytes (even on arbitrary-length data sets). Can't the ANOVA be performed using the standard deviations and means (and other information like the degrees of freedom, etc.)? Potentially, you could calculate all the various bits that are necessary and do the F-test with that information, and never need to have the entire data set in memory at one time.
The tricky part for your application may be getting the desired data at the necessary times from all those different sources. I am usually working with files on disk where I grab x samples at a time, perform the statistics, dump the samples and get the next set, and repeat as necessary. I can calculate the average of an arbitrary-length data set easily by loading only one sample at a time from disk (it's still more efficient to work in small batches because the disk I/O overhead builds up).
Let me use the calculation of the mean as an example (hopefully the notation makes sense): see the JPG. What this means in plain English is that the mean can be calculated solely as a function of the current data point, the previous mean, and the sample number. For instance, given the data set [1 2 3 4 5], sum it and divide by 5, and you get 3. Or take it a point at a time: the average of [1]=1, [2+1*1]/2=1.5, [3+1.5*2]/3=2, [4+2*3]/4=2.5, [5+2.5*4]/5=3. This second method requires far more multiplications and divisions, but it only ever requires remembering the previous mean and the sample number, in addition to the new data point. Using this technique, I can find the average of gigs of data without ever needing more than three doubles and an int32 in memory. A similar derivation can be done for the variance, but it's easier to look it up (I can provide it if you have trouble finding it). Also, I think this functionality is built into the LabVIEW point-by-point statistics functions.
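To make that concrete, here is the same running-mean idea as a small Java sketch (my own illustration of the formula described above, since the attached JPG isn't reproduced in this thread); the variance can be accumulated in the same single-pass style:

    // Running mean: only the previous mean and the sample count stay in memory,
    // no matter how long the data set is.
    public class RunningMean {
        private double mean = 0.0;
        private int n = 0;

        public void add(double x) {
            n++;
            // new mean = (new point + previous mean * (n - 1)) / n
            mean = (x + mean * (n - 1)) / n;
        }

        public double mean() { return mean; }

        public static void main(String[] args) {
            RunningMean rm = new RunningMean();
            for (double x : new double[] {1, 2, 3, 4, 5}) rm.add(x);
            System.out.println(rm.mean()); // prints 3.0, as in the worked example
        }
    }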
    I think you can probably get the data you need from those db's through some carefully crafted queries, but it's hard to say more without knowing a lot more about your application.
    Hope this helps!
    Chris
    Attachments:
    Mean Derivation.JPG ‏20 KB

  • Database login windows when using DataSet ( CR2008 )

After executing this code, I get a database login window when using a DataSet. I'm wondering why I am asked for a user/password when using DataSets.

    Thanks Jonathan for the tip. I tried it before and after the setDatabaseLogin call yesterday, but got the same results. I'm still going to follow your suggestion because that makes more sense.
    Turns out it was the configuration of my ODBC/System DSN that was causing the problem. I reconfigured the System DSN by clearing the check box to provide username/password and now I don't get the Database Login anymore.
    I found a different blog post that answered the OLE DB issue I was seeing where the DatabaseName was coming up blank:
    Login Error Database at Runtime
    Haven't tried it yet and probably won't because I got the Viewer to work with ODBC the way I like it.
    Thanks for the help!

  • Writing the file to a specific directory on server using DATASET

    Hello Friends
I know that using the DATASET command we can create a file, write to a file, read from a file, etc.
But by default it uses the directory DIR_SAPUSERS, which is the ./ directory.
This particular directory DIR_SAPUSERS also contains other system-related log files, etc.
Is there a way to make the DATASET command write to a user-created directory instead of this ./ ?
    Any suggestions or comments will be highly appreciated.
    Thanks
    Ram

IF NOT p_ufile IS INITIAL.
  " Open the file at the caller-supplied path for reading.
  OPEN DATASET p_ufile FOR INPUT IN TEXT MODE.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
ENDIF.
Here p_ufile contains the path wherever you need to place the file, i.e. use a specific path, e.g. "usr/sap/bin/interface".

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
    CursoredStream cursor = null;
    try {
        Session session = getTransaction();
        int batchSize = 50;
        ReadAllQuery raq = getQuery();
        raq.useCursoredStream(batchSize, batchSize);
        int num = 0;
        ArrayList<Request> limitRequests = null;
        int totalLimitRequest = 0;
        cursor = (CursoredStream) session.executeQuery(raq);
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            if (num == 0) {
                limitRequests = new ArrayList<Request>(batchSize);
            }
            limitRequests.add(request);
            totalLimitRequest++;
            num++;
            if (num >= batchSize) {
                log.warn("Migrating batch of " + batchSize + " Requests.");
                updateLimitRequestFillPriceForBatch(limitRequests);
                num = 0;
                cursor.releasePrevious();
            }
        }
        if (num > 0) {
            updateLimitRequestFillPriceForBatch(limitRequests);
        }
    } finally {
        // close the cursor even if a batch fails
        if (cursor != null) {
            cursor.close();
        }
    }
We are committing every 50 records in the unit of work. If we set dontMaintainCache on the ReadAllQuery we get PrimaryKeyExceptions intermittently, and we do not see much difference in the IdentityMap size.
Any suggestions/ideas for dealing with large data sets? Thanks

    Hi,
    If I use read-only classes with CursoredStream and execute the query within UOW, should I be saving any memory?
    I had to use UOW because when I use Session to execute the query I get
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J(OAS10.1.3.4) with BeanManagedTransaction.
    Please suggest.
    Thanks
    -Raam
    Edited by: Raam on Apr 2, 2009 1:45 PM

  • Convert xml into SAP using dataset

    Hi All,
How can I convert XML into an itab using DATASET? Is there any function module available for this conversion? Please give me a sample program (if you have any), and any material as well.
    Thanks,
    Suresh maniarasu

    Hi,
First you need to get the XML file data into SAP using a function module such as one of the following, and then you can populate the data into an internal table:
TEXT_CONVERT_XML_TO_SAP
DMC_CONVERT_XML_TO_TABLE
    or you can use the following classes
    CL_RSRD_CONVERTER_XML
    CL_WDR_XML_CONVERT_UTIL
    CL_EXM_IM_ISHCM_CONV_XML_SAP
    Thank U,
    Jay....

  • Store Procedure execution using dataset

    hi friends
can anybody tell me how to execute an Oracle stored procedure using a DataSet in C#?
    Thnx
    Monika

    Double post?
    execute store Procedure in Oracle

  • The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart.

The function below was giving me the hours difference I wanted, but today it is giving us the following error:
    Msg 535, Level 16, State 0, Line 1
    The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart.
    Please Help..
    ALTER FUNCTION [dbo].[GetHoursExcludingWeekdays](@StartDate datetime2,@EndDate datetime2)
    returns decimal(12,3)
    as
    begin
        if datepart(weekday,@StartDate) = 1
            set @StartDate = dateadd(day,datediff(day,0,@StartDate),1)
        if datepart(weekday,@StartDate) = 7
            set @StartDate = dateadd(day,datediff(day,0,@StartDate),2)
        -- if @EndDate happens on the weekend, set to previous Saturday 12AM
        -- to count all of Friday's hours
        if datepart(weekday,@EndDate) = 1
            set @EndDate = dateadd(day,datediff(day,0,@EndDate),-2)
        if datepart(weekday,@EndDate) = 7
            set @EndDate = dateadd(day,datediff(day,0,@EndDate),-1)
        declare @return decimal(12,3)
        set @return = ((datediff(second,@StartDate,@EndDate)/60.0/60.0) - (datediff(week,@StartDate,@EndDate)*48))
        return @return
    end

You'll get this error if the difference between the start and end dates is greater than about 68 years, because DATEDIFF returns a 32-bit integer and the "second" datepart overflows it (2^31 seconds is roughly 68 years). Perhaps the dates are outside the expected range due to a data quality issue.
Taking the advice from the error message, you could use minutes instead of seconds, as in the version below. This could still overflow if the difference is greater than about 4,000 years, though. You might consider validating the dates and returning NULL if they are outside expected limits.
    ALTER FUNCTION [dbo].[GetHoursExcludingWeekdays](@StartDate datetime2,@EndDate datetime2)
    returns decimal(12,3)
    as
    begin
        if datepart(weekday,@StartDate) = 1
            set @StartDate = dateadd(day,datediff(day,0,@StartDate),1)
        if datepart(weekday,@StartDate) = 7
            set @StartDate = dateadd(day,datediff(day,0,@StartDate),2)
        -- if @EndDate happens on the weekend, set to previous Saturday 12AM
        -- to count all of Friday's hours
        if datepart(weekday,@EndDate) = 1
            set @EndDate = dateadd(day,datediff(day,0,@EndDate),-2)
        if datepart(weekday,@EndDate) = 7
            set @EndDate = dateadd(day,datediff(day,0,@EndDate),-1)
        declare @return decimal(12,3)
        set @return = ((datediff(minute,@StartDate,@EndDate)/60.0) - (datediff(week,@StartDate,@EndDate)*48))
        return @return
    end
    GO
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Using SIT connection manager with large simulink models is extremely slow

    Hello,
I'm trying to use a large Simulink model in LabVIEW, but once the DLL is correctly generated and the SIT connection manager is invoked to explore the model sinks, sources and parameters, it takes hours to generate the model connections tree. Afterwards, when the connections tree is completed, it is impossible to handle because every operation performed takes a lot of time and memory (e.g. expanding a block to see which parameters are inside).
    The version of SIT I'm using is 2.0.3 with LabVIEW 7.1.
    Is there anybody experienced with large simulink models and SIT?
    Thanks and regards.
    Ignacio Sánchez Peral
    SENER Ingeniería y Sistemas S.A.
    Control Systems Section
    Aerospace Division
    C/ Severo Ochoa, 4 (PTM).
    28760 Tres Cantos (Madrid) Spain.
    [email protected]
    Tel + 34 91 807 74 34
    Fax + 34 91 807 72 08
    http://www.sener.es

    The VI in the Driver VI called SIT Initialize Model.vi has an input called time step (sec) (-1: use model time step) which does what you want it to. It doesn't actually affect the time step of the solver used in the built model DLL, just the rate at which the main base rate loop actually runs in real time. In fact, base rate loop period would be a better name for this control. If you set it, you won't alter your model, but you will be able to adjust the rate of the base rate loop.
    Simply create a control from this input terminal on your driver VI and fill in an appropriate period in seconds. Make sure to set this value as default in the control so that the Driver VI remembers it.
    You will have to take into account that your model will still think it's running at the time step it was compiled at. So your model simulation time and the actual wall clock time won't match up.
    Jarrod S.
    National Instruments

  • How to avoid using dataset as data source

    Hi all,
Using a dataset makes producing the report slow. Could anyone tell me how to use a stored procedure directly as the data source?
    Thank you very much
    Clara

You may also want to determine where exactly the slow part of this is happening. Remember that getting a report involves a number of steps other than connecting and retrieving data from the datasource.
E.g.:
Load of the runtime, load of the report, processing of the report, formatting of the report, etc.
If you do find that datasets are the issue, then Don's suggestion is excellent.
    Ludek

  • Anyone using 3.5.1 with large JVMs?  Thoughts?

    Hello all,
I'm very interested in the possibilities with large-sized storage nodes. We've been running 3.4.1 for about a year or so, with 2.5GB JVMs and rock-solid performance after some careful GC tuning. What I've observed is that the app needs ~800MB of the heap for itself, leaving 1.7GB for storage. We're running 24 nodes, with that costing us about 19GB of potential storage space (800MB * 24). If we could run, for example, 1 node (fictitious example) with a 60GB JVM, using 800MB for the application, we'd end up with ~60GB of storage space vs. ~40GB. That's a lot more room :)
    Obviously, GC tuning is going to be different with a JVM of this size.
    Has anyone had any experience with large JVMs? What OS? What GC params did you use?
    I'll be doing some testing in the next few weeks and will post back results, but wanted to see what kinds of results others were having.
    Thanks for your time.
    chris

Hi Chris,
My first observation about 3.5 is the increased memory overhead for the distributed cache scheme.
On a 32-bit JVM, Coherence 3.4 overhead is about 116 bytes per entry and 3.5 overhead is 164 bytes per entry (in both cases a lower bound).
I'm working with a large number of quite small objects, ~80 bytes (using POF), so for me the additional 48 bytes per entry are very significant.
I believe it is a trade-off to speed up the partition migration process.
Unfortunately I cannot say anything about performance; I didn't test it.
I'm looking forward to the results of your tests.
Regards,
Alexey

    Hi All, I have developed a 3 level approval workflow using soa composites. After second level, the request status goes to "Alerted" and the assignee is "XELSYSADM". What is this status all about. I cannot search for this request id using xelsysadm ac