64-bit LabVIEW - still major problems with large data sets

Hi Folks -
I have LabVIEW 2009 64-bit running on a Win7 64-bit OS with an Intel Xeon dual quad-core processor and 16 gbytes of RAM.  With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2 - 3 gbyte range in RAM, since we now have access to all of the available RAM.  But I am having major problems - sluggish operation (and outright stoppages) of the program, inability to perform certain operations, etc.
Here is how I store the 3-D data, which consists of a series of images. I store each of my 2-D images in a cluster, and then have the entire image series as an array of these clusters.  I then store this entire array of clusters in a queue, which I regularly access using 'Preview Queue', and then operate on the image set, subsets of the images, or single images.
Then I enqueue: (block diagram omitted)
I remember talking to LabVIEW R&D years ago, and being told this was a good way to do things because it allows non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3-D array without the clusters).  (R&D - this is what I remember; please correct me if wrong.)
Because the program becomes tremendously slow after these large data sets are loaded - and, I think, because of disk access to obtain memory beyond 16 gbytes - I am wondering if I need a different storage strategy that will allow seamless program operation while still using RAM storage (I do not want to have to recall images from disk).
I have other CT imaging programs that are running very well with these large data sets.
This is a critical issue for me as I move forward with LabVIEW in this application.  I would like to work with LabVIEW R&D to solve it.  I am wondering if I should be thinking about establishing, say, 10 queues instead of 1 to address this.  It would mean a major program rewrite.
Sincerely,
Don

First, I want to add that this strategy works reasonably well for data sets in the 600 - 700 mbyte range with the 64-bit LabVIEW. 
With LabVIEW 32-bit, 100 - 200 mbyte sets were about the limit before I experienced problems.
So I definitely noticed an improvement.
I use the queuing strategy to move this large amount of data in RAM.  We could have used other means, such as LV2 globals.  But clustering each 2-D array (image) and then holding a series of those clustered arrays in an array (the final structure I showed in my diagram), versus using a 3-D array, is what I believe allowed me to get even this far using RAM instead of recalling the images from disk.
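As a rough sketch of that layout difference in Java terms (purely illustrative - LabVIEW clusters, queues, and its memory manager are their own mechanisms, and every name here is made up): a single 3-D array needs one contiguous block, while a series of separately allocated 2-D images can be satisfied from many smaller free regions of the heap.

import java.util.ArrayList;
import java.util.List;

public class ImageSeries
{
    public static void main( String[] args )
    {
        int images = 100, rows = 512, cols = 512; // small demo sizes

        // Contiguous analogy: one flat buffer for the whole volume must be
        // found in a single piece; at CT scale (thousands of larger slices)
        // this becomes a multi-gbyte contiguous request.
        // double[] volume = new double[images * rows * cols];

        // Chunked analogy: each image is its own ~2 mbyte allocation, so no
        // single huge contiguous block is ever required.
        List<double[][]> series = new ArrayList<>( images );
        for ( int i = 0; i < images; i++ )
            series.add( new double[rows][cols] );

        System.out.println( "Stored " + series.size() + " images" );
    }
}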
I am sure data copies are being made - yes, the memory is ballooning to 15 gbytes.  I probably need to have someone examine this code while I am explaining things to them live.  This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem.  In some of my applications, I use the in-place structure for indexing data out of arrays to minimize data copies.  I expect I might have to consider this strategy here as well.  Just a thought.
What I can do is send someone (in the US) a 1.3 - 2.7 gbyte set of image data via large file transfer, and see how they would best advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and how not to make data copies.  The operations that I apply to the images are irrelevant; it is the storage, movement, and extraction that are causing the problems.  I can also show screenshots of how I extract the images (but I have major problems even before I get to that point).
Can someone else comment on how data value references may help here, or how they have helped in one of their applications?  Would using them eliminate copies?  I currently have to wait for the 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
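For anyone comparing notes: the general idea of a by-reference scheme is easy to see outside LabVIEW. Here is a minimal Java analogy (illustrative only - it says nothing about how LabVIEW implements data value references):

public class ByReference
{
    public static void main( String[] args )
    {
        double[][] image = new double[1024][1024]; // one ~8 mbyte buffer

        // Passing the reference copies only the handle, never the payload;
        // the callee edits the caller's buffer in place.
        brighten( image, 0.1 );
        System.out.println( image[0][0] ); // prints 0.1
    }

    // Adds a constant offset to every pixel without copying the image.
    static void brighten( double[][] img, double offset )
    {
        for ( double[] row : img )
            for ( int j = 0; j < row.length; j++ )
                row[j] += offset;
    }
}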
Don

Similar Messages

  • Problem with a data set: DIAdem crashes

    Hi,
    I've got a problem with a data set. When I want to zoom in DIAdem-View, DIAdem crashes with the following message (translated from German ;-):
    error type: FLOAT INEXACT RESULT or FLOAT INVALID OPERATION or FLOAT STACK CHECK
    error address: 00016CB8
    module name: gfsview.DLL
I've got some similar data sets that do not show such problems. I also scanned through the data a bit, but in the 59000 points I didn't see anything special. I did try to delete the "NOVALUE"s as well, but "NOVALUE"s still remain after that.
    Does anyone have an idea what to look for?
    Thanks,
    Carsten

    Carsten,
Could you please upload your Citadel database to the following FTP site:
    ftp.ni.com/incoming
    If you want to compress (ZIP) and/or put a password on the data, that's fine. Please send me a private email at [email protected] (with the file name and password if you put one on the file) once you have uploaded the file and I will check it out.
    Otmar
    Otmar D. Foehner
    Business Development Manager
    DIAdem and Test Data Management
    National Instruments
    Austin, TX - USA
    "For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer is twice bigger than necessary."

  • Working with Large data sets Waveforms

When collecting data at a high rate (30 kHz) and for a long period (120 seconds), I'm unable to rearrange the data due to memory errors. Is there a more efficient method?
    Attachments:
Convert2Dto1D.vi (36 KB)

    Some suggestions:
    Preallocate your final data before you start your calculations.  The build array you have in your loop will tend to fragment memory, giving you issues.
    Use the In Place Element to get data to/from your waveforms.  You can use it to get single waveforms from your 2D array and Y data from a waveform.
Do not use Transpose and autoindexing.  They add a copy of the data.
    Use the Array palette functions (e.g. Reshape Array) to change sizes of current data in place (if possible).
    You may want to read Managing Large Data Sets in LabVIEW.
    Your initial post is missing some information.  How many channels are you acquiring and what is the bit depth of each channel?  30kHz is a relatively slow acquisition rate for a single channel (NI sells instruments which acquire at 2GHz).  120s of data from said single channel is modestly large, but not huge.  If you have 100 channels, things change.  If you are acquiring them at 32-bit resolution, things change (although not as much).  Please post these parameters and we can help more.
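To make the preallocation advice concrete outside LabVIEW, here is a minimal Java sketch (illustrative only; in LabVIEW the equivalent is Initialize Array once, then Replace Array Subset inside the loop - the acquire() helper below is a made-up stand-in):

public class Preallocate
{
    public static void main( String[] args )
    {
        int channels = 4;           // assumed channel count
        int samples = 30_000 * 120; // 30 kHz for 120 s per channel

        // Allocate the full 2-D buffer once, up front...
        double[][] data = new double[channels][samples];

        // ...then fill it in place; nothing is grown or reallocated, so the
        // heap never fragments the way repeated Build Array calls cause.
        for ( int ch = 0; ch < channels; ch++ )
            for ( int i = 0; i < samples; i++ )
                data[ch][i] = acquire( ch, i );
    }

    // Hypothetical stand-in for the real acquisition read.
    static double acquire( int ch, int i ) { return 0.0; }
}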
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • Problem with large data report

I tried to run a template I got from release 12 using data from the release we are using (11i). The XML file is about 13,500 KB when I run it from my desktop.
I get the following error (mostly no output is generated; sometimes it is generated after a long time).
    Font Dir: C:\Program Files\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\fonts
    Run XDO Start
    RTFProcessor setLocale: en-us
    FOProcessor setData: C:\Documents and Settings\skiran\Desktop\working\2648119.xml
    FOProcessor setLocale: en-us
I assumed there might be compatibility issues between 12i and 11i, so I tried to write my own template, and I ran into the same issue when I added the third nested loop.
I also noticed javaws.exe runs in the background hogging a lot of memory. I am using BI Publisher version 5.6.3.
I tried to run the template through the Template Viewer. The process never completes.
    The log file is
    [010109_121009828][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setData(InputStream) is called.
    [010109_121014796][][STATEMENT] Logger.init(): *** DEBUG MODE IS OFF. ***
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setTemplate(InputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutput(OutputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutputFormat(byte)is called with ID=1.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setLocale is called with 'en-US'.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.process() is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.generate() called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] createFO(Object, Object) is called.
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] oracle.xdo Developers Kit 10.1.0.5.0 - Production
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] Scalable Feature Disabled
    End of Process.
    Time: 436.906 sec.
    FO Formatting failed.
I can't seem to figure out whether this is a looping issue, a large-data issue, or a BI version issue. Please advise.
    Thank you

The report will probably fail in a production environment if you don't have enough heap. 13 megs is a big XML file for the parsers to handle; it will probably crush the OPP. The whole document has to be loaded into memory, and preserving the relationships in the document is probably what is killing your performance. The OPP/FOProcessor does not use the SAX parser the way the bursting engine does. I would suggest setting a maximum on the number of documents that can be created per run and submitting them in a set of batches. That will reduce your XML file size, and performance will increase.
An alternative to the previous approach would be to write a concurrent program that merges the PDFs using the document merger API. This would allow you to burst the document into a temp directory and then reassemble the pieces into one document. One disadvantage of this approach is that the PDF is going to be huge. Also, if you have to send that to the printer you're going to have problems there too: when you convert the PDF to PS the files become massive because of the loss of compression, and it gets even worse if the PDF has images. Then you'll have more problems with disk space on the server and/or running out of memory on PS printers.
All of the things I have discussed I have done in some fashion. Speaking from experience, your 13 meg XML file is just a really bad idea. I would go with option one.
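As a sketch of the batching idea in plain Java (nothing BI Publisher specific here - the class, sizes, and IDs are made up for illustration):

import java.util.ArrayList;
import java.util.List;

public class Batcher
{
    // Split a large list of document keys into batches of at most batchSize,
    // so each submitted request produces a much smaller XML file.
    static <T> List<List<T>> partition( List<T> items, int batchSize )
    {
        List<List<T>> batches = new ArrayList<>();
        for ( int i = 0; i < items.size(); i += batchSize )
            batches.add( new ArrayList<>(
                items.subList( i, Math.min( i + batchSize, items.size() ) ) ) );
        return batches;
    }

    public static void main( String[] args )
    {
        List<Integer> docIds = new ArrayList<>();
        for ( int i = 0; i < 1050; i++ )
            docIds.add( i );
        System.out.println( partition( docIds, 100 ).size() ); // 11 batches
    }
}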
    Ike Wiggins
    http://bipublisher.blogspot.com

  • Just in case any one needs a Observable Collection that deals with large data sets, and supports FULL EDITING...

The VirtualizingObservableCollection does the following:
    Implements the same interfaces and methods as ObservableCollection<T> so you can use it anywhere you’d use an ObservableCollection<T> – no need to change any of your existing controls.
    Supports true multi-user read/write without resets (maximizing performance for large-scale concurrency scenarios).
    Manages memory on its own so it never runs out of memory, no matter how large the data set is (especially important for mobile devices).
    Natively works asynchronously – great for slow network connections and occasionally-connected models.
    Works great out of the box, but is flexible and extendable enough to customize for your needs.
    Has a data access performance curve so good it’s just as fast as the regular ObservableCollection – the cost of using it is negligible.
Works in any .NET project because it's implemented in a Portable Class Library (PCL).
The latest package can be found on NuGet: Install-Package VirtualizingObservableCollection. The source is on GitHub.

    Good job, thank you for sharing
    Best Regards,
    Please remember to mark the replies as answers if they help

  • Still major problem with packages

Here is my record class,
saved in C:/database/record/Record.java:
package database.record;

public class Record
{
   // Data members
   public int idNumber;       // Key field identifying each product
   public int serial;         // Serial number situated on product
   public int quantity;       // Quantity of each product
   public String product;     // The product's name
   public String supplier;    // Name of the supplier

   // Initialising data members
   public Record()
   {
      this.idNumber = 0;
      this.serial = 0;
      this.quantity = 0;
      this.product = null;
      this.supplier = null;
   }

   // Constructor for objects of class Record
   public Record( int id, int serialNumber, int qty, String productName, String supplierName )
   {
      setID( id );
      setSerial( serialNumber );
      setQTY( qty );
      setName( productName );
      setSupplier( supplierName );
   }

   // Set id number
   public void setID( int id )
   {
      idNumber = id;
   } // end method setID

   // Get id number
   public int getID()
   {
      return idNumber;
   } // end method getID

   // Set serial number
   public void setSerial( int serialNumber )
   {
      serial = serialNumber;
   } // end method setSerial

   // Get serial number
   public int getSerial()
   {
      return serial;
   } // end method getSerial

   // Set quantity
   public void setQTY( int qty )
   {
      quantity = qty;
   } // end method setQTY

   // Get quantity
   public int getQTY()
   {
      return quantity;
   } // end method getQTY

   // Set product name
   public void setName( String productName )
   {
      product = productName;
   } // end method setName

   // Get product name
   public String getName()
   {
      return product;
   } // end method getName

   // Set supplier name
   public void setSupplier( String supplierName )
   {
      supplier = supplierName;
   } // end method setSupplier

   // Get supplier name
   public String getSupplier()
   {
      return supplier;
   } // end method getSupplier
} // end class Record

This class creates my text file.
    Saved also in C:/database/record/
import java.io.FileNotFoundException;
import java.lang.SecurityException;
import java.util.Formatter;
import java.util.FormatterClosedException;
import java.util.NoSuchElementException;
import java.util.Scanner;
import database.record.Record;

public class CreateTextFileNew
{
   private Formatter output; // object used to output text to file

   // enable user to open file
   public void openFile()
   {
      try
      {
         output = new Formatter( "clients.txt" );
      } // end try
      catch ( SecurityException securityException )
      {
         System.err.println( "You do not have write access to this file." );
         System.exit( 1 );
      } // end catch
      catch ( FileNotFoundException filesNotFoundException )
      {
         System.err.println( "Error creating file." );
         System.exit( 1 );
      } // end catch
   } // end method openFile

   // add records to file
   public void addRecords()
   {
      // object to be written to file
      Record record = new Record();
      Scanner input = new Scanner( System.in );

      System.out.printf( "%s\n%s\n%s\n%s\n\n",
         "To terminate input, type the end-of-file indicator ",
         "when you are prompted to enter input.",
         "On UNIX/Linux/Mac OS X type <ctrl> d then press Enter",
         "On Windows type <ctrl> z then press Enter" );

      System.out.printf( "%s\n%s",
         "Enter id number (> 0), serial, quantity, product and supplier.",
         "? " );

      while ( input.hasNext() ) // loop until end-of-file indicator
      {
         try // output values to file
         {
            // retrieve data to be output
            record.setID( input.nextInt() );
            record.setSerial( input.nextInt() );
            record.setQTY( input.nextInt() );
            record.setName( input.next() );
            record.setSupplier( input.next() );

            if ( record.getID() > 0 )
            {
               // write new record
               output.format( "%d %d %d %s %s\n", record.getID(),
                  record.getSerial(), record.getQTY(), record.getName(),
                  record.getSupplier() );
            } // end if
            else
            {
               System.out.println( "Id number must be greater than 0." );
            } // end else
         } // end try
         catch ( FormatterClosedException formatterClosedException )
         {
            System.err.println( "Error writing to file." );
            return;
         } // end catch
         catch ( NoSuchElementException elementException )
         {
            System.err.println( "Invalid input. Please try again." );
            input.nextLine(); // discard input so user can try again
         } // end catch

         System.out.printf( "%s %s\n%s", "Enter id number (>0),",
            "serial, quantity, product and supplier.", "? " );
      } // end while
   } // end method addRecords

   // close file
   public void closeFile()
   {
      if ( output != null )
         output.close();
   } // end method closeFile
} // end class CreateTextFileNew

The problem is here:
public class CreateTextFileTest
{
   public static void main( String args[] )
   {
      CreateTextFile application = new CreateTextFile();

      application.openFile();
      application.addRecords();
      application.closeFile();
   } // end main
} // end class CreateTextFileTest

I get these errors:
    C:\database\record\CreateTextFileTest.java:8: cannot find symbol
    symbol : class CreateTextFile
    location: class CreateTextFileTest
    CreateTextFile application = new CreateTextFile();
    ^
    C:\database\record\CreateTextFileTest.java:8: cannot find symbol
    symbol : class CreateTextFile
    location: class CreateTextFileTest
    CreateTextFile application = new CreateTextFile();
I use TextPad, and I changed the compile parameters to
-classpath C:\ $File
Can somebody help me fix this, please?
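A note on the usual fix (using only the paths already given in the post): javac locates a class by matching its package declaration to a directory under the classpath root, so with -classpath C:\ the source files under C:\database\record would each need the matching package line:

C:\database\record\Record.java             // first line: package database.record;
C:\database\record\CreateTextFileNew.java  // same package line if it lives here
C:\database\record\CreateTextFileTest.java

javac -classpath C:\ C:\database\record\Record.java C:\database\record\CreateTextFileNew.java C:\database\record\CreateTextFileTest.java
java -classpath C:\ database.record.CreateTextFileTest

Also note that the test instantiates CreateTextFile while the class above is declared CreateTextFileNew; those names have to agree before anything else will compile.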

OK, I think that is fixed, but I still get this.
Here are the two last classes again:
import java.io.FileNotFoundException;
import java.lang.SecurityException;
import java.util.Formatter;
import java.util.FormatterClosedException;
import java.util.NoSuchElementException;
import java.util.Scanner;
import com.deitel.jhtp6.ch14.AccountRecord;

public class CreateTextFile
{
   private Formatter output; // object used to output text to file

   // enable user to open file
   public void openFile()
   {
      try
      {
         output = new Formatter( "clients.txt" );
      } // end try
      catch ( SecurityException securityException )
      {
         System.err.println( "You do not have write access to this file." );
         System.exit( 1 );
      } // end catch
      catch ( FileNotFoundException filesNotFoundException )
      {
         System.err.println( "Error creating file." );
         System.exit( 1 );
      } // end catch
   } // end method openFile

   // add records to file
   public void addRecords()
   {
      // object to be written to file
      AccountRecord record = new AccountRecord();
      Scanner input = new Scanner( System.in );

      System.out.printf( "%s\n%s\n%s\n%s\n\n",
         "To terminate input, type the end-of-file indicator ",
         "when you are prompted to enter input.",
         "On UNIX/Linux/Mac OS X type <ctrl> d then press Enter",
         "On Windows type <ctrl> z then press Enter" );

      System.out.printf( "%s\n%s",
         "Enter account number (> 0), first name, last name and balance.",
         "? " );

      while ( input.hasNext() ) // loop until end-of-file indicator
      {
         try // output values to file
         {
            // retrieve data to be output
            record.setAccount( input.nextInt() ); // read account number
            record.setFirstName( input.next() ); // read first name
            record.setLastName( input.next() ); // read last name
            record.setBalance( input.nextDouble() ); // read balance

            if ( record.getAccount() > 0 )
            {
               // write new record
               output.format( "%d %s %s %.2f\n", record.getAccount(),
                  record.getFirstName(), record.getLastName(),
                  record.getBalance() );
            } // end if
            else
            {
               System.out.println( "Account number must be greater than 0." );
            } // end else
         } // end try
         catch ( FormatterClosedException formatterClosedException )
         {
            System.err.println( "Error writing to file." );
            return;
         } // end catch
         catch ( NoSuchElementException elementException )
         {
            System.err.println( "Invalid input. Please try again." );
            input.nextLine(); // discard input so user can try again
         } // end catch

         System.out.printf( "%s %s\n%s", "Enter account number (>0),",
            "first name, last name and balance.", "? " );
      } // end while
   } // end method addRecords

   // close file
   public void closeFile()
   {
      if ( output != null )
         output.close();
   } // end method closeFile
} // end class CreateTextFile

public class CreateTextFileTest
{
   public static void main( String args[] )
   {
      CreateTextFile application = new CreateTextFile();

      application.openFile();
      application.addRecords();
      application.closeFile();
   } // end main
} // end class CreateTextFileTest

The CreateTextFileTest compiles now. I get an error in the CreateTextFile class:
C:\com\deitel\jhtp6\ch14\CreateTextFile.java:10: package com.deitel.jhtp6.ch14 does not exist
    import com.deitel.jhtp6.ch14.AccountRecord;
    ^
    C:\com\deitel\jhtp6\ch14\CreateTextFile.java:40: cannot access AccountRecord
    bad class file: .\AccountRecord.class
    class file contains wrong class: com.deitel.jhtp6.ch14.AccountRecord
    Please remove or make sure it appears in the correct subdirectory of the classpath.
    AccountRecord record = new AccountRecord();
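The second message explains itself: the compiler found .\AccountRecord.class in the current directory, but that class file declares package com.deitel.jhtp6.ch14, so it must sit in that subdirectory of a classpath root instead. Assuming C:\ is the root (as in the -classpath used earlier), a layout like this satisfies both errors:

C:\com\deitel\jhtp6\ch14\AccountRecord.java   // first line: package com.deitel.jhtp6.ch14;
C:\com\deitel\jhtp6\ch14\AccountRecord.class  // compiled in place, not copied elsewhere

javac -classpath C:\ C:\com\deitel\jhtp6\ch14\AccountRecord.java
javac -classpath C:\ C:\com\deitel\jhtp6\ch14\CreateTextFile.java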

  • Problem With Open Data sets

    Hi frnds,
Problem details: We have to read a file from the application server. In the file, the data has tab spaces between the fields. When I try to read the file, the data comes in, but in place of the tabs I get the special character "#".
Please help me with this.
    Thanks,
    Rohini

* Let's open the file on the app. server to read the records
OPEN DATASET V_FILE FOR INPUT IN TEXT MODE ENCODING DEFAULT. " WITH SMART LINEFEED
* If the file is found and we can read the records
IF SY-SUBRC = 0.
  DO.  " 1 TIMES
    CLEAR: V_ZTMAT_MASTER_IB,
           V_MATERIAL,
           V_PLNTID,
           V_LOCATN,
           V_MAKTX,
           V_MEINS,
           V_MATKL1.
    READ DATASET V_FILE INTO V_ZTMAT_MASTER_IB.
    IF SY-SUBRC <> 0.
      EXIT.
    ELSE.
*     As the data is read as a single line into a variable, split it into
*     the respective fields of the internal table.
      SPLIT V_ZTMAT_MASTER_IB AT C_COMMA INTO: V_MATERIAL
                                               V_PLNTID
                                               V_LOCATN
                                               V_MAKTX
                                               V_MEINS
                                               V_MATKL1.
*     Check whether the split was successful. If not, send an email.
      IF SY-SUBRC <> 0.
        CLEAR WA_CONTENTS_TXT.
        CONCATENATE 'The file :  ' V_FG_FILE ' ,  is not TILDE Delimited. Please check the file where material number is  ' V_MATERIAL '  Thanks. ' INTO WA_CONTENTS_TXT.
        APPEND WA_CONTENTS_TXT TO TB_CONTENTS_TXT.
        CLEAR WA_CONTENTS_TXT.
      ELSE.
*       Check if the correct data type is being passed; otherwise send an email.
        CATCH SYSTEM-EXCEPTIONS
          ARITHMETIC_ERRORS = 1
          CONVERSION_ERRORS = 2.
          V_SNO = V_SNO + 1.
          MOVE: SY-MANDT   TO WA_ZMATERIALMASTER1-MANDT,
                V_SNO      TO WA_ZMATERIALMASTER1-SNO,
                V_STATUS   TO WA_ZMATERIALMASTER1-STATUS,
                V_MATERIAL TO WA_ZMATERIALMASTER1-MATERIAL,
                V_PLNTID   TO WA_ZMATERIALMASTER1-PLNTID,
                V_LOCATN   TO WA_ZMATERIALMASTER1-LOCATN,
                V_MAKTX    TO WA_ZMATERIALMASTER1-MAKTX,
                V_MEINS    TO WA_ZMATERIALMASTER1-MEINS,
                V_MATKL1   TO WA_ZMATERIALMASTER1-MATKL.
          APPEND WA_ZMATERIALMASTER1 TO TB_ZMATERIALMASTER1.
        ENDCATCH.
        IF SY-SUBRC = 2.
          V_SUBJECT1         = 'Subject : Z_ROHINI_ASSIGNMENT_4A'.
          V_ATTACHMENT_DESC1 = 'Z_ROHINI_ASSIGNMENT_4A: Error'.
          V_EMAIL1           = P_RECVR.
          V_SENDER_ADDRESS1  = P_SENDR.
          V_MTITLE1          = 'File error: Data type is different than expected in file'.
          V_FG_FILE1         = V_FILE.
          CONCATENATE 'In the file :  ' V_FG_FILE1 ' ,    data type is different than expected. Unable to move record(s). Please check the file  where material number is  ' V_MATERIAL '.   Thanks.' INTO WA_CONTENTS_TXT1.
          APPEND WA_CONTENTS_TXT1 TO TB_CONTENTS_TXT1.
        ENDIF.
*       ENDLOOP.
      ENDIF.
    ENDIF.
  ENDDO.  " end of OPEN DATASET DO
* Archive the flat file
  PERFORM MOVE_FLATFILE.
ENDIF.  " end of OPEN DATASET SY-SUBRC IF
CLOSE DATASET V_FILE.

  • Loading Matrix Bound to UDO with Large Data Set

    Hello Experts,
    I have been looking on the forums for the best method out there to effectively load a Matrix that is bound to a User Defined Object (UDO).  In short, I will explain to you what I would like to do.  I have a form that has a matrix on it bound to a User Defined Object.  This matrix takes data stored in other UDO forms/tables and processes it to extract new information. 
    Unfortunately, the resulting dataset is quite large (up to 1000 rows).  I realize if this were just a "report" I could easily do this with a Grid.  I also realize if this were just a Matrix bound to a User Defined Table, I could bind it to a DataTable and perform the query that way.  However, since this is a Matrix bound to a DBDataSource (as I would like to have SAP handle any updates/finds) I believe my only options are to try and use a DBDataSource.Query method and try to work with Conditions. 
The DBDataSource.Query method has not proven effective due to the complexity of the query and the multiple tables involved.  I have read from others on the forum that I could load the matrix by temporarily databinding it to a DataTable and then, after it is loaded, switching the databinding back to the DBDataSource, but this does not work: it comes back with an error informing me (rightly so) that there are already rows in the matrix.
    One final option would be to use the User Interface (UI) to cycle through and update each cell of the matrix with the results of a recordset, but, as I said, this can be a large dataset and that could take hours (literally).
In short, I was wondering if anyone out there can advise me on the most effective options I have.  Is there a way to quickly load a matrix bound to a DBDataSource?  Is there some way I can load the matrix by binding it to a DataTable and then quickly move this information over to the DBDataSource (I already attempted this, and the method I used was as slow as using the UI to update the Matrix)?  Are there effective ways to use the DBDataSource.Query method that I do not know much about (I cannot find many examples of how this functionality is really used)?  Should I abandon the DBDataSource (though I believe this is the SAP-preferred method) and, if so, is there another technique for appropriately updating the database other than using DBDataSource?  Others have mentioned handling the updates to the database themselves, but I am not sure what this means (maybe it means using SQL UPDATE/INSERT?).  Is there a way to flush matrix information to a DBDataSource if the DBDataSource was not used in the loading and is not currently bound to the matrix?
Sorry for the numerous questions, but thanks for the advice.

            Dim oForm As SAPbouiCOM.Form
            Dim creationPackage As SAPbouiCOM.FormCreationParams
            creationPackage = sbo_application.CreateObject(SAPbouiCOM.BoCreatableObjectType.cot_FormCreationParams)
            creationPackage.UniqueID = "MyFormID"
            creationPackage.FormType = "MyFormID"
            creationPackage.ObjectType = "UDO_TEST"
            creationPackage.BorderStyle = SAPbouiCOM.BoFormBorderStyle.fbs_Fixed
            oForm = sbo_application.Forms.AddEx(creationPackage)
            oForm.Visible = True
            oForm.Width = 300
            oForm.Height = 400
            Dim oItem As SAPbouiCOM.Item
            oItem = oForm.Items.Add("1", BoFormItemTypes.it_BUTTON)
            oItem.Top = 336
            oItem.Left = 5
            oItem = oForm.Items.Add("2", BoFormItemTypes.it_BUTTON)
            oItem.Top = 336
            oItem.Left = 80
' Now put an Edit box for DocEntry
            oItem = oForm.Items.Add("3", SAPbouiCOM.BoFormItemTypes.it_EDIT)
            oItem.Top = 5
            oItem.Left = 5
            oItem.Width = 100
            Dim oEditText As SAPbouiCOM.EditText = oItem.Specific
            oForm.DataSources.DataTables.Add("oMatrixDT")
            oItem = oForm.Items.Add("oMtrx1", SAPbouiCOM.BoFormItemTypes.it_MATRIX)
            oItem.Top = 20
            oItem.Left = 20
            oItem.Width = oForm.Width - 30
            oItem.Height = oForm.Height - 100
            Dim oMatrix As SAPbouiCOM.Matrix = oItem.Specific
            Dim oColumn As SAPbouiCOM.Column = oMatrix.Columns.Add("#", SAPbouiCOM.BoFormItemTypes.it_EDIT)
            oColumn.TitleObject.Caption = "#"
            oColumn = oMatrix.Columns.Add("oClmn0", SAPbouiCOM.BoFormItemTypes.it_LINKED_BUTTON)
            oColumn.TitleObject.Caption = "BP Code"
            Dim oLinkedButton As SAPbouiCOM.LinkedButton = oColumn.ExtendedObject
            oLinkedButton.LinkedObject = SAPbouiCOM.BoLinkedObject.lf_BusinessPartner
            ' Now bind Columns to UDO Objects in Add Mode
            oEditText.DataBind.SetBound(True, "@UDO_TEST", "DocEntry")
            oMatrix.Columns.Item("oClmn0").DataBind.SetBound(True, "@UDO_TEST1", "U_CARDCODE")
            oForm.DataBrowser.BrowseBy = "3"

  • Problems with large scanned images

    I have been giving Aperture another try since 1.1 came out, and I am still having problems with large tiff files derived from scanned 4x5 negatives. The files are 500mb or more, 16 bit RGB, with ProPhoto RGB or Ektaspace PS5 profiles, directly out of the scanner.
Aperture imports the files correctly and shows their thumbnails. When I select a thumbnail, "Loading" is displayed briefly, and then the dreaded "Unsupported Image Format" is displayed. Sometimes "Loading" goes on for a while, and a geometric pattern (looking like a rendering of random memory) is displayed. Restarting Aperture doesn't help.
    Lower resolution (250mb, 16bit) files are handled properly. The scans are from an Epson 4870 scanner. I have tried pulling the scans into Photoshop and resaving with various tiff options, and as PSD with no improvement. I have the same problem with corrected/modified psd files coming out of Photoshop CS2.
    I am running on a Power Mac G5 dual 2ghz with 8gb of RAM and an NVIDIA GeForce 6800 GT DDL (250mb) video card, with all the latest OS and software updates.
    Has anyone else had similar problems? More importantly, is anyone else able to work with 500mb files of any kind? Is it my system, or is it the software? I sent feedback to Apple as well.
    dual g5 2ghz   Mac OS X (10.4.6)  

    I have a few (well actually about 100) scans on my system of >500Mb. I tried loading a few and am getting an inconsistent pattern of errors that correlates with what you are reporting.
I imported 4 files and three were troubled; the fourth was OK. I imported another four files and the first one was OK while the three others had your reported error; also, the previously good file from the first import was now showing the same 'unsupported image' message.
I would venture to say that if you shoot primarily 4x5 and work with scans of this size, Aperture is not the program for you - right now. I shoot 35mm and have a few images that I have scanned at 8000dpi on my Imacon 848, but most of my files are in the more reasonable 250Mb range (35mm @ 5000dpi).
I will probably downsample my 8000dpi scans to 5000dpi and not worry too much about it. In a world where people believe that 16 megapixels is hi-res, you are obviously on the extreme side. (Good for you!) You should definitely file a bug report, but I wouldn't expect much help anytime soon for your super-sized scans.

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
CursoredStream cursor = null;
try
{
    Session session = getTransaction();
    int batchSize = 50;
    ReadAllQuery raq = getQuery();
    raq.useCursoredStream( batchSize, batchSize );

    int num = 0;
    ArrayList<Request> limitRequests = null;
    int totalLimitRequest = 0;

    cursor = (CursoredStream) session.executeQuery( raq );
    while ( !cursor.atEnd() )
    {
        Request request = (Request) cursor.read();
        if ( num == 0 )
            limitRequests = new ArrayList<Request>( batchSize );
        limitRequests.add( request );
        totalLimitRequest++;
        num++;
        if ( num >= batchSize )
        {
            log.warn( "Migrating batch of " + batchSize + " Requests." );
            updateLimitRequestFillPriceForBatch( limitRequests );
            num = 0;
            cursor.releasePrevious(); // free the rows already processed
        }
    }
    if ( num > 0 )
        updateLimitRequestFillPriceForBatch( limitRequests );
}
finally
{
    if ( cursor != null )
        cursor.close(); // always release the cursor, even on failure
}
We are committing every 50 records in the unit of work. If we set DontMaintainCache on the ReadAllQuery, we get PrimaryKeyExceptions intermittently, and we do not see much difference in the IdentityMap size.
Any suggestions/ideas for dealing with large data sets? Thanks.

    Hi,
    If I use read-only classes with CursoredStream and execute the query within UOW, should I be saving any memory?
    I had to use UOW because when I use Session to execute the query I get
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J(OAS10.1.3.4) with BeanManagedTransaction.
    Please suggest.
    Thanks
    -Raam
    Edited by: Raam on Apr 2, 2009 1:45 PM

  • Large data sets and key terms

Hello, I'm looking for some guidance on how BI can help me. I am a business analyst at a health solutions firm, but not proficient in SQL. However, I have to work with large data sets that just exceed the capabilities of Excel.
Basically, I'm having to use Excel to manually search for key terms and apply values to those results. For instance, I have a medical claims file with Provider Names, Tax ID, Charges, etc. It's 300,000 records long and 15-25 columns wide. I need to search for key terms in the provider name like Ambulance, Fire Dept, Rescue, EMT, EMS, etc. - anything that resembles an ambulance service. I also need to include abbreviations such as AMB or FD, and variations like EMT, E M T, EMS, E M S, etc. Each time I do a search, I have to filter and apply an "N/A" flag.
That's just one key term. I also have things like Dentists or DDS, Vision, Optometry, and a dozen other Provider Types that need to be flagged as "N/A".
Is this something that can be handled using BI? I have access to a BI group, but I need to understand more about the capabilities of what can be done. As an analyst, I'm having to deal with poor data integrity, so just cleaning up the file can be extremely taxing and cumbersome.
Some insight would be very helpful. Thanks.

I am not sure if you are looking for an explanation of different BI products; if so, this forum may not be the place to get a straight answer.
But the Information Discovery product suite might be useful in your case. Regarding the "large data set" you mentioned: searching and analyzing 300,000 records may not be considered a large data set, at least by Endeca standards :).
    All your other requests, could also be very easily implemented using Endeca's product suite. Please reach out to Oracle's Endeca product team and they might guide you on how this product suite would help you.
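Purely to illustrate the flagging logic described above (hypothetical Java; this is not an Endeca feature, and the term list is just the examples from the question):

import java.util.regex.Pattern;

public class ProviderFlagger
{
    // Case-insensitive match for ambulance-like provider names, including
    // spaced abbreviations such as "E M T" / "E M S".
    private static final Pattern AMBULANCE = Pattern.compile(
        "\\b(ambulance|rescue|fire\\s*dept|amb|fd|e\\s*m\\s*t|e\\s*m\\s*s)\\b",
        Pattern.CASE_INSENSITIVE );

    static String flag( String providerName )
    {
        return AMBULANCE.matcher( providerName ).find() ? "N/A" : "";
    }

    public static void main( String[] args )
    {
        System.out.println( flag( "COUNTY E M S SERVICE" ) ); // N/A
        System.out.println( flag( "SMITH FAMILY DENTAL" ) );  // (blank)
    }
}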

  • Major problems with Premiere CS4

    Hello there!
    I'm having major problems with CS4. I recently bought a new computer - essentials are as follows:
    Intel Core i7 920 2.66GHz
    2 x Kingston 3x2GB, DDR3 1333MHz (=12 GB RAM)
    Ati Radeon HD 5770 v2
    Asus P6T SE, LGA1366, X58, DDR3, ATX
    2 x 500GB, Barracuda 7200.12 (RAID1), system disk
    2 x 1TB Barracuda 7200.12, media disks
    Windows 7 Pro 64 bit
    Adobe CS4 Production Premium + latest updates
The problem I have has mostly to do with Premiere, although I think the rest of the Production Premium software should be running better on this computer as well.
    For starters, I was highly disappointed when I couldn't view a single HD clip from the Premiere timeline (it was an .m2ts rip from a previously self-burned Bluray disc) at all. Rendering is very slow and even after that the video doesn't run normally, showing just a few individual frames per second. When I point the timeline at a certain spot, it takes many seconds until even that very frame is shown in the monitor window.
    And this is not all - I decided to try with a basic DV AVI clip, previously captured with my PPro 1.5 on my earlier computer (3.2ghz Pentium from 2005). It's a native file so it doesn't even need rendering and still the same problem occurs. With normal playback only about 6-10 frames are shown per second. It's absolutely ridiculous - even my 2004 laptop runs DV AVIs more smoothly with PPro 1.5.
While I'm running Premiere, RAM usage only comes up to around 24 percent. The processor, on the other hand, comes close to a hundred when play is pressed on the timeline.
I haven't had such immense problems with the other Production Premium software (though, as I said, I think it should be running faster as well). Otherwise the computer runs smoothly.
    A friend of mine pointed out that the graphics card I'm using isn't on the Adobe website's list of supported ones with Premiere, but afaik it shouldn't even be a graphics card problem with this high-end machinery.
    So - I sure could use some help and ideas about how to get the Premiere working. My main intention for buying a new computer and Adobe CS4 Production Premium was to start HD editing after all! Thanks beforehand!

    Hi
Yeah, I hope he solves the problem and knows how it got solved and tells us all... it's like being a detective!  "Mr Plum did it in the 'svchost' room!"
    I got a little lost on this....
    You are welcome and I'm glad that it paid off so nicely for you. One more  suggestion, if I may: I'm a big fan of a fixed pagefile size. When making a  fresh install of Windows, one of my first actions is to set a fixed pagefile on  a different disk (min=max) before installing anything else on that different  disk and removing the pagefile on the C drive. This will ensure that the  pagefile in on the fastest part of the drive and will never be  fragmented.
    I think it used to be that I made swapfiles, or swapdisks which now seems to be called pagefile.  That in itself is a little confusing semantic wise, because of the use of mempage stuff.  It used to be that base memory was 640k, with the upper memory (to make up 1024 kb) was mostly for drivers and stuff.  If more memory was needed than 640k (an event the inventor never thought possible...  "who could want more than the enormous amount of 640K ?", he said..)..the extended and expanded memory got invented to swap memory pages from that 640k as needed with the rest of RAM.  That's basically correct right ?  Sooooo, I don't know if that still goes on, but I suspect it does, unless someone totally changes the architecture of the computer and cpu and fsb and everything...  but I don't know what is going on nowadays.  Anyway, there were other ways besides windows OS to create virtual memory on the hard drive, and Photoshop was one of the programs that could do that, I think.
    Why MS would want to call the swapdisk virtual memory "pagefile" is beyond me.
Anyway, be that as it may, I understand making the pagefile first on a drive so it lands on the fastest part of the drive (the heads don't have to travel far to access it).  But I haven't added a lot of drives to a system before, and I suspect I will be pretty soon... and this is the part I don't get...
You add drives and put an OS on them?  So they are bootable?  I've added a drive or two, but just formatted them and used them as more storage without putting an OS on them.  So that part is confusing...
Then you take the pagefile off the C: drive?  Are you then using the new drive as your boot drive, with the C: drive too full to use except to run the programs already installed?
    Thanks again !
    Rod

  • Strange major problem with 975X PowerUp Intel MOBO

    I’m suddenly having a strange major problem with a MSI 975X Platinum PowerUP Edition V2.1 MS-7246 Intel socket 775 motherboard. Previously, it had given me no problems at all.
    It went like this:
    Powered on system in the morning, booting from a cold start.
    Hit DEL key to get into the BIOS.
    Missed the window and the system booted into Windows normally.
    Re-started the system from within Windows.
    Hit DEL key to get into the BIOS and was successful the second time.
    Hit the ENTER key on the first item “Standard CMOS Features”.
    The DATE, TIME and other information was displayed on the next screen, but at this point, the BIOS screen became completely unresponsive to any keyboard input.
    The only thing I could do at that point was to power off.
    Subsequent power on boot-ups resulted in the system always freezing at the BIOS splash screen after the video card sign-on.
    I have switched power supplies, hard drives, keyboards and video cards. I have disconnected everything except the video card, keyboard and one RAM module. I have switched the four RAM modules around and then reduced the RAM to one 256MB module in the proper slot. I even changed the battery on the MOBO, in case it had shorted out and removed and re-seated the BIOS chip. It still halts at the BIOS splash screen.
    The BIOS chip can’t be completely dead, because I powered off and pushed the BIOS reset button on the MOBO to reset the BIOS to the factory defaults. When I re-booted, I could tell that it had actually reset to the factory defaults because I had previously set the BIOS to reboot the system on resumption of power and the factory default was to remain off until the power button was pushed, which was now the default.
    I’m stumped! All I can think of is to get a new pre-programmed BIOS chip and see if that makes any difference, but I’m not sure this system is even worth the expense with no guarantee of success. Any suggestions on how I might get past the BIOS splash screen or do I for sure have a corrupt BIOS chip?

    System specs as per the posting guidelines -
    Board: MSI 975X Platinum PowerUP Edition V2.1 MS-7246 Intel socket 775
    Bios: Version (can’t recall & can’t check since it won’t go past the BIOS splash screen)
VGA: ASUS EN7600GS nVidia GeForce 7600GS PCI-e 256MB DDR2 (also tried new XFX nVidia GeForce 210 PCI-e 2.0 512MB DDR3)
    PSU: Chiefmax 680W ATX (also tried new OCZ 700W ATX)
    CPU: Intel 2.8GHz LGA-775 Core2 Duo Dual-Core Processor
    MEM: 4 x Samsung 512MB 1Rx8 PC2-5300U (2GB total)
    HDD: Seagate Barracuda 250GB SATA2 (also tried Seagate Barracuda 500GB SATA2)
    COOLER: Intel OEM LGA 775 fan/heatsink C25897-001
    OC: No
    OS: Vista Home Premium 32-bit
    I have been through the BIOS reset procedure several times and pretty much as outlined. As I said before, I could tell that it had taken a reset to the factory defaults, because the power on option had changed.

  • I am experiencing some major problems with my MacBook Pro. I have had some issues with it turning on/off at random times, but today, when starting, I get the grey start-up screen and a recovery bar. After filling in approx 1/4 of the way, the machine dies

    I am experiencing some major problems with my MacBook Pro. I have had some issues with it turning on/off at random times, but today, when starting, I get the grey start-up screen and a recovery bar. After filling in approx 1/4 of the way, the machine dies. After starting it in recovery mode, it will not allow me to download OS X Mavericks- it says the disk is locked. Any ideas? I do not have a back-up and do not want to erase everything before I have explored my options. Help?

Try forcing Internet Recovery: hold the 3 keys Command, Option, R at startup - you should see a spinning globe.
Most people will tell you to do both PRAM and SMC resets (Google them), and if you still have issues, either clean install (easy) or troubleshoot (hard).

  • I had a major problem with my PC yesterday, and subsequently lost Mozilla Foxfire. When I reloaded it onto my PC it opened up with "Welcome to AOL - Mozilla Foxfire". I don't want AOL attached or tagged to Foxfire. How I do prevent that? And hopefully I h

I had a major problem with my PC yesterday, and subsequently had to reload Mozilla Foxfire. When reloaded, it opened with "Welcome to AOL - Mozilla Foxfire". I don't want AOL associated or tagged with Foxfire. I went into "Programs and Files" and deleted everything with AOL, including Quicktime. Tried loading Foxfire again, but it still opened with AOL tagged. I do use AOL for emails and some browsing, but I want to use Foxfire solely for browsing and as a search engine. And yes, I did also reload AOL. Can't seem to figure out why AOL is tagging onto Foxfire. Hopefully I have not lost all of my Foxfire Bookmarks - that would really suck.
    == User Agent ==
    Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 1.1.4322; InfoPath.2; .NET CLR 3.5.30729; .NET CLR 3.0.30729)

    Hello Larry.
    Hopefully this support article is what you need:
    http://support.mozilla.com/en-US/kb/How+to+set+the+home+page

Maybe you are looking for

  • 11.5.10.2(upgraded 10g database)

I just installed 11.5.10.2 and upgraded to 10g; after that I'm not able to use ls or any other OS commands. Please help me out. When I type the ls command I get the following message: ls: error while loading shared libraries: librt.so.1: cannot open shared ob

  • Cannot Connect to Database LS

Hi, I have some problems connecting to the database. Case 1: I've moved an IIS from one datacenter to another (installed all new). On VS, after installing LS and all of the components, I can install an APP fine from my laptop. (using publish directly (webde

  • "the application install adobe flash player quit unexpectedly"

When trying to install the Flash Player update on my Mac running OS X 10.5.8 I get the error message "the application install adobe flash player quit unexpectedly". Is my Mac sabotaging the install or what? Is there a setting I need to change in order for my

  • I need help from nokia plss abt nokia 3250

I have a Nokia 3250 and I need video call software. My question: does the 3250 support this or not? Because I really need it. My version is 4.14 - will I get a new one, or will this phone stay on this software? Thanks for listening to me.

  • Weblogic cluster and routers

Is it possible to set up a WLS cluster having two WLS boxes separated by switches/routers?
- Bhupi