CNiGraph with large data

Hi everyone,
I have this issue:
Language: C++ (MFC) on Visual Studio.Net 2003 environment
         and Measurement Studio 8.0.1 (trial version)
I intend to switch from LabWindows/CVI to Measurement Studio
Please help me with the following two questions:
Question 1:
   I have this piece of code:
   CNiGraph m_Graph;
   int i;
   for (i = 1; i <= 5000; i++)
      m_Graph.Plots.Add();
   for (i = 1; i <= 5000; i++)
   {
      CNiInt32Vector vectorY(900);
      ... // prepare data for vectorY
      m_Graph.Plots.Item(i).ChartY(vectorY);
   }
   and the program crashes because there is too much data.
   However, in LabWindows/CVI, this piece of code works fine:
   for (i = 1; i <= 5000; i++)
   {
      // PlotXY: built-in function of LabWindows/CVI
      graph_handle = PlotXY (panelHandle, controlID, xArray, yArray,
                             900 /* number of points */, ...);
      ListHandle.Add(graph_handle);
   }
   So how can I make it work in Measurement Studio?
Question 2: How can I undo the zoom of a CNiGraph?
Regards,

Hi Nhan123,
I set up a simple test on my machine to graph the same amount of data using VC++ MFC in Visual Studio 2003 with Measurement Studio and I was unable to reproduce this error.  However, I did not plot 5000 sets of 900 points, as this would have taken a very long time on my machine.  I was able to plot about 2000 sets of 900 points without any errors.  What is the exact error message that you receive when plotting this amount of data in Measurement Studio?
As for your second question, I have outlined a simple method to zoom back out to the default levels programmatically. I found this information in the Measurement Studio help by searching for CNiGraph.  The results returned a help topic titled "Using the Measurement Studio Graph Visual C++ Control".  In this help topic, there is a sub-topic titled "Panning and Zooming in the Measurement Studio Graph Visual C++ Control", which includes the following example code to reset the pan/zoom of a CNiGraph:
        m_graph.Axes.Item(1).SetMinMax(0, 10);
        m_graph.Axes.Item(2).SetMinMax(0, 10);
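For reference, here is a rough sketch of how you might wire that reset to an "Undo Zoom" button in your MFC dialog. Only the Axes.Item(n).SetMinMax call comes from the help example above; the handler name and the saved default ranges are placeholders to adapt to your application, and I am assuming axis 1 is your X axis and axis 2 is your Y axis.
        // Hypothetical MFC button handler: restores axis ranges saved before zooming.
        // m_defaultXMin/m_defaultXMax/m_defaultYMin/m_defaultYMax are assumed member
        // variables captured (for example in OnInitDialog) before the user zooms.
        void CMyGraphDlg::OnBnClickedUndoZoom()
        {
            m_Graph.Axes.Item(1).SetMinMax(m_defaultXMin, m_defaultXMax);  // X axis
            m_Graph.Axes.Item(2).SetMinMax(m_defaultYMin, m_defaultYMax);  // Y axis
        }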
Thanks,
Jonathan C
Staff Application Engineering Specialist | CTD | CLA
National Instruments

Similar Messages

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
    CursoredStream cursor = null;
    try {
        Session session = getTransaction();
        int batchSize = 50;
        ReadAllQuery raq = getQuery();
        raq.useCursoredStream(batchSize, batchSize);
        int num = 0;
        ArrayList<Request> limitRequests = null;
        int totalLimitRequest = 0;
        cursor = (CursoredStream) session.executeQuery(raq);
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            if (num == 0) {
                limitRequests = new ArrayList<Request>(batchSize);
            }
            limitRequests.add(request);
            totalLimitRequest++;
            num++;
            if (num >= batchSize) {
                log.warn("Migrating batch of " + batchSize + " Requests.");
                updateLimitRequestFillPriceForBatch(limitRequests);
                num = 0;
                cursor.releasePrevious();
            }
        }
        if (num > 0) {
            updateLimitRequestFillPriceForBatch(limitRequests);
        }
        cursor.close();
    } finally {
        // the catch/finally handling was not included in the original post
    }
    We are committing every 50 records in the unit of work. If we set dontMaintainCache on the ReadAllQuery we get PrimaryKeyExceptions intermittently, and we do not see much difference in the IdentityMap size.
    Any suggestions/ideas for dealing with large data sets? Thanks

    Hi,
    If I use read-only classes with CursoredStream and execute the query within UOW, should I be saving any memory?
    I had to use UOW because when I use Session to execute the query I get
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J(OAS10.1.3.4) with BeanManagedTransaction.
    Please suggest.
    Thanks
    -Raam
    Edited by: Raam on Apr 2, 2009 1:45 PM

  • Conditional format with large data fails and show error as "Selection is too large" in Excel 2007

    I am facing an issue with a Paste Special operation using conditional formats on large data in Excel 2007.
    I have uploaded a file at below given location. 
    http://sdrv.ms/1fYC9qE
    The file contains two sheets: sheet "Data" contains the data on which the formats are to be applied, and sheet "FormatTables" contains the format tables that hold the conditional formatting.
    There are two tables in the "FormatTables" sheet. Both have some conditional formats applied to them.
    Case 1: 
    1. Select the table range of Table1 i.e $A$2:$AV$2
    2. Copy it
    3. Go to sheet "Data"
    4. Select data area i.e $A$1:$AV$20664
    5. Perform a paste special operation on full range and select "Formats" option while performing paste special.
    Result:
    It throws the error "Selection is too large".
    Case 2:
    1. Select the table range of Table2 i.e $A$5:$AV$5
    2. Copy it
    3. Go to sheet "Data"
    4. Select data area i.e $A$1:$AV$20664
    5. Perform a paste special operation on full range and select "Formats" option while performing paste special.
    Result:
    Formats get applied successfully.
    Both are the same format tables, with the same number of columns, applied to the same data range ($A$1:$AV$20664), yet one case works and the other fails.
    The only difference is that Table1 has an appliesTo range ($A$2:$T$2) that covers only part of its total table range ($A$2:$AV$2), whereas Table2 has an appliesTo range ($A$5:$AV$5) equal to its total table range ($A$5:$AV$5).
    NOTE: This issue occurs only in Excel 2007.

    Excel 2007 does not support copying formatting from one table to another if it has to cover more than 16000 rows. If you want to apply it to more rows than that, you have to insert 1 more row in your format table so that it has 3 rows,
    like: A1:AV3
    then try to copy that formatting and apply it.
    Solution for Case 1:
    1. Select the table range of Table1, i.e. AV21, and drag it down one row.
    2. Select the table range of Table1, i.e. $A$2:$AV$3
    3. Copy it
    4. Go to sheet "Data"
    5. Select the data area, i.e. $A$1:$AV$20664
    6. Perform a Paste Special operation on the full range and select the "Formats" option while performing the Paste Special.

  • Problem with large data report

    I tried to run a template I got from Release 12 using data from the release we are using (11i). The XML file is about 13,500 KB when I run it from my desktop.
    I get the following error (mostly no output is generated; sometimes it is generated after a long time).
    Font Dir: C:\Program Files\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\fonts
    Run XDO Start
    RTFProcessor setLocale: en-us
    FOProcessor setData: C:\Documents and Settings\skiran\Desktop\working\2648119.xml
    FOProcessor setLocale: en-us
    I assumed there may be compatibility issues between Release 12 and 11i, so I tried to write my own template and ran into the same issue when I added the third nested loop.
    I also noticed javaws.exe runs in the background, hogging a lot of memory. I am using BI Publisher version 5.6.3.
    I tried to run the template through template viewer. The process never completes.
    The log file is
    [010109_121009828][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setData(InputStream) is called.
    [010109_121014796][][STATEMENT] Logger.init(): *** DEBUG MODE IS OFF. ***
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setTemplate(InputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutput(OutputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutputFormat(byte)is called with ID=1.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setLocale is called with 'en-US'.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.process() is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.generate() called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] createFO(Object, Object) is called.
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] oracle.xdo Developers Kit 10.1.0.5.0 - Production
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] Scalable Feature Disabled
    End of Process.
    Time: 436.906 sec.
    FO Formatting failed.
    I can't figure out whether this is a looping issue, a large-data issue, or a BI version issue. Please advise.
    Thank you

    The report will probably fail in a production environment if you don't have enough heap. 13 MB is a big XML file for the parsers to handle; it will probably crush the OPP. The whole document has to be loaded into memory, and preserving the relationships in the document is probably what is killing your performance. The OPP/FOProcessor does not use the SAX parser the way the bursting engine does. I would suggest setting a maximum on the number of documents that can be created and submitting them in a set of batches. That will reduce your XML file size and performance will increase.
    An alternative to the previous approach would be to write a concurrent program that merges the PDFs using the document merger API. This would allow you to burst the document into a temp directory and then re-assemble it into one document. One disadvantage of this approach is that the PDF is going to be freakin' huge. Also, if you have to send that piggy to the printer you're going to have some problems too: when you convert the PDF to PS the files become massive because of the loss of compression, and it gets even worse if the PDF has images. Then you'll have more problems with disk space on the server and/or running out of memory on PS printers.
    All of the things I have discussed I have done in some fashion. Speaking from experience, a 13 MB XML file is just a really bad idea. I would go with option one.
    Ike Wiggins
    http://bipublisher.blogspot.com

  • How to improve performance(insert,delete and search) of table with large data.

    Hi,
    I have a table that is used for maintaining history. It holds a large amount of data that keeps increasing or decreasing based on the business rules.
    I am getting performance issues with this table when searching for records or inserting new data into it. I have already added indexes to this table, but I am still facing a lot of performance issues.
    Also, we typically insert data into this table in bulk.
    Is there any solution to achieve this? Any solutions are greatly appreciated.
    Thanks in Advance!

    Please do not duplicate your posts across forums.  It's considered bad practice and rude, as people will not know what answers you've already received and may end up duplicating the effort.
    Locking this thread - answer on other thread please

  • [Bug?] X-Control Memory Leak with Large Data Array

    [LV2009]
    [Cross-posted to LAVA]
    I have found that if I pass a large data array (~4MB in this example) into an X-Control, it causes massive memory allocations (1 GB+).
    Is this a known issue?
    The X-Control in the video was created, then the Data.ctl was changed to 2D Array - it has not been edited in any other way.
    I also compare the allocations to that of a native 2D Array (which is only ~4MB).
    Note: I jiggled the Windows Task Manager about so that JING would update correctly; it's a bit slow, but the usage essentially just keeps rolling up and doesn't stop.
    Demo code attached.
    Cheers
    -JG
    Certified LabVIEW Architect * LabVIEW Champion
    Attachments:
    X Control Bug [LV2009].zip ‏42 KB

    Hi Jon (cool name) 
    Thank you very much for your reply. We came to this conclusion in the cross post and it is good to have it confirmed by LabVIEW R&D. Your response is also similar to that of my AE which I got this morning as well - see below:
    Note: Your reference number is included in the Subject field of this
    message. It is very important that you do not remove or modify this
    reference number, or your message may be returned to you.
    Hi Jon,
    You probably found some information on the forum already. The US engineer has gotten back to us; he said that unfortunately this is expected behaviour after they conducted some tests, and this is what he replied:
    "XControls in the background use event structures. In particular, the Data Change event is called when the value of the XControl changes (writing to the terminal, a local variable, or the value property). What is happening in this case is that the XControl is being called so fast with such a large set of data that the event structure queues up the events and data, and a memory leak is produced. It is, unfortunately, expected behavior. The main workaround for the customer in this case is not to call the XControl as often. Another possibility is to use the Synchronous Display property to defer updates to the XControl; this might slow down the leak."
    He would also like to know if you can provide more details on how you are using the XControl; perhaps there is a better way. Please refer to the link below for synchronous display. Thank you.
    http://zone.ni.com/reference/en-XX/help/371361G-01/lvprop/control_synchronous_display/
    In my application I updated the X-Control at 1 Hz and it allocated at MB/s up to 1+ GB before it crashed, all within a few hours. That is why I called it a leak. I am really worried that if this CAR gets killed, there will still be a lingering issue that makes using X-Controls a major problem under the above conditions. I have had to pull two sets of libraries from my code because of this - when they were replaced with native LabVIEW controls the leak went away (but I lost reuse and encapsulation etc...).
    Anyway, I really want to use X-Controls (now and in the future) as I like every other aspect of them. If you do not consider this a leak, can a different CAR be raised that may modify the existing behavior? I offered the suggestion (in the cross-post) that the data be ignored rather than queued, similar to Christian's idea, but for X-Controls. Maybe as an option?
    I look forward to discussing this with you further.
    Regards
    -Jon
    Certified LabVIEW Architect * LabVIEW Champion

  • Excel with large data read

    Hi all.
    I have a question about reading data from Excel.
    I have 15238 rows * 22 columns of data in an Excel file, and LabVIEW needs a very long time to read it. I don't know whether my program is not good enough to manage the large data or whether this is normal.
    Besides, I have tried converting the Excel data to CSV and using the text file read method, but it also seems to need a very long time to read.
    Is there any better method to read a large amount of data? Would Access be a good solution for that?
    Thanks in advance,
    Io
    Attachments:
    read data_excel.JPG ‏62 KB

    Hi,
    See the attached file: the picture shows what I read from your file. For the date column it reads only the day, and for the booleans it doesn't read the text.
    So if you want to read all of the data, write the date in 3 columns, and write the booleans as numbers (0 or 1)!
    The reason why the VI I sent you doesn't work is that it reads a tab-delimited file.
    If you have Excel 2007, select:
    Save As - Excel 97-2003,
    then in the file type list change "Excel 97-2003" to "Text (Tab delimited) (*.txt)",
    and save your file with this type.
    If you close and reopen the file, you will get a warning message, but Excel will continue to read the file.
    I have attached the file that you will get if you follow this method.
    I use this method to read arrays of 60 000 rows * 30 columns.
    best regards,
    V-F
    Attachments:
    test001.xls ‏2137 KB
    table.JPG ‏259 KB

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have LabVIEW 2009 64-bit running on a Win7 64-bit OS with an Intel Xeon dual quad-core processor and 16 GB of RAM.  With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2 - 3 GB range in RAM, since we now have access to all of the available RAM.  But I am having major problems - sluggish operation (and stoppages) of the program, inability to perform certain operations, etc.
    Here is how I store the 3-D data that consists of a series of images. I store each of my 2d images in a cluster, and then have the entire image series as an array of these clusters.  I then store this entire array of clusters in a queue which I regularly access using 'Preview Queue' and then operate on the image set, subsets of the images, or single images.
    Then enqueue:
    I remember talking to LabVIEW R&D years ago and hearing that this was a good way to do things because it allowed non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3-D array without the clusters) (R&D - this is what I remember, please correct me if I am wrong).
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded, and I think disk access is occurring as well to obtain memory beyond 16 GB, I am wondering if I need to use a different storage strategy that will allow seamless program operation while still using RAM storage (I do not want to have to recall images from disk).
    I have other CT imaging programs that are running very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application.   I would like to work with LabVIEW R&D to solve this issue.  I am wondering if I should be thinking about establishing say, 10 queues, instead of 1, to address this.  It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600 - 700 MB range with 64-bit LabVIEW.
    With 32-bit LabVIEW, 100 - 200 MB sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM.  We could have used other means such as LV2 globals.  But the idea of clustering the 2-D array (image) and then having a series of those clustered arrays in an array (to give the final structure I showed in my diagram), versus using a 3-D array, is I believe what allowed me to get even this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 GB.  I probably need to have someone examine this code while I am explaining things to them live.  This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem.  In some of my applications, I use the in-place structure for indexing data out of arrays to minimize data copies.  I expect I might have to consider that strategy here as well.  Just a thought.
    What I can do is send someone (in the US), via large file transfer, a 1.3 - 2.7 GB set of image data - and see how they would best advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and how not to make data copies.  The operations that I apply to the images are irrelevant.  It is the storage, movement, and extraction that are causing the problems.  I can also show a screen shot(s) of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how data value references may help here, or how they have helped in one of their applications?  Would using them eliminate copies?  I currently have to wait for the 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don

  • Problem while calling stateless session bean method with large data

    In WebSphere, I am trying to call a stateless session bean's remote interface method with 336 KB of data as its parameter. It is taking almost 44 seconds to start executing the method in the bean. Can anyone tell me what the problem could be? Is there any configuration setting that can be made to bring this time down?
    Note: if I reduce the size of the parameter, the time taken to start executing the method is reduced in proportion to the size. If I do the same thing in WebLogic with a 336 KB parameter, it starts executing the method immediately, without any delay.
    Thanks in Advance
    Regards
    Harish Kumar

    hello,
    what about your internet dialer?
    can you use it to connect via PPPoE (DSL, ADSL, ATM)?
    can you send me the .exe?
    I will test it.
    If I see it working, I will buy it from you if you want.
    best regards
    devlooker
    please write to me at:
    [email protected]

  • Error when executing report with large data selection in Infoview

    Hello,
    This is regarding Error we are facing in BO system.
    We have installed:
    Business Object Enterprise XI 3.1
    Crystal report 2008
    When we try to execute a report in BO InfoView on the production system, we get the following error (after 10 minutes of execution).
    ========================================================================================
    Windows Internet Explorer
    Stop running this script?
    A script on this page is causing Internet Explorer to run slowly.
    If it continues to run, your computer might become unresponsive.
    ========================================================================================
    Please note: we are getting the data via a Crystal report; only execution in InfoView gives the error.
    If we execute the same report with less data (e.g. one day's data), then we get output in InfoView.
    However, when we execute the report with more data (6 months of data), we get the above error after 10 minutes of execution time.
    A helpful reply will be awarded.
    Regards.
    Edited by: Nirav Shah on Dec 14, 2010 6:19 AM

    If it is indeed a server timeout issue, the following document might be useful: http://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/606e9338-bf3e-2b10-b7ab-ce76a7e34432
    It is about troubleshooting time-outs in BusinessObjects Enterprise XI, but a lot of it is still applicable.
    Hope this helps...
    Martijn van Foeken
    Focuzz BI Services
    http://www.focuzz.nl
    http://nl.linkedin.com/in/martijnvanfoeken
    http://twitter.com/mfoeken

  • Working with Large data sets Waveforms

    When collecting data at a high rate (30 kHz) and for a long period (120 seconds), I'm unable to rearrange the data due to memory errors. Is there a more efficient method?
    Attachments:
    Convert2Dto1D.vi ‏36 KB

    Some suggestions:
    Preallocate your final data before you start your calculations.  The build array you have in your loop will tend to fragment memory, giving you issues.
    Use the In Place Element to get data to/from your waveforms.  You can use it to get single waveforms from your 2D array and Y data from a waveform.
    Do not use the Transpose and autoindex.  It is adding a copy of data.
    Use the Array palette functions (e.g. Reshape Array) to change sizes of current data in place (if possible).
    You may want to read Managing Large Data Sets in LabVIEW.
    Your initial post is missing some information.  How many channels are you acquiring and what is the bit depth of each channel?  30kHz is a relatively slow acquisition rate for a single channel (NI sells instruments which acquire at 2GHz).  120s of data from said single channel is modestly large, but not huge.  If you have 100 channels, things change.  If you are acquiring them at 32-bit resolution, things change (although not as much).  Please post these parameters and we can help more.
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • Working with large data and PHP

    Using a backend MySQL database, I'd like to interact with this data using PHP. I got information from one link:
    http://www.sephiroth.it/tutorials/flashPHP/pageable_recordset/
    which is exactly what I need: page the MySQL server every time you need to grab data, and also have a listener that will automatically update the client. However, I can't seem to get it to work. Does anyone know how to make this happen using a repeater and a panel?

    Read the Developer's Guide at
    http://www.adobe.com/support/documentation/en/flex/.
    You'll find everything there!

  • Working with large data

    I have to create a database with a specific distribution of key sizes and data sizes. The key size is a few bytes, and the data size varies over a great range, from a few bytes to some megabytes, with an average size near 64K. The overall size of a database filled with key/data pairs is some gigabytes. One can imagine our key/data pairs forming a context index (inverted file) for a large set of Russian/English texts.
    Could you recommend the best way to configure such a data store? I mean the best random read speed for key/data pairs in a given database, and good enough write speed (for "context index" updating).

    Hi,
    The most important configuration item in your case will be the cache size. The larger the cache, the better performance will be - especially for random read-oriented usage. Documentation about the cache configuration API is here:
    http://www.sleepycat.com/docs/api_c/db_set_cachesize.html
    An article about tuning cache size is available here:
    http://www.sleepycat.com/newsletters/0511/a31_Perf_Size.html
    Selecting the format for the database will also have an impact. Given your description I suggest that hash is likely the best solution - since the data access will be random. You should test with both hash and btree. An article describing the benefits/drawbacks of both is here:
    http://www.sleepycat.com/docs/ref/am_conf/select.html
    Then you might want to adjust the pagesize - given that your data items are generally large, a bigger page size will probably result in better performance. API here:
    http://www.sleepycat.com/docs/api_c/db_set_pagesize.html
    The db_stat utility is a very useful tool for tuning your database. Documentation can be found here:
    http://www.sleepycat.com/docs/utility/db_stat.html
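    To make this concrete, here is a rough sketch of the configuration described above, using the C API calls linked here (it compiles as C++ as well). The 1 GB cache, 64 KB page size, and file name are placeholder assumptions to tune for your workload, and error handling is trimmed for brevity:
        #include <db.h>

        /* Sketch: open a hash database tuned for large values and random reads.
           Check every return value in real code. */
        static DB *open_context_index(void)
        {
            DB *dbp = NULL;
            if (db_create(&dbp, NULL, 0) != 0)
                return NULL;

            /* Cache size: 1 GB in one region; tune to your RAM and working set. */
            dbp->set_cachesize(dbp, 1, 0, 1);

            /* Page size: 64 KB pages suit large data items (64 KB is the maximum). */
            dbp->set_pagesize(dbp, 64 * 1024);

            /* Hash access method, since reads are random; benchmark against DB_BTREE too. */
            if (dbp->open(dbp, NULL, "context_index.db", NULL, DB_HASH, DB_CREATE, 0664) != 0) {
                dbp->close(dbp, 0);
                return NULL;
            }
            return dbp;
        }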
    If you have any specific questions I will be glad to help.
    Regards,
    Alex

  • Loading Matrix Bound to UDO with Large Data Set

    Hello Experts,
    I have been looking on the forums for the best method out there to effectively load a Matrix that is bound to a User Defined Object (UDO).  In short, I will explain to you what I would like to do.  I have a form that has a matrix on it bound to a User Defined Object.  This matrix takes data stored in other UDO forms/tables and processes it to extract new information. 
    Unfortunately, the resulting dataset is quite large (up to 1000 rows).  I realize if this were just a "report" I could easily do this with a Grid.  I also realize if this were just a Matrix bound to a User Defined Table, I could bind it to a DataTable and perform the query that way.  However, since this is a Matrix bound to a DBDataSource (as I would like to have SAP handle any updates/finds) I believe my only options are to try and use a DBDataSource.Query method and try to work with Conditions. 
    The DBDataSource.Query method has not proven to be effective due to the complexity of the query and the multiple tables involved.  I have read from others on the forum that I could just load the matrix by temporarily databinding the matrix to a DataTable and then, after it is loaded, switch the databinding back to the DBDataSource but this does not work as it comes back with an error informing me (rightly so) that there are already rows in the matrix.
    One final option would be to use the User Interface (UI) to cycle through and update each cell of the matrix with the results of a recordset, but, as I said, this can be a large dataset and that could take hours (literally).
    In short, I was wondering if anyone out there can advise me on the most effective options I have.  Is there a way to quickly load a matrix bound to a DBDataSource?  Is there some way I can load the matrix by binding it to a DataTable and then quickly move this information over to the DBDataSource (I already attempted this, and the method I used was as slow as using the UI to update the matrix)?  Are there effective ways to use the DBDataSource.Query method that I do not know about (I cannot find many examples of how this functionality is really used)?  Should I abandon the DBDataSource (though I believe this is the SAP-preferred method) and, if so, is there another technique to appropriately update the database other than using a DBDataSource?  Others have mentioned handling the updates to the database themselves, but I am not sure what this means (maybe it means using SQL UPDATE/INSERT?).  Is there a way to flush matrix information to a DBDataSource if the DBDataSource was not used in the loading and is not currently bound to the matrix?
    Sorry for the numerous amount of questions but thanks for the advise.

            Dim oForm As SAPbouiCOM.Form
            Dim creationPackage As SAPbouiCOM.FormCreationParams
            creationPackage = sbo_application.CreateObject(SAPbouiCOM.BoCreatableObjectType.cot_FormCreationParams)
            creationPackage.UniqueID = "MyFormID"
            creationPackage.FormType = "MyFormID"
            creationPackage.ObjectType = "UDO_TEST"
            creationPackage.BorderStyle = SAPbouiCOM.BoFormBorderStyle.fbs_Fixed
            oForm = sbo_application.Forms.AddEx(creationPackage)
            oForm.Visible = True
            oForm.Width = 300
            oForm.Height = 400
            Dim oItem As SAPbouiCOM.Item
            oItem = oForm.Items.Add("1", BoFormItemTypes.it_BUTTON)
            oItem.Top = 336
            oItem.Left = 5
            oItem = oForm.Items.Add("2", BoFormItemTypes.it_BUTTON)
            oItem.Top = 336
            oItem.Left = 80
            ' Now put an Edit box bound to DocEntry
            oItem = oForm.Items.Add("3", SAPbouiCOM.BoFormItemTypes.it_EDIT)
            oItem.Top = 5
            oItem.Left = 5
            oItem.Width = 100
            Dim oEditText As SAPbouiCOM.EditText = oItem.Specific
            oForm.DataSources.DataTables.Add("oMatrixDT")
            oItem = oForm.Items.Add("oMtrx1", SAPbouiCOM.BoFormItemTypes.it_MATRIX)
            oItem.Top = 20
            oItem.Left = 20
            oItem.Width = oForm.Width - 30
            oItem.Height = oForm.Height - 100
            Dim oMatrix As SAPbouiCOM.Matrix = oItem.Specific
            Dim oColumn As SAPbouiCOM.Column = oMatrix.Columns.Add("#", SAPbouiCOM.BoFormItemTypes.it_EDIT)
            oColumn.TitleObject.Caption = "#"
            oColumn = oMatrix.Columns.Add("oClmn0", SAPbouiCOM.BoFormItemTypes.it_LINKED_BUTTON)
            oColumn.TitleObject.Caption = "BP Code"
            Dim oLinkedButton As SAPbouiCOM.LinkedButton = oColumn.ExtendedObject
            oLinkedButton.LinkedObject = SAPbouiCOM.BoLinkedObject.lf_BusinessPartner
            ' Now bind Columns to UDO Objects in Add Mode
            oEditText.DataBind.SetBound(True, "@UDO_TEST", "DocEntry")
            oMatrix.Columns.Item("oClmn0").DataBind.SetBound(True, "@UDO_TEST1", "U_CARDCODE")
            oForm.DataBrowser.BrowseBy = "3"

  • Just in case any one needs a Observable Collection that deals with large data sets, and supports FULL EDITING...

    the VirtualizingObservableCollection does the following:
    Implements the same interfaces and methods as ObservableCollection<T> so you can use it anywhere you’d use an ObservableCollection<T> – no need to change any of your existing controls.
    Supports true multi-user read/write without resets (maximizing performance for large-scale concurrency scenarios).
    Manages memory on its own so it never runs out of memory, no matter how large the data set is (especially important for mobile devices).
    Natively works asynchronously – great for slow network connections and occasionally-connected models.
    Works great out of the box, but is flexible and extendable enough to customize for your needs.
    Has a data access performance curve so good it’s just as fast as the regular ObservableCollection – the cost of using it is negligible.
    Works in any .NET project because it’s implemented in a Portable Code Library (PCL).
    The latest package can be found on NuGet: Install-Package VirtualizingObservableCollection. The source is on GitHub.

    Good job, thank you for sharing
    Best Regards,
    Please remember to mark the replies as answers if they help
