Best way to read tracks of a MIDI file

What is the best way to read the events of the tracks in a MIDI file sequentially by time? Should the tracks be read serially (track 0 all the way through, then track 1, etc.) or in parallel, merging events from all tracks by some measure of time?

Depends on what you're doing to it, I suppose.
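
If the goal is to step through everything in playback order, a common approach is to read each track serially, collect the events, and then merge them by tick. A minimal sketch using the standard javax.sound.midi API (the file name is a placeholder):

    import javax.sound.midi.*;
    import java.io.File;
    import java.util.*;

    public class MidiEventDump {
        public static void main(String[] args) throws Exception {
            Sequence sequence = MidiSystem.getSequence(new File("song.mid")); // placeholder path
            Track[] tracks = sequence.getTracks();

            // Read every track serially, remembering where each event came from.
            List<long[]> index = new ArrayList<>(); // {tick, trackNo, eventNo}
            for (int t = 0; t < tracks.length; t++) {
                for (int e = 0; e < tracks[t].size(); e++) {
                    index.add(new long[] { tracks[t].get(e).getTick(), t, e });
                }
            }

            // Then sort by tick so events come out in time order across all tracks.
            index.sort(Comparator.comparingLong(a -> a[0]));

            for (long[] entry : index) {
                MidiEvent event = tracks[(int) entry[1]].get((int) entry[2]);
                System.out.printf("tick=%d track=%d status=0x%02X%n",
                        event.getTick(), entry[1], event.getMessage().getStatus());
            }
        }
    }

For plain playback you would normally just hand the Sequence to a Sequencer and let it do the merging; the manual merge mainly matters when you want to process the events yourself.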

Similar Messages

  • What's the best way to read/write data from a file (preferably a *.txt file)?

    As in the title. I have revived a couple of old VIs to read and write three numbers and a 1D array of clusters to/from a *.txt file. The functionality is not very user friendly, and it would also be useful if one could open the text files (manually, not through LabVIEW) and still be able to see/understand what was there.
    I was wondering if anyone would be able to come up with a more efficient and/or user-friendly method (compatible with LabVIEW 6.1).
    James
    Never say "Oops." Always say "Ah, interesting!"
    Attachments:
    Read Spec.vi (110 KB)
    Write Spec.vi (58 KB)

    My primary goal is to have something that works and is easy and comprehensive to operate.  Generating a human-readable file is just a bonus but would be nice if it could be achieved.
    I enclose pictures of the initial file dialog (for both loading and saving the data, referred to as Spec(s) henceforth), and of the front panel screen seen when a) loading a spec and b) saving a spec. In the file dialog, you have to already know the exact string to input, or else you'll just be told the file doesn't exist (this applies to both loading and saving). When saving a spec, you cannot see any files previously saved, nor even any previous specs saved within the file. This means that one can unwittingly overwrite specs without realising it.
    I'm not sure if I've explained this very well, but the current functionality means that far too much can go wrong. Additionally, if you forget the file name, you'll have no way of knowing what it should be (the data files are stored on a 'locked' part of our network accessible only by Admin or through the LabVIEW executable).
    Never say "Oops." Always say "Ah, interesting!"
    Attachments:
    File Dialog.JPG (23 KB)
    Select The Required Test Spec.JPG (10 KB)
    Name of specification.JPG (6 KB)

  • What is the best way to read and manipulate large data in Excel files and show them in SharePoint?

    Hi,
    I have a large Excel file that has 700,000 records in it. The Excel file has a few columns that change every day.
    What is the best way to read the data from the Excel file in the fastest and most efficient way?
    Second problem:
    I have one Excel file that has many rows; each row contains some data with certain keywords.
    What I want is to segregate the rows into respective sheets (tabs) in the workbook.
    For example, the rows in Excel contain the following data:
    1. alfa
    2. beta
    3. gama
    4. beta
    5. gama
    6. gama
    7. alfa
    I want there to be three tabs now, one for each of the keywords alfa, beta, and gama.

    Hi,
    I don't really see any better options for SharePoint. SharePoint uses another product called 'Office Web Apps' to allow users to view/edit Microsoft Office documents (Word, Excel etc.). But the web version of Excel doesn't support that many records, and there are size limitations (the default max size is probably 10 MB).
    Regarding the second problem, I think you need some custom solution (like a SharePoint timer job/web part) to read and present the data.
    However, if you can reduce the Excel file records to something near 16k (the number of rows supported in the web version of Excel), then you can use SharePoint Excel Services to refresh the data in the Excel file in SharePoint automatically from some external sources.
    Thanks,
    Sohel Rana
    http://ranaictiu-technicalblog.blogspot.com
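
    For the second problem, if processing the workbook outside the browser is an option, a small utility using Apache POI could do the segregation. A rough sketch, assuming the keyword sits in the first cell of each row and the file names are placeholders:

        import org.apache.poi.ss.usermodel.*;
        import java.io.File;
        import java.io.FileOutputStream;

        public class SplitRowsByKeyword {
            public static void main(String[] args) throws Exception {
                Workbook wb = WorkbookFactory.create(new File("input.xlsx")); // placeholder input file
                Sheet source = wb.getSheetAt(0);
                DataFormatter fmt = new DataFormatter();

                for (Row row : source) {
                    Cell keyCell = row.getCell(0); // assume the keyword is in the first column
                    if (keyCell == null) continue;
                    String key = fmt.formatCellValue(keyCell).replaceAll("[^A-Za-z]", "").toLowerCase();
                    if (key.isEmpty()) continue;

                    // One tab per keyword: create it the first time the keyword is seen.
                    Sheet target = wb.getSheet(key);
                    if (target == null) target = wb.createSheet(key);

                    // Append a copy of the row (as text) to that keyword's sheet.
                    Row copy = target.createRow(target.getPhysicalNumberOfRows());
                    for (Cell cell : row) {
                        copy.createCell(cell.getColumnIndex()).setCellValue(fmt.formatCellValue(cell));
                    }
                }

                try (FileOutputStream out = new FileOutputStream("segregated.xlsx")) {
                    wb.write(out);
                }
                wb.close();
            }
        }

    Keep in mind that loading 700,000 rows through the regular POI object model is memory-hungry; a file that size would likely need POI's streaming/event APIs instead, and inside SharePoint itself the timer job / web part route above still applies.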

  • What is the best way to read data from an iPhone if you lost your iTunes data after a crash?

    What is the best way to read data from an iPhone if you lost your iTunes data after a crash?

  • What's the best way for reading this binary file?

    I've written a program that acquires data from a DAQmx card and writes it to a binary file (attached file and picture). The data that I'm acquiring comes from 8 channels, at 2.5 MS/s, for at least 5 seconds. What's the best way of reading this binary file, knowing that:
    - I'll also need it on graphs (only after acquiring)
    - I also need to see these values and use them later in MATLAB.
    I've tried the "Array to Spreadsheet String", but LabVIEW runs out of memory (even if I don't use all of the 8 channels, but only 1).
    LabVIEW 8.6
    Attachments:
    AcquireWrite02.vi (15 KB)
    myvi.jpg (55 KB)

    But my real problem, at least now, is how I can divide the information to get not just one graph but eight.
    I can read the file, but I get this (with only two channels):
    So what I tried was, using a for loop, saving 250 elements in different arrays and then writing them to the .txt file. But it doesn't come out right... I used 250 because that's what I got from the graph: every 250 points it plots the other channel.
    Am I missing something here? How should I treat the information coming from the binary file, if not the way I'm doing it?
    (Attached are the .vi files I'm using to save in the .txt format.)
    (EDITED: I just saw that I was dividing my graph's data by 4 just before plotting it... so it isn't 250 but 1000 elements for each channel. Still, the problem has not been solved.)
    Attachments:
    mygraph.jpg (280 KB)
    Read Binary File and Save as txt - 2 channels - with SetFilePosition.vi (14 KB)
    Read Binary File and Save as txt - with SetFilePosition_b_save2files_with_array.vi (14 KB)
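
    If the writing VI really does store the samples in fixed-size blocks per channel (1000 values of channel 0, then 1000 of channel 1, and so on), de-interleaving is mostly bookkeeping. A rough sketch in Java, assuming big-endian double-precision samples, 8 channels, and the 1000-sample block size mentioned above (all of which have to match what was actually written):

        import java.io.BufferedInputStream;
        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.FileInputStream;
        import java.util.ArrayList;
        import java.util.List;

        public class DeinterleaveChannels {
            public static void main(String[] args) throws Exception {
                final int channels = 8;     // number of acquired channels (assumption)
                final int blockSize = 1000; // samples written per channel per block (assumption)

                // One growing list of samples per channel.
                List<List<Double>> perChannel = new ArrayList<>();
                for (int c = 0; c < channels; c++) perChannel.add(new ArrayList<>());

                try (DataInputStream in = new DataInputStream(
                        new BufferedInputStream(new FileInputStream("acquired.bin")))) { // placeholder file
                    outer:
                    while (true) {
                        for (int c = 0; c < channels; c++) {
                            for (int s = 0; s < blockSize; s++) {
                                try {
                                    perChannel.get(c).add(in.readDouble()); // big-endian, LabVIEW's default byte order
                                } catch (EOFException eof) {
                                    break outer;
                                }
                            }
                        }
                    }
                }

                for (int c = 0; c < channels; c++) {
                    System.out.println("channel " + c + ": " + perChannel.get(c).size() + " samples");
                }
            }
        }

    At 2.5 MS/s on 8 channels this is far too much data to keep in memory for long runs, so in practice each block would be plotted, decimated, or appended to a per-channel file as it is read.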

  • Best way to keep track of family members, etc

    Hi-
    I am a hobby photographer, and have been playing around with the trial version Aperture to replace iPhoto, and I have some questions.
    1. What is the best way to keep track of family members? For example, I like to basically keep track by setting keywords, adding a keyword for each family member & dog. Is this still the way to do it in aperture, or is there a different solution?
    2. If keywords are still the way, what is the fastest way to do this (in particular for multiple files at a time)? The various keyword buttons & shortcuts seem to apply to only one image at a time, whereas the batch change option seems to rely only on freeform text, which makes it likely that somewhere along the line I will get a typo.
    Thanks,
    -jamie

    You can apply a keyword to multiple images by selecting a group of images and dropping a keyword from the keyword HUD onto one of the selected images.
    I created keywords in the keyword HUD for family members, events (Christmas, Birthday, Vacation etc.), and some other things. Keywords you have already typed into an image will be in the keyword HUD already. I created a hierarchy of keywords such as:
    People
    -Family
    --(names)
    -Friends
    --(names)
    Events
    -Holidays
    --Christmas
    --Halloween
    Sorry for the dashes to show the indentations. I tried spaces but they didn't work for some reason.

  • What's the best way to read JSON data?

    Hi all;
    What is the best way to read in JSON data? And once it's read in, is the best way to use it to turn it into XML and apply XPath?
    thanks - dave

    jtahlborn wrote:
    Without a better understanding of what your definition of "use it" is, this question is essentially unanswerable. Jackson is a fairly popular library for translating JSON to/from Java objects. The JSON website provides a very basic library for parsing to/from XML. Which one is the "best" depends on what you want to do with it.
    Good point. We have a reporting product (http://www.windward.net) and we've had a number of people ask us for JSON support. But how complex the data is and what they want to pull is all over the place. The one thing that's common is that they generally want to pull down the JSON data and then put specific items from it in the report.
    XML/XPath struck me as a good way to do this for a couple of reasons. First, it seems to map well to the JSON data layout. Second, it provides a known query language. Third, we have a really good XPath wizard and we could then use it for JSON also.
    ??? - thanks - dave
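
    For what it's worth, reading the JSON into Jackson's tree model and pulling specific items directly is often enough, without a detour through XML. A minimal sketch (the file and field names are hypothetical):

        import com.fasterxml.jackson.databind.JsonNode;
        import com.fasterxml.jackson.databind.ObjectMapper;
        import java.io.File;

        public class JsonTreeExample {
            public static void main(String[] args) throws Exception {
                ObjectMapper mapper = new ObjectMapper();
                // Parse the whole document into a navigable tree.
                JsonNode root = mapper.readTree(new File("report-data.json")); // hypothetical file

                // path() returns a "missing" node instead of null when a field is absent,
                // so chained lookups don't throw.
                String customer = root.path("customer").path("name").asText();
                for (JsonNode item : root.path("items")) { // iterates array elements
                    System.out.println(item.path("id").asInt() + " " + customer);
                }
            }
        }

    Whether to stop there or still convert the tree to XML for XPath mostly comes down to how much you want to reuse the existing XPath wizard.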

  • Best way to read chars from InputStream

    Hope this is not too much of a newbie question.
    Suppose I have an unbuffered InputStream inputStream, what is the best way to read chars from it (in terms of performance)?
    Reader reader = new BufferedReader(new InputStreamReader(inputStream));
    reader.read();
    or
    Reader reader = new InputStreamReader(new BufferedInputStream(inputStream));
    reader.read();
    Is there a difference between the two and if so, which one is better?
    thanks.

    If you are reading using a buffer of your own, then adding a buffer for binary data is a bad idea.
    However, for text, using a BufferedInputStream could be better as it reduces calls to the OS.
    If it really matters, I suggest you do a simple performance test which runs for at least a few seconds to see what the difference is. (You should run the test more than once.)
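    A rough sketch of the kind of test suggested above, timing both variants on a file given on the command line (run it several times; the first pass is skewed by warm-up):

        import java.io.*;

        public class ReaderTimingTest {

            // Read the whole stream one char at a time and return the elapsed nanoseconds.
            static long time(Reader reader) throws IOException {
                long start = System.nanoTime();
                try (Reader r = reader) {
                    while (r.read() != -1) {
                        // consume one character per call, as in the question
                    }
                }
                return System.nanoTime() - start;
            }

            public static void main(String[] args) throws IOException {
                File file = new File(args[0]);

                // Variant 1: buffer the decoded characters.
                long t1 = time(new BufferedReader(new InputStreamReader(new FileInputStream(file))));

                // Variant 2: buffer the raw bytes before decoding.
                long t2 = time(new InputStreamReader(new BufferedInputStream(new FileInputStream(file))));

                System.out.printf("BufferedReader:      %d ms%n", t1 / 1_000_000);
                System.out.printf("BufferedInputStream: %d ms%n", t2 / 1_000_000);
            }
        }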

  • Best way to read from a file

    What would be the best way to read from a file? Which classes do I need to use?
    I have to write a program which reads data from a comma-separated flat file, parses it, and, after applying some business logic, inserts it into a database.
    I will have to read the data line by line.
    Any help????

    I would use:
        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.StringTokenizer;

        public class FileData {
            private String[][] data;

            public void readData() {
                try {
                    data = new String[this.countRows("comp.txt")][];
                    BufferedReader br = new BufferedReader(new FileReader("comp.txt"));
                    for (int x = 0; x < data.length; x++) {
                        // The delimiter here is "?"; use "," for a comma-separated file.
                        StringTokenizer temp = new StringTokenizer(br.readLine(), "?");
                        data[x] = new String[temp.countTokens()];
                        for (int y = 0; y < data[x].length; y++) {
                            data[x][y] = temp.nextToken();
                        }
                    }
                    br.close();
                } catch (Exception e) {
                    System.out.println(e.toString());
                }
            }

            private int countRows(String f) {
                int t = 0;
                try {
                    BufferedReader brCountRows = new BufferedReader(new FileReader(f));
                    while (brCountRows.readLine() != null) {
                        t++;
                    }
                    brCountRows.close();
                } catch (Exception e) {
                    System.out.println(e.toString());
                }
                return t;
            }
        }
    It works deliciously!
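
    On a current JDK, a more compact sketch of the same idea for a comma-separated file (reading line by line and splitting on commas; the file name is a placeholder):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class CsvReader {
            public static void main(String[] args) throws IOException {
                List<String[]> rows = new ArrayList<>();
                try (BufferedReader br = new BufferedReader(new FileReader("comp.txt"))) {
                    String line;
                    while ((line = br.readLine()) != null) {
                        rows.add(line.split(",")); // one String[] per line, split on commas
                    }
                }
                System.out.println("Read " + rows.size() + " rows");
            }
        }

    Each String[] can then be handed to the business logic and inserted into the database, for example through a JDBC PreparedStatement.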

  • This community is great -- very helpful. What is the best way to read "word" docs on my brand new iPad 2? Is there an app for that?

    This community is great -- very helpful. What is the best way to read "word" docs on my brand new iPad 2? Is there an app for that?

    If you're just reading them from email attachments, you can just open it to "view" the document.
    However, if you wish to do editing and work, you'll need an application. I use Documents to Go, but there are other versions that people are happy with - do a search in the App store for "office suite" or "word processing".
    The next challenge is getting docs back and forth. Doc2Go and others have their own ways to physically sync docs when connected to a computer; you can also email changed/revised/new documents from the iPad to yourself. However, I use a "cloud service" called DropBox that stores docs in the cloud (so I can get to them anywhere) - I paid for the Premium version of Docs2Go to allow that kind of syncing.

  • The best way to read a properties file

    Hi folks,
    What is the best way to read a properties file, i.e., using ResourceBundle or FileInputStream? If so, how do I do it? My properties file is in WEB-INF/classes/myprop.properties. It's urgent.
    Thanks & Regards,
    Rajeshwar.

    WEB-INF/classes should be in your classpath. The web container takes care of that.
    All you have to do is call ResourceBundle.getBundle("myprop").
    It'll append the .properties for you.
    http://java.sun.com/j2se/1.5.0/docs/api/java/util/ResourceBundle.html#getBundle(java.lang.String)
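
    A minimal sketch of that suggestion (the key name is a placeholder):

        import java.util.ResourceBundle;

        public class PropsExample {
            public static void main(String[] args) {
                // Looks up myprop.properties on the classpath (WEB-INF/classes in a web app);
                // the ".properties" suffix is added automatically.
                ResourceBundle bundle = ResourceBundle.getBundle("myprop");
                System.out.println(bundle.getString("someKey")); // "someKey" is a placeholder key
            }
        }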

  • HT3847 What is the best way to separate copied MP3 from AIFF files in my library?

    What is the best way to separate duplicate MP3 from AIFF files in my library?

    Zice wrote:
    I want higher resolution than afforded in the original download.
    Then why are you converting iTunes purchases?
    You cannot get higher resolution by converting the original. This goes for converting anything, not just iTunes purchases.
    Creating an AIFF will simply make the file 10 times as large with zero increase in quality.
    Don't really want to debate value of creating the new version.
    Agreed.
    You are simply wasting time and drive space converting iTunes purchases to AIFF.

  • Best way to spool DYNAMIC SQL query to file from PL/SQL

    Best way to spool a DYNAMIC SQL query to a file from PL/SQL [Package], not SQL*Plus.
    I'm looking for suggestions on how to create an output file (fixed width and comma delimited) from a SELECT that is dynamically built. Basically, I've got some tables that are used to define the SELECT and to describe the output format. For instance, one table has the SELECT while another is used to define the column "formats" (e.g., column order, justification, format mask, default value, min length, ...). The user has an app that they can use to customize the output, while leaving the gathering of the data untouched. I'm trying to keep this formatting and/or default logic out of the actual query. This led me to a problem.
    Example query:
    SELECT CONTRACT_ID, PV_ID, START_DATE
    FROM CONTRACT
    WHERE CONTRACT_ID = <<value>>
    Customization table:
    CONTRACT_ID : 2, Numeric, Right
    PV_ID : 1, Numeric, Mask(0000)
    START_DATE : 3, Date, Mask(mm/dd/yyyy)
    The first value (ColumnOrder) is the kicker, as is the fact that the number of columns is dynamic. Technically, if I could use SQL*Plus, then I could just use SPOOL. However, I'm not.
    So basically, I'm trying to build a generic routine that can take a SQL string, execute the SELECT, and map the output to a file using data from another table.
    Any suggestions?
    Thanks,
    Jason

    You could build the select statement within PL/SQL and open it using a cursor variable. You could write it to a file using the package 'UTL_FILE'. If you want to display the output using SQL*Plus, you could have an out parameter as a ref cursor.

  • Best way to import a 200GB single dump file

    I was given a single 200 GB dump file containing a full export of a schema. Can anyone please tell me what's the best way to import such a huge .dmp file? I need to get this import done ASAP in QA for testing, which will let us solve some production issues. Step-by-step instructions, if possible, would be really helpful for me to complete my task ASAP.
    Thanks in Advance,
    krim.

    Hi Krim,
    Our dump files are normally never that big, so you may be facing some other issue here.
    If your dump was a full DB schema dump like:
    $ exp username/password file=full.dmp parameter-list
    then the import should first drop the user in the target system
    SQL> drop user username cascade;
    this is to drop the existing schema before importing
    SQL> Create user according to your reqs
    $ imp username/password file=full.dmp full=y commit=y ignore=y
    I don't know which environment you have to run this in, but in our case, for instance, using an 8 x 1.6 GHz Itanium2 Montecito (with an EMC CLARiiON disk array), a 14 GB dump takes about a couple of hours to import. It's also true that Oracle imp (did you use exp or expdp?) is not able, as far as I understand, to achieve parallelism the way impdp can, which could speed up the import time in the case of multiple huge tables.
    Another thing you may want to check is whether you have archive logging on, since the import will log there, consuming time.
    Cheers,
    Mike

  • Best way to read data sources in parallel

    Hi,
    I'm looking for conceptual help as I start a project. I am trying to figure out the best way to get data from several sources at different timings and deliver them to a main vi.
    I have 4 systems, which each work well on their own (OK, one doesn't work yet, but let's assume that can be fixed).
    One system reads from 2 pH meters on serial ports. The meters are slow to respond, so it takes about 2 minutes to read 4 channels of data. I save these data to a file every 10 minutes.
    One system reads from a CO2 meter on the USB port. It reads the data every second, and does some averaging. Every 2-10 minutes, it saves the average to a file and then sends a command to the parallel port to switch the input to the meter.
    The third system reads from 6 valves, each on a serial port. These also take time, probably several minutes to poll all 6. These data will also be saved to a file.
    The 4th system reads a bank of temperature probes on the USB port. These get polled every few seconds and saved to a file every few minutes.
    Now that these individual routines are working, I am trying to create a front end that will display all the data in one place and allow me to set all the parameters from a single place. I would also like the possibility of using the data from one source at another place (for instance, having the output of the temperature probes sent to the pH meters to adjust their calibration). At this point, I get confused as to the best way to proceed.
    It seems like if I just want to read the data from each source, I could simply put all 4 routines together in a single vi (oh, what a mess that would be to read). Maybe I should start this way?
    However, if I want to have any communication between the different data sources, it seems like I will either need to use queues or VI server. I sort of envision a vi that lets me configure the various ports and the file operations and then can turn on monitoring of any or all of the various inputs. Each of them will do their thing at their own time and the main routine will simply display whatever data they deliver whenever they have new data. Fortunately, nothing is particularly time-critical, nor does it need to run fast.
    My questions: Am I correct in how I'm thinking about getting this to work?   Is there a clear choice between queues or vi server? I've looked at several examples of each, but without having done something like this before, it is hard for me to tell which is better.
    Thanks for any suggestions.
    mike

    Hi Mike,
    I think that you are on the right track with your thinking process. You might be able to implement this using queues. I'm not exactly sure how you would do it with VI server since it is just a set of functions that allows you to dynamically control front panel objects, VIs, and the LabVIEW environment. However, there are some great resources available with using queues for this type of application. I'm including the link to another discussion forum that had a very similar question to yours. There is a good example of using queues within this forum post. Also, there is a great example in NI Developer Zone about using queues and some other good ones in the NI Example Finder (just search 'queues' and you should get a few results). I hope this helps!
    Carla
    National Instruments
    Applications Engineer
