Analyzing large TDMS files in time intervals

We have TDMS files that are several hours long (sampled at 1024 S/s).
We would like to measure the number of zero crossings and peaks in blocks one second long over the entire recording. I can see that FFT has a Time Interval function, but I don't see any function of this nature for statistics.
As an additional "nice to have", we would like these blocks to overlap by 75%.
Carsten Thomsen

Hi Carsten,
Are you wanting to convert a Tachometer pulse to a Revolutions channel?  If so, I have a VBScript I can send you.  If not, what do you do with the number of zero crossings and peaks?  Are you counting just negative peaks/zero crossings, just positive peaks/zero crossings, or both?  How do you define a peak?
Brad Turpin
DIAdem Product Support Engineer
National Instruments
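
For what it's worth, the overlapping-block analysis Carsten describes can be sketched outside DIAdem in Python with the third-party npTDMS library. This is a sketch only: the group and channel names are placeholders, and a "peak" is defined naively as a sample greater than both of its neighbors.

    # Sketch only: 1-second blocks with 75% overlap, counting zero
    # crossings and simple local maxima per block.
    import numpy as np
    from nptdms import TdmsFile  # pip install npTDMS

    fs = 1024                # sample rate (S/s)
    block = fs               # 1-second blocks
    step = block // 4        # 75% overlap -> advance a quarter block

    with TdmsFile.open("recording.tdms") as f:   # streaming mode
        ch = f["Group"]["Channel"]               # assumed names
        n = len(ch)
        for start in range(0, n - block + 1, step):
            x = ch.read_data(offset=start, length=block)
            crossings = np.count_nonzero(np.signbit(x[1:]) != np.signbit(x[:-1]))
            peaks = np.count_nonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))
            print(start / fs, crossings, peaks)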

Similar Messages

  • Analyzing large TDMS files

    I am developing an application that will measure an AC voltage at 100 kHz for 4 hours. I need to calculate the voltage and frequency drift during 30-second intervals. My plan is to parse the data into small chunks (typically 1 second), calculate the RMS voltage and frequency, and determine the magnitude of any drift. I have been able to collect data during short time intervals and analyze it with the Tone Measurement VI and RMS VI, but I am not sure how to break up large data files into smaller chunks for analysis and store the results.

    When you read a TDMS file, you can specify how many samples to read and where to start reading from.  All you need to do is read the data in chunks and process each chunk individually.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
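
    In Python terms, the chunked read might look like the following sketch (using the third-party npTDMS library; the file, group, and channel names are placeholders):

        # Sketch only: read a long TDMS recording in 1-second chunks
        # and compute the RMS of each chunk.
        import numpy as np
        from nptdms import TdmsFile

        fs = 100_000              # 100 kHz sample rate
        chunk = fs                # 1-second chunks

        with TdmsFile.open("capture.tdms") as f:     # streaming, low memory
            ch = f["Measurement"]["Voltage"]         # assumed names
            for start in range(0, len(ch), chunk):
                data = ch.read_data(offset=start,
                                    length=min(chunk, len(ch) - start))
                rms = np.sqrt(np.mean(np.square(data)))
                # store rms (and a frequency estimate) per chunk for drift analysis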

  • Signal Express Large TDMS File Recording Error

    Hello,
    I have the following application and I am looking for some tips on the best way to approach the problem with Signal Express:
    I am attempting to use SignalExpress 2009 (Sound and Vibration Assistant) to collect random vibration data on three channels over an extended period of time -- about 20 hours total.  My sample rate is 2 kHz. Sampling at that rate over that period of time involves creating a very large TDMS file, which is intended for various types of analysis in SignalExpress or another application later on. One of the analysis functions to be done is a PSD (Power Spectral Density) plot to determine the vibration levels distributed over a band of frequencies during the log.
    My original solution was to collect a single large TDMS file. I did this with the SignalExpress recording options configured to save and restart "in current log" after 1 hour's worth of data is collected. I configured it this way because if there is a crash or sudden loss of power during data collection, I wanted to ensure that at most an hour's worth of data would be lost. I tested this option and the integrity of the file after a crash by killing the SignalExpress process in the middle of recording the large TDMS file (after a few save-log-file conditions had been met). Unfortunately, when I restart SignalExpress and try to load the log file data in playback mode, an error indicating "TDMS Data Corrupt" (or similar) is displayed. My TDMS file is large, so it obviously contains some data; however, SignalExpress does not index its time and I cannot view the data within the file. The .tdms_index file is also present, but the meta data.txt file is not generated. Is there any way to ensure that I will have at least partially valid data that can be processed from a single TDMS file in the event of a crash mid-logging? I don't have much experience dealing with random vibration data, so are there any tips for generating vibration-level PSD curves for large files over such a long time span?
    My solution to this problem thus far has been to log the data to separate .tdms files, about an hour in length each. This should result in about 20 files in my final application. Since the PSD ends up being a statistical average over the whole time period, I plan on generating a curve for each of these files and averaging all 20 of them together to get the overall vibration PSD curve for the 20-hour period.

    JMat,
    Based on the description of your application, I would recommend writing the data to a "new log" every hour (or more often). Based on some of my testing, if you use "current log" and S&V Assistant crashes, the entire TDMS file will be corrupted. This seems consistent with what you're seeing.
    It would be good if you could clarify why you're hoping to use "current log" instead of "new log". I'll assume an answer so I can provide a few more details in this response. I assume it's because you want to be able to perform the PSD over the entire logged file (all 20 hours). And the easiest way to do that is if all 20 hours are recorded in a continuous file. If this is the case, then we can still help you accomplish the desired outcome, but also ensure that you don't lose data if the system crashes at some point during the monitoring.
    If you use "new log" for your logging configuration, you'll end up having 20 TDMS files when the run is complete. If the system crashes, any files that are already done writing will not be corrupted (I tested this). All you need to do is concatenate the files to make a single one. If this would work for you, we can talk about various solutions we can provide to accomplish this task. Let me know.
    Now there is one thing I want to bring to your attention about logging multiple files from SignalExpress, whether you use "current log" or "new log". The Windows OS is not deterministic, meaning it cannot guarantee how long it takes for an operation to complete. For your particular application, this basically means that between log files there will be a short gap in time during which the data is not being saved to disk. Based on my testing, this gap could be between 1-3 seconds; it depends heavily on how many other applications Windows has running at the same time.
    So when you concatenate the signals, you can choose to concatenate them "absolutely", meaning there will be a 1-3 second gap between the different waveforms recorded. Or you can concatenate them to assume there is no time gap between logs, resulting in a pseudo-continuous waveform (it looks continuous to you and the analysis routine).
    If neither of these options are suitable, let me know.
    Thanks, Jared 
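
    As an illustration of the "pseudo-continuous" option, here is a rough Python sketch that concatenates hourly TDMS logs and computes one PSD over the result (third-party npTDMS plus SciPy; the file, group, and channel names are assumptions, and all channel data is held in memory):

        # Sketch only: splice hourly logs end-to-end, ignoring the 1-3 s
        # gaps between them, then compute a Welch PSD.
        import glob
        import numpy as np
        from nptdms import TdmsFile
        from scipy.signal import welch

        fs = 2000                                    # 2 kHz sample rate
        parts = []
        for path in sorted(glob.glob("vib_log_*.tdms")):
            with TdmsFile.open(path) as f:
                parts.append(f["Vibration"]["Ch0"].read_data())
        signal = np.concatenate(parts)
        freqs, psd = welch(signal, fs=fs, nperseg=8192)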

  • Opening large TDMS files in Excel takes forever; can anything be done?

    After recording some DAQ channels for 4 hours, my file is 120 MB, with about 20 columns of data and 150k rows. Opening this in Excel takes at least 5 minutes, several "Not Responding" screen fades, and a growing fear that all my data is impossible to get to.
    What can I do about this? Is there a split-TDMS-file option I can use, or a way to speed up Excel?
    Solved!
    Go to Solution.

    The solution is to not make the TDMS files so large. Perhaps modify the code so it starts a new file every half hour or so. Could you post a screenshot of the part of the code that is doing the saving?
    Something that just occurred to me is that the TDMS file format is optimized for writing, not reading, so data is just streamed into the file with an index keeping track of what goes with what. The result is a file where the data from individual channels can be very fragmented, like parts of a file on a hard drive. I think there is a routine for defragging a TDMS file. Alternatively, you could save the data in a temporary file (not necessarily TDMS) and resave it to the final TDMS file after all the data is collected.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps
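
    For readers outside LabVIEW: the defragment-by-rewriting idea Mike mentions can be sketched with the third-party npTDMS Python library (LabVIEW itself provides a TDMS Defragment function). Each channel is read whole and rewritten as one contiguous segment, so this sketch assumes the channels fit in memory:

        # Sketch only: rewrite a fragmented TDMS file contiguously.
        from nptdms import TdmsFile, TdmsWriter, ChannelObject

        with TdmsFile.open("fragmented.tdms") as src, \
                TdmsWriter("contiguous.tdms") as dst:
            for group in src.groups():
                for channel in group.channels():
                    data = channel.read_data()   # whole channel, one array
                    dst.write_segment(
                        [ChannelObject(group.name, channel.name, data)])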

  • Break large TDMS file into small files

    Hello all,
    My TDMS file is around 3 GB, and it needs to be split into files of around 10 MB each.
    I ran McRorie's splitFiles.vi (15 KB) from this page and set the number of samples per file to 5000000; however, I cannot get the results I need. Every small file is only 1 KB, with no data inside. What could the problem be?
    I also tried to write a VI based on the sample VI (Read TDMS File) by adding one "Write to Measurement File.vi". However, when I set the small file size to 10 MB inside the "Write to Measurement File.vi", the first file is around 20 MB, the next few files may be correct at 10 MB, and then it just stops splitting, ending with a file even larger than the original. I uploaded my VI here; maybe someone can help find the mistake for me.
    Thanks very much!
    Wuwei Mao
    Solved!
    Go to Solution.
    Attachments:
    Read TDMS File.vi 54 KB

    Hi Wuwei,
    After giving the correct data type to the TDMS Read node in splitFiles.vi, it works as expected (see the two attached VIs: createFile.vi and the modified splitFiles.vi).
    Because I don't know how you created your TDMS file, I wrote a new 3 GB TDMS file, which has one group and one channel. The data type of the samples is unsigned 16-bit integer, and the total number of samples is 1610612736. Then I set the number of samples per file to 5000000, as you did. So after splitting, each file's size is 5000000 * (16/8) bytes (around 10 MB).
    Please make sure the following steps have been done before you run splitFiles.vi:
    1. The TDMS file to be split has been put at the proper path;
    2. The correct group and channel names have been given;
    3. The correct data type has been given to the TDMS Read node.
    Your second option, using "Write to Measurement File.vi" to split the TDMS file, will lose some information, such as group and channel names, so I suggest using the method from splitFiles.vi to accomplish your goal.
    Jie Zheng
    NI R&D
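
    The same arithmetic and split can be sketched in Python with the third-party npTDMS library (group and channel names are placeholders): 5000000 u16 samples per file is 5000000 * 2 bytes, i.e. roughly 10 MB per output file.

        # Sketch only: split one large channel into ~10 MB TDMS files.
        from nptdms import TdmsFile, TdmsWriter, ChannelObject

        samples_per_file = 5_000_000
        with TdmsFile.open("big.tdms") as src:
            ch = src["Group"]["Channel"]
            for i, start in enumerate(range(0, len(ch), samples_per_file)):
                data = ch.read_data(offset=start,
                                    length=min(samples_per_file, len(ch) - start))
                with TdmsWriter("part_%03d.tdms" % i) as dst:
                    dst.write_segment([ChannelObject("Group", "Channel", data)])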

  • Large XML files containing time based data

    Hi,
    I have an extremely large XML file, gigabytes in size. It holds info for displaying graphical representations for users, one of which is the updated view at every time instance. I'm stuck handling this large XML file. Obviously I tried using a DOM parser and got an out-of-memory exception. I thought of using SAX, as it's event based, but then I'd be stuck going backward through the representation: I need users to go backward and forward through the graphical representation at different time instances.

    Is it gigabytes?
    And this file gets modified by some external application(s), rather than being created one time?
    And you want to keep reprocessing it at the same time other application(s) are modifying it?
    If yes to all of those, then I doubt there is any real solution except to copy it. If you attempt to sync it with the file system, then you are either going to be locking it, preventing updates, or you are going to end up with corrupted data and thus ill-formed XML.
    If the time data is historical, in that a 'user' does something and a new entry is made, and you want to report in various ways on that data, then parsing it into a database would seem like a reasonable solution.
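
    The parse-into-a-database approach can be sketched with event-based parsing, for example in Python (the element and attribute names are made up; in the poster's Java setting, a SAX handler plus JDBC would play the same role):

        # Sketch only: stream a huge XML file into SQLite so time instances
        # can be looked up forward and backward without re-parsing.
        import sqlite3
        import xml.etree.ElementTree as ET

        db = sqlite3.connect("frames.db")
        db.execute("CREATE TABLE IF NOT EXISTS frame (t REAL, payload TEXT)")

        for event, elem in ET.iterparse("huge.xml", events=("end",)):
            if elem.tag == "frame":                  # assumed element name
                db.execute("INSERT INTO frame VALUES (?, ?)",
                           (float(elem.get("time")),
                            ET.tostring(elem, encoding="unicode")))
                elem.clear()                         # free parsed elements
        db.commit()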

  • How to get the last data point from a TDM file in LabVIEW?

    Hello,
    I am using LabVIEW to analyze some rather large TDM files, and I need a way to get only the last data point.  So far, the only way I have been able to accomplish this is by reading the entire file.  Is there a property in the TDM file or a function in LabVIEW that will allow me to get the index of the last item in a channel?  
    Thanks!
    Christina

    Do you want to avoid reading the whole file and still get the index of the last value of a channel? Is there a specific reason? I am not sure you can do it without loading the whole file, but the easiest way would then be to use the array function "Array Size", from which the index of the last element follows.
    -Nilesh
    Kudos are (always) welcome for the good post. :-)
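
    For the TDMS flavor of the format, the channel length is available from the file's metadata, so the last point can be read without loading everything. A sketch with the third-party npTDMS Python library (group and channel names are placeholders; as far as I know, TDM files proper don't offer this through LabVIEW's Storage VIs):

        # Sketch only: read just the last sample of a channel.
        from nptdms import TdmsFile

        with TdmsFile.open("data.tdms") as f:    # streaming: metadata first
            ch = f["Group"]["Channel"]           # assumed names
            last_index = len(ch) - 1             # length from metadata
            last_value = ch.read_data(offset=last_index, length=1)[0]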

  • Timing problems reading TDM files

    Hello NI community,
    my first post, because of a very annoying problem with the Storage VIs from the File I/O VIs and Functions palette. I spent a long time creating a program for analyzing TDM files, and I have now spent almost as much time trying to fix this problem.
    My program opens a single TDM file, reads in the data and analyzes it, displays the result (e.g. 10 DBL values), and closes the TDM file afterwards; then it opens the next TDM file...
    The problem is that the execution time increases steadily: starting at e.g. 50 ms, after 4000 TDM files read it takes about 1 s! So it takes days to read in more than 10000 TDM files. It also takes many minutes to close the program after reading many (e.g. 4000) files!
    Maybe the Storage VIs store the data in the background on a server or something else and do not release this data after closing the TDM file.
    Does anyone have an idea how to fix this problem? How can I release all resources after closing the TDM file? Is there an alternative method to read TDM files without using the buggy Storage VIs?
    Converting TDM to TDMS does not help; the conversion time increases in the same way.
    A wait time (0.5 s) after closing the TDM file does not help.
    Settings for the TDM VIs: open (read only)
    LabVIEW 2011 SP1
    Thanks in advance
    Daniel

    Hello Norbert,
    thank you for your reply!
    Attached you'll find a simplified example for testing.
    It reads the same .tdm file multiple times. The behavior is the same as in my application.
    Copy the files to your computer, select the correct folder for the .tdm file in the VI, and press start. You'll see the execution times for opening the .tdm file, for reading the data, and also for closing the .tdm file rising.
    For example, the "Read data" duration on my computer:
    Loop #:   Duration:   Factor vs. duration at start:   Time to stop program:
    1         19 ms       1                               < 5 s
    1000      42 ms       2
    2000      63 ms       3
    4000      101 ms      5
    8000      185 ms      10
    10000     222 ms      12                              > 3 min
    The TDM files I actually need to read come with .bin data files and are bigger; reading the first TDM file takes about 100 ms. The slowdown factor is almost the same as in this example, so one read takes about 1 s after 6000 TDM files have been read.
    Best Regards
    Daniel
    Attachments:
    Read_tdmFileMultipleTimesTest.vi 876 KB
    Read_tdmFileMultibleTimesTest.zip 1265 KB

  • DIAdem not saving all samples from TDMS file

    Hello to all,
    I have several large TDMS files (1.8 GB each) containing approx. 120 million samples. When I try to open these files in DIAdem, it previews the correct number of samples (120 million), but when I try to save the data in Matlab format it saves only half the samples, 60 million. Also, I have a Matlab script that opens the TDMS files, and it does exactly the same thing: it does not save all samples.
    Can you please advise me on this issue? Is there a way to automatically fragment large TDMS files into smaller ones?
    Best regards,
    Ion 

    Hello Jean Baptiste,
    It seems your TDMS file is fine, since you have the correct number of samples in DIAdem.
    The problem may come from the conversion to the Matlab format. Maybe Matlab can't accept more than 60 million values; you should ask the Matlab experts on their forum.
    However, if you want to fragment your TDMS file, you have to do it manually in DIAdem when you load the file into the Data Portal, or you can automate the function by using VBScript (the last panel in DIAdem).
    Another solution would be to fragment your TDMS file directly where you write it (in LabVIEW, maybe?).
    Regards,
    Benoit S. - Field Sales Engineer
    Certified LabVIEW Developer
    Certified TestStand Developer
    National Instruments France

  • Large swap file (900MB), but no pageouts

    My MacBook Air 13" with 4 GB of RAM accumulates a large swap file over time, but pageouts are 0 (page ins: 1.1 million). Does anyone know why? Is it possible to see which processes/applications currently have pages stored in the swap file?

    Sorry to dig this old thread up, but I am seeing behavior identical to the original poster's, and I just wanted to say: you did an excellent job of explaining how page ins can be very large with no pageouts, but I don't think this explains the real mystery, which is that there is a large amount of swap space, and a large amount the system says is used, but there are no page outs. You have not explained how a swap file can grow in usage with no page outs; if I understand things correctly, this should not be possible.
    I'm having the same issue on my new MacBook Pro with Retina display. I have 16GB of RAM and for the most part I don't use more than 4-6GB of that—I bought it for the occasional times I need to do a lot of VM testing, but I haven't needed to do that yet. I consistently see my swap usage grow to be as large as 2-3GB with a total size for all the swapfiles in /var/vm being 3-4GB.
    I don't need the space, and the system isn't slow or anything. I just want to know how this is possible. I have been using Mac OS X for 10 years now, and working on linux servers for 5 years or so. I've never seen swap usage be more than 0KB when there are no page outs.
    I've attached some screenshots of what I am seeing:
    Screen capture from Activity Monitor.
    Screen capture from Terminal executing 'du -hsc /var/vm/swapfile*' to tally the total size of the swapfiles.
    I should note that it tends to take a day or two of use to start to see this, over a series of sleep cycles here and there. I put my laptop to sleep at night as well as to and from work, etc.; it probably sleeps/wakes 5-7 times a day in all. I tend to notice that the usage creeps up, starting around 50 MB, and then I will notice it being a few hundred some time later. It really makes me wonder if this has to do with some kind of integrated vs. discrete graphics switching, perhaps a very low-level operation that somehow avoids being counted by the system's resource-tracking facilities. I have no idea, but I would love it if someone out there could explain it or point me in the right direction.
    Thanks for your time.

  • Cannot load large CSV files in SignalExpress("Not enough memory to complete this operation" error)

    Hi guys,
    I'm new here and have browsed some of the related topics regarding my problem, but I could not find anything to help me fix it, so I decided to post this.
    I have a waveform saved from an oscilloscope that is quite big (around 700 MB, CSV file format), and I want to view it on my PC using SignalExpress. Unfortunately, when I try to load the file using "Load/Save Signals -> Load From ASCII", I always get the "Not enough memory to complete this operation" error. How can we view and analyze large waveform files in SignalExpress? Is there a workaround for this?
    Thanks,
    Louie
    P.S. I'm very new to SignalExpress and haven't modified any settings on it.

    Hi Louie,
    Are you encountering a read-only message when you try to save the boot.ini file? If so, you can try this method: right-click on My Computer >> select "Properties", and go to the "Advanced" tab. Select "Settings", and on the next screen there is a button called Edit. If you click on Edit, you should be able to add or modify the "/3GB" switch in boot.ini. Are you able to change it in this manner? After a reboot, you can reload the file to see if it helps.
    To open a file in SignalExpress, a contiguous chunk of memory is required. If SignalExpress cannot find a contiguous memory chunk that is large enough to hold this file, then an error will be generated. This can happen when fragmentation occurs in memory, and fragmentation (memory management) is managed by Windows, so unfortunately this is a limitation that we have.
    As an alternative, have you looked at NI DIAdem before? It is a software tool that allows users to manage and analyze large volumes of data, and has some unique memory management method that lets it open and work on large amounts of data. There is an evaluation version which is available for download; you can try it out and see if it is suitable for your application. 
    Best regards,
    Victor
    NI ASEAN
    Attachments:
    Clipboard01.jpg 181 KB
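
    For reference, the /3GB switch goes at the end of the OS line in boot.ini; an illustrative (not copied from any real system) entry might look like:

        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB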

  • Write to measurement file Express VI - TDMS file has separate "channels" for each data point

    I'm trying to write a VI to measure and record thermocouple data from an Advantech T/C DAQ. Using the "DAQNavi" express VI provided by them, connected to the Write to Measurement File express VI, I have managed to read in the data and create a TDMS file. However, when I open the TDMS file, each time step of temperature data is entered as a separate channel, instead of all of the channel data going into one tab. Obviously this is a huge problem, as it creates hundreds of tabs after just a few seconds. Any thoughts as to what causes this?

    Hi glibby,
    How did you configure the Write to Measurement Express VI? Please select "one header only".
    If you have your own timestamps to write, please merge your timestamp channel and measurement channels with "Merge Signals" before passing them to the Write to Measurement.
    Best Regards,
    Mavis
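
    For comparison, when writing TDMS directly (sketched here with the third-party npTDMS Python library; group and channel names are made up), the intended layout is one channel per thermocouple holding all time steps, not one channel per time step:

        # Sketch only: four thermocouple channels, each with 100 samples,
        # written as a single segment in one group.
        import numpy as np
        from nptdms import TdmsWriter, ChannelObject

        readings = np.random.rand(100, 4)    # 100 time steps x 4 thermocouples
        channels = [ChannelObject("Temperatures", "TC%d" % i, readings[:, i])
                    for i in range(readings.shape[1])]
        with TdmsWriter("temps.tdms") as w:
            w.write_segment(channels)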

  • Read time of TDM file increases

    Hello,
    I recently recorded a lot of data in TDM format. I originally saved it in an excessively organised and complex way, which resulted in large header files and was very slow. I changed it, and most of the data is fine, except for 6 files in the old format. Just reading them to get the data out and then converting it into an easier format is taking far too long. Each file has ~15000 channel groups, each with 3 channels (I won't do this again...). I have written a small VI to test the reading speed. Excluding opening and closing of the file, if I just find a channel group by name, reading one channel from that group takes about 1 second. This would still take days to read it all, but would be acceptable. The strange thing is that if I do this in a loop for successive channel groups, which is clearly necessary, it takes longer and longer each iteration. This is behaviour I have always seen when reading TDM files, but it hasn't caused a serious problem before. I can't find any information about why, though, so I hope I'm just doing something stupid. I have attached the speed-testing VI (LabVIEW 2010). Running it for 5 iterations, the times taken to read the channels are
    0.82304 s
    1.58809 s
    2.42514 s
    3.56820 s
    5.60632 s
    The channels it is reading all have the same number of values, and it makes no difference if I start at a different channel group (by adding a number to i in the loop).
    Does anyone have any explanation for this behaviour?
    Thank you very much for your help, and I'm sorry if it's something that has been asked before,
    James 
    Solved!
    Go to Solution.
    Attachments:
    ReadBadTDMSpeedTest.vi 44 KB

    TDMS is an all-binary format, whereas TDM has an XML header; otherwise they are very similar. Being a binary format, you would expect it to read faster, especially as you mentioned that the header was complex.
    There are a number of un-closed references in your code, and I wonder if this was partly the issue, although this wouldn't explain why converting to TDMS fixed the problem.
    Beyond that, it would be nice to try this without using the express VIs to see if the problem still occurs.
    Also, were you able to see if this was a memory issue? And did the read time continue to increase beyond what you posted?
    Anyway, I am glad we found a solution to your problem.
    Nick C.
    Cardiff University

  • Thread death and XML files when analyzing large datasets

    I analyze large data sets. For example: 4,000 persons, each with a time series of 8,000 data points. I do this linearly, so (I think) I am only running the main thread:

        public static void main(String[] args) {
            for (Person person : PersonSet) {  // there are 4,000 people
                List<Data> data = DataSink.getData();  // each List has a size of 8,000
                analyzer.analyze(person, data);  // I do not start any threads in this method
            }
        }

    I've run into, and overcome, java.lang.OutOfMemoryError issues.
    Now, my program runs fine for over two minutes, then it crashes and shows this error:
    C:\Documents and Settings\David\My Documents\NetBeansProjects\Test\nbproject\build-impl.xml:419: The following error occurred while executing this line:
    C:\Documents and Settings\David\My Documents\NetBeansProjects\Test\nbproject\build-impl.xml:286: java.lang.ThreadDeath
    This is how I expand memory:
    -Xms1250m -Xmx1250m
    I cannot understand how run-time memory management relates to XML files.
    My IDE is NetBeans 6.1

    I've never dealt with ANT scripts, so this is like my first "Hello World" with ANT.
    Yet I can't understand why the JVM would say that the origin of a crash, in a program that's been running for over 2 minutes, is in an ANT file.
    #more build-impl.xml
        <target name="-init-macrodef-java">
            <macrodef name="java" uri="http://www.netbeans.org/ns/j2se-project/1">
                <attribute default="${main.class}" name="classname"/>
                <element name="customize" optional="true"/>
                <sequential>
                    <java classname="@{classname}" dir="${work.dir}" fork="true"> <!-- line #286 -->
                        <jvmarg line="${run.jvmargs}"/>
                        <classpath>
                            <path path="${run.classpath}"/>
                        </classpath>
                        <syspropertyset>
                            <propertyref prefix="run-sys-prop."/>
                            <mapper from="run-sys-prop.*" to="*" type="glob"/>
                        </syspropertyset>
                        <customize/>
                    </java>
                </sequential>
            </macrodef>
        </target>
        <!--
                    =================
                    EXECUTION SECTION
                    =================
                -->
        <target depends="init,compile" description="Run a main class." name="run">
            <j2seproject1:java> <!-- line #419 -->
                <customize>
                    <arg line="${application.args}"/>
                </customize>
            </j2seproject1:java>
        </target>
        <target name="-do-not-recompile">
            <property name="javac.includes.binary" value=""/>
        </target>
        <target depends="init,-do-not-recompile,compile-single" name="run-single">
            <fail unless="run.class">Must select one file in the IDE or set run.class</fail>
            <j2seproject1:java classname="${run.class}"/>
        </target>
    Thanks

  • Buffered TDMS file still very large

    Hello, I am trying to stream to a TDMS file. I am writing approximately 10 channels of data at 20 Hz. I was creating a new file every 5 minutes or so and defragging the file once I was finished with it, which was giving me a reasonable file size. I decided that I would like to make bigger files (over a period of hours), and the defragging would be too computationally costly to do on the fly. I have been trying to set a buffer for the TDMS files, but the resulting files are very large. You can tell the file isn't being written until it's closed (from watching its size in Windows Explorer), which suggests the file is being buffered. However, it's about 6 times bigger than an ASCII file of the same data, with an index almost as big again.
    Does anyone have any ideas why it would appear to buffer but not actually reduce the data size? I'm running Windows XP, LabVIEW 10.0.
    When I create each new file, I run this to set the buffers. (I have also hard-wired a buffer size of 100000 on a one-minute file, to no avail.)
    Thanks
    Niall

    I had this marked as the solution, but it turns out it only helped one of my programs. I am still getting very large (fragmented) TDMS and index files from another program. The problem is that if I just defrag the (supposedly buffered) file, it interrupts the data logging because it takes so long. I'm 99% sure that it *is* buffering, as it doesn't write until it closes the file, and if I use the read-properties function it reads back a set buffer size. Here is the VI that actually writes the data. The bit at the top is for writing an optional ASCII file, so you can ignore that.
    It's maybe a bit hard to see what's going on in the next one, but this is where the file is created before being passed to the SetBuffer VI which I posted earlier. It also closes the last file.
    It's really hacking me off now and holding me up from moving on to other stuff. It would be great if someone had some ideas.
    Thanks
    Niall
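
    As a general illustration of why streamed TDMS files fragment (sketched in Python with the third-party npTDMS library; the acquire() generator and all names are hypothetical): every small write becomes its own file segment, so batching writes keeps the file compact, which is essentially what the LabVIEW buffer-size setting is meant to achieve.

        # Sketch only: buffer samples and flush one segment per minute
        # instead of one tiny segment per 20 Hz write.
        import numpy as np
        from nptdms import TdmsWriter, ChannelObject

        BUFFER = 1200                        # 20 Hz * 60 s
        buf = []
        with TdmsWriter("stream.tdms") as w:
            for sample in acquire():         # hypothetical acquisition loop
                buf.append(sample)
                if len(buf) >= BUFFER:
                    w.write_segment([ChannelObject("Data", "Ch0",
                                                   np.asarray(buf))])
                    buf.clear()
            if buf:                          # flush the remainder
                w.write_segment([ChannelObject("Data", "Ch0",
                                               np.asarray(buf))])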
