CD Linear Simulation - Exponential Matrix Overflow

Hi everybody,
I am trying to use "CD Linear Simulation.vi" to apply two transfer functions in order to "modify" a signal; the transfer function models are in polynomial form, with a numerator and a denominator as usual. Sometimes when I run my .vi an error message appears, "Exponential Matrix Overflow", and I have to stop the .vi. It usually appears when the transfer function denominator is of first, second or third order, but if I build my transfer function from two vectors (one for the numerator and one for the denominator) I have no problem, even when the numerator and denominator are of 8th order and have many significant digits. So, do you have any idea what this error message refers to? Has anyone had the same problem? I have tried to understand it, but I still have the problem.
Any advice is welcome.
Thank you
Nigeltorque
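For anyone hitting the same message, here is a rough sketch of what a continuous-time linear simulation typically does under the hood: convert the transfer function to state-space and discretise it with a matrix exponential, which can blow up if the denominator coefficients are badly scaled. This is only a Python/SciPy illustration with made-up coefficients, not the internals of CD Linear Simulation.vi:

    # Transfer function -> state-space -> discretisation via the matrix exponential.
    import numpy as np
    from scipy.signal import tf2ss
    from scipy.linalg import expm

    num = [1.0]                     # hypothetical numerator coefficients
    den = [1.0, 2.0, 1.0]           # hypothetical 2nd-order denominator
    A, B, C, D = tf2ss(num, den)

    dt = 1.0 / 50e3                 # 50 kHz sample rate, as in the post
    Ad = expm(A * dt)               # with huge or badly scaled entries in A,
                                    # this exponential can overflow
    print(Ad)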

Thank you for answering, Barp. On Monday I will be in the laboratory, so I hope to solve my problem in the next few days.
As you can see in the first .vi I attached, I take two signals (the input and the output of my DUT) and extract the Beta parameters that I use to model the non-linearity of my DUT (the matrix rank has to be at its maximum value, which is 30 in this case). The first example is very simple: the signals last two seconds and are generated at 50 kHz. For my project I have to "modify", or "filter", the output signal with two transfer functions that I previously extracted with LabVIEW, so I decided to use "CD Linear Simulation.vi" to implement the transfer functions (I hope this is correct), because I obtain them as a ratio between a numerator and a denominator. In the second .vi I attached, I simply add the implementation of the two transfer functions by introducing CD Linear Simulation.vi.
In this example I just write the transfer functions directly, but if I try to obtain them from another .vi I get the Exponential Matrix error as soon as the first TF has a denominator of order greater than 1! This is strange, because if I write the transfer function directly I have no error even if the denominator of my TF is of Nth order.
Moreover, if I run the .vi that filters the signal, I have to change the values of tau and start position, otherwise I get an error and the waveform graph becomes something really strange.
I have tried to understand why, but I have no idea right now. If you could check my .vi and see what's wrong, it would be really appreciated.
Thank you very much
Nigeltorque
Attachments:
NONLINEAR ResponseMOD.vi 85 KB
NONLINEAR ResponseMOD + TF.vi 167 KB

Similar Messages

  • Input for CD Linear Simulation.vi in a Simulation Loop

    Hi, I'm new to LabVIEW and I am trying to implement a real-time estimator for a linear system, so a steady-state gain is acceptable (the estimator doesn't need to be adaptive). It will be implemented on a real-time target (cDAQ). The estimator will get the acquired data (there are 2 inputs and 1 output) from the real system.
    I have learned and tried a few different ways to do it, but finally decided to try using CD State Estimator.vi (standalone) in a Simulation Loop. My questions are:
    1. I found that the only way to use the estimator model with the acquired input is by using CD Linear Simulation.vi. Is there another alternative to do the estimation?
    2. However, with CD Linear Simulation.vi I had a problem: according to the LabVIEW help, if there is more than one input signal, the input has to be specified as a 2D array with the smaller dimension as the number of channels and the larger dimension as the number of points in each channel. But on every execution of the loop I only get 2 x 1 data (see appended array 3 in the attached example file). How can I use the measured data as the two inputs for CD Linear Simulation.vi?
    Thank you.
    Attachments:
    2015_03_31_Test DAQ in sim loop.vi 299 KB
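    For readers wondering about the input shape: below is a rough sketch of accumulating the 2 x 1 measurements into an N x 2 array whose smaller dimension is the number of channels, as the help describes (a Python/NumPy stand-in, not LabVIEW; the measured values are made up):

        # Accumulate one 2-channel measurement per loop iteration into an
        # N x 2 array (N points, 2 channels) for the linear simulation.
        import numpy as np

        samples = []                          # stand-in for the acquisition loop history
        for k in range(100):
            u1 = np.sin(0.1 * k)              # measured input 1 (made up)
            u2 = np.cos(0.1 * k)              # measured input 2 (made up)
            samples.append([u1, u2])

        inputs = np.array(samples)            # shape (100, 2)
        print(inputs.shape)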

    Hi Mike,
    Thanks for the reply.
    I browsed through some examples and tried some things; here are the updates, maybe useful for others in the future.
    About CD Linear Simulation.vi: in the Example Finder I found CDEx Kalman Filter.vi, which uses CD Linear Simulation.vi to compare the estimation with the real system states for two inputs. The number of samples of the inputs was given as a constant (e.g. N samples), from which the N x 2 data array is built and fed to CD Linear Simulation.vi. So up to now I can't see a way of using CD Linear Simulation to feed input to the estimator model built from CD State Estimator.vi in a "real time" way.
    I look forward to hearing from anyone if it can be done.
    So, from there I tried another alternative to implement this real-time estimator: using the Discrete State Space function. I generate the estimator state-space model in MathScript and feed it to that function.
    I use the kalman (MathScript) function to generate the estimator model.
    Interestingly (although irrelevant to my original enquiry), this kalman command does not seem to generate a correct estimator model with multiple inputs (e.g. if the original system has inputs u1 and u2 with one output y, I would expect the generated estimator model to have three inputs, i.e. u1, u2 and y, but the generated estimator only has one input (??)).
    I remember that the kalman command from Matlab has the same problem, which I neglected at the time.
    So this time in LabVIEW I did the same thing as last time: I take the gain generated by the same kalman command and build the A and B matrices of the estimator manually.
    I haven't browsed through to see whether someone has used this kalman command for a multi-input system and noticed this.
    Nevertheless, this alternative seems to work well, so hopefully it will also work well with the RT target.
    Thanks.
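    Since the reply above builds the estimator matrices by hand from the Kalman gain, here is a minimal sketch of that construction in the standard observer form (Python/NumPy stand-in; all matrices and the gain below are made-up placeholders, not values from the actual VI):

        # Observer/estimator built from a Kalman gain L:
        #   x_hat' = (A - L*C) x_hat + [B  L] [u; y]
        import numpy as np

        A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # hypothetical plant matrices
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        L = np.array([[1.5], [0.8]])               # gain returned by the kalman command

        A_est = A - L @ C                          # estimator state matrix
        B_est = np.hstack([B, L])                  # estimator inputs are [u, y]
        print(A_est)
        print(B_est)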

  • Linear programming in matrix

    A farmer has 10 acres of land where he wants to grow potatoes (A) and tomatoes (B) in a combination that uses both his time and his acres optimally.
    1 acre of potatoes costs 300 $ and takes 2 hours/week.
    1 acre of tomatoes costs 200 $ and takes 0.5 hours/week.
    The farmer spends 12 hours/week on his crop:
    2*A + 0.5*B = 12
    1*A + 1*B = 10
    The result I'm looking for is the solution of this system, i.e.
    [ 2  0.5 ]   [ A ]   [ 12 ]            [ A ]   [ 4.67 ]
    [ 1  1   ] * [ B ] = [ 10 ]   giving   [ B ] = [ 5.33 ]
    Is there any way to do this in PL/SQL, for example using utl_nla?
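    For reference, the calculation itself is just solving a 2 x 2 linear system; in NumPy (not utl_nla) it would look like this:

        # Solve 2A + 0.5B = 12 and A + B = 10 for the acreages A and B.
        import numpy as np

        M = np.array([[2.0, 0.5],
                      [1.0, 1.0]])
        b = np.array([12.0, 10.0])
        print(np.linalg.solve(M, b))   # -> [4.666..., 5.333...]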

    Note that this forum is for the SQL Developer tool (not for general SQL/PL-SQL questions). Please post these questions in the dedicated SQL And PL/SQL forum.
    Regards,
    K.

  • Matrix Report overflows to other page

    We have a Matrix Report (Report Builder 6.0.8.20.1) with a lot of columns. The report is in landscape style, but still the matrix will not fit horizontally.
    The second part of the matrix is displayed on the next page, even though there is enough space to display it on the same page, below the first part of the matrix.
    Does anyone know why Report Builder has this behaviour?
    Thanks in advance,
    Lennart de Vos
    PS: All properties Page Protect, Page Break Before/After have been set to "No". The property Print Direction of the repeating frames in the Matrix cannot be changed

    Hello,
    If I understand correctly, the matrix overflows in the horizontal direction onto another page, and you would like this part to show up below the first part on the same page instead, much like wrapping the matrix?
    Unfortunately Reports does not support wrapping of layout objects. If a layout object exceeds the page in any direction, the overflow part is pushed to another page.
    Thanks,
    ph.

  • (Control Design & Simulation) State-Space block doesn't give output

    I've tried and tried but can't get the State-Space block module to give me a graph / output.
    I have no idea what the problem is and hope that somebody can help me. The numbers and calculations work in Matlab (Simulink), but I can't get it to simulate in LabVIEW.
    Anybody got any ideas?
    Solved!
    Attachments:
    Cavities.vi 30 KB
    Matrixes.vi 27 KB

    The problem is that you are assuming that LabVIEW executes left to right. Dataflow doesn't work this way. Your code, as it is, does not tell LabVIEW that it has to execute everything from left to right. What is happening is that it executes the 3 'islands' of code in parallel and, in this case, they will have 'empty' values. You have to remove the local variables to make this work; the dataflow paradigm will then execute your code from left to right, as you want. Here is the code:
    Also, one more thing: your input to "CD Linear Simulation" is all zeros. That means you are feeding zero input to a linear system, which will give you a zero response. You probably do not want that, since a zero input doesn't give you any more information. If you want to see how the system goes to zero from its initial conditions, you should use "CD Initial Response", or you should modify the input signal to the system. Please study this shipping example to understand how to use Linear Simulation and Initial Response:
    C:\Program Files (x86)\National Instruments\LabVIEW 2012\examples\Control and Simulation\Control Design\Time Analysis\CDEx Time Domain Analysis.vi
    Hope this helps...
    Barp - Control and Simulation Group - LabVIEW R&D - National Instruments
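    To see the zero-input point in plain code, here is a small SciPy sketch (not LabVIEW; the system matrices are made up): with a zero input and zero initial state the response is identically zero, whereas a non-zero initial state is what an initial-response analysis shows.

        # Zero input + zero initial state -> zero response; a non-zero initial
        # state decays back to zero, which is what an initial response shows.
        import numpy as np
        from scipy.signal import StateSpace, lsim

        A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # hypothetical stable system
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        D = np.array([[0.0]])
        sys = StateSpace(A, B, C, D)

        t = np.linspace(0.0, 10.0, 500)
        u = np.zeros_like(t)
        _, y_zero, _ = lsim(sys, u, t)                  # all zeros
        _, y_init, _ = lsim(sys, u, t, X0=[1.0, 0.0])   # decays from the initial state
        print(y_zero.max(), y_init[0])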

  • Exposure slider not entirely linear?

    I've recently captured an image with a bright, red-to-orange sky in the background due to sunset conditions. With a linear preset in ACR (all tonal controls at zero, parametric and point curve linear) there's no clipping indicated.
    Actually, the Exposure slider has to be raised to +0.70 in order to drive relevant parts of the sky into clipping. But the Exposure slider has to be set down to -0.70 in order to detach the histogram from the right end of the scale.
    In between, it's obvious that the histogram does not react linearly to the Exposure slider. Increasing it from -0.70 to +0.70 accumulates and compresses more and more data in the right corner of the histogram (up to RGB 254) without making the final step into clipping.
    The output space is set to ProPhoto RGB, Noise Reduction and Sharpening are off, and the Recovery slider is definitely at zero. It's CS4 + ACR 5.2.
    Any thoughts or explanations?
    Thanks!
    Peter

    > You can effectively get linear output from Camera Raw (with the exception of gamma encoding) by creating a custom color profile with the DNG Profile Editor with its Base Tone Curve set to Linear, then use the Camera Raw Defaults when processing your image and select that custom color profile.
    Eric, many thanks for joining the discussion. So what would be the difference between a) using such a linearized custom profile created the way you describe with the DNG Profile Editor, and b) employing a linear preset in ACR, i.e. setting all tonal controls to zero and the parametric/point curve to linear, while using the baseline matrix profile?
    Or, let me ask straightforwardly: could there possibly be something behind my question, or am I barking up the wrong tree?
    > The conversion of the demosaiced raw data from the camera's color space to sRGB, aRGB or ProPhoto RGB includes the non-linear transformation as well. This can not be avoided in ACR to my knowledge
    Again, my understanding was that matrix-to-matrix conversion is essentially a linear transform (if this is the right term). That means linear scaling and matrix-to-matrix conversion are essentially commutative: the sequence can be exchanged and the results will be the same (provided that the absolute value of the scaling factor is adjusted), despite different matrix primaries and different gamma. However, I agree that there are a couple of constraints, such as:
    - out of gamut colors
    - fancy 'gamma', e.g. with the sRGB TRC
    - AbsCol conversion between different white points, which can introduce additional clipping
    - RelCol conversion between different white points (not sure about this one)
    So in order to check my approach of testing, I did some ColorChecker exercises: processed it through ACR (linear preset) with different Exposure settings, then converted to 'linear gamma' ProPhoto RGB in Photoshop. Finally, all derived ratios R2:R1, G2:G1 and B2:B1 were approximately the same, whether for a color or a gray patch, which I think confirms linearity.
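    Numerically, the commutativity argument above boils down to the following check (the matrix and values are made up, not a real camera or profile matrix):

        # A matrix colour conversion commutes with a scalar exposure scale:
        # M @ (s * rgb) == s * (M @ rgb), gamut and white-point handling aside.
        import numpy as np

        M = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.0, 0.1, 0.9]])
        rgb = np.array([0.2, 0.5, 0.1])
        s = 2.0 ** 0.7                     # +0.70 EV as a linear factor

        print(np.allclose(M @ (s * rgb), s * (M @ rgb)))   # True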
    It seems I still have to find out what's so special about my sunset sky. Or, the hue-preserving feature comes to mind, which was reported to cover ACR's tone curve(s). Could it be that it acts 'on' the Exposure slider as well, once a single channel is driven into clipping, thus preventing this? Just a thought.
    Many thanks for your comments.
    Peter

  • Plot a state-space model

    Hi!
    I'm currently doing a project, and I'm stuck and in need of some help.
    I've built a state-space model that I'm using for this project. I've used Matlab (with and without Simulink) to simulate this model and to plot its behaviour. But when I want to do this in LabVIEW, I get stuck.
    I don't want to use MathScript, because then it would be using Matlab.
    Does anybody have any ideas on how I can simulate and plot my state-space model from the matrix parameters that I have?
    Look at the attachment; I want to plot x.
    Solved!
    Attachments:
    State-Space Model.png 11 KB

    There are many different ways to do this. Try looking at the following shipping examples:
    C:\Program Files (x86)\National Instruments\LabVIEW 2011\examples\Control and Simulation\Control Design\Model Construction\CDEx Creating SS Model from String Matrix.vi
    C:\Program Files (x86)\National Instruments\LabVIEW 2011\examples\Control and Simulation\Control Design\Model Construction\CDEx Rendering State-Space Equations.vi
    C:\Program Files (x86)\National Instruments\LabVIEW 2011\examples\Control and Simulation\Control Design\Time Analysis\CDEx Time Domain Analysis.vi
    C:\Program Files (x86)\National Instruments\LabVIEW 2011\examples\Control and Simulation\Simulation\Continuous Linear\SimEx state space.vi
    They show how you can use the product to create a State-Space (SS) model, show the rendered model on the front panel, do a linear simulation of a model and implement the state-space model in the Control and Simulation Loop, respectively.
    Notice also that we have an extensive set of examples available for you. Hope this helps...
    Barp - Control and Simulation Group - LabVIEW R&D - National Instruments
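    For readers who just want to see the underlying maths, here is a minimal sketch of simulating x' = A x + B u with explicit Euler and plotting the states (a Python stand-in, not the LabVIEW shipping examples; the matrices are placeholders for your own model):

        # Explicit Euler integration of x' = A x + B u, collecting x for plotting.
        import numpy as np
        import matplotlib.pyplot as plt

        A = np.array([[0.0, 1.0], [-4.0, -0.5]])
        B = np.array([[0.0], [1.0]])
        x = np.array([0.0, 0.0])
        dt, steps = 0.001, 10000
        u = 1.0                                    # step input

        xs = np.empty((steps, 2))
        for k in range(steps):
            x = x + dt * (A @ x + B.flatten() * u)
            xs[k] = x

        plt.plot(np.arange(steps) * dt, xs)
        plt.xlabel("time [s]")
        plt.ylabel("states x")
        plt.show()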

  • ACR RGB2XYZ Conversion, basic function

    Hi
    I am trying to understand the color conversion chain when I am using DNG + ACR.
    I simulated all the steps that are described in the DNG Spec and displayed the results in a CIE-xy diagram, but the result doesn't make sense.
    I extracted the matrix written in the ColorMatrix2 tag of the Leica M8 (I know that this matrix is used by ACR and gives good results). This matrix is called A.
    I invert this matrix to get the RGB2XYZ matrix:
    B = inv(A)
    There is no CameraCalibration matrix or AnalogNeutralGain, so these don't have to be taken into account.
    When I calculate the primary colors of that matrix using (r g b) * B, I get a triangle that is shifted quite a lot.
    The white point (RGB = 1,1,1) lies on the purple line of the spectral colors.
    It looks like there is a vector that I don't take into account.
    Can anybody tell me what I am doing wrong?
    Thank you very much, Jesko

    Oh I see what's going on. It is important to note that the ColorMatrix1 and ColorMatrix2 tags in the DNG spec describe a transformation from XYZ (D50) to NON-white-balanced linear camera coordinates. So your inverse transform (B) maps non-white-balanced linear camera coordinates to XYZ. If you are trying to apply B to white-balanced linear camera coordinates, then that is why you're not getting expected results. To summarize, you should apply B to the non-white-balanced linear camera coordinates, then apply the linear Bradford adaptation matrix to map the chosen illuminant white point to D50.
    (Note that there is an alternative method of mapping camera coordinates to XYZ, given by the forward matrix. This is also described in one of the DNG spec appendices, I believe.)
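    A minimal NumPy sketch of the chain described above, assuming the standard Bradford cone matrix; the ColorMatrix2 values, camera RGB values and shot white point are placeholders, not real Leica M8 data:

        # Non-white-balanced camera RGB -> XYZ via B = inv(ColorMatrix2),
        # then Bradford adaptation of the shot white point to D50.
        import numpy as np

        color_matrix_2 = np.array([[ 0.9, -0.3, -0.1],   # hypothetical XYZ(D50) -> camera
                                   [-0.4,  1.2,  0.2],
                                   [-0.1,  0.2,  0.7]])
        B = np.linalg.inv(color_matrix_2)                # camera -> XYZ

        bradford = np.array([[ 0.8951,  0.2664, -0.1614],
                             [-0.7502,  1.7135,  0.0367],
                             [ 0.0389, -0.0685,  1.0296]])

        def adapt(xyz, white_src, white_dst):
            """Bradford chromatic adaptation from white_src to white_dst (both XYZ)."""
            gain = (bradford @ white_dst) / (bradford @ white_src)
            M = np.linalg.inv(bradford) @ np.diag(gain) @ bradford
            return M @ xyz

        camera_rgb = np.array([0.4, 0.5, 0.3])           # non-white-balanced raw values
        xyz_shot = B @ camera_rgb
        white_shot = B @ np.array([1.0, 1.0, 1.0])       # camera white (RGB = 1,1,1) in XYZ
        d50 = np.array([0.9642, 1.0000, 0.8249])
        print(adapt(xyz_shot, white_shot, d50))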

  • Customer Payment History - Calculation of Arrears Backlog in Days

    Hi All,
    Can anybody explain to me how the system calculates the Average Backlog in Days and the Arrears in Forecast Days in the Customer Payment History report (T-Code S_ALR_87012177)?
    Any material would also help.
    Regards
    Sunil

    Program RFDOPR20 enables you to carry out a detailed analysis of the payment history of customers. It also contains a forecast of payment volumes and payment arrears, based on the existing payment history.
    Requirements
    Information on the payment history of a customer is only stored in the system if you select the field Rec.pmnt.hist. in the company code-dependent area of the customer master record.
    Output
    The following information on the payment history is displayed:
    Customer type
    Using as a parameter the payment volume in the periods under consideration, the system determines whether the customer in question is generally a net payer or a cash discount payer.
    The type of customer is determined as follows:
    If the customer usually utilizes the full payment period without claiming a cash discount deduction, he is a net payer.
    If the customer usually pays in time to claim a cash discount deduction, he is a cash discount payer.
    Arrears in days
    The average days in arrears are determined as follows: Every incoming payment in the periods concerned is multiplied by its days in arrears. These amounts are added together and then divided by the total amount from all the incoming payments. This process ensures that the days in arrears are weighted by the payment amount.
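    A tiny worked example of that weighting, with made-up figures:

        # Weighted average days in arrears: each payment weighted by its amount.
        payments = [(1000.0, 5), (500.0, 20), (2500.0, 0)]   # (amount, days in arrears)

        weighted = sum(amount * days for amount, days in payments)
        total = sum(amount for amount, _ in payments)
        print(weighted / total)   # (1000*5 + 500*20 + 2500*0) / 4000 = 3.75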
    Forecast for arrears in the next period
    This forecast is based on the customer type that the system determined and attempts to make a forecast for the arrears in the subsequent period based on the payment volume data and the total resulting from the above mentioned calculation involving the payment and days in arrears. In this case, only data from the chosen periods is used.
    To make this forecast, the system carries out separate forecasts for the payment volume and the total described above, and then determines the number of days in arrears by division.
    Linear, logarithmic, exponential and potential models are used for the forecast, the optimal model being chosen in each case.
    By choosing the menu option "Forecast" you can display more information, such as:
    Payment volume in the periods under consideration
    Trend for days in arrears (gradient of the curve for the days in arrears, based on the forecast value for the subsequent period)
    Specifications on how the forecast was created (including a forecast for the payment volume in the subsequent period)
    For the information described above, the entries for "Number of periods" and "With current period" are important. The "number of periods" determines how many periods are to be taken into account in the calculations (these are always the most recent periods). The current period, which is only taken into account if the checkbox "With current period" is marked, is an exception. If the current period is included, it distorts the forecast, if it contains postings which are not complete.
    Further information offered by the program includes:
    Volume of open items
    Breakdown of number of payments, payment volume and arrears by period, where the last values are both subdivided into payments with cash discount or net payments. In these cases, the days in arrears are determined by weighting the amounts by the volume of the payment in question. You can display the results as a bar chart by selecting the menu option "Graphic".

  • I need to plot the Least Squares Regression Line (not just calculate it with the LINEST function).  How?

    I need to plot the Least Squares Regression Line (not just calculate its values with the LINEST function).  Can anyone advise me on how to graph this line?
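    If the goal is simply to get the fitted line as something that can be plotted alongside the data, the calculation amounts to this (a NumPy sketch with made-up data, not Numbers or LINEST itself): compute the slope and intercept, evaluate the line at the ends of the x range, and add it as a second series.

        # Least-squares line: fit, then evaluate at the ends of the x range.
        import numpy as np

        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # made-up data
        y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

        slope, intercept = np.polyfit(x, y, 1)
        line_x = np.array([x.min(), x.max()])
        line_y = slope * line_x + intercept        # plot (line_x, line_y) over the data
        print(slope, intercept, line_y)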

    Grapher.app in OS X Utilities is excellent for that: entering data from Numbers, Excel, etc. into a Grapher "Point Set", plotting all your data, and computing and plotting a regression curve (linear, exponential or polynomial).
    Unfortunately there is not a single word about point sets and regression in the Grapher Help.
    So I suggest you download the free "Instruction for Use - Grapher" from the web site
    http://y.barois.free.fr/grapher/Documentation.html
    and go to pages 31 to 37, "Initiation > Lesson 6: Treating a point set (Regression curve)", and also page 68, "Appendix 1. Point sets: from spreadsheets to Grapher".
    If you try Grapher's regression curves, please tell us about the manual above: useful or not, too easy or too difficult, readable or not, etc.
      Thank you.
    YB24
    PS: Documentation about Grapher is also available here: Google "Apple Grapher" > http://en.wikipedia.org/wiki/Grapher > Article > Talk > Grapher 1.1 to 2.3 (French and English) (11 September 2012)

  • PhD Laser Physicist with 15+ years LabVIEW experience - New England Area

    I have a PhD in Laser Physics and 15+ years experience writing successful LabVIEW software at the heart of R+D, test and manufacturing automation projects.
    My previous LabVIEW projects have included:
    Non-linear simulation tools
    User configurable automated test and measurement suites interfacing to 10's of commercial instruments, DAQ interfaces, vision systems and custom microprocessor boards.
    Custom  scripting language implementations on top of LabVIEW test suites.
    Multi-threaded data analysis code with virtual addressing to work with data sets larger than available RAM.
    Cameralink and GigE vision system interfacing and image processing analysis and  test suites
    Writing driver libraries for new instruments.
    Hardware Interfacing with DAQmx, IMAQ, GPIB, Serial, USB, GigE vision, Camera Link etc.
    Distributed network variable deployment for multi program and multi PC/ networked server/client interfacing solutions.
    Interfacing with test equipment such as oscilloscopes, spectrum analyzers, DMM, power supplies, cameras etc. etc.
    Excel report generation, data file export.
    I also have experience with other languages:
    C++ code for my own custom microprocessor solutions (MSP430 family) and development of existing C/C++ code for other target devices.
    Visual Basic
    AutoIT (for automation of legacy MS Windows software without documented APIs)
    Excel VBA
    I can also design and implement basic electronics (interfacing to microprocessors and DAQ boards)
    My specialty is designing complete turn-key test solutions including software, firmware, test equipment procurement, custom PC hardware assembly, electronics, microprocessors, test fixturing, documentation, and report generation.
    My Linked in profile is at: http://www.linkedin.com/profile/view?id=2380460
    I live in Massachusetts and would consider opportunities in the New-England area.
    References and a copy of my Resume available on request (please PM me).
    Kind regards,
    Dr. Peter Reeves-Hall

    Hi Peter,
    I just saw your resume on the NI site... are you still looking for employment in New England? We are a manufacturer of PVD systems located in Scituate, MA, looking for a LabVIEW engineer.
    If you are interested, please send me an e-mail to [email protected]
    Regards,
    Linda Tardie
    Exec. VP
    AJA International, Inc

  • Fade in fade out

    Hello,
    I am totally new to Soundbooth (I just switched from Sound Forge) and do not know how to fade audio in and out. I know that there are icons at the bottom of the screen for that, but no matter what I do, I cannot get them to do anything. If someone can please let me know how to do this as soon as possible, that would be GREAT!
    Thank you!

    Hi Jeanette,
    In addition to the one-click Fade buttons, which result in a fixed 5-second fade in/out, you'll see matching handles at the top left and right corners of your waveform. They look like boxes: half light grey, half dark grey. You can grab these and move them left or right to adjust the fade duration, then up and down to adjust the shape of the fade curve from logarithmic, to linear, to exponential.
    Durin

  • Kalman estimation - Thermocouple

    Hello
    The attached VI is an attempt to estimate a correction for thermocouple measurements. The reference transfer function is converted to a state-space model, which is used to estimate the actual temperature.
    My problem is in the last block, the linear simulation VI, as I could not get the expected output. Could you please advise on how to configure the inputs to this VI?
    Thank you
    Attachments:
    Estimation.zip 24 KB

    Thanks Simyfren
    The model is simple; as you can see in the VI, it was converted by a built-in VI from a known transfer function. Moreover, it works in the discrete Kalman filter VI. So I guess the problem might be in how the inputs excite the model in the linear simulation.

  • Is it possible to simulate large-scale non-linear systems using the Simulation Toolkit?

    Hi,
    I am new to LabVIEW, having used Matlab/Simulink for a few years. I am trying to simulate a relatively complex non-linear vehicle model. In Matlab/Simulink I would use an m-file to describe the system equations (i.e. x1_dot = ..., x2_dot = ..., etc.). Is there a similar method in LabVIEW? I have got a simple simulation running using the 'eval formula node' (a very ungainly method of substituting variables) and an integrator in a 'simulation node'. It takes about an hour to run a simulation that takes 3 seconds in Matlab. I just need to know whether it is realistic to do large-scale simulations in LabVIEW, or whether it was not designed for that and I should persuade the management to go back to Matlab!
    If it is possible, where can I find help on the subject? I have spent a long time looking on the web and come up with very little.
    Many thanks in advance,
    Paul.

    If these are simple differential equations, the easiest approach would be a for loop with a bunch of shift registers, one for each of x1, x2, etc., containing the current values.
    In your case, you would calculate all the derivatives from the instantaneous values inside the loop, then add them to each value before feeding them back into their respective shift registers.
    The attached very simple example shows how to generate an exponential decay function using the formula dx/dt = k*x (with k negative). The shift register is initialized with the starting condition x = 1.
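    The same idea in text form, for readers without the attachment (a Python sketch of the loop, assuming explicit Euler; the rate constant and step size are made up):

        # Explicit Euler on dx/dt = k*x; the feedback plays the role of the shift register.
        k = -1.0          # negative rate constant -> exponential decay
        dt = 0.01         # step size
        x = 1.0           # initial condition, as in the shift register

        for _ in range(1000):
            x = x + dt * k * x    # new value fed back into the next iteration

        print(x)                  # close to exp(-10)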
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    DiffEQ.vi 33 KB

  • Does Java have any class for linear algebra/matrix computation

    Hi!
    I am new to Java. I would like to ask whether Java provides any class or package for linear algebra/matrix computation, before I switch to MATLAB or write the code myself.
    Thank you

    Maybe you can find something here,
    http://math.nist.gov/javanumerics/
