Semiconductor strain gauges and non-linear scales

Dear all,
It's not completely clear to me whether it's possible to read semiconductor strain gauges with NI hardware.
Do I need special hardware, or can I use a normal bridge-circuit analog input card (e.g. the NI 9237)?
An added problem with semiconductor strain gauges is the need to linearize the output, because the
basic resistance-versus-strain characteristic is nonlinear. Is this possible in LabVIEW?
As far as I know, I can only create linear scales in NI MAX.
Thanks in advance,
Ronny Comyn
Belgium

Yes, you can read semiconductor strain gauges.
No, you don't need special hardware, but it makes things a lot easier since the signal levels are already adapted.
There's a Convert Strain Gauge Reading VI (see its help text); depending on the bridge type it uses different conversions, some of which are non-linear.
/Y
LabVIEW 8.2 - 2014
"Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
G# - Free award winning reference based OOP for LV
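To make the bridge conversion concrete, here is a minimal Python sketch of the standard quarter-bridge strain equation that conversions like the Convert Strain Gauge Reading VI are based on. The gauge factor value is an assumption, and note that semiconductor gauges typically need an additional polynomial correction on top of this bridge equation, per the gauge's datasheet.

```python
# Sketch of the quarter-bridge strain conversion; gauge_factor = 2.0 is an
# assumed default (metal-foil typical) - semiconductor gauges have much
# larger, strain-dependent gauge factors and need extra linearization.

def quarter_bridge_strain(v_ratio, v_ratio_unstrained=0.0, gauge_factor=2.0):
    """Convert a measured voltage ratio (Vout/Vex) to strain for a
    quarter-bridge circuit: strain = -4*Vr / (GF * (1 + 2*Vr))."""
    vr = v_ratio - v_ratio_unstrained
    return -4.0 * vr / (gauge_factor * (1.0 + 2.0 * vr))

# Example: a 1 mV/V bridge output with gauge factor 2
strain = quarter_bridge_strain(0.001)
```

The non-linearity the original poster asks about shows up in the (1 + 2*Vr) denominator; a purely linear NI MAX scale cannot reproduce it, which is why the conversion VI (or a table/polynomial scale) is needed.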

Similar Messages

  • Keynote non-linear scale in line graph

    I'm trying to teach kids how to do line graphs, and it doesn't help that Keynote treats the horizontal axis like it's a bar graph, even when you want a line graph. It takes the data points and spaces them evenly, which can give you a non-linear scale. This is one of the classic mistakes kids make. Grapher can do it correctly, but that's one more step.
    Any way to make it behave properly?
    aedan
    iMac G4 800, PowerBook G4 12, loads of older ones.   Mac OS X (10.4.5)   The SE/30 is the king of compacts.

    Keynote's graphing capabilities are relatively rudimentary, especially with regard to more technical needs. As far as I know, Keynote always treats X-axis labels categorically (it doesn't understand numeric values), and thus will always space the data points evenly.

  • Question about DBMS_PREDICTIVE_ANALYTICS and non-linear data

    Hi,
    I am executing DBMS_PREDICTIVE_ANALYTICS against some data, but it's not giving me good predictions for my null data. Should I be using different data mining techniques?
    The data has the following fields: 1) Position, 2) Odds, 3) Selection and 4) Event.
    Position and Odds are linear, since the smaller the number the higher the likelihood of winning, but Selection is a lookup based on a name and therefore non-linear, though it can be used to look at previous form in the same data set. Event is a way to group Selections into an Event.
    What's the best way to tackle this ?

    Sorry for delay, been on vacation.
    I'm trying to improve odds for predicting winners, given the form, odds, previous history, and form of other runners in the race.
    Take a look at data.betfair.com for a list of data elements available to me.
    I'm currently using the data where the latest_taken column has the highest value. i.e. the odds and ranking just before the race starts.

  • How to use more than one scale (4.0 scale and 4.3 scale)

    Dear All,
    In our case (Sogang University, Korea) the customer used a 4.0 scale for module appraisal but changed to a 4.3 scale some years ago.
    In this case, how can we use two scales?
    Some students have appraisals on both the 4.0 scale and the 4.3 scale.
    I want the total GPA on a 4.3-scale basis, because that is what is used currently.
    Regards,
    Jin Dal

    Jin Dal,
    I think you can simplify this, depending on how you have actually defined the scales.  In an ideal world, you would have set up the scales in the following way:
    <u>KRDO (A-F)</u>
    A = 90,000
    B= 80,000
    C= 70,000
    etc., or some similar scale values
    <u>KRG1 (4.0 scale)</u>
    4.0 would map to 90,000 (NOT 100,000) - This means that the 'upper limit' of that linear quantity scale is something other than 4.0.
    <u>KRDR (A-F with +/-)</u>
    A+ = 93,000
    A0 = 90,000
    etc.
    If you did this, then you would not even need scale KRGV.
    Here is the problem.  You cannot just go and change your scales now to this simple model.  Why?  The appraisals you have already stored contain the absolute value (i.e. the value on the 100,000 scale).  They would then all be wrong, unless you went into each appraisal and re-stored it.
    I mainly describe that scenario for everyone else out there setting up their systems.  Leave some 'headroom' above your current maximum score in the GPA!  You may need it later.  This is described in the Base Configuration Cookbook in my sticky post.
    Unfortunately for you, I suspect you set up your scale KRDO with A = 100,000, and your linear scale KRG1 with 4.0 as the highest value (= 100,000).  That means you really do need the extra 4.3 scale of KRGV.
    You will probably not want to keep going into the IMG to change the scale for GPA.  In fact, you might have a user bring up the performance indices for a pre-2001 student and then immediately jump to a post-2001 student!  So, here are your possible solutions:
    1) Just have two different performance indices: one for pre-2001, one for post-2001
    2) For your GPA PI, use the 4.3 scale.  However, then you need to change your PI Calculation itself to look at the appraisal records.  For the pre-2001 students, you need to actually CHANGE (in memory; not in the database) the assigned grades from A to A0, B to B0, C to C0, etc.
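The remapping described in option 2 can be sketched outside SAP. This is an illustrative Python sketch (not SAP code) of the idea: store grades on one common internal point scale with headroom, remap old-scale grades in memory, and report GPA on the 4.3 scale. All point values and grade names here are assumptions modeled on the KRDO/KRDR examples above.

```python
# Illustrative sketch of the two-scale GPA approach; all point values
# are assumptions based on the KRDO/KRDR examples in the post.

SCALE_POINTS = {          # common internal point scale with headroom
    "A+": 93000, "A0": 90000, "B+": 83000, "B0": 80000,
    "C+": 73000, "C0": 70000,
}
PRE_2001_ALIAS = {"A": "A0", "B": "B0", "C": "C0"}  # remap old grades in memory

def gpa_4_3(grades):
    # 93,000 internal points correspond to the top mark (4.3) on the
    # reporting scale, leaving headroom below 100,000
    points = [SCALE_POINTS[PRE_2001_ALIAS.get(g, g)] for g in grades]
    return sum(p / 93000 * 4.3 for p in points) / len(points)
```

The key design point, as the answer stresses, is that the old "A" and the new "A0" map to the same internal value, so pre-2001 and post-2001 appraisals can be averaged together.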

  • Is it possible to simulate large-scale non-linear systems using the Simulation Toolkit?

    Hi,
    I am new to LabVIEW, having used Matlab/Simulink for a few years. I am trying to simulate a relatively complex non-linear vehicle model. In Matlab/Simulink I would use an m-file to describe the system equations (i.e. x1_dot=..., x2_dot=..., etc.). Is there a similar method in LabVIEW? I have got a simple simulation running using the 'eval formula node', a very ungainly method of substituting variables, and an integrator in a 'simulation node'. It takes about an hour to run a simulation that takes 3 seconds in Matlab. I just need to know if it is realistic to do large-scale simulations in LabVIEW, or if it was not designed for that and I should persuade the management to go back to Matlab!
    If it is possible, where can I find help on the subject? I have spent a long time looking on the web and come up with very little.
    Many thanks in advance,
    Paul.

    If these are simple differential equations, the easiest approach is a for loop with a bunch of shift registers, one for each state x1, x2, etc., containing the current values.
    In your case, you would calculate all the derivatives from the instantaneous values inside the loop, then add them to each value before feeding them back into their respective shift registers.
    The attached very simple example shows how to generate an exponential decay function using the formula dx/dt = kx (with k negative). The shift register is initialized with the starting condition x = 1.
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    DiffEQ.vi ‏33 KB

  • Lens Distortion Non-linear Distortion and Perspective Distortion

    Hello everyone, I'm a member of the vision team in the Intelligent Ground Vehicle team in Cal State Northridge, an autonomous robot that uses real time feed and laser data as sensors.
    So we need a fully undistorted video feed for us to do image analysis with the lowest processing time.
    We have a distortion situation with our current vision system where we have to fix non-linear distortion due to the use of a wide lens (about 144 deg) and also perspective distortion of the projected image. The camera is mounted on top of our robot at an angle to the ground (about 45 deg).
    I have been using labview for about 6 months, and have been trying to correct this distortion using Vision Assistant program to generate VI's.
    First I'll list my attempts here and then I will direct some questions.
    First attempt: a picture of a dot grid was taken with our camera, and that picture was corrected by Vision Assistant's nonlinear image calibration. Then a VI was generated by Vision Assistant, and we changed the input from images to video.
    This method worked, but not accurately.
    Second attempt: we tried to remap the image pixels using a correction function generated in Excel; this method did not work accurately either. I'm not even sure how to do this again, since it was done in the past.
    So my questions are here:
    1) What's the best approach to correct the distortion from our camera, and how?
    2) What are we doing wrong/right in general?
    3) Generally, how would I improve any vision VI to reach faster processing times?
    Everyone's help is very much appreciated,
    Omar
    Cal State Northridge
    The Intelligent Ground Vehicle team
    [email protected] 

    I'd like to correct the information provided by Bruce. It was correct up until Vision Development Module 2010. You need to update, Bruce ;-)
    In Vision Development Module 2011, we improved the accuracy and precision of our grid calibration by computing a lens distortion model (Division or Polynomial model) that takes into account radial and tangential non-linear distortion of the lens. The new algorithm also lets you compute the internal parameters of the lens (focal length and optical center), which you can use to determine the relationship of the camera to the object under inspection.
    The microplane algorithm that Bruce described is still available, as it can be used to solve applications in which the object you're trying to calibrate is non-planar and can be calibrated by wrapping the grid around the object.
    A calibration is valid for a specific perspective plane, and will need to be recomputed if the camera moves or the object is in a different plane.
    Typically, the calibration process is an offline process. For performance reasons, you don't necessarily want to correct the entire image (although VDM provides that function). As Bruce mentioned, most algorithms, like edge detection, pattern matching, etc., return their data both in pixels and in real-world units, taking into account the calibration information that you learned. For the few algorithms that only provide pixel results, you can use VIs to convert pixels to real-world coordinates. This operation is fast.
    What type of features are you trying to measure in the image? Can it be solved in 2D, or do you need 3D information? How many cameras do you have on the robot?
    -Christophe
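To make the "Division Model" mentioned above concrete, here is a minimal Python sketch of its radial form: each pixel is scaled toward or away from the optical center by 1/(1 + k*r^2). The coefficient k and the optical center used here are placeholder assumptions; VDM estimates the real values from the calibration grid, along with tangential terms not shown here.

```python
# Minimal sketch of the radial division distortion model. The optical
# center (cx, cy) and coefficient k are assumptions - in practice they
# come from the grid calibration.

def undistort_point(x, y, cx, cy, k):
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 / (1.0 + k * r2)       # division model: r_u = r_d / (1 + k*r_d^2)
    return cx + dx * scale, cy + dy * scale

# A point at the optical center is unchanged; points far from the
# center move more, which is exactly the wide-lens behavior described.
```

This also illustrates Christophe's performance point: converting a handful of measured points this way is cheap, whereas remapping every pixel of a full frame is not.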

  • Universe.applyOverload method runs non-linearly slower and slower

    The Universe.applyOverload method runs non-linearly slower and slower: for the 10th user and restriction we add, the method runs for 1 second, but for the 80th user and restriction, it runs for 8 seconds.
    Customers think it's a bug in the BOE Java SDK method. Could I know why this method is non-linear, and is there any way to improve its running speed?
    The following is the code; the iterator runs 80 times.
    while (iter.hasNext()) {
        user = (rpt_users_t) iter.next();
        IOverload overload = (IOverload) newObjs.add("Overload");
        overload.setTitle(user.getBoezh().trim());
        overload.setUniverse(universe.getID());
        overload.setConnection(connectionID);
        overload.getRestrictedRows().clear();
        overload.getRestrictedRows().add("HZB0101_T", "HZB0101_T.BRANCH_COMPANY_CODE='3090100' AND HZB0101_T.DEPARTMENT_CODE='" + user.getBmdm() + "'");
        overload.getRestrictedRows().add("BM_T", "BM_T.DEPARTMENT_GROUP_CODE='" + user.getBmzdm().trim() + "'");
        infoStore.commit(newObjs);
        // Commit to User
        IInfoObject everyone = (IInfoObject) infoStore.query(
                "Select TOP 1 SI_ID " + " From CI_SYSTEMOBJECTS " + " Where SI_KIND='User' " + " And SI_NAME='" + user.getBoezh().trim() + "'").get(0);
        int everyoneID = everyone.getID();
        universe.applyOverload(overload, everyoneID, true);
        //infoStore.commit(objs);
        System.out.println(user.getBoezh() + " loading...");
    }

    When invoking applyOverload multiple times, the total cost is O(N^2) if you're granting rights to a User or UserGroup.
    When granting, the applyOverload method retrieves all Overloads applied to the Universe and walks across each one, granting the identified ones and removing rights from the others.
    Sincerely,
    Ted Ueda
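Ted's explanation matches the observed timings: if each call re-walks every overload applied so far, the per-call time grows linearly (1 s for the 10th call, 8 s for the 80th), so the cumulative work is quadratic. This is not BOE SDK code, just a sketch of that cost model:

```python
# Illustration of the O(N^2) behavior described above: N calls, each of
# which re-walks every overload applied so far, do N*(N+1)/2 walk steps.

def total_work(n_calls):
    work = 0
    applied = []
    for i in range(n_calls):
        applied.append(i)      # one new overload per applyOverload call
        work += len(applied)   # each call re-walks all existing overloads
    return work
```

So the 1 s -> 8 s growth is the per-call (linear) component; the total run time over all 80 users grows as the sum, roughly quadratically.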

  • Non-linearity between pixels and width for a linescan camera

    Hi
    I'm developing a width gauge based on a linescan camera.
    The gauge measures the surface of a steel strip moving at 200 m/sec.
    I found a non-linearity in the relation between pixels and real width:
    1904 mm = 6635.6 pixels
    1308 mm = 4616.7 pixels
    814 mm = 2880.6 pixels
    I've tried working distances between 2600 and 2800 mm.
    The field of view is 2200 mm.
    The focal length is 35 mm.
    The sensor size is 28.67 mm (3.5 um x 8192 pixels - Basler raL8192-12gm).
    Somebody already faced this problem?
    Thanks,
    Alexandre.

    Hi Greg
    Thanks for your idea. I implemented a polynomial from Excel, per the attached file. It's been running well so far.
    Another problem that I'm facing is oscillation in the image (https://www.dropbox.com/s/cboyglx74g8sd93/width%20gauge.mp4?dl=0) when the grid and filter are applied. I would like the filtered image and the grid from the edge detector to be stable.
    Does anyone know how to solve this?
    Thanks for the moment.
    Attachments:
    calibrcurve_ml2.xlsx ‏20 KB
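The Excel polynomial approach can be sketched directly from the three measured points in the original post. A quadratic through three points is determined exactly, so Lagrange interpolation gives the same curve an Excel trendline would; whether a quadratic is the right model for this optics is an assumption to verify against more calibration points.

```python
# Polynomial calibration (pixels -> mm) from the three measured points in
# the post, evaluated via Lagrange interpolation. A degree-2 polynomial
# fits three points exactly.

POINTS = [(2880.6, 814.0), (4616.7, 1308.0), (6635.6, 1904.0)]  # (px, mm)

def pixels_to_mm(px, points=POINTS):
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (px - xj) / (xi - xj)   # Lagrange basis polynomial
        total += term
    return total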

  • Non - linear to linear waveform reshaping

    Hello,
      Need to reshape a non-linear waveform into a linear waveform, per the example in the attached Excel document.  I have a sensor output voltage that is non-linear.  The first objective is to reshape the waveform to the linear form y = mx + b, and then convert that to y = -mx + b.  Considering I'm not a mathematician, I need direction.
    Thanks,
    Dave
    Attachments:
    hsen.xlsx ‏14 KB

    Why not use a table scale in NI MAX? Then when you read your DAQ measurement, it will already be scaled to mm. If you need to, you can also set up the scale in your LabVIEW code.
    Attachments:
    Table Scale.png ‏91 KB
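A table scale is just piecewise-linear interpolation between calibration pairs, which is why it handles a non-linear sensor without any math on the user's part. A rough Python equivalent, with placeholder (voltage, mm) pairs standing in for the real sensor data in hsen.xlsx:

```python
# Rough equivalent of a MAX table scale: piecewise-linear interpolation
# between (voltage, mm) calibration pairs. TABLE values are placeholders,
# not the actual data from hsen.xlsx.

TABLE = [(0.5, 0.0), (1.2, 10.0), (2.8, 20.0), (4.1, 30.0)]  # (V, mm)

def scale_reading(volts, table=TABLE):
    if volts <= table[0][0]:
        return table[0][1]                      # clamp below the table
    for (v0, mm0), (v1, mm1) in zip(table, table[1:]):
        if volts <= v1:                          # interpolate in this segment
            return mm0 + (mm1 - mm0) * (volts - v0) / (v1 - v0)
    return table[-1][1]                          # clamp above the table
```

More table entries give a closer approximation of the sensor's curve, which is the same trade-off you make when entering points into the MAX table scale.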

  • Asset Lease accounting -non Linear lease payment

    We are currently having a business scenario where an asset is acquired on capital lease basis.
    Different depreciation area is maintained for leasing to account for interest on leasing.
    The payments are made in monthly for lease payment.
    The lease payment amount changes every year.
    The lease tab on the asset calculates net present value on the basis of the lease payment entered in the leasing tab.
    Now, since the lease payment varies every year, the net present value will be different (not as calculated in the asset master leasing tab).
    I am not aware whether the asset module can handle a non-linear payment scenario to calculate the NPV and interest accordingly.
    Example: an asset ABC is acquired on a capital/finance lease basis. The asset is to be capitalized at net present value, and we also need to ensure that the interest calculation is done accordingly. The monthly lease payments for 4 years are as follows:
    1st year  1000/month
    2nd year  2000/month
    3rd year  1500/month
    4th year  1900/month
    Since only a single lease payment can be entered in the lease tab, how can the different monthly lease payments be entered so as to calculate the correct NPV?
    And how can the correct interest and net present value then be calculated?
    If any one has come across such situation, please reply.
    Regards,
    Manish Garg

    Hi,
    SAP 4.7 does not support non-linear lease payments. Only equal lease installments can be generated automatically when you go for the opening balance on the leasing tab of the asset display (AS03).
    But ECC 6.0 does support changing the lease installment.
    Regards,
    Satyajit
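For reference, the NPV the poster wants is straightforward to state outside SAP. This is an illustration (not SAP functionality) using the four yearly payment levels from the example; the discount rate is an assumption that would come from the lease's interest terms:

```python
# NPV of the varying monthly lease payments from the example:
# year 1: 1000/month, year 2: 2000, year 3: 1500, year 4: 1900.
# The monthly discount rate is an assumed input.

PAYMENTS = [1000.0] * 12 + [2000.0] * 12 + [1500.0] * 12 + [1900.0] * 12

def lease_npv(monthly_rate, payments=PAYMENTS):
    # the payment at the end of month t is discounted by (1 + r)^t
    return sum(p / (1 + monthly_rate) ** t
               for t, p in enumerate(payments, start=1))

# With a 0% rate, the NPV is simply the 76,800 total of all payments;
# any positive rate gives a smaller capitalization value.
```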

  • Magic Moves not always Magic (non-linear slide orders)

    Magic Move opens up the possibility of non-linear slide orders with animated transitions. Previously in Keynote (KN '08), as far as I could ascertain, this was impossible. Now a slide with several hyperlinked objects can enjoy an animation of the on-screen objects to their new positions, wherever they may be on the linked slides.
    Unfortunately, as with some other things in Keynote, it seems flaky. While it works most of the time, for some reason, even when duplicating the linking slide and repositioning the very same objects, the Move is delayed and then instantaneous: decidedly not Magic!
    The joy of finally being able to make a fully interactive Keynote presentation that slides objects the right amount automatically according to their destination slide seems within reach. The great thing is I don't have to construct all the different moves linking 26 different pages to each other via various routes (8 links on each page, which comes to 8 x 26 = 208 possible linking routes and move builds: too much for prescriptive build/move animations).
    But if I can't get it to work 100% of the time, I can't really show my client. Any known issues with this Magic Move stuff?
    I can replicate the problem, but I can't predict when it won't work or decode what is triggering it. Searching the forum only revealed a comment that Magic Move is a work in progress. Note I am not scaling any objects. I can upload the file if it helps. Part of the original presentation, with a linear narrative, can be seen at:
    http://www.vimeo.com/3018559
    Essentially I am endeavouring to make all the images 'clickable' so the film strip glides (against the preset order) to become the centre project. I'd also like to add left and right buttons. All this can theoretically be done with invisible rectangles with hyperlinks and Magic Moves on each slide.
    Thanks for reading through all this - hope it's clearer than mud.

    Thanks for the response, Kyn.
    As it happens, after writing that long explanation I thought I'd look at it from another angle and returned to the file - finding the answer straight away.
    Conclusion:
    Hyperlinks never use the Magic Move transition, even if it is the selected transition for the slide. I was getting hoodwinked by the fact that when I had forward and back arrows with hyperlinks to other slides, those slides were immediately before and after the current slide. So they were Magic Moving on a mouse click on the hyperlink, but only when going forward one slide or backward one slide. Any other linking distance failed to Magic Move.
    I've already made the feature request, but *please second it* - this would be a very powerful feature. Think non-linear presentations and info kiosks with seamless transitions. Interaction is an area where PowerPoint has a decided edge on Keynote, so the gap can and needs to be closed! (PP is such a trial of an interface after KN.)
    http://www.apple.com./feedback/keynote.html
    I'll post the .keynote as requested when uploaded.

  • Debugging using Eclipse and non-standalone 10g

    Hey - is it possible to perform remote debugging with Eclipse against the non-standalone version of the 10g App Server? I've tried some of the hints listed in this forum (the most promising seemed to be starting oc4j with this command:
    java -classic -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=2727 -jar oc4j.jar, then connecting with the Eclipse debugger, which I managed to do - though I had to change one of the parameters because a designated file [a timezone file] could not be found - after the change things seemed to start up correctly. However, when I tried to hit http://localhost:2727 I got a "server not found or DNS error"). I don't know if debugging works with the non-standalone version of 10g or not... or if I should be using some other startup command, so hopefully y'all can help :)
    Thanks,
    Scott

    Scott, apparently you have fallen prey to one of the most common misunderstandings of oc4j. Oc4j, namely Oracle Application Server Containers for J2EE, is a COMPLETE J2EE server. It supports http and https by itself. It is also an EJB container and a lot more. It can be used standalone or, with a different configuration, integrated into Oracle Application Server as its core component.
    When you use the command "java -jar oc4j.jar ...", you are starting an instance of "standalone oc4j" (or "oc4j standalone", as mentioned above), albeit from an OAS installation. In contrast, you can start one instance of Oracle Application Server with "<ORACLE_HOME>/opmn/bin/opmn startall", which starts, besides several other components, one or several oc4j processes as its oc4j component. This OAS includes an HTTP server that takes initial requests and routes the requests that are for OC4J to the OC4J server. It may handle some of the requests by itself, or route other http requests to other components, like mod_plsql. Finally, what we usually call a "farm" is a collection of OAS clusters and instances that share the same OAS Infrastructure.
    What I have written is just a tiny mini-introduction to OAS. For starters, oc4j is for use in development and small-to-medium-scale production deployments. Please take a look at the book "Oracle Application Server Containers for J2EE Standalone User's Guide" and then "Oracle Application Server Concepts". You can find them as follows:
    1. go to http://otn.oracle.com
    2. select "documentation" pull-down menu and then "application server"
    3. go to the first "view library" or the one suits you.
    4. view the "Oracle Application Server Concepts"
    5. or go to "List of All Books" and you will see all of them!
    Well, the best place to self-teach oc4j is, besides googling with site:oracle.com, the oc4j homepage,
    http://www.oracle.com/technology/tech/java/oc4j/index.html
    The above is written in case you need more information. Now let me come to your specific questions.
    1. How to hit the OC4J instance directly, bypassing an HTTP server entry? Since what you started is an oc4j standalone, it does not make any sense to say "bypassing an HTTP server entry". The oc4j instance itself IS an http server. To access its http service, try
    http://<yourHostNameOrIP>:<httpPort>/<yourAppWebContextRoot>
    where the httpPort and yourAppWebContextRoot can be found in default-web-site.xml or http-web-site.xml. Which of these (or other) web-site.xml files is effective is determined in <OC4J_HOME>/config/server.xml. Here OC4J_HOME refers to your "C:/OraHome1/j2ee/home".
    2. How to get the HTTP server started via the command-line startup? As said above, you already have an HTTP server running.
    What I suggest for you is to play with oc4j standalone. You can come to OAS later:
    1. Download the oc4j standalone distribution, "Oracle Containers for J2EE 10g (10.1.3) Developer Preview 3", from "http://www.oracle.com/technology/software/products/ias/preview.html"
    2. Unzip the downloaded file. What you get is basically the same as a part of your existing "C:/OraHome". Go to j2ee/home. Run
    java -Xdebug -Xnoagent -Djava.compiler=none -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=3301 -jar oc4j.jar
    Make sure you are using java 1.4.2 sdk.
    3. "New" your remote debugger from Eclipse and specify the hostname and port 3301.
    4. Browse to http://<yourHost>:8888/
    A further note: to debug JSPs, you can use JDeveloper, which allows you to set breakpoints directly in a JSP running in its embedded oc4j server.

  • Using "no non-linear" under voice-port to resolve hissing on NM-HDV

    Hello.
    I have a trouble with a 2MFT on an NM-HDV.
    I can hear hissing on calls.
    So I configured "no non-linear" under the voice-port, and now I can't hear any hissing.
    Instead, I can hear echo on calls.
    How can I fix it?
    Regards,
    John

    Hello Venky,
    Thank you for your help.
    I tried to configure it based on the web site you pointed me to,
    but I couldn't resolve the echo issue.
    The voice gateway is a Cisco 2851 running 12.4(1a),
    so the solution for resolving echo doesn't fit my case.
    As I mentioned, normally I hear noise whenever I talk to another person. If I stop the conversation, I can't hear the noise, but when I start talking again, I hear it each time.
    When I disable non-linear processing ("no non-linear"), I hear echo instead of noise.
    So I have to choose between hearing noise or echo.
    I also tried changing the input gain and output attenuation from -6 to 14, but the echo can still be heard.
    Our VoIP network topology is below:
    NEC Digital Phone---NEC PBX---Gateway---(cat6500---MSPP(WAN Area)---cat6500)---Gateway---NEC Digital Phone.
    If I make a call using a normal phone (not a digital phone), I can't hear the noise; with the NEC Digital Phone I always hear it.
    Here are the voice-port options on the Cisco 2851 with 12.4:
    2851(config-voiceport)#?
    Voice-port configuration commands:
    bearer-cap Specify the bear capability
    busyout Configure busyout trigger event & procedure
    comfort-noise Use fill-silence option
    compand-type The companding type for this voice port
    connection Specify Trunking Parameters
    cptone Configure voice call progress tone locale
    default Set a command to its defaults
    description Description of what this port is connected to
    disc_pi_off close voice path when disconnect with PI received
    echo-cancel Echo-cancellation option
    exit Exit from voice-port configuration mode
    idle-detection Idle code detection for digital voice
    input Configure input gain for voice
    music-threshold Threshold for Music on Hold
    no Negate a command or set its defaults
    non-linear Use non-linear processing during echo cancellation
    output Configure output attenuation for voice
    playout-delay Configure voice playout delay buffer
    shutdown Take voice-port offline
    threshold Threshold [noise] for voice port
    timeouts Configure voice timeout parameters
    translate Translation rule
    translation-profile Translation profile
    trunk-group Configure interface to be in a trunk group
    voice-class Set voiceport voice class control parameters
    Please advise me how to resolve the problem.
    Regards,
    John

  • Measure a Linear Scale Unit, with sine wave signal

    I have a Linear Scale Unit with three outputs: Phase A, Phase B, and a Scale ABS point. The output signals are sine waves with a phase difference of 90º. I have an NI PCI-6034E, and I don't know how to read those signals, convert them to mm, and use them in a LabVIEW program. I only find quadrature encoder examples. I don't know whether it is better to read the signals with a counter or with analog input lines.
    Thanks and Best Regards

    Hi Luis,
    If your inputs are sine waves, you're probably best off using analog input, though if the voltages are correct, you might be able to use counters.
    The conversion from the voltage outputs to mm distance is dependent on the sensor, so you will have to look in its documentation or search the web to find the parameters for your device. Once you find these values, you should be able to convert with a pretty simple LabVIEW application based on the input signals, either using their exact values or counting cycles, whichever is appropriate for your sensor.
    If you need more help with the conversion, please post more detailed information on the encoding of your sensor.
    Thank you,
    Kyle Bryson
    National Instruments
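One common way to do the conversion Kyle describes with analog input is to treat the 90º-shifted sine pair as cosine/sine of a phase angle, recover the angle with atan2, and unwrap it across periods. This is a hedged sketch of that technique; the scale pitch (signal period in mm) used here is an assumption to be replaced with the value from the scale's datasheet:

```python
import math

# Sketch: convert sampled Phase A / Phase B sine signals (90 deg apart)
# into displacement. PITCH_MM (the scale's signal period) is an assumed
# placeholder - take the real value from the sensor documentation.

PITCH_MM = 0.02

def positions_mm(a_samples, b_samples, pitch=PITCH_MM):
    pos, prev_phase, turns = [], None, 0
    for a, b in zip(a_samples, b_samples):
        phase = math.atan2(b, a)            # angle within one scale period
        if prev_phase is not None:          # unwrap across the +/-pi jump
            if phase - prev_phase > math.pi:
                turns -= 1
            elif phase - prev_phase < -math.pi:
                turns += 1
        prev_phase = phase
        pos.append((turns + phase / (2 * math.pi)) * pitch)
    return pos
```

This interpolates within each signal period, which is why reading the sine pair with analog inputs can give finer resolution than simply counting cycles; the Scale ABS output would then be used to establish the absolute reference position.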

  • Non-linear moves in Ken Burns effects

    Pans and zooms tend to ramp up in speed. This doesn't happen all the time, but often enough that it is a problem. Anyone else dealing with this?

    This is not a bug, but a feature of iMovie HD... ;-))
    To be serious: iMovie grandmaster Karl Petersen has mentioned this often in this forum and has alerted Apple, because in many cases a non-linear movement looks much better, and in some cases absolutely not - so there should be a check-box.
    The only workaround so far: make the movement and the time a little longer, then cut off the unwanted acceleration. :-/
