Accurate scaling of anatomical data

I am using AI to import anatomical tracing data (paths and symbols). The data coordinates are in millimeters. I create an artboard as follows:
The file import code reads the data and then passes the bounds data to the art routine, CreateDataArt, as minv and maxv. As can be seen (abRect), the bounds of the data are 13 mm wide by 14 mm top to bottom. Two paths are then drawn into the artboard as shown here:
Note that the rulers are in millimeter units and that the artboard is nowhere near the correct size.
Any clues would be appreciated.
Charlie Knox
AccuStage

Regardless of what ruler unit you're using in Illustrator, the API always* takes points/pixels as its unit. So if you have values in millimeters and you want that to show up as the correct distance, etc., you'll need to multiply it out correctly to points and use that value when talking to the API.
*In CS6, there is a method that lets you tell the API to use whatever the current ruler unit is showing. I haven't tried it though, so I find it safer to just always convert to points. I have a couple of simple functions that just check the current ruler unit and do any conversion (if necessary). Then I use those ToRulerUnit() or FromRulerUnit() anywhere I take inputs from the UI (or from an imported file, as it sounds like in this case).
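For concrete numbers: a point is 1/72 inch and an inch is 25.4 mm, so the conversion factor is 72/25.4. A minimal sketch of the kind of helper functions described above (the names are illustrative, not from any SDK):

```javascript
// 1 pt = 1/72 in and 1 in = 25.4 mm, so 1 mm = 72/25.4 pt
var POINTS_PER_MM = 72 / 25.4;

// Convert a millimeter value from the UI or an imported file into
// the points the Illustrator API expects.
function mmToPoints(mm) {
    return mm * POINTS_PER_MM;
}

// Convert an API value in points back into millimeters for display.
function pointsToMm(pt) {
    return pt / POINTS_PER_MM;
}

// The 13 mm x 14 mm artboard from the original post:
var widthPt  = mmToPoints(13);  // ~36.85 pt
var heightPt = mmToPoints(14);  // ~39.69 pt
```

Passing `widthPt`/`heightPt` (rather than the raw 13 and 14) to the artboard call is what makes the rulers read 13 mm by 14 mm.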

Similar Messages

  • Conversion from scaled to unscaled data using Graph Acquired Binary Data

    Hello !
    I want to acquire temperature with a pyrometer using a PCI 6220 (analog input, 0-5 V). I'd like to use the VI
    Cont Acq&Graph Voltage-To File(Binary) to write the data into a file and the VI Graph Acquired Binary Data to read it back and analyze it. But I don't quite understand how "Convert unscaled to scaled data" works in this VI. I know it uses information from the header to scale the data, but how?
    My card will give me back a voltage, but how can I transform it into a temperature? Can I configure this somewhere, so that "Convert unscaled to scaled data" will do it, or should I do this myself with a formula?
    Thanks.

    Nanie, I've used these examples extensively and I think I can help. Incidentally, there is actually a bug in the examples, but I will start a new thread to discuss this (I haven't written the post yet, but it will be under "Bug in Graph Acquired Binary Data.vi:create header.vi Example" when I do get around to posting it). Anyway, to address your questions about the scaling: I've included an image of the block diagram of Convert Unscaled to Scaled.vi for reference.
    To start, the PCI-6220 has 16-bit resolution. That means that the range (±10 V, for example) is broken down into 2^16 (65536) steps, or steps of ~0.3 mV (20 V/65536) in this example. When the data is acquired, it is read as the number of steps (an integer), and that is how you are saving it. In general it takes less space to store integers than real numbers; in this case you are storing the results in I16s (2 bytes/value) instead of SGLs or DBLs (4 or 8 bytes/value respectively).
    To convert the integer to a scaled value (either volts or some other engineering unit) you need to scale it. In the situation where you have a linear transfer function (scaled = offset + multiplier * unscaled), which is a 1st-order polynomial, it's pretty straightforward. The Convert Unscaled to Scaled.vi handles the more general case of scaling by an nth-order polynomial (a0*x^0 + a1*x^1 + a2*x^2 + ... + an*x^n). A linear transfer function has two coefficients: a0 is the offset and a1 is the multiplier; the rest of the a's are zero.
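    In code form, the general nth-order scaling described above is just a polynomial evaluation. A sketch (not the actual VI, which does this graphically on the block diagram):

    ```javascript
    // Scale one unscaled (integer) reading with polynomial coefficients
    // [a0, a1, a2, ...]: scaled = a0 + a1*x + a2*x^2 + ...
    function scaleSample(coeffs, x) {
        // Horner's method, evaluated from the highest-order term down
        var result = 0;
        for (var i = coeffs.length - 1; i >= 0; i--) {
            result = result * x + coeffs[i];
        }
        return result;
    }

    // Linear case: offset a0 = 0, multiplier a1 = one code step.
    // For a ±10 V range on a 16-bit device one step is 20/65536 V
    // (~0.3 mV), so a plausible (made-up) coefficient set is:
    var coeffs = [0, 20 / 65536];
    var volts = scaleSample(coeffs, 16384); // 16384 steps -> 5 V
    ```

    The same function handles the non-linear case simply by passing more coefficients, which is the clever part of the VI's design noted below.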
    When you use the Cont Acq&Graph Voltage-To File(Binary).vi to save your data, a header is created which contains the scaling coefficients stored in an array. When you read the file with Graph Acquired Binary Data.vi those scaling coefficients are read in and converted to a two dimensional array called Header Information that looks like this:
    ch0 sample rate, ch0 a0, ch0 a1, ch0 a2,..., ch0 an
    ch1 sample rate, ch1 a0, ch1 a1, ch1 a2,..., ch1 an
    ch2 sample rate, ch2 a0, ch2 a1, ch2 a2,..., ch2 an
    The array then gets transposed before continuing.
    This transposed array and the unscaled data are passed into Convert Unscaled to Scaled.vi. I am probably just now getting to your question, but hopefully the background makes the rest of this simple. The Header Information array gets split up, with the sample rates (the first row in the transposed array), the offsets (the second row), and all the rest of the gains entering the for loops separately. The sample rate sets the dt for the channel, the offset is used to initialize the scaled data array, and the gains are used to multiply the unscaled data. With a linear transfer function, there will only be one gain for each channel. The clever part of this design is that nothing has to be changed to handle non-linear polynomial transfer functions.
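    Putting the pieces together: each row of the Header Information array (before the transpose) is one channel's [sample rate, a0, a1, ..., an], and scaling a channel means applying its polynomial to every sample. A rough sketch of that logic (the real VI does this with for loops on the block diagram; the numbers below are invented):

    ```javascript
    // header[ch] = [sampleRate, a0, a1, ..., an] for channel ch
    // unscaled[ch] = array of integer samples for channel ch
    function scaleAllChannels(header, unscaled) {
        return header.map(function (row, ch) {
            var dt = 1 / row[0];       // sample rate -> dt for the channel
            var coeffs = row.slice(1); // polynomial coefficients a0..an
            var scaled = unscaled[ch].map(function (x) {
                var y = 0;
                for (var i = coeffs.length - 1; i >= 0; i--) {
                    y = y * x + coeffs[i]; // Horner evaluation
                }
                return y;
            });
            return { dt: dt, data: scaled };
        });
    }

    // Two channels, both linear (a2..an omitted, i.e. zero):
    var header = [[1000, 0, 0.001], [1000, 1, 0.002]];
    var result = scaleAllChannels(header, [[100, 200], [100, 200]]);
    // result[0].data -> [0.1, 0.2]; result[1].data -> [1.2, 1.4]
    ```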
    I normally just convert everything to volts and then manually scale from there if I want to convert to engineering units. I suspect that if you use the express vi's (or configure the task using Create DAQmx Task in the Data Neighborhood of MAX) to configure a channel for temperature measurement, the required scaling coefficients will be incorporated into the Header Information array automatically when the data is saved and you won't have to do anything manually other than selecting the appropriate task when configuring your acquisition.
    Hope this answers your questions.
    Chris
    Message Edited by C. Minnella on 04-15-2005 02:42 PM
    Attachments:
    Convert Unscaled to Scaled.jpg (81 KB)

  • RFSA scaling complex I16 data

    I've successfully acquired complex I16 data using the niRFSA_FetchIQSingleRecordComplexI16 function, but after computing the spectral result my amplitudes are ~15 dBm too low. I must be scaling the data incorrectly. I'm assuming that the gain and offset values returned in the wfmInfo structure are applied to the real and imaginary values. I'm also assuming that the gain and offset values take the programmed reference level set with niRFSAConfigureReferenceLevel into account. The gain scaling values do change when I select a different reference level.
    Are these assumptions correct?  The gain value seems to be off by a factor of 4 (or so).
    I'm using the 5663 RFSA hardware configuration.   When I use the RFSA SFP the amplitudes are correct.
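    For reference, the usual way fetched I16 IQ data is turned into floating-point values is scaled = offset + gain * raw, applied to the I and Q components independently. A sketch of that step (assuming the gain/offset come from the wfmInfo structure as described above; whether that is the right coefficient pair is exactly the open question in this thread):

    ```javascript
    // Scale interleaved complex I16 samples [i0, q0, i1, q1, ...] into
    // floating-point IQ pairs using scaled = offset + gain * raw.
    function scaleComplexI16(raw, gain, offset) {
        var out = [];
        for (var n = 0; n < raw.length; n += 2) {
            out.push({
                re: offset + gain * raw[n],
                im: offset + gain * raw[n + 1]
            });
        }
        return out;
    }

    // Using the gain value quoted later in the thread (offset 0):
    var iq = scaleComplexI16([1000, -1000], 2.88301e-5, 0);
    // iq[0].re ~= 0.0288, iq[0].im ~= -0.0288
    ```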

    Hi Timothy,
    To setup a measurement I call the following sequence of functions:
    niRFSA_ConfigureReferenceLevel,  -20dBm
    niRFSA_ConfigureRefClock, OnboardClock, 10e6
    niRFSA_ConfigureIQCarrierFrequency, 975e6
    niRFSA_ConfigureIQRate, 64e6
    niRFSA_ConfigureNumberOfSamples, 131072
    niRFSA_Commit
    To start the measurement I then call niRFSA_Initiate.
    Then niRFSA_CheckAcquisitionStatus is called repeatedly until it indicates
    that data is available.
    When data is available I call niRFSA_FetchIQSingleRecordComplexI16
    to get the int16 complex data.
    I then call niRFSA_GetScalingCoefficients to get the scaling coefficients and
    find that the gain and offset values are different from those returned by
    niRFSA_FetchIQSingleRecordComplexI16. I added this call to niRFSA_GetScalingCoefficients
    because the results I was getting with the gain returned by niRFSA_FetchIQSingleRecordComplexI16
    weren't correct.
    I then use the gain and offset values to scale the int16 complex values into real32 complex values.
    I'm puzzled as to why the scaling coefficients are different between 
    niRFSA_GetScalingCoefficients and niRFSA_FetchIQSingleRecordComplexI16.
    For this setup the gain value returned by niRFSA_GetScalingCoefficients is 2.88301e-5
    and the gain value returned by niRFSA_FetchIQSingleRecordComplexI16 is 4.61151e-6. 
    When I apply either of these gain values to the data and compute the spectrum my
    signal amplitudes are not correct.
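    A quick sanity check on those two numbers: the ratio of the gains is about 6.25, and since a gain scales a voltage, that ratio corresponds to 20*log10(6.25) ≈ 15.9 dB, suspiciously close to the ~15 dBm discrepancy reported at the top of the thread. The arithmetic (not an explanation of why the API returns two different values):

    ```javascript
    // Compare two voltage-gain values as a dB difference:
    // a voltage ratio r corresponds to 20*log10(r) dB.
    function gainRatioDb(gainA, gainB) {
        return 20 * Math.log10(gainA / gainB);
    }

    var db = gainRatioDb(2.88301e-5, 4.61151e-6); // ~15.9 dB
    ```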
    Hope this helps,
    Gary

  • What data count is accurate? My Verizon, #data, or the usage text/email alert?

    Hi,
    I'm back with Verizon after 2 years with Sprint. With Sprint, the three lines I had (two iPhones and an HTC Android phone) never exceeded 2 GB, so I opted for the 2 GB data plan with Verizon. Now I have two iPhones and one Galaxy S3. The way the phones are used is about the same, or a little less, since now my mom's iPhone runs off her WiFi when at home. The other two phones run off my home WiFi, plus I typically work at home. The 2nd iPhone is my business cell, so its use is primarily for calls and a few emails.
    I monitor the usage every few days using My Verizon and the VZ website. Yesterday those counts showed 1.64 GB total usage. However, I received text messages and emails last night saying I had used 90% of my limit. This morning I went jogging (and used Pandora). I received another text/email saying I had gone over my data usage. I immediately checked the Verizon app and website, and I was not over. There are just two days left in the billing cycle. I checked the My Verizon app, the My Verizon website, and #data just now. They all show 1.78 GB, which seems accurate. Also, online, the last activity logged was less than an hour ago and included data from my jog earlier.
    So how do I know if I'm really over, or whether I'm just being suckered into upgrading? Is the text/email alert accurate? I've read several posts on this forum and others suggesting #data and the VZ website are accurate and the texts are bogus. Others claim My Verizon shows no overage but their bill does.
    Any helpful responses will be appreciated!

        I can understand the frustration of receiving different information about your data usage, Gibb13. We appreciate you doing your part to monitor the usage, and we can definitely help with the accurate amount. By dialing #Data on your mobile device, you will see the entire amount of shared data used on your account. The amounts you are viewing on your individual phone may be from your line only; you will need the entire amount used by all your lines, which is higher than your individual usage. You should also get this information via My Verizon. We can also confirm the amount used if you contact us by dialing *611 or 1-800-922-0204. You can also send us a private message here with your name and number and we can call you to discuss the usage and help with changing your plan. Here is a list of current plans so that you can prevent overages on your bill http://bit.ly/xdDajD .
    KInquanaH_VZW
    Follow us on Twitter @vzwsupport

  • What is the best way to export the data out of BW into a flat file on the S

    Hi All,
    We are BW 7.01 (EHP 1, Service Pack Level 7).
    As part of our BW project scope for our current release, we will be developing certain reports in BW, and for certain reports, the existing legacy reporting system based out of MS Access and the old version of Business Objects Release 2 would be used, with the needed data supplied from the BW system.
    What is the best way to export the data out of BW into a flat file on the Server on regular intervals using a process chain?
    Thanks in advance,
    - Shashi

    Hello Shashi,
    some comments:
    1) An "open hub license" is required for all processes that extract data from BW to a non-SAP system (including APD). Please check with your SAP Account Executive for details.
    2) The limitation of 16 key fields is only valid when using open hub for extracting to a DB table. There's no such limitation when writing files.
    3) Open hub is the recommended solution since it's the easiest to implement, no programming is required, and you don't have to worry much about scaling with higher data volumes (APD and CRM BAPI are quite different in all of these aspects).
    For completeness, here's the most recent documentation which also lists other options:
    http://help.sap.com/saphelp_nw73/helpdata/en/0a/0212b4335542a5ae2ecf9a51fbfc96/frameset.htm
    Regards,
    Marc
    SAP Customer Solution Adoption (CSA)

  • ExtendScript CS6 doesn't handle GPS data properly

    I prefer to work in Photoshop and Bridge, rather than Lightroom. I am currently using version CS6 on Windows 7 x64. I occasionally need to update the title, description, and/or GPS information of a photograph. (I have many photographs that predate GPS systems.) I also maintain several different JPEG versions of each photograph, generated from a base PSD with different cropping and resolutions, intended for different purposes. I wrote a Bridge script to check and update the metadata of all versions of one or more images in a batch after updating the PSD metadata. The script works very well except for the GPS latitude and longitude data.
    The problem is that using the XMPMeta object that comes with ExtendScript Toolkit 3.8.0.13 (the latest version, I believe) the GPS coordinates are returned as string data in an "Adobe format", e.g. “35,24.3467N”. This format does not always accurately represent the underlying EXIF data.  (I have also written my own script for getting and setting the GPS data in Photoshop.)
    Furthermore, comparisons between PSD and JPEG formats are impossible because XMPMeta.getProperty only returns up to two decimal places for minutes when querying a PSD file, whereas it returns up to four decimal places for JPEGs.  I have checked the actual EXIF GPS data for a PSD image and a JPEG image generated from that PSD with EXIFTool and the data is identical, but XMPMeta.getProperty returns “35,24.3467N” from the JPEG and “35,24.35N” from the PSD. Setting GPS coordinates using four decimal places produces the same EXIF data for both PSD and JPEG images.
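    For anyone who needs to work with that string format in the meantime: the "Adobe format" above appears to be degrees, a comma, decimal minutes, and a hemisphere letter. A sketch of a parser, with the caveat that the format assumption is based only on the examples quoted in this post:

    ```javascript
    // Parse a string like "35,24.3467N" (degrees, decimal minutes,
    // hemisphere letter) into signed decimal degrees.
    function adobeGpsToDecimal(str) {
        var m = /^(\d+),(\d+(?:\.\d+)?)([NSEW])$/.exec(str);
        if (!m) return null; // unexpected format
        var deg = parseFloat(m[1]) + parseFloat(m[2]) / 60;
        // South and West are negative
        return (m[3] === "S" || m[3] === "W") ? -deg : deg;
    }

    var lat = adobeGpsToDecimal("35,24.3467N"); // ~35.405778
    ```

    Note this only recovers what the string carries; it cannot restore the precision XMPMeta.getProperty has already discarded for PSD files.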
    The English translation of the “Exif Version 2.3” standard description for GPSLatitude states:
    The latitude is expressed as three RATIONAL values giving the degrees, minutes, and seconds, respectively. If latitude is expressed as degrees, minutes and seconds, a typical format would be dd/1,mm/1,ss/1. When degrees and minutes are used and, for example, fractions of minutes are given up to two decimal places, the format would be dd/1,mmmm/100,0/1.
    It is clear that XMPMeta.getProperty does not always return a string that accurately represents the original EXIF data and it can be inconsistent between PSD and JPEG images.
    Is there some way to obtain an accurate representation of GPS data using ExtendScript?
    ...Jim

    I have seen some Photoshop scripts on the Web that retrieve and use the EXIF GPS data. But from what you write, I would think you're doing the same as those scripts do. Something like this:
    // Open Map from GPS Location.jsx
    // Version 1.0
    // Shaun Ivory ([email protected])
    // Feel free to modify this script.  If you do anything interesting with it,
    // please let me know.
    // JJMack I just used my sledge hammer on Shaun's Photoshop script
    // I sucked in his include file and removed his include statement
    // then I commented out his dialog and just Google Map the GPS location
    <javascriptresource>
    <about>$$$/JavaScripts/GoogleMapGPS/About=JJMack's Google Map GPS.^r^rCopyright 2014 Mouseprints.^r^rScript utility for action.^rNOTE:Modified Shaun's Ivory just Google Map GPS location</about>
    <category>JJMack's Action Utility</category>
    </javascriptresource>
    // Constants which identify the different mapping sites
    // Constants which identify the different mapping sites
    var c_MapWebsites = [
        "google",
        "mappoint",
        "virtualearth"
    ];
    var c_nDefaultMapWebsiteIndex = 2;
    var c_strTemporaryUrlFile = "~/TemporaryPhotoshopMapUrl.url";
    var c_strPhotoCaption = "Photo%20Location";
    // EXIF constants
    c_ExifGpsLatitudeRef   = "GPS Latitude Ref"
    c_ExifGpsLatitude      = "GPS Latitude"
    c_ExifGpsLongitudeRef  = "GPS Longitude Ref"
    c_ExifGpsLongitude     = "GPS Longitude"
    c_ExifGpsAltitudeRef   = "GPS Altitude Ref"
    c_ExifGpsAltitude      = "GPS Altitude"
    c_ExifGpsTimeStamp     = "GPS Time Stamp"
    c_ExifMake             = "Make"
    c_ExifModel            = "Model"
    c_ExifExposureTime     = "Exposure Time"
    c_ExifAperture         = "F-Stop"
    c_ExifExposureProgram  = "Exposure Program"
    c_ExifIsoSpeedRating   = "ISO Speed Ratings"
    c_ExifDateTimeOriginal = "Date Time Original"
    c_ExifMaxApertureValue = "Max Aperture Value"
    c_ExifMeteringMode     = "Metering Mode"
    c_ExifLightSource      = "Light Source"
    c_ExifFlash            = "Flash"
    c_ExifFocalLength      = "Focal Length"
    c_ExifColorSpace       = "Color Space"
    c_ExifWidth            = "Pixel X Dimension"
    c_ExifHeight           = "Pixel Y Dimension"
    function GetRawExifValueIfPresent(strExifValueName)
    {
        // Empty string means it wasn't found
        var strResult = new String("");
        // Make sure there is a current document
        if (app.documents.length)
        {
            // Loop through each element in the EXIF properties array
            for (var nCurrentElement = 0; nCurrentElement < activeDocument.info.exif.length; ++nCurrentElement)
            {
                // Get the current element as a string
                var strCurrentRecord = new String(activeDocument.info.exif[nCurrentElement]);
                // Find the first comma
                var nComma = strCurrentRecord.indexOf(",");
                if (nComma >= 0)
                {
                    // Everything before the comma is the field name
                    var strCurrentExifName = strCurrentRecord.substr(0, nComma);
                    // Is it a valid string, and is this our element?
                    if (strCurrentExifName.length && strCurrentExifName == strExifValueName)
                    {
                        // Everything after the comma is the value, so
                        // save it and exit the loop
                        strResult = strCurrentRecord.substr(nComma + 1);
                        break;
                    }
                }
            }
        }
        return strResult;
    }
    // Convert a Photoshop latitude or longitude, formatted like
    // this:
    //      Example: 47.00 38.00' 33.60"
    // to the decimal form:
    //      Example: 47.642667
    // It returns 0.0 if the string is in an unexpected form.
    function ConvertLatitudeOrLongitudeToDecimal(strLatLong)
    {
        var nResult = 0.0;
        // Copy the input string
        var strSource = new String(strLatLong);
        // Find the first space
        var nIndex = strSource.indexOf(" ");
        if (nIndex >= 0)
        {
            // Copy up to the first space
            var strDegrees = strSource.substr(0, nIndex);
            // Skip this part, plus the space
            strSource = strSource.substr(nIndex + 1);
            // Find the tick mark
            nIndex = strSource.indexOf("'");
            if (nIndex >= 0)
            {
                // Copy up to the tick mark
                var strMinutes = strSource.substr(0, nIndex);
                // Skip this chunk, plus the tick and space
                strSource = strSource.substr(nIndex + 2);
                // Find the seconds mark: "
                nIndex = strSource.indexOf("\"");
                if (nIndex >= 0)
                {
                    // Copy up to the seconds
                    var strSeconds = strSource.substr(0, nIndex);
                    // Convert to numbers
                    var nDegrees = parseFloat(strDegrees);
                    var nMinutes = parseFloat(strMinutes);
                    var nSeconds = parseFloat(strSeconds);
                    // Combine degrees, minutes, and seconds
                    nResult = nDegrees + (nMinutes / 60.0) + (nSeconds / 3600.0);
                }
            }
        }
        return nResult;
    }
    function GetDecimalLatitudeOrLongitude(strExifLatOrLong, strExifLatOrLongRef)
    {
        var strResult = "";
        // Get the exif values
        var strLatOrLong = GetRawExifValueIfPresent(strExifLatOrLong);
        var strLatOrLongRef = GetRawExifValueIfPresent(strExifLatOrLongRef);
        // If we were able to read them
        if (strLatOrLong.length && strLatOrLongRef.length)
        {
            // Parse and convert to a decimal
            var nResult = ConvertLatitudeOrLongitudeToDecimal(strLatOrLong);
            // If we are in the southern or western hemisphere, negate the result
            if (strLatOrLongRef.charAt(0) == 'S' || strLatOrLongRef.charAt(0) == 'W')
                nResult *= -1;
            strResult = nResult.toString();
        }
        return strResult;
    }
    function CreateGoogleMapsUrl(strLatitude, strLongitude)
    {
        return "http://maps.google.com/maps?ll=" + strLatitude + "," + strLongitude + "&spn=0.01,0.01";
    }
    function CreateMappointUrl(strLatitude, strLongitude)
    {
        return "http://mappoint.msn.com/map.aspx?L=USA&C=" + strLatitude + "%2c" + strLongitude + "&A=50&P=|" + strLatitude + "%2c" + strLongitude + "|39|" + c_strPhotoCaption + "|L1|";
    }
    function CreateVirtualEarthUrl(strLatitude, strLongitude)
    {
        return "http://virtualearth.msn.com/default.aspx?v=2&style=h&lvl=17&cp=" + strLatitude + "~" + strLongitude + "&sp=an." + strLatitude + "_" + strLongitude + "_" + c_strPhotoCaption + "_";
    }
    function CMapSiteSelection()
    {
        // Initialize default map provider
        this.Site = c_MapWebsites[c_nDefaultMapWebsiteIndex];
        return this;
    }
    function ShowMyDialog(strLatitude, strLongitude)
    {
        // Use the default website
        var strCurrentSite = c_MapWebsites[c_nDefaultMapWebsiteIndex];
        var dlgMain = new Window("dialog", "Choose a Map Website");
        // Add the top group
        dlgMain.TopGroup = dlgMain.add("group");
        dlgMain.TopGroup.orientation = "row";
        dlgMain.TopGroup.alignChildren = "top";
        dlgMain.TopGroup.alignment = "fill";
        // Add the left group
        dlgMain.TopGroup.LeftGroup = dlgMain.TopGroup.add("group");
        dlgMain.TopGroup.LeftGroup.orientation = "column";
        dlgMain.TopGroup.LeftGroup.alignChildren = "left";
        dlgMain.TopGroup.LeftGroup.alignment = "fill";
        dlgMain.AspectRatioGroup = dlgMain.TopGroup.LeftGroup.add("panel", undefined, "Map Website");
        dlgMain.AspectRatioGroup.alignment = "fill";
        dlgMain.AspectRatioGroup.orientation = "column";
        dlgMain.AspectRatioGroup.alignChildren = "left";
        // Add radio buttons
        dlgMain.virtualEarth = dlgMain.AspectRatioGroup.add("radiobutton", undefined, "Windows Live Local (aka Virtual Earth)");
        dlgMain.virtualEarth.onClick = function virtualEarthOnClick()
        {
            strCurrentSite = "virtualearth";
        };
        dlgMain.mappoint = dlgMain.AspectRatioGroup.add("radiobutton", undefined, "MSN Maps && Directions (aka MapPoint)");
        dlgMain.mappoint.onClick = function mappointOnClick()
        {
            strCurrentSite = "mappoint";
        };
        dlgMain.google = dlgMain.AspectRatioGroup.add("radiobutton", undefined, "Google Local (aka Google Maps)");
        dlgMain.google.onClick = function googleOnClick()
        {
            strCurrentSite = "google";
        };
        // Set the checked radio button
        if (strCurrentSite == "google")
            dlgMain.google.value = true;
        else if (strCurrentSite == "mappoint")
            dlgMain.mappoint.value = true;
        else
            dlgMain.virtualEarth.value = true;
        // Add the button group
        dlgMain.TopGroup.RightGroup = dlgMain.TopGroup.add("group");
        dlgMain.TopGroup.RightGroup.orientation = "column";
        dlgMain.TopGroup.RightGroup.alignChildren = "left";
        dlgMain.TopGroup.RightGroup.alignment = "fill";
        // Add the buttons
        dlgMain.btnOpenSite = dlgMain.TopGroup.RightGroup.add("button", undefined, "Open");
        dlgMain.btnClose = dlgMain.TopGroup.RightGroup.add("button", undefined, "Exit");
        dlgMain.btnClose.onClick = function ()
        {
            dlgMain.close(true);
        };
        dlgMain.btnOpenSite.onClick = function ()
        {
            // Which website?
            var strUrl = "";
            switch (strCurrentSite)
            {
            case "mappoint":
                strUrl = CreateMappointUrl(strLatitude, strLongitude);
                break;
            case "google":
                strUrl = CreateGoogleMapsUrl(strLatitude, strLongitude);
                break;
            case "virtualearth":
            default:
                strUrl = CreateVirtualEarthUrl(strLatitude, strLongitude);
                break;
            }
            // Create the URL file and launch it
            var fileUrlShortcut = new File(c_strTemporaryUrlFile);
            fileUrlShortcut.open('w');
            fileUrlShortcut.writeln("[InternetShortcut]");
            fileUrlShortcut.writeln("URL=" + strUrl);
            fileUrlShortcut.close();
            fileUrlShortcut.execute();
        };
        // Set the button characteristics
        dlgMain.cancelElement = dlgMain.btnClose;
        dlgMain.defaultElement = dlgMain.btnOpenSite;
        dlgMain.center();
        return dlgMain.show();
    }
    function GoogleMap(strLatitude, strLongitude)
    {
        var strUrl = CreateGoogleMapsUrl(strLatitude, strLongitude);
        try
        {
            var URL = new File(Folder.temp + "/GoogleMapIt.html");
            URL.open("w");
            URL.writeln('<html><HEAD><meta HTTP-EQUIV="REFRESH" content="0; ' + strUrl + ' "></HEAD></HTML>');
            URL.close();
            URL.execute();   // The temp file is created, but this fails to open the user's default browser using Photoshop CC; prior Photoshop versions work
        }
        catch (e)
        {
            alert("Error, Can Not Open.");
        }
    }
    function OpenMapUrl()
    {
        // Get the latitude
        var strDecimalLatitude = GetDecimalLatitudeOrLongitude(c_ExifGpsLatitude, c_ExifGpsLatitudeRef);
        if (strDecimalLatitude.length)
        {
            // Get the longitude
            var strDecimalLongitude = GetDecimalLatitudeOrLongitude(c_ExifGpsLongitude, c_ExifGpsLongitudeRef);
            if (strDecimalLongitude.length)
            {
                //ShowMyDialog(strDecimalLatitude, strDecimalLongitude);
                GoogleMap(strDecimalLatitude, strDecimalLongitude);
            }
        }
    }
    function Main()
    {
        if (app.documents.length > 0)
            OpenMapUrl();
        else
            alert("You don't have an image opened.  Please open an image before running this script.");
    }
    Main();

  • How can I convert from Modbus raw data to engineering units using a formula?

    Within Lookout, I have several Modbus numerical input types that do not have a linear correspondence to the engineering values they represent. How can I display these values accurately, using a formula to convert from the raw data to an engineering value?

    I don't quite understand your reply.  I'm using Lookout 6.0.2, logged in as Administrator, in Edit Mode.  The Modbus object is named RTU06_SAV.  The Active member is 30002 with an alias of SAVfmSMT_RSL.
    Following your instructions, I opened Object Explorer and right-clicked on RTU06_SAV. 
    This opened a menu containing:  Refresh, Cut, Copy, Rename, Delete, Edit connections..., Edit Data Member Configuration, Configure Network Security and Properties.
    I assumed that I should select Edit Data Member Configuration, but maybe I'm wrong. 
    Within Data Member Configuration I can set up linear scaling between raw data and engineering data. I know how to do that, but what I need to know is how to convert raw data to engineering data using a formula representing a non-linear transformation (such as a conversion to a logarithmic value, or perhaps a formula derived by fitting a curve on a calibration chart).
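    As an illustration of the kind of non-linear conversion being asked about (the formulas and constants below are invented placeholders, not Lookout syntax; real coefficients would come from the sensor's calibration chart):

    ```javascript
    // Logarithmic sensor: eng = a * log10(raw) + b
    function logScale(raw, a, b) {
        return a * Math.log10(raw) + b;
    }

    // Polynomial fit to a calibration curve: eng = c0 + c1*raw + c2*raw^2
    function polyScale(raw, c) {
        return c[0] + c[1] * raw + c[2] * raw * raw;
    }

    var eng1 = logScale(1000, 2, 1);          // 2*3 + 1 = 7
    var eng2 = polyScale(10, [1, 0.5, 0.01]); // 1 + 5 + 1 = 7
    ```

    In Lookout the equivalent would be built as an expression driven by the Modbus data member, rather than as the built-in linear Raw/Engineering scaling.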
    Once I have this my Engineering data can be represented on a control panel as both a numeric value AND as a correctly reading Gauge.  It can also be properly represented on a HyperTrend graph.
    What do you suggest?

  • Cancel Date in Void Checks for Payment

    Ver. 2005, PL11: the cancel date (and update date) is not being set when voiding a check for payment that is associated with an outgoing payment.
    Likewise, the cancel date (and update date) is not being set when a check for payment is voided while canceling an outgoing payment.
    When reporting on payment activity, the OCHO file is not accurate without the cancel date. Any idea if this is a setting? A bug? Fixed in 2007? Ideas on a workaround?

    Ad-hoc payments, such as a cash on delivery order, an emergency employee loan or some other one-time payments are created using the check for payment screen, entered against an account number, not a vendor, and the "create journal entry" box is checked.
    The check is then immediately printed using the document printing. The payment wizard is invoked weekly, and it ignores these ad-hoc, paid and printed payments.
    Any ad-hoc checks for payment correctly have the cancel date and update date updated when using the void check for payment screen.
    Checks for payment created from the payment wizard (which are "associated" with a previously created outgoing payment) do not have the cancel date and update date updated in the check file (OCHO) when using the void check for payment screen.

  • Problem with Date / Time Title - Showing Inacurate Information

    I use iMovie to make videos for my job. The videos need to have an accurate running time and date displayed on them. I use the "Date/Time" Title option and the running time and date appear on the video. But there is a problem: the time is wrong.
    When I import a video clip (from my Panasonic HDC-TM80) into iMovie, I can see the "time and date created" as well as the "length" of the video clip. The time and date created shows the time and date of when the clip ENDS. When I overlay the "Date/Time" Title, it starts the time (at the beginning of the clip) from the created time (which is actually the end).
    Example:
    Clip Created (end of clip): 3/23/11 10:30:00am
    Length of clip: 12 min. 35 sec.
    The actual Date/Time Title should show 10:17:25am at the beginning and runs through 10:30:00am (the end).
    But instead it shows 10:30:00am at the beginning and runs through 10:42:35am. This obviously results in a 12 min. 35 sec. inaccuracy.
    Is this a problem with the software? Is there a way to contact someone to alert them of this problem? Or am I missing something simple?
    Thanks in advance for anyone's help.
    As a quick fix, I adjusted the time and date on the video clips so that they would show the correct info, but I'm usually working with several clips at a time and it is laborious to change each one.
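    The workaround amounts to subtracting the clip length from the recorded creation time to get the true start time. The arithmetic from the example above:

    ```javascript
    // iMovie stamps the clip with its END time, so the true start
    // time is (created time) minus (clip length).
    function clipStartTime(createdEnd, lengthSeconds) {
        return new Date(createdEnd.getTime() - lengthSeconds * 1000);
    }

    // Clip "created" 3/23/11 10:30:00, length 12 min 35 sec (755 s):
    var start = clipStartTime(new Date(2011, 2, 23, 10, 30, 0), 12 * 60 + 35);
    // start -> 3/23/11 10:17:25, matching the expected title start
    ```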

    I don't have an answer, but I was wondering how you like the TM80. Except for the date/time issue, is it compatible with iMovie '11?
    Thanks!

  • Best way to control and read data from multiple instruments?

    Hello,
    I'm building an application to test power supplies, involving multiple pieces of equipment over a couple of different comm buses. The application will also send control instructions to some of the instruments, and read data from some of the instruments. The reading and control profiles will not run on the same schedule (variable-length control steps, configurable read interval).
    I was thinking of using a queued state machine (producer/consumer) for the control profile and another to read the data, but I'm concerned that there would be collisions between sending control commands and read commands to the same machine. Is there a suggested design pattern for implementing something like this?
    Timing of the commands isn't critical down to the millisecond, but I need to collect reasonably accurate timestamps when the data is read. The same is true for the control commands.
    Here are the instruments I'm going to use, whether they are control, read, or both, and the communication method:
    Instrument             Functions                  Comm Method
    Power Supply           read data                  communicates to PMBus adapter
    PMBus to USB Adapter   read data                  USB (non-VISA)
    Switch                 control relays             USB (VISA)
    Power Dist. Unit       read data/control outlets  SNMP (Ethernet)
    Electronic Load        read data/control load     GPIB (VISA)
    Thermal Chamber        read data/control temp     Ethernet (VISA)
    Thanks,
    Simon

    Hello, there is a template in LabVIEW called "Continuous Measurement and Logging".
    It can give you some idea of how to properly decouple the "GUI event handler" top loop (where your Event structure is) from the DAQ and other loops.
    You do not need to replicate the example exactly, but you can find some nice tricks in it that can help you at certain points.
    The second loop from the top is the "UI message loop". It handles the commands coming from the top loop and, depending on its local state machine and other conditions, it commands the other loops below.
    During normal running, the instrument loops just read data. But if you send a control command from the top loop to a particular instrument loop (use a separate queue for every instrument loop), that loop dequeues the control command, executes it, and goes back to data-reading mode (in data-reading mode the loop automatically enqueues a "data read" command for itself). This way, control and data reads for a given instrument happen serially, not in parallel, which avoids the collisions you are worried about (some instrument drivers can even tolerate parallel access, but it will not really happen in parallel).
    In every instrument loop, when you read a value, attach a timestamp to it and send this timestamp/value pair to the data-logging loop via a queue. You can use a cluster with three items: source (instrument), timestamp, and value. All the instrument loops can use the same data-logging queue. In the data-logging while loop, after dequeuing, unbundle the cluster and save the timestamp and value into the required channel (a different channel for every instrument) of a TDMS file.
    (Important: NEVER use two Event structures in the same top-level "main" VI!)
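    LabVIEW diagrams can't be pasted here as text, but the queue discipline described above can be sketched in plain JavaScript as a rough illustration. The names and the fake read/control functions are invented for the example; only the pattern (one command queue per instrument loop, reads and controls serialized, timestamp attached at read time) is the point:

    ```javascript
    // Textual sketch of the per-instrument loop: one queue per instrument,
    // so control commands and data reads never hit the instrument in parallel.
    function makeInstrumentLoop(readFn, controlFn, logQueue, name) {
      var queue = [];                          // this loop's private command queue
      return {
        enqueueControl: function (cmd) {
          queue.push({ kind: "control", cmd: cmd });
        },
        tick: function () {
          // With no pending command, the loop defaults to "data read" mode.
          var msg = queue.length ? queue.shift() : { kind: "read" };
          if (msg.kind === "control") {
            controlFn(msg.cmd);                // execute, then fall back to reading
          } else {
            // Attach the timestamp at the moment the value is read, then send
            // the source/timestamp/value bundle to the shared logging queue.
            logQueue.push({ source: name, timestamp: Date.now(), value: readFn() });
          }
        }
      };
    }
    ```

    Each `tick` here stands in for one iteration of an instrument loop; a data-logging loop would drain `logQueue` and write each bundle to its channel.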

  • How to efficiently log multiple data streams with TDMS

    Ok, first off, I'll admit I am completely clueless when it comes to logging, TDMS in particular.  That said, I'm trying to work out the best way to log some data from an existing LabVIEW-based control system, so that users can later access that data in the event of catastrophic failure or other situations where they might want to see exactly what happened during a particular run.
    I've got a total of between 6 and 12 data points that need to be stored (depending on how many sensors are on the system).  These are values being read from a cRIO control system.  They can all be set to Single data type, if necessary - even the one Boolean value I'm tracking is already being put through the "convert to 0,1" for graph display purposes.  The data is currently read at 100ms intervals for display, but I will be toying with the rate that I want to dump data to the disk - a little loss is OK, just need general trending for long term history.  I need to keep file sizes manageable, but informative enough to be useful later.
    So, I am looking for advice on the best way to set this up.  It will need to be a file that can be concurrently read as it is being written, when necessary - one of the reasons I am looking at TDMS in the first place (it was recommended to me previously).  I also need an accurate date/time stamp that can be used when displaying the data graphically on a chart, so it can sync up with the external camera recordings to correlate just what happened and when.
    Are there specific pitfalls I should watch for?  Should I bundle all of the data points into an array for each storage tick, then decimate the array on the other end when reading?  I've dug through many of the examples, even found a few covering manual timestamp writing, but is there a preferred method that keeps file size minimized (or extraction simplified)?
    I definitely appreciate any help...  It's easy to get overwhelmed and confused in all of the various methods I am finding for handling TDMS files, and determining which method is right for me.

    I need to bump this topic again...  I'll be honest, the TDMS examples and available help are completely letting me down here.
    As I stated, I have up to 12 data values that I need to stream into a log file, so TDMS was suggested to me.  The fact that I can concurrently read a file being written to was a prime reason I chose this format.  And, "it's super easy" as I was told...
    Here's the problem.  I have multiple data streams.  Streams that are not waveform data, but actual realtime data feedback from a control system, that is being read from a cRIO control system into a host computer (which is where I want to log the data).  I also need to log an accurate timestamp with this data.  This data will be streamed to a log file in a loop that consistently writes a data set every 200ms (that may change, not exactly sure on the timing yet).
    Every worthwhile example that I've found has assumed I'm just logging a single waveform, and the data formatting is totally different from what I need.  I've been flailing around with the code, trying to find a correct structure to write my data (put it all in an array, write individual points, etc) and it is, quite honestly, giving me a headache.  And finding the correct way for applying the correct timestamp (accurate data and time the data was collected) is so uncharacteristically obtuse and hard to track down...  This isn't even counting how to read the data back out of the file to display for later evaluation and/or troubleshooting...  Augh!
    It's very disheartening when a colleague can throw everything I'm trying to do together in 12 minutes in the very limited SCADA user interface program he uses to monitor his PLCs...  Yet LabVIEW, the superior program I always brag about, is slowly driving me insane trying to do what seems like a relatively simple task like logging...
    So, does anyone have any actual useful examples of logging multiple DIFFERENT data points (not waveforms) and timestamps into a TDMS file?  Or real suggestions for how to accomplish it, other than "go look at the examples" which I have done (and redone).  Unless, of course, you have an actual relevant example that won't bring up more questions than it answers for me, in which case I say "bring it on!"
    Thanks for any help...  My poor overworked brain will be eternally grateful.
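    I can't offer a drop-in TDMS snippet here, but the record layout being asked about - one timestamp per 200 ms tick plus up to 12 channel values, stored so that any single channel can be extracted later - can be sketched independently of the file format. A plain-JavaScript illustration of the bundle-then-decimate idea (function names are mine, and an array of rows stands in for the file):

    ```javascript
    // Append one row per logging tick: [timestamp, ch0, ch1, ..., chN].
    // Bundling the whole tick keeps one timestamp shared by all channels.
    function logTick(log, timestamp, values) {
      log.push([timestamp].concat(values));
    }

    // Rebuild a single channel as {t, v} pairs for charting against time -
    // the "decimate the array on the other end" step from the question.
    function extractChannel(log, index) {
      return log.map(function (row) {
        return { t: row[0], v: row[1 + index] };
      });
    }
    ```

    In TDMS terms, the equivalent is writing each instrument's value into its own channel and the timestamp into a dedicated time channel (or as a waveform property), so the pairing survives the round trip.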

  • TS3920 Date and Time display

    The smaller time in the header of my iPad4 is accurate, but the larger date and time in the background at the top is wrong. I think it may be the date I bought the iPad. The calendar app is correct as well as the date and time in settings. How do I change the larger display on the home screen?

    The time appears only on the top bar of the iPad's screen; the date appears only on the icon of the Calendar app. If the date and time are showing on the screen itself, then they might be on the wallpaper that you are using as the background image - try going into Settings > Brightness & Wallpaper and changing the wallpaper.

  • Date: showing the current quarter

    Hi,
    I want a field where you can choose the current quarter. Is this possible? Does it have something to do with the date format? date.short, medium, long, ...?
    Thanks in advance for helping me!
    Matthias

    First option:
    In Acrobat Pro I would write something like this (the "else" branch is still missing; the second problem is that I can't make this work for all years...):
    // read the field value; in Acrobat JavaScript this is getField().value,
    // not getAttribute() - and comparisons need ==, since a single = assigns
    var DatumsUhrzeitfeld4 = this.getField("DatumsUhrzeitfeld4").value;
    if (DatumsUhrzeitfeld4 == "August 2010") { DatumsUhrzeitfeld4 = "Quartal 3"; }
    else if (DatumsUhrzeitfeld4 == "September 2010") { DatumsUhrzeitfeld4 = "Quartal 3"; }
    else if (DatumsUhrzeitfeld4 == "Oktober 2010") { DatumsUhrzeitfeld4 = "Quartal 4"; }
    else if (DatumsUhrzeitfeld4 == "November 2010") { DatumsUhrzeitfeld4 = "Quartal 4"; }
    else if (DatumsUhrzeitfeld4 == "Dezember 2010") { DatumsUhrzeitfeld4 = "Quartal 4"; }
    Second option:
    http://forums.adobe.com/message/333985#333985
    Sorry, misread the issue. The quarter will be based on the company you are working for. If their year-end is the same as a calendar year-end (Dec 31), then the example below will be accurate. Change the dates based on the company's requirements.
    You can hardcode variables into your application.cfm page.
    quarter1start = "01-jan-";
    quarter2start = "01-apr-";
    quarter3start = "01-jul-";
    quarter4start = "01-oct-";
    The end of each quarter is programmatically always the day before the start of the next quarter.
    I would add the 'year' to the equation dynamically, to keep it forward compatible.
    But where shall I write this?
    Thanks in advance for helping me!
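    Rather than hardcoding month/year strings, the quarter can be derived arithmetically from the month, which works for every year. A minimal sketch in plain JavaScript (the function names are my own, not part of any API):

    ```javascript
    // Derive "Quartal 1".."Quartal 4" from any Date. getMonth() is 0-based
    // (0 = January), so months 0-2 map to quarter 1, 3-5 to quarter 2, etc.
    function quartalOf(date) {
      return "Quartal " + (Math.floor(date.getMonth() / 3) + 1);
    }

    // The quarter start dates follow the same rule in reverse:
    // quarter q starts on the first day of month (q - 1) * 3.
    function quartalStart(year, q) {
      return new Date(year, (q - 1) * 3, 1);
    }
    ```

    The quarter end is then simply the day before the next quarter's start, as noted above.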

  • Dynamic scaling of a diagram axis

    hello experts,
    I have bookings on an ODS with each booking recording the number of days between date of booking and date of departure (e.g. 78 days or 5 days).
    Now I want to display a diagram with
    y-axis = number of bookings and
    x-axis = days between date of booking and date of departure, descending from left to right, dynamically scaled according to the data values, e.g.:
    100 | 75 | 50 | 25 | 0 days or
    25 | 20 | 15 | 10 | 5 | 0 days or
    500 | 400 | 300 | 200 | 100 | 0 days
    Any suggestions?
    Thanx
    Axel

    Hi Axel,
    Have you tried displaying "Values in reverse order" on the BEx chart?
    It is in Format Axis > Scale tab.
    Jaya
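    For what it's worth, the automatic scaling the question describes (an axis ending at 25, 100, or 500 depending on the data) is what charting tools typically do with a "nice number" heuristic: pick a step of 1, 2, 2.5, or 5 times a power of ten and round the axis maximum up to a multiple of it. A plain-JavaScript sketch of that heuristic, purely as an illustration (function name is mine):

    ```javascript
    // Round maxValue up to a "nice" axis maximum given a desired tick count.
    function niceAxisMax(maxValue, tickCount) {
      var rawStep = maxValue / tickCount;
      // Power of ten at or below the raw step size.
      var magnitude = Math.pow(10, Math.floor(Math.log(rawStep) / Math.LN10));
      var residual = rawStep / magnitude;        // in [1, 10)
      var niceResidual;
      if (residual <= 1) niceResidual = 1;
      else if (residual <= 2) niceResidual = 2;
      else if (residual <= 2.5) niceResidual = 2.5;
      else if (residual <= 5) niceResidual = 5;
      else niceResidual = 10;
      var step = niceResidual * magnitude;
      return Math.ceil(maxValue / step) * step;  // first nice multiple >= max
    }
    ```

    With four or five ticks, data maxima of 92, 23, and 470 land on axis maxima of 100, 25, and 500 - the three scalings given in the question.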

  • Delivery Date changed to past date while ATPing 2nd Line

    Hello Gurus,
    I posted a question a few days ago, but now that I found a new example i have more info of my problem.
    So here is the scenario. (we are set up for back order)
    Orders is created for 5000 units on 09.23.2008.
    we do not have enough stock to ship all. we only have 3000
    In our system, the way it is set up, line 1 shows 5000 with a confirmed quantity of 0.
    So we do an ATP check, and a second line is created with qty 3000, a MAD (material availability) date of 10.01.2008, and a
    delivery date of 10.07.2008. All is good up to here.
    So we still have 2000 to ship, and the MAD date is not until later, in December: 12.01.2008.
    So we go into line 3 and do an ATP check to see what promised delivery date it will create.
    The system says 12.07.2008; OK, all is good up to here.
    Now if I SAVE this order, this is what happens:
    Line 1 for 5000 disappears.
    Line 2 turns into line 1, but the delivery date changes from the actual date of 10.07.2008 to
    09.23.2008.
    Line 3 turns into line 2 with the right date, 12.07.2008.
    Line 2 turning into line 1 is the one with the issue!
    Are we doing something wrong? Is this a config problem?
    Any feedback appreciated.
    thanks
    JD

    Hi Mani,
    Yeah, I agree with you about the lines, but my only concern is the dates. The original delivery date was, for example, March 7th for 5000; we could not ship them all, so we only shipped 3000 on April 7th, and we still had a back order for 2000.
    Those 2000, let's say, have a material availability date of today, so if I do an ATP check, the system would create a delivery for maybe November 10th.
    So far all is good. When I save the order in VA02, line 2 becomes line 1 and line 3 becomes line 2 (which I am OK with).
    The only thing that I think is strange is that the new line 1, the one that accounts for the 3000 shipped on April 7th, now has a delivery date of March 7th.
    And that is not accurate: the requested delivery date was March 7th, but we shipped on April 7th, so why does the system override the actual date with the first date?
    That is what I think is strange, and I am not sure how to fix it.
    Let me know what you think and thanks in advance!
    JD

Maybe you are looking for

  • GRC 10 BRM Workflow configuration issue

    Hello all, Can you suggest how I can set up a proper workflow for BRM, without using BRF+, so that I can create a role which goes to the approver stage for approval? I used the MSMP default workflow but did not use BRF. Now I am stuck at

  • How to handle Stored Procedure and Views

    Dear all, While dealing with an Oracle database related scenario, I came across stored procedures and views, which are complex in nature. Using SAP XI, how can we handle them? Is the JDBC adapter capable of that? Can you help me? Data type structure for o

  • FM 10 - Serious Internal Error When Copying/Pasting

    We regularly encounter a serious internal error when copying text from one FM doc to another. We also get the error (occasionally) when we run FrameScript. Does anyone know what could cause this? It appears to happen randomly (i.e. it doesn't appear

  • Frustrating import over iMovie as Final Cut throws errors

    I have recorded some WOW videos (using Apple intermediate codec, 25 fps) and Final Cut Pro fails, if I try to import the file directly. But iMovie imports it without any problem. Export to XML works sometimes, sometimes not. Exporting to Quicktime (u

  • Program Won't Start Up

    A few days ago I went to start a program from my dock and accidentally hit my LimeWire program, which I didn't want to start. The icon began to bounce, so I right-clicked on it and hit 'Force Quit'. The program didn't start, so I thought everything was o