RFSA scaling complex I16 data
I've successfully acquired complex I16 data using the niRFSA_FetchIQSingleRecordComplexI16 function, but after computing the spectral result my amplitudes are ~15 dBm too low. I must be scaling the data incorrectly. I'm assuming that the gain and offset values returned in the wfmInfo structure are applied to the real and imaginary values. I'm also assuming that the gain and offset values take into account the reference level programmed with niRFSA_ConfigureReferenceLevel. The gain scaling values do change when I select a different reference level.
Are these assumptions correct? The gain value seems to be off by a factor of 4 (or so).
I'm using the 5663 RFSA hardware configuration. When I use the RFSA SFP the amplitudes are correct.
Hi Timothy,
To set up a measurement I call the following sequence of functions:
niRFSA_ConfigureReferenceLevel, -20dBm
niRFSA_ConfigureRefClock, OnboardClock, 10e6
niRFSA_ConfigureIQCarrierFrequency, 975e6
niRFSA_ConfigureIQRate, 64e6
niRFSA_ConfigureNumberOfSamples, 131072
niRFSA_Commit
To start the measurement I then call niRFSA_Initiate.
Then niRFSA_CheckAcquisitionStatus is called repeatedly until it indicates
that data is available.
When data is available I call niRFSA_FetchIQSingleRecordComplexI16
to get the int16 complex data.
I then call niRFSA_GetScalingCoefficients to get the scaling coefficients and
find that the gain and offset values are different than those returned in
niRFSA_FetchIQSingleRecordComplexI16. I added this call to niRFSA_GetScalingCoefficients
because the results I was getting with the gain returned by niRFSA_FetchIQSingleRecordComplexI16
were not correct.
I then use the gain and offset values to scale the int16 complex values into real32 complex values.
I'm puzzled as to why the scaling coefficients are different between
niRFSA_GetScalingCoefficients and niRFSA_FetchIQSingleRecordComplexI16.
For this setup the gain value returned by niRFSA_GetScalingCoefficients is 2.88301e-5
and the gain value returned by niRFSA_FetchIQSingleRecordComplexI16 is 4.61151e-6.
When I apply either of these gain values to the data and compute the spectrum my
signal amplitudes are not correct.
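For reference, here is a minimal sketch (in Python, purely as an illustration; the formula is an assumption about the intended arithmetic, not NI's documented scaling equation) of applying a gain and offset to fetched complex I16 samples:

```python
def scale_iq(raw_iq, gain, offset):
    """Scale (I, Q) int16 pairs to complex floats.

    Assumes scaled = (raw + offset) * gain per component; check the
    niRFSA documentation for the exact formula your driver version uses.
    """
    return [complex((i + offset) * gain, (q + offset) * gain)
            for i, q in raw_iq]

# With the gain reported by niRFSA_GetScalingCoefficients above:
volts = scale_iq([(1000, -2000), (3276, 0)], gain=2.88301e-5, offset=0.0)
```

If the two reported gain values really do differ by a large factor, it is worth checking whether one of them already folds in a downconverter or digitizer gain stage that the other does not.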
Hope this helps,
Gary
Similar Messages
-
How can I make a form show complex composite data?
We would like a form to display the following information:
Employee Names as the columns, 1 row for each calendar day, and each item in the block containing the total sales amount (and other information) for that employee for that day.
This would require the block to function as a (composite) crosstab report.
Here is an example:
We have employees: Jane, John, Jim, and Janice
We would like to see their total sales for the following days: 5/1 .. 5/5
        Jane    John    Jim     Janice
5/1     50  8   10  3   30  5   20  7
5/2     40  7   60  8   10  2   30  4
5/3     20  3   50  8   70  9   50  9
5/4     51  8   40  7   40  8   50  8
5/5     10  1   20  2   90 10   10  2
We have a table with records of each day, for each employee, and their sales for that day.
There is a possibility our list of employees can change as well as the number of days.
Anyone tried doing anything like this with a form? We don't want to just embed a report because we would like to include other data as well besides what is mentioned above and we would like them to be able to change some of that data. Thanks in advance for any help.
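As an illustration of the pivot being described (a hypothetical sketch, not Oracle Forms code), the per-day, per-employee records can be folded into a crosstab structure like this:

```python
from collections import defaultdict

def crosstab(records):
    """Pivot (day, employee, sales) rows into {day: {employee: sales}}."""
    table = defaultdict(dict)
    for day, employee, sales in records:
        table[day][employee] = sales
    return table

# Hypothetical rows mirroring the example table above.
rows = [("5/1", "Jane", 50), ("5/1", "John", 10), ("5/2", "Jane", 40)]
pivot = crosstab(rows)
```

The form/view problem is then to render this day-by-employee structure with employees as columns.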
I have a VERY similar problem where I want to have this matrix on the form with six days worth of data (three columns for each day) for my data table. I have done as Momen has suggested and created a view (using SQLPLUS):
create table weekdays (weekday varchar2(3), date_x date);
drop view employee_absences_v;
create view employee_absences_v as
select wd.weekday,
       ea.absence_date,
       ea.hours,
       ea.absence_code_lvid,
       ea.adjustment_status_lvid
from employee_absences ea, weekdays wd;
that hopefully provides a row for each day of the week for my data table. The view is created OK, but I'm not sure how to get it into the form properly. Also, I am using the form to CREATE rows in the data table, not just SELECT and DISPLAY, and there may or may not be a row of data for a corresponding day, which is going to be a problem for me using this view method, I think. Let me know how this goes, whether you get your form to work, and how. Momen, your method appeals to me but it's not 100% clear how to get it to work in a form.
Originally posted by Tina Radosta:
"We tried to create the view (not with Designer because we don't have that). We were not successful in creating a view with a matrix. We would greatly appreciate your help.
Thanks in advance,
Tina"
-
How to assign two complex type data in message payload
Hi ,
In my xsd file two complex type data is there ,
but when i am trying to add these in message type request and response payload , i can add only one payload,
is it any way to add 2 complex types in message types request and response payloads.
Regards
janardhan

Each request and response has but a single payload. You can change the element of the payload to a single complex type from your XSD, but that's it. You can't assign more than one element to the request (or response) payload. Someone please correct me if I'm wrong.
-
Hi,
I am using an analog DAQ to acquire +/-1 V signals and stream them to a binary file on the HDD. I am using the I16 representation with the analog input range set to +/-5 V. The Y-scale on my chart shows a data range between +400 and -400 counts. I don't understand how it represents +/-1 V data as a +/-400 count range. When I tried the SGL data type it showed the correct Y-scale range of +/-1 V. Any suggestions on how LabVIEW converts data to the I16 format?
Also, I am trying to format the X-scale and Y-scale, but sometimes it automatically changes the upper and lower ranges of the scale, the minor grid divisions, etc. Any suggestions for customising grids and scales? I need five grid divisions for every major division on both the X and Y scales. I am using a simple chart.
I appreciate your efforts to reply to my questions,
Thanks,
Kishor

Hi Kishor,
1.
I think you have done the following: on AI Read.vi you have selected the type binary array. This will give the samples in I16. This I16 number is the number the A/D converter generates. This number will always be between -2048 and 2047 for 12-bit cards. You have to convert this number to a real voltage yourself. Changing the type to scaled array will give you SGL. This value is scaled according to the settings in MAX for your channel. If you didn't configure any channels in MAX, the range is always the range you used for the card.
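A sketch of that conversion (assuming a bipolar +/-5 V range on a 12-bit card, as above):

```python
def counts_to_volts(count, v_min=-5.0, v_max=5.0, bits=12):
    """Convert a raw A/D count (-2048..2047 on a 12-bit card) to volts."""
    lsb = (v_max - v_min) / 2 ** bits   # one step = 10 V / 4096 ~ 2.44 mV
    return count * lsb

# A +/-1 V signal on the +/-5 V range spans roughly +/-409 counts,
# which matches the "+400 to -400" seen on the chart.
```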
2.
To avoid flipping of the scale and changing of the grid divisions, right-click on the scale and uncheck "AutoScale". Unfortunately you have no exact control over how many divisions are generated.
Waldemar
Using 7.1.1, 8.5.1, 8.6.1, 2009 on XP and RT
Don't forget to give Kudos to good answers and/or questions -
Conversion from scaled to unscaled data using Graph Acquired Binary Data
Hello !
I want to acquire temperature with a pyrometer using a PCI 6220 (analog input, 0-5 V). I'd like to use the VI
Cont Acq&Graph Voltage-To File(Binary) to write the data to a file and the VI Graph Acquired Binary Data to read it back and analyze it. But in this VI I don't really understand how "Convert unscaled to scaled data" works; I know it takes information from the header to scale the data, but how?
My card will give me back a voltage, but how can I transform it into a temperature? Can I configure this somewhere so that "Convert unscaled to scaled data" will do it, or should I do this myself with a formula?
Thanks.

Nanie, I've used these examples extensively and I think I can help. Incidentally, there is actually a bug in the examples, but I will start a new thread to discuss that (I haven't written the post yet, but it will be under "Bug in Graph Acquired Binary Data.vi:create header.vi Example" when I do get around to posting it). Anyway, to address your questions about the scaling: I've included an image of the block diagram of Convert Unscaled to Scaled.vi for reference.
To start, the PCI-6220 has 16-bit resolution. That means that the range (±10 V for example) is broken down into 2^16 (65536) steps, or steps of ~0.3 mV (20 V/65536) in this example. When the data is acquired, it is read as the number of steps (an integer) and that is how you are saving it. In general it takes less space to store integers than real numbers. In this case you are storing the results in I16s (2 bytes/value) instead of SGLs or DBLs (4 or 8 bytes/value respectively).
To convert the integer to a scaled value (either volts or some other engineering unit) you need to scale it. In the situation where you have a linear transfer function (scaled = offset + multiplier * unscaled), which is a 1st-order polynomial, it's pretty straightforward. Convert Unscaled to Scaled.vi handles the more general case of scaling by an nth-order polynomial (a0*x^0 + a1*x^1 + a2*x^2 + ... + an*x^n). A linear transfer function has two coefficients: a0 is the offset and a1 is the multiplier; the rest of the a's are zero.
When you use the Cont Acq&Graph Voltage-To File(Binary).vi to save your data, a header is created which contains the scaling coefficients stored in an array. When you read the file with Graph Acquired Binary Data.vi those scaling coefficients are read in and converted to a two dimensional array called Header Information that looks like this:
ch0 sample rate, ch0 a0, ch0 a1, ch0 a2,..., ch0 an
ch1 sample rate, ch1 a0, ch1 a1, ch1 a2,..., ch1 an
ch2 sample rate, ch2 a0, ch2 a1, ch2 a2,..., ch2 an
The array then gets transposed before continuing.
This transposed array and the unscaled data are passed into Convert Unscaled to Scaled.vi. I am probably only now getting to your question, but hopefully the background makes the rest of this simple. The Header Information array gets split up, with the sample rates (the first row of the transposed array), the offsets (the second row), and all the rest of the gains entering the for loops separately. The sample rate sets the dt for the channel, the offset is used to initialize the scaled data array, and the gains are used to multiply the unscaled data. With a linear transfer function there will only be one gain per channel. The clever part of this design is that nothing has to be changed to handle non-linear polynomial transfer functions.
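The scaling step itself can be sketched in a few lines (Python here purely to illustrate what the VI computes):

```python
def scale_polynomial(unscaled, coeffs):
    """scaled = a0 + a1*x + a2*x^2 + ... , as Convert Unscaled to
    Scaled.vi applies per channel; a linear transfer function is just
    coeffs = [offset, gain]."""
    return [sum(a * x ** n for n, a in enumerate(coeffs)) for x in unscaled]

# 16-bit card on a +/-10 V range: gain = 20 V / 65536 steps, offset = 0.
volts = scale_polynomial([0, 16384, -16384], [0.0, 20.0 / 65536])  # [0.0, 5.0, -5.0]
```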
I normally just convert everything to volts and then manually scale from there if I want to convert to engineering units. I suspect that if you use the express vi's (or configure the task using Create DAQmx Task in the Data Neighborhood of MAX) to configure a channel for temperature measurement, the required scaling coefficients will be incorporated into the Header Information array automatically when the data is saved and you won't have to do anything manually other than selecting the appropriate task when configuring your acquisition.
Hope this answers your questions.
Chris
Message Edited by C. Minnella on 04-15-2005 02:42 PM
Attachments:
Convert Unscaled to Scaled.jpg 81 KB -
Loading complex report data into a direct update DSO using APD
Dear All,
Recently, I had a requirement to download report data into a direct update DSO using an APD. I was able to do this easily when the report was simple, i.e. it had few rows and columns, but I ran into problems when the report was complex. Summing up, I would like to know how to handle each of the following cases:
1. How should I decide the key fields and data fields of the direct update DSO ? Is it that the elements in ROWS will go to the
key fields of DSO and the remaining to the data fields? Correct me.
2. What if the report contains the Restricted KFs and Calculated KFs? Do I have to create separate infoobjects in the BI
system and then include these in the DSO data fields to accommodate the extracted data ?
3. How do I handle the Free Characteristics and Filters ?
4. Moreover, I observed that if the report contains selection screen variables, then I need to create variants in the report and
use that variant in the APD. So, if I have 10 sets of users executing the same report with different selection conditions, then
shall I need to create 10 different variants and pass those into 10 different APDs, all created for the same report ?
I would appreciate if someone can answer my questions clearly.
Regards,
D. Srinivas Rao

Hi,
PFB the answers.
1. How should I decide the key fields and data fields of the direct update DSO ? Is it that the elements in ROWS will go to the
key fields of DSO and the remaining to the data fields? Correct me.
--- Yes, you can use the elements in the ROWS as the key fields, but if you get two records with the same values in the ROWS elements the data load will fail. So you basically need at least one value that is different for each record.
2. What if the report contains the Restricted KFs and Calculated KFs? Do I have to create separate infoobjects in the BI
system and then include these in the DSO data fields to accommodate the extracted data ?
Yes you would need to create new Infoobjects for the CKF's and RKF's in the Report and include them in your DSO.
3. How do I handle the Free Characteristics and Filters ?
The default filters work in the same way as when you execute the report yourself. But you cannot use the free characteristics in the APD; only the ROWS and COLUMNS elements which are in the default layout can be used.
4. Moreover, I observed that if the report contains selection screen variables, then I need to create variants in the report and
use that variant in the APD. So, if I have 10 sets of users executing the same report with different selection conditions, then
shall I need to create 10 different variants and pass those into 10 different APDs, all created for the same report ?
--- Yes, you would need to create 10 different APDs. They are very simple to create (you can copy an APD), but it would for sure be a maintenance issue: you would have to maintain 10 APDs.
Please revert in case of any further queries. -
App for creating complex and data-enclosed forms
Hello guys,
This may be a wrong place to ask this but hey i don't know where to go, so here goes:
I'm looking for an app (OS X) with which I can create quite complex forms. By that I mean the following example:
Like in Acrobat Reader Pro you can create forms: you can leave some text that is considered basic and non-changeable, and some fields in which users can fill in their own data. Better yet, I'm looking to allow the user to fill in particular fields from only a few options.
For example: let's say I want to create a mailing form in which the 'TO' field can only be filled from a pop-up of selectable choices. I've seen this in some PDFs but I'm not sure Acrobat can do that. A friend of mine recommended SQL, but that's a pain to install on OS X. So back to the example: when the user selects the TO field to fill in the destination of the letter, I would like to allow him to pick only one of, say, 10 destinations (with complete address). So the 10 options pop up in a menu from the field, he selects the destination, and then the app does the rest by filling in the street name, number, etc.
How can I do that? In what program?
Thank you!
Andrei Dica from Bucharest, Romania (proud Apple user)

Ok, here is what I want to do:
I want to simplify and optimize the work in my office. Most of the time we have to create reports or letters which share a common body text. That body text should remain unchanged, but there are fields that I want to create to allow all the employees to fill in depending on their needs. BUT not just fill in by typing, because that would be the same as opening the document (.doc or .pages) and deleting and typing what they want.
I want to create fields with options popping out of them when the user clicks on one. For example: let's say an employee wants to do a report about a company, and he already has the data about the company name, identification codes, address, etc. in other documents. Usually you work with multiple open documents and copy/paste the required contents from one document to another. But I figured there has to be an easier way if a system can be implemented where the user works directly on templates made for each task and types only in the fields that need to be changed. I envision the process as a user completing fields like in a browser: when you type "you" in the address bar, it automatically pops up a list of possible sites, youtube.com among others. Now I understand that a database has to be created first so that my employees and colleagues can fill from it, but I'm willing to do that if in the long term it speeds up the workflow.
Another example is body text that needs to be changed between only two values (words), much like the numeric values 1 and 0. For example, yes or no. Click on the field and a yes/no option pops up. Click on the desired option and you're done. Print it, email it, job well done.
Is this a bit clearer about what I'm trying to achieve?
Hi, there;
I'm facing a situation that is very new for me, so I'd appreciate it if some of you could give me some tips; I've been trying for a couple of days now.
scenario;
1 entity;
5 (just five) view objects;
8 (yes, eight) view links.
Here is the most complex data model that I have ever faced:
MasterA-->Detail1-->Detail2-->Detail3-->Detail4
+--->Detail4 ( because of FK not present in Detail3 )
MasterB-->Detail2-->Detail3-->Detail4
+----Detail4 (again FK not present in Detail3)
+---Detail3-->Detail4 (again for the same reason )
So MasterA and MasterB must control everything. Just as an example, think of MasterA as Employee and MasterB as Current Period.
Only Detail4 is editable and has an entity associated with it; it should have one line per period (MasterB) and per employee (MasterA).
Testing the Application Module everything seems to be fine.
In my JSP + Struts application I have one JSP for setting the current row of MasterB; MasterA is set at login time.
Then I have another JSP page of which Detail2, Detail3 and Detail4 are part. The user is supposed to navigate Detail2 and Detail3 and edit or create Detail4.
Why is Detail4 not affected when the user invokes the setCurrentRowWithKey link on Detail3 or Detail2?
Strange is the fact that if I submit a create event on Detail4, all the FKs seem to be OK.

Marcos,
Can you verify whether the problem occurs if you use Immediate mode instead of the default Batch Mode?
Here's something you can read about the two modes...
http://www.oracle.com/technology/products/jdev/collateral/papers/10g/adftoystore/readme.html#batchmode -
URGENT: Problem sending array of complex type data to webservice.
Hi,
I am writing an application in Web Dynpro which needs to call an external web service. This service has many complex data types. The function I am trying to call needs an array of a complex data type. When I checked the SOAP request I found that the namespace for the array type is getting blank values, because of which the SOAP response is giving an exception.
The SOAP request is as below. For <markers>, which is an array, xmlns:tns='' is generated.
<mapImageOptions xsi:type='tns:MapImageOptions' xmlns:tns='http://www.themindelectric.com/package/com.esri.is.services.glue.v2.mapimage/'>
<dataSource xsi:type='xs:string'>GDT.Streets.US</dataSource>
<mapImageSize xsi:type='tns:MapImageSize'><width xsi:type='xs:int'>380</width><height xsi:type='xs:int'>500</height></mapImageSize>
<mapImageFormat xsi:type='xs:string'>gif</mapImageFormat>
<backgroundColor xsi:type='xs:string'>255,255,255</backgroundColor>
<outputCoordSys xsi:type='tns:CoordinateSystem' xmlns:tns='http://www.themindelectric.com/package/com.esri.is.services.common.v2.geom/'>
<projection xsi:type='xs:string'>4269</projection>
<datumTransformation xsi:type='xs:string'>dx</datumTransformation>
</outputCoordSys>
<drawScaleBar xsi:type='xs:boolean'>false</drawScaleBar>
<scaleBarPixelLocation xsi:nil='true' xsi:type='tns:PixelCoord'></scaleBarPixelLocation>
<returnLegend xsi:type='xs:boolean'>false</returnLegend>
<markers ns2:arrayType='tns:MarkerDescription[1]' xmlns:tns='' xmlns:ns2='http://schemas.xmlsoap.org/soap/encoding/'>
<item xsi:type='tns:MarkerDescription'><name xsi:type='xs:string'></name>
<iconDataSource xsi:type='xs:string'></iconDataSource><color xsi:type='xs:string'></color>
<label xsi:type='xs:string'></label>
<labelDescription xsi:nil='true' xsi:type='tns:LabelDescription'>
</labelDescription><location xsi:type='tns:Point' xmlns:tns='http://www.themindelectric.com/package/com.esri.is.services.common.v2.geom/'>
<x xsi:type='xs:double'>33.67</x><y xsi:type='xs:double'>39.44</y>
<coordinateSystem xsi:type='tns:CoordinateSystem'>
<projection xsi:type='xs:string'>4269</projection>
<datumTransformation xsi:type='xs:string'>dx</datumTransformation>
</coordinateSystem></location></item>
</markers><lines xsi:nil='true'>
</lines><polygons xsi:nil='true'></polygons><circles xsi:nil='true'></circles><displayLayers xsi:nil='true'></displayLayers>
</mapImageOptions>
Another problem:
If the web service has overloaded methods, it generates an error for the second overloaded method. The stub file itself contains a statement as follows:
Response = new();
can anyone guide me on this?
Thanks,
Mital.

I am having this issue as well.
From:
http://help.sap.com/saphelp_nw04/helpdata/en/43/ce993b45cb0a85e10000000a1553f6/frameset.htm
I see that:
The WSDL document in rpc-style format must also not use any soapenc:Array types; these are often used in SOAP code in documents with this format. soapenc:Array uses the tag <xsd:any>, which the Integration Builder editors or proxy generation either ignore or do not support.
You can replace soapenc:Array types with an equivalent <sequence>; see the WS-I example under http://www.ws-i.org/Profiles/BasicProfile-1.0-2004-04-16.html#refinement16556272.
They give an example of what to use instead.
Of course I have been given a WSDL that has a message I need to map to that uses the enc:Array.
Has anyone else had this issue? I need to map to a SOAP message to send to an external party. I don't know what they are willing to change to support what I can do. I changed the WSDL to use a sequence, as described above, just to pull it in for now.
Thanks,
Eric -
JAXB: Marshalling complex nested data structures?
Hello!
I am dealing with complex data structures:
Map< A, Set< B > >
Set< Map< A, B > >
Map< A, Map< B, Set< C > > >
Map< A, Set< Map< B, C > > >
[and so on] (NOTE: in my case it doesn't matter whether I use Set<?> or List<?>)
I wanted to write wrappers for JAXB for these data structures, but I have no idea how.
My idea was: write XmlAdapter with generics for Map, Set and List, but it failed:
"[javax.xml.bind.JAXBException: class java.util.LinkedList nor any of its super class is known to this context.]"
because I was using LinkedList<T> inside the XmlAdapter (it seems that from the point of view of JAXB,
LinkedList<T> is the same as LinkedList<Object>)
Any ideas?

According to sec 4.4, "By default if the xml schema component for which a java content interface is to be generated is scoped within a complex type then the java content interface should appear nested within the content interface representing the complex type."
So I doubt that worked with beta and you may have to represent this anonymous complex type as a named complex type to avoid the nesting.
Regards,
Bhakti -
Little help with complex XML data as data provider for chart and adg
Hi all,
I've been trying to think through a problem and I'm hoping for a little help. Here's the scenario:
I have complex nested XML data that is wrapped by subsequent groupings for efficiency, but I need to determine whether each inner item belongs in the data collection for viewing in a data grid and charts.
I've posted an example at the bottom.
So the goal here is to first be able to select a single inspector and then chart out their reports. I can get the data to filter from the XMLListCollection using a filter on the first layer (i.e. the name of the inspector), but then can't get a filter to go deeper into the structure in order to determine whether the individual item should be contained in the collection. In other words, I want to filter by inspector, then time, and then tag name, in order to be able to use this data as the basis for individual series inside my advanced data grid and column chart.
I've made it work by creating a new collection, then looping through the original collection each time it changes and updating the new collection, but that just feels so bloated and inefficient. The user is going to have some buttons to allow them to change their view. I'm wondering if there is a cleaner way to approach this? I even tried chaining filter functions together, but that didn't work because the collection is reset whenever .refresh() is called.
If anyone has experience in efficiently dealing with complex XML for charting and tabular display purposes, I would greatly appreciate your assistance. I know I can get this to work with a bunch of overhead, but I'm seeking something elegant.
Thank you.

Hi,
Please use the code similar to below:
SELECT * FROM DO_NOT_LOAD INTO TABLE IT_DO_NOT_LOAD.
SORT IT_DO_NOT_LOAD by WBS_Key.
IF SOURCE_PACKAGE IS NOT INITIAL.
IT_SOURCE_PACKAGE[] = SOURCE_PACKAGE[].
LOOP AT IT_SOURCE_PACKAGE INTO WA_SOURCE_PACKAGE.
V_SYTABIX = SY-TABIX.
READ TABLE IT_DO_NOT_LOAD into WA_DO_NOT_LOAD
WITH KEY WBS_Key = WA_SOURCE_PACKAGE-WBS_Key
BINARY SEARCH.
IF SY-SUBRC = 0.
IF ( WA_DO_NOT_LOAD-WBS_EXT = 'A' OR WA_DO_NOT_LOAD-WBS_EXT = 'B' ).
DELETE IT_SOURCE_PACKAGE INDEX V_SYTABIX.
ENDIF.
ENDIF.
ENDLOOP.
SOURCE_PACKAGE[] = IT_SOURCE_PACKAGE[].
ENDIF.
-Vikram -
How can I export complex FRF data in ASCII files especially UFF58 format?
Background:
I have a problem saving analysed data into a format that another software program (MODENT) can import.
I am using "SVA_Impact TEST (DAQmx).seproj" to acquire force and acceleration signals from a SISO hammer modal test, and to process them up to the stage of an FRF and coherence plot. I need to export the FRF information into MODENT, meaning it must contain both magnitude and phase (real and imaginary) information in one file. (MODENT does not need the coherence information, which is only really used during the testing process to know we have good test data. The coherence information can already easily be saved to an ASCII text file and kept for test records; not a problem.)
MODENT is a specialised modal analysis package, and the FRFs from tests can be imported into its processing environment using its "utilities". The standard is a UFF58 file containing the FRF information.
Labview does not seem to be able to export the FRF data in UFF58 format. In general, Labview can export to UFF58, but only the real information, not the complex information; is this correct?
Is there a way to export the real and complex information from FRFs in one file?
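As a stopgap for the one-file question, complex FRF samples can at least be written as fixed-width ASCII triples (frequency, real, imaginary). This is only a sketch of the data records, not a complete UFF58 writer; the dataset-58 header lines are omitted and would have to follow the UFF specification exactly for MODENT to import the file:

```python
def frf_to_ascii_lines(freqs, frf):
    """Format complex FRF samples as fixed-width (freq, re, im) rows."""
    return ["%15.6e%15.6e%15.6e" % (f, h.real, h.imag)
            for f, h in zip(freqs, frf)]

lines = frf_to_ascii_lines([10.0, 20.0], [1.0 + 2.0j, -0.5 + 0.25j])
```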
I am working on this problem from both ends:
1. MODENT support, to help interface into MODENT.
2. Labview support, to try to generate an exported file format with all the necessary analysed data.

Hi,
Have a look at the "seproj" and "vi" versions I mentioned in this thread - both available from the supplied example VIs in Signal Express and Labview. I've also attached these here.
The "seproj" version has what are called "limits" for the input data, meaning it checks certain criteria to make sure the test data sent for analysis (to give the FRF) is within certain limits: 1. "overload", where the amplitude must be below a set maximum; 2. "time", where the signal must decay to zero within the time envelope. These are useful test check functions to have.
I looked at the conversion from SE to Labview VIs, but if I understand correctly, this essentially embeds the SE project in Labview, and does not allow the same viewing of the user windows during testing (e.g. seeing the overload plots, editing the overload levels, etc.)?
Other settings:
- signal input range, sensitivity, Iex source, Iex value, dB reference, number of samples and sample rate.
- triggering off the hammer, with all the setting options shown in the step setup for "trigger".
- all settings associated with the load and time limits, presented in a very useful format of either defining by X,Y coordinates or by scaling.
- all config and averaging options for the frequency response.
The practice environment in the SEProj version is also useful, as getting the technique right for using the hammer is not always easy, and it allows a quick check of basic settings before doing the actual tests.
The Nyquist plot in the VI version is useful.
Happy New Year to everyone !
Attachments:
SVA_Impact Test (DAQmx) demo.seproj 307 KB
SVXMPL_Impact Test (DAQmx).vi 75 KB -
ADF FACES: af:table and complex column data
Using EA15.
I want to use an af:table to allow the in-place editing of the data. However, I have some fields that are not simple text entry fields. Two of them would ideally use af:selectOneList elements. This doesn't work given the limitations of how af:column elements work.
Is there anyone out there creating complex, in-place edit tables? If so, how are you doing it?
Thanks.

I was mistakenly believing that you couldn't use a selectOneChoice in an af:table, given some experimentation where they simply failed to render at all.
It turns out that I hadn't created the appropriate setter method for the property. However, instead of making the data readOnly (as most of the ADF FACES controls do), it just didn't render anything, thus making me think it just didn't work inside of a table.
FYI - in case this happens to anyone else. -
Accurate scaling of anatomical data
I am using AI to import anatomical tracing data (paths and symbols). The data coordinates are in millimeters. I create an artboard as follows:
The file import code reads the data and then passes the bounds data to the art routine, CreateDataArt, as minv and maxv. As can be seen (abRect) the bounds of the data are 13 mm wide by 14 mm top to bottom. Two paths are then drawn into the artboard as shown here:
Note that the rulers are in millimeter units and that the artboard is nowhere near the correct size.
Any clues would be appreciated.
Charlie Knox
AccuStage

Regardless of what ruler unit you're using in Illustrator, the API always* takes points/pixels as its unit. So if you have values in millimeters and you want them to show up as the correct distance, etc., you'll need to multiply them out correctly to points and use those values when talking to the API.
*In CS6, there is a method that lets you tell the API to use whatever the current ruler unit is. I haven't tried it though, so I find it safer to just always convert to points. I have a couple of simple functions that just check the current ruler unit and do any conversion (if necessary). Then I use those, ToRulerUnit() or FromRulerUnit(), anywhere I take inputs from the UI (or from an imported file, as it sounds like in this case).
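For example, the millimeter-to-point conversion the reply describes is just:

```python
POINTS_PER_MM = 72.0 / 25.4   # the API speaks points: 72 pt per inch, 25.4 mm per inch

def mm_to_points(mm):
    return mm * POINTS_PER_MM

def points_to_mm(pt):
    return pt / POINTS_PER_MM

# The 13 mm x 14 mm bounds from the post come out to roughly
# 36.85 x 39.69 pt when handed to the Illustrator API.
```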
Word 2007 complex autofill date feature
I need to write the date in a heading (within the text body) and I need Word 2007 to autofill the consecutive dates into the following
headings in the document.
Example: I want to be able to write "Monday, 2/ 02" and I want Word to autofill "Tuesday, 2/03" and "Wednesday, 2/04" in the following headings within the document.
Any tips?

You could do this with field coding or with a macro. To see how to do it with field coding, check out my Microsoft Word Date Calculation Tutorial, at:
http://windowssecrets.com/forums/showthread.php/154368-Microsoft-Word-Date-Calculation-Tutorial
or:
http://www.gmayor.com/downloads.htm#Third_party
In particular, look at the item titled 'Calculate a Date Sequence'. Do read the document's introductory material.
Note that the calculations don't automatically update - you'll need to force them to (e.g. via Ctrl-A, F9).
Cheers
Paul Edstein
[MS MVP - Word]