GPS timestamping continuously sampled data
Hi all,
I'm working on an application that samples 32 analog and 5 digital signals at a fixed rate of 4 kHz and I'd like to accurately timestamp each sample using a GPS time reference.
The timestamp accuracy should be better than 120 us. The application is written in C/C++ and uses the DAQmx C API.
The hardware I have available is the following:
* PCI-6224 card (A/D with 32 analog channels, lots of digital I/O and 2 timer/counters)
* PCI-6602 card (8 timer/counters and an 80 MHz clock)
* TrueTime XL-AK (model 600-000) GPS time reference, with serial ASCII output, IRIG-B timecode output and PPS (pulse-per-second) output.
Unfortunately my GPS is not equipped with a 1 MHz or 10 MHz reference output. Also, the IRIG-B output is not easily usable because it is AM-modulated onto a 1 kHz carrier (and therefore not compatible with a digital input, while all 32 analog inputs have already been allocated).
I plan to implement this application by doing all timing generation on the PCI-6602 and doing all sampling (analog/digital) on the PCI-6224. A pair of counters is configured to do GPS timestamping as demonstrated in the document in this thread:
http://forums.ni.com/ni/board/message?board.id=250&message.id=8560
By routing the same sample clock and digital edge start trigger to all blocks that sample data (analog or digital), the data from all measurement tasks would be synchronised to each other. This would result in 3 streams of parallel data: timestamp values (second + sub-second count), analog data (32 channels) and digital data (say 32 bits for each sample).
However I suspect that the counter arrangement from the thread mentioned earlier does not produce *buffered* timestamps. This would make it impossible to use it as I just described above. Is this correct?
Does anyone know another solution?
I've been thinking about the following alternatives:
1) Sample the PPS signal at a higher rate to more accurately find out where the first sample in the current second starts. Then the timestamp for each sample can be calculated as Tsample = Tpps + Tfirst_sample + N/F, where Tpps = time (in seconds) of the last PPS pulse, Tfirst_sample = offset of first sample in the current second, N = sample number within the current second, F = sample frequency.
2) Same as above, but using buffered two-signal edge-separation measurement to get the offset between PPS and sample clock at even better accuracy.
Typically (and sometimes necessarily), a sample clock signal used for counter measurements would be connected to a counter GATE pin. For the specific case of "encoder" measurement on the 6602, the Z-index signal must be physically wired to the counter's default GATE pin, so the sampling clock signal would need to be physically wired to the GATE pin of a different counter. Having wired the sample clock signal to a single GATE pin, both counter measurement tasks can programmatically specify that their sampling clocks come from that single pin.
The M-series board offers more flexibility in programmatic PFI signal routing than the 6602. You can pretty much wire to any PFI pin and then programmatically specify which pin to use as Input (SOURCE) and which as Sample Clock (GATE). I still typically adopt the convention of using default pins whenever possible.
The terminology for counters can be a bit confusing. In my opinion, the old traditional NI-DAQ was a little more cryptic at first glance but much more consistent overall. The new DAQmx terminology tries to hide implementation details more, but in the process generates multiple aliases to the same pin assignment, depending on how you're using it.
A quick rundown. A SOURCE edge causes the count register to increment. Under DAQmx, this may be referenced as an "Input" or a "Timebase" terminal, depending on the measurement mode. When counting edges, it's an Input. When measuring period or frequency, it's a Timebase. Whatever it's called, the natural behavior at the counter hardware is to increment a count register on each active edge.
A GATE edge can cause a count value to be stored in the acquisition buffer. Under DAQmx, it may be referenced as a "Sample Clock" source when counting edges. When measuring period or frequency it may be a "Channel" or an "Input terminal." Again, whatever it's called, the behavior at the board level is to store an instantaneous count value into the data acq buffer on each active edge.
[There are exceptions, of course. The GATE pin is also used for "Pause Triggering", and can also be used to reset the count register for position measurement with Z-indexing enabled. And so on.]
-Kevin P.
Similar Messages
-
I’m a LabVIEW and SignalExpress novice so I thought I’d try this forum before trying the LabVIEW forum.
I’m trying to acquire analog data with a GPS time stamp. My application requires that I acquire approximately 10 hours of data at a rate of 500 kS/s.
The system is a PXI-1000B containing two 5922 cards and a 6682 card. Due to the long duration of the collection I’d like to account for any drift in the sample clock.
I can see two options.
1. Use an NI-SCOPE Acquire step with a corresponding step to get a GPS timestamp for each read length of data (I’ve set it to 512000). This might not work, as there is no step for GPS functions.
2. Using an SMB cable, route the CLK OUT from the 6682 to the CLK IN on the 5922, and choose CLK IN on the NI-SCOPE Acquire Advanced tab. This seems too simple.
Again I’m a novice so any advice or help is appreciated.
Thanks
Due to the quick response and my faulty memory, I gave you incorrect information. The 6682 10 MHz clock is not synchronized to the GPS, which is why the example you reference requires the use of the Trimble Thunderbolt. The 6682 has a temperature-stabilized clock (better than the default PXI backplane clock), but it is not used by default and there is no current method in SignalExpress to activate it.
The 6682 will give you GPS accurate timestamps for triggers. You can use this information by simultaneously triggering the 6682 and the 5922s with the same signal (watch out for line length issues) or triggering one with the other (watch out for delay issues). Check out the 6682 manual for details. Unfortunately, there are no native steps for doing this in SignalExpress. You would need to create one or more user steps to set up the 6682, get the trigger data, and replace the timestamp in the 5922 data. Much as I hate to admit it, it would probably be easier to do the whole thing in LabVIEW, if you have it available. SignalExpress does not currently support the 6682.
Once again, my apologies. If you need further assistance, let us know.
This account is no longer active. Contact ShadesOfGray for current posts and information. -
Time doesn't match sampled data?
Hallo all experts,
I wrote LabVIEW code that reads data from a USB-6211 and saves it with time instants to a text file, but the time instants don't correspond to the sampled data. The time values are generated by the Elapsed Time VI; after building an array with the data read from the DAQ, they are fed to Write to Text File. The test signal is 10 Hz, but the text file yields a 0.2 Hz signal. How can I synchronize them?
Any tips are highly appreciated.
win2
Don't use the "Elapsed Time" express VI for precision timing. It seems to have limited resolution (internally, it converts a timestamp to a DBL).
You can use e.g. the tick count to keep track of the time. See the attached comparison. (still there will always be some subtle differences due to the software timings).
LabVIEW Champion . Do more with less code and in less time .
Attachments:
usb6211_forumMOD.vi 42 KB -
How to renormalize number of flows in Netflow Sampled data
Hi,
I am working on extrapolation (renormalization) of bytes/packets/flows from randomly sampled (1 out of N packets) collected data. I believe bytes/packets can be renormalized by multiplying the bytes/packets value in the exported flow record by N.
Now I am trying to extrapolate the number of flows. So far I have not found any information on it. Do you have any idea how flows can be renormalized from sampled data?
Well, at the same time I have some doubts regarding this concept altogether:
1. In packet sampling, we do not know how many flows got dropped. Even router cache will not have entries for dropped flows
2. In flow sampling, router cache will maintain entries of all the flows and there may be some way by which one can know how many actual flows were there. But again there is no way to know values of individual attributes in missed flows like srcip/dstip/srcport/dstport etc.(though they are there in flow cache)
3. In case of sampling (1 out of N packets), we anyway multiply #packets and #bytes with N to arrive at estimate for total packets and bytes. When we multiply by N, it means we have taken into account all those packets as well which were NOT sampled. So, it means all the packets which flowed between source and destination have been accounted for. Then there are no missed flows, isn't it ? And if there do exist some missed flows then multiplication by N to extrapolate number of packets/bytes is not correct.
4. What is the use of count of flows anyways. Number of flows may vary depending upon the configuration such as active timeout etc. So, it does not provide any information about the actual flow between source and destination unlike number of packets and bytes.
Please share your thoughts.
Thanks,
Deepak
The simplest way is to call GetTableCellRangeValues with VAL_ENTIRE_TABLE as the range, then sum the array elements.
But I don't understand your comment on checksum, so this may not be the most correct method for your actual needs: can you explain what you mean?
Proud to use LW/CVI from 3.1 on.
My contributions to the Developer Zone Community
If I have helped you, why not giving me a kudos? -
I'm doing a scan around a line by sampling data over 360 degrees for every value of z (z is the position on the line). So that means I have a double for-loop where I collect the data. The problem comes when I try to plot the data. How should I do it?
Jonas,
I think what you want is a 3D plot of a cylinder. I have attached an example using a parametric 3D plot.
You will probably want to duplicate the points for the first theta value to close the cylinder. I'm not sure what properties of the graph can be manipulated to make it easier to see.
Bruce
Bruce Ammons
Ammons Engineering
Attachments:
Cylinder_Plot_3D.vi 76 KB -
For this sample data how to fulfill my requirement ?
with temp as (
select 'MON' WEEKDAY,'9-10' TIMING,'I' CLASS FROM DUAL UNION
select 'MON' WEEKDAY,'9-10' TIMING,'II' CLASS FROM DUAL UNION
select 'MON' WEEKDAY,'9-10' TIMING,'III' CLASS FROM DUAL UNION
select 'MON' WEEKDAY,'10-11' TIMING,'I' CLASS FROM DUAL UNION
select 'MON' WEEKDAY,'10-11' TIMING,'II' CLASS FROM DUAL UNION
select 'TUE' WEEKDAY,'9-10' TIMING,'I' CLASS FROM DUAL UNION
select 'TUE' WEEKDAY,'9-10' TIMING,'II' CLASS FROM DUAL
)
select ?? (what will be the query ??)
How can i get output data in this way :
WEEKDAY TIMING CLASS
MON 9-10 I,II,III
MON 10-11 I,II
TUE 9-10 I,II
If in 11g, you can use LISTAGG
with temp as (
select 'MON' WEEKDAY,'9-10' TIMING,'I' CLASS FROM DUAL UNION
select 'MON' WEEKDAY,'9-10' TIMING,'II' CLASS FROM DUAL UNION
select 'MON' WEEKDAY,'9-10' TIMING,'III' CLASS FROM DUAL UNION
select 'MON' WEEKDAY,'10-11' TIMING,'I' CLASS FROM DUAL UNION
select 'MON' WEEKDAY,'10-11' TIMING,'II' CLASS FROM DUAL UNION
select 'TUE' WEEKDAY,'9-10' TIMING,'I' CLASS FROM DUAL UNION
select 'TUE' WEEKDAY,'9-10' TIMING,'II' CLASS FROM DUAL
)
select
WEEKDAY,
TIMING,
LISTAGG(CLASS,',') WITHIN GROUP (order by 1) as class_aggregate
from temp
GROUP by WEEKDAY,TIMING;
WEEKDAY TIMING CLASS_AGGREGATE
MON 9-10 I,II,III
MON 10-11 I,II
TUE 9-10 I,II
Other techniques for different versions are also mentioned here:
http://www.oracle-base.com/articles/misc/StringAggregationTechniques.php#listagg -
Need help in framing an SQL query - Sample data and output required is mentioned.
Sample data :
ID Region State
1 a A1
2 b A1
3 c B1
4 d B1
Result should be :
State Region1 Region2
A1 a b
B1 c d
create table #t (id int, region char(1), state char(2))
insert into #t values (1,'a','a1'),(2,'b','a1'),(3,'c','b1'),(4,'d','b1')
select state,
max(case when region in ('a','c') then region end) region1,
max(case when region in ('b','d') then region end) region2
from #t
group by state
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
BPC 5 - Best practices - Sample data file for Legal Consolidation
Hi,
we are following the steps indicated in the SAP BPC Business Practice: http://help.sap.com/bp_bpcv151/html/bpc.htm
A Legal Consolidation prerequisit is to have the sample data file that we do not have: "Consolidation Finance Data.xls"
Does anybody have this file or know where to find it?
Thanks for your time!
Regards,
Santiago
Hi,
From [https://websmp230.sap-ag.de/sap/bc/bsp/spn/download_basket/download.htm?objid=012002523100012218702007E&action=DL_DIRECT] this address you can obtain .zip file for Best Practice including all scenarios and csv files under misc directory used in these scenarios.
Consolidation Finance Data.txt is in there also..
Regards,
ergin ozturk -
I have a requirement to develop a PO/AP environment. Does anyone have sample data I can use to test requisition and PO approval?
Can anyone send a sample PO/AP so I can use it with Data Loader?
Hi VJ,
The system will place a hold only when there is a mismatch between purchase data and invoice data; if you get invoice details with all required details and they match the PO (even if the receiving is entered on a different PO line), the system will pass the invoice.
Hope this answers your query, please let me know if you are looking for any specific information. -
EPM 11.1.2.2: Load sample data
I have installed EPM 11.1.2.2 and now I'm trying to load the sample data. I could use help locating instructions for this. While earlier versions of the EPM documentation seemed to have instructions for loading the sample data, I have yet to find them for 11.1.2.2. Perhaps the earlier versions' steps are supposed to work on this version as well, so I read them. They say to initialize the app, then in EAS right-click on a console icon where you have the option to load data. Unfortunately, I am using EPMA and I saw no option to initialize when I created the app. In addition, EAS doesn't have the console icon and right-clicking on the application doesn't do anything. In short, prior documentation hasn't helped so far. I did find the documentation for EPM 11.1.2.2 but I have yet to locate within it the instructions for loading the sample application and data. Does anyone have any suggestions?
I considered your comment about the app already existing but if this were true, I think the database would exist in EAS but it doesn't.
I've also deleted all applications, even re-installed the essbase components including the database so it would all be fresh, and then tried to create a new one but still no initialization option is available. At this point, I'm assuming there is something wrong with my installation. Any suggestion on how to proceed would be appreciated.
I've noticed that I don't have the create classic application option in my menu, only the transform from classic to EPMA option. I also can't navigate to http://<machinename>:8300/HyperionPlanning/AppWizard.jsp. When I noticed these things, I did as John suggested in another post and reselected the web server. This didn't make a difference, so I redeployed the apps AND selected the web server. Still not seeing it in the menu.
Edited by: dirkp:) on Apr 8, 2013 5:45 PM -
Hi,
I'm having problems during the NAVTEQ sample data import (http://www.oracle.com/technology/software/products/mapviewer/htdocs/navsoft.html). I'm following exactly the instructions from the readme file.
Import: Release 10.2.0.1.0 - Production on Thursday, 25 September, 2008 10:35:28
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/********@ORCL REMAP_SCHEMA=world_sample:world_sample dumpfile=world_sample.dmp directory=sample_world_data_dir
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "WORLD_SAMPLE"."M_ADMIN_AREA4" 394.5 MB 118201 r
After the last log message the system hangs at 50% CPU time and nothing happens. What could be the problem? Only the table M_ADMIN_AREA4 contains the 118201 rows.
Regards, Arnd
Edited by: aspie on Sep 25, 2008 11:45 AM
Hi, I followed the instructions from the readme file:
1. Create a work directory on your machine. Change directory to your work
directory and download the world_sample.zip file into it. Unzip the
world_sample.zip file:
C:\> mkdir world_sample
C:\> cd world_sample
C:\world_sample> unzip world_sample.zip
2. In your work directory, open a Sqlplus session and connect as the SYSTEM
user. Create a user for the World Sample data (if you have not previously
done so):
C:\world_sample> sqlplus SYSTEM/password_for_system
SQL> CREATE USER world_sample IDENTIFIED BY world_sample;
SQL> GRANT CONNECT, RESOURCE TO world_sample;
3. This step loads the World Sample data into the specified user, and creates
the metadata required to view the sample data as a Map in MapViewer. Before
you begin, make a note of the full directory path of your work directory,
since you will be prompted for it. You will also be prompted for the user
(created in step 2), and the SYSTEM-user password. This step assumes that
you are already connected as the SYSTEM user, and that you are in your work
directory. To begin, run the load_sample_data.sql script in your Sqlplus
session. Exit the Sqlplus session after the script has successfully
concluded:
SQL> @load_sample_data.sql
Enter value for directory_name: C:\world_sample
Enter value for user_name: world_sample
Password: password_for_system
SQL> exit;
In step 2 I also tried creating the user with a dedicated tablespace. The same problem occurs.
Regards Arnd. -
Dear all:
I want to write VB programs to sample data simultaneously from two channels of a PCI-5922. The NI-SCOPE driver has some example VB programs that sample data from one channel; for example, "save to file" works well to sample data from one channel. When I modify it to sample data simultaneously from two channels, I always get an error, so I'm seeking your help on how to write the program to sample two channels simultaneously. Thanks. I attached the sample program here; what I tried to modify is the "channel name" and "waveform()".
Regards
Andy
Attachments:
savetofile.doc 42 KB
Hi Bajaf, regarding the FFT of the four channels, the next link might be what you're looking for:
http://digital.ni.com/public.nsf/allkb/862567530005f09c8625671b00739970
Regarding the phase issue: how are you doing the acquisition of the signals? Are you doing two independent acquisitions? If you are controlling them as independent acquisitions, try to synchronize them with triggers and the clock.
How much are they out of phase?
By the way, we now have forums in Spanish.
Best Regards
Benjamin C
Senior Systems Engineer // CLA // CLED // CTD -
Want to select query based on sample data.
My Oracle Version
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
I am creating an inventory valuation report using the FIFO method. Here is sample data:
create table tranx(
TRXTYPE varchar2(10), ITEM_CODE varchar2(10), RATE number, qty number
);
insert into tranx values('IN' ,'14042014', 457.2 ,10);
insert into tranx values('OUT','14042014', 0, 10);
insert into tranx values('IN','14042014', 458.1, 35);
insert into tranx values('OUT','14042014', 0, 11);
insert into tranx values('OUT','14042014', 0, 6);
insert into tranx values('IN','14042014', 457.2, 10);
insert into tranx values('OUT','14042014', 0, 3);
insert into tranx values('OUT','14042014', 0, 4);
insert into tranx values('IN','14042014', 457.2, 20);
insert into tranx values('OUT','14042014', 0, 5);
insert into tranx values('OUT','14042014', 0, 9);
insert into tranx values('OUT','14042014', 0, 8);
current output
TRXTYPE ITEM_CODE RATE QTY
IN 14042014 457.2 10
OUT 14042014 0 10
IN 14042014 458.1 35
OUT 14042014 0 11
OUT 14042014 0 6
IN 14042014 457.2 10
OUT 14042014 0 3
OUT 14042014 0 4
IN 14042014 457.2 20
OUT 14042014 0 5
OUT 14042014 0 9
OUT 14042014 0 8
The above data is populated first-in-first-out, but the OUT rate does not come from that query. Suppose the first 10 qty go OUT: their rate is the same as the IN rate. But when quantities start going out of the 35-qty lot, the rate will be 458.1 until all 35 qty have gone out. The out quantities 11, 6, 3, 4, 5, 9 total 38, so when the 9 qty goes out, 6 qty are at rate 458.1 and the other 3 at rate 457.2, meaning the total value of the 9 out qty is 4120.20.
Now the 35 qty is complete, and after that the rate continues at 457.2 until the 10 qty is used up.
I think you understand my detail; if not, please tell me.
Thanks, I am waiting for your reply.
As SomeoneElse mentioned, there is no row order in relational tables, so you can't tell which row is first and which is next unless ORDER BY is used. So I added column SEQ to your table:
SQL> select *
2 from tranx
3 /
TRXTYPE ITEM_CODE RATE QTY SEQ
IN 14042014 457.2 10 1
OUT 14042014 0 10 2
IN 14042014 458.1 35 3
OUT 14042014 0 11 4
OUT 14042014 0 6 5
IN 14042014 457.2 10 6
OUT 14042014 0 3 7
OUT 14042014 0 4 8
IN 14042014 457.2 20 9
OUT 14042014 0 5 10
OUT 14042014 0 9 11
TRXTYPE ITEM_CODE RATE QTY SEQ
OUT 14042014 0 8 12
12 rows selected.
SQL>
Now it can be solved. Your task requires either a hierarchical or a recursive solution. Below is a recursive solution using MODEL:
with t as (
select tranx.*,
case trxtype
when 'IN' then row_number() over(partition by item_code,trxtype order by seq)
else 0
end rn_in,
case trxtype
when 'OUT' then row_number() over(partition by item_code,trxtype order by seq)
else 0
end rn_out,
count(case trxtype when 'OUT' then 1 end) over(partition by item_code) cnt_out
from tranx
)
select trxtype,
item_code,
rate,
qty
from t
model
partition by(item_code)
dimension by(rn_in,rn_out)
measures(trxtype,rate,qty,qty qty_remainder,cnt_out,1 current_in,seq)
rules iterate(10) until(iteration_number + 1 = cnt_out[0,1])
(
rate[0,iteration_number + 1] = rate[current_in[1,0],0],
qty_remainder[0,iteration_number + 1] = case sign(qty_remainder[0,cv() - 1])
when 1 then qty_remainder[0,cv() - 1] - qty[0,cv()]
else qty[current_in[1,0],0] - qty[0,cv()] + nvl(qty_remainder[0,cv() - 1],0)
end,
current_in[1,0] = case sign(qty_remainder[0,iteration_number + 1])
when 1 then current_in[1,0]
else current_in[1,0] + 1
end
)
order by seq
TRXTYPE ITEM_CODE RATE QTY
IN 14042014 457.2 10
OUT 14042014 457.2 10
IN 14042014 458.1 35
OUT 14042014 458.1 11
OUT 14042014 458.1 6
IN 14042014 457.2 10
OUT 14042014 458.1 3
OUT 14042014 458.1 4
IN 14042014 457.2 20
OUT 14042014 458.1 5
OUT 14042014 458.1 9
TRXTYPE ITEM_CODE RATE QTY
OUT 14042014 457.2 8
12 rows selected.
SQL>
SY. -
A set of sampled data from a thickness sensor.
The samples are stored in an appropriate array of up to 100 elements.
The values represent the depth of the insulation around a particular cable.
The thickness can vary from 0-4 mm, represented by a corresponding real value stored in the array. As long as the depth is between 1 and 4 mm the cable falls within the allowable specification; however, if it falls below 1 mm it is deemed to be too thin.
Design a VI that will identify the number of thin values and display it as a percentage of the total number of samples.
The front panel must have the following:
An array of controls holding the sampled values.
An indicator holding the percentage of thin values.
Attachments:
Test Data 1.vi 14 KB
Hi there, this is Faddi. I am doing the same assignment as the one you posted earlier! I know it's a very basic one, but I need your help doing the second task ASAP, please! It's as follows:
This is to be designed to work on a set of sampled data from a thickness sensor. The samples are stored in an appropriate array of up to 100 elements. The values represent the depth of the insulation around a particular cable. The thickness can vary from 0- 4 mm represented by a corresponding real value stored in the array. As long as the depth is between 1 and 4 mm the cable falls within the allowable specification however if it falls below 1 mm it is deemed to be too thin. Specification 1
To design a VI that will identify the number of thin values and display as a percentage of the total number of samples. The front panel must have the following: An array of controls holding the sampled values. An indicator holding the percentage of thin values. Specification 2
Thin values tend to appear as sections or patches along the length of the cable i.e. values of less than 1 mm stored in successive array elements. To design a VI that will meet specification 1 and will also calculate the number of these thin patches. The front panel will now have an extra indicator to display the number of patches.
Thank you
Sample data for oracle 10.1.0.4
Hi,
On the companion CD, sample data are available for Oracle 10g 10.1.0.2.
I've upgraded the database to 10.1.0.4. Setup fails now.
Two questions: can I install in a new oracle home?
Are sample data available for 10.1.0.4?
(I'm only interested in the odm sample data)
Thanks in advance for any reaction.
Leo
For the first question, you cannot install the companion CD to a new Oracle home which is different from the database home.
Second, sample programs are in companion CD which is shipped with 10.1.0.2 base release. For a patch release, Oracle does not ship a companion CD. You can go to $ORACLE_HOME/demo directory to check whether you have installed any sample programs previously.
If the database was upgraded from a 9.2.0.X release, you may want to upgrade it to 10.1.0.2 release first and install the companion CD at 10.1.0.2 level. You can upgrade it to 10.1.0.4 afterwards.
Xiafang