Power Spectral Density conversion to Time Series Data
Hi,
This may seem an odd request, but is there a way to convert power spectral density data back to the time series data that generated it in the first place? I have lost the original time series data but still have the PSD, and I need the time series to do other analysis.
Thanks,
Rhys Williams
Hate to be the bearer of bad news, but there are infinitely many time series that will generate a given PSD: you lose all phase information upon taking the PSD. For this reason I almost always save time-domain data, or at least the complex FFT values.
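If any time series with the right spectral content will do (it cannot be the original one), a standard workaround is to attach random phases to the PSD magnitudes and inverse-FFT. A minimal NumPy sketch, assuming a one-sided PSD array of length n/2+1 and a known sample rate fs; the phases are invented, so this reproduces the PSD but not the lost waveform:

```python
import numpy as np

def surrogate_from_psd(psd, fs, rng=None):
    """Return ONE of the infinitely many real time series whose one-sided
    PSD (units^2/Hz, length n//2+1) matches `psd`, by assigning random
    phases to each frequency bin. The phase information is invented."""
    rng = np.random.default_rng() if rng is None else rng
    n = 2 * (len(psd) - 1)                 # implied even-length time series
    mag = np.sqrt(psd * fs * n / 2.0)      # |X[k]| for interior bins
    mag[0] *= np.sqrt(2.0)                 # DC and Nyquist bins are not
    mag[-1] *= np.sqrt(2.0)                # doubled in a one-sided PSD
    phase = rng.uniform(0.0, 2.0 * np.pi, len(psd))
    phase[0] = phase[-1] = 0.0             # keep the result real-valued
    return np.fft.irfft(mag * np.exp(1j * phase), n)
```

Round-tripping the result through a periodogram recovers the input PSD exactly, which is the most you can verify without the lost phases.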
Similar Messages
-
Read optimization time-series data
I am using Berkeley DB JE to store fairly high frequency (10hz) time-series data collected from ~80 sensors. The idea is to import a large number of csv files with this data, and allow quick access to time ranges of data to plot with a web front end. I have created a "sample" entity to hold these sampled metrics, indexed by the time stamp. My entity looks like this.
import java.util.LinkedHashMap;
import java.util.Map;

@Entity
public class Sample {
    // Unix time; seconds since Unix epoch
    @PrimaryKey
    private double time;

    private Map<String, Double> metricMap = new LinkedHashMap<String, Double>();
    // ...
}
As you can see, there is quite a large amount of data for each entity (~70-80 doubles), and I'm not sure storing them in this way is best. This is my first question.
I am accessing the db from a web front end. I am not too worried about insertion performance, as this doesn't happen that often, and generally all at one time in bulk. For smaller ranges (~1-2 hr worth of samples) the read performance is decent enough for web calls. For larger ranges, the read operations take quite a while. What would be the best approach for configuring this application?
Also, I want to define the granularity of samples. Basically, if the number of samples returned by a query is very large, I want to return only a fraction of the samples. Is there an easy way to count the number of entities that will be iterated over with a cursor without actually iterating over them?
Here are my current configuration params.
environmentConfig.setAllowCreateVoid(true);
environmentConfig.setTransactionalVoid(true);
environmentConfig.setTxnNoSyncVoid(true);
environmentConfig.setCacheModeVoid(CacheMode.EVICT_LN);
environmentConfig.setCacheSizeVoid(1000000000);
databaseConfig.setAllowCreateVoid(true);
databaseConfig.setTransactionalVoid(true);
databaseConfig.setCacheModeVoid(CacheMode.EVICT_LN);
Hi Ben, sorry for the slow response.
> as you can see, there is quite a large amount of data for each entity (~70 - 80 doubles), and I'm not sure storing them in this way is best. This is my first question.
That doesn't sound like a large record, so I don't see a problem. If the map keys are repeated in each record, that's wasted space that you might want to store differently.
> For larger ranges, the read operations take quite a while. What would be the best approach for configuring this application?
What isolation level do you require? Do you need the keys and the data? If the amount you're reading is a significant portion of the index, have you looked at using DiskOrderedCursor?
> Also, I want to define granularity of samples. Basically, If the number of samples returned by a query is very large, I want to only return a fraction of the samples. Is there an easy way to count the number of entities that will be iterated over with a cursor without actually iterating over them?
Not currently. Using the DPL, reading with a key-only cursor is the best available option. If you want to drop down to the base API, you can use Cursor.skipNext and skipPrev, which are further optimized.
> environmentConfig.setAllowCreateVoid(true);
Please use the method names without the Void suffix -- those are just for bean editors.
--mark -
Discoverer 4i - Time Series Data type support
Does Discoverer 4i support a time-series data type, i.e. the ability to store an entire string of numbers representing, for example, daily or weekly data points?
Thanks & Regards,
Deepti
Hi O G-M,
Each model must contain one numeric or date column that is used as the case series, which defines the time slices that the model will use. The data type for the key time column can be either a datetime data type or a numeric data type. However, the column must
contain continuous values, and the values must be unique for each series. The case series for a time series model cannot be stored in two columns, such as a Year column and a Month column. For more information about it, please see:
http://msdn.microsoft.com/en-us/library/ms174923(v=sql.100).aspx
Thanks,
Eileen
Eileen Zhao
TechNet Community Support -
What's the most efficient way to store time-series data in Oracle?
Thanks,
Jay.
937054 wrote:
Hello,
1. Usually time-series data goes into multiple millions of rows, so time-series databases like FAME, KDB, SYBASE-IQ are used. Does Oracle 11gR2 provide storage optimizations, compression, or a columnar database like FAME, KDB, SYBASE-IQ?
The only methods of optimization are partitioning of the data by some date or, if the data set is narrow (few columns) enough, a partitioned IOT.
2. http://www.oracle.com/us/corporate/press/1515738
Link is about R statistical language and data mining integration with Oracle database 11gR2. Does this come by default during installation, or with Big Data / EXADATA? Or is this a separate license?
I am not sure about the licensing, you will need to ask your sales person, but it looks like it might be a part of ODM (Oracle Data Mining - a licensed product).
Take a read through this case study.
http://www.oracle.com/technetwork/database/options/advanced-analytics/odm/odmtelcowhitepaper-326595.pdf?ssSourceSiteId=ocomen
Thanks -
Relevance of Time series data in PPDS
Hi, we have an SNP planning area with some time series KFs in it. If we delete the time series for the PA, what impact would it have on PP/DS?
Or what would be the relevance of time series data in PP/DS?
Kindly explain.
The only relation of time series data to PP/DS I know of is time-dependent safety stock planning in PP/DS.
Safety and Target Stock Level Planning in PP/DS
In Customizing for PP/DS, define the following:
· Make SNP key figures available
You define which SNP key figures PP/DS should use as safety stock or days' supply. (Make SNP Key Figures Available)
· Define planning area
You specify an SNP planning area that contains the key figures for the time-dependent safety stock/days' supply levels. (Global Settings → Maintain Global Parameters and Defaults)
if you did not use the planning area and key figure there should be no influence.
Frank -
Power spectral density question: one-sided conversion
Hi, there
I am using the PSD function. In order to get the RMS noise, I integrated the PSD data and doubled the result, because I have two sides: the positive frequency range and the negative frequency range. My result then is 1.414 (or sqrt(2)) times higher than the RMS measured in the time domain.
Then I looked into the PSD function and found that in the "Convert to One-Sided Spectrum" function, the amplitude is doubled. But this is not mentioned in the help file.
My suggestion: either change the help file of the PSD function to mention that the output is a doubled, one-sided spectrum, or do not double the amplitude when converting the spectrum from two-sided to one-sided.
best regards
Chengquan Li
CQ
Dr. Chengquan Li,
I know sometimes the help can be hard to navigate, but I think I found the help documentation that addresses this issue. It can be found here: Power Spectrum.
Under the heading "Converting a Two-Sided Power Spectrum to a Single-Sided Power Spectrum", you can find the following:
"A two-sided power spectrum displays half the energy at the positive frequency and half the energy at the negative frequency. Therefore, to convert a two-sided spectrum to a single-sided spectrum, you discard the second half of the array and multiply every point except for DC by two, as shown in the following equations."
This is why you noticed that the block diagram of "Convert to One-Sided Spectrum" had the output doubled. I hope this helps clear up the issue.
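As a quick numerical sanity check (a hypothetical NumPy sketch, not the NI code): once a spectrum has been converted to one-sided, integrating it already accounts for the negative frequencies, so adding another factor of 2 inflates the RMS by exactly sqrt(2) — the discrepancy described above.

```python
import numpy as np

fs = 1000.0                                  # assumed sample rate
n = 4096
t = np.arange(n) / fs
x = 2.0 * np.sin(2 * np.pi * 50.0 * t)       # test tone, RMS = 2/sqrt(2)

X = np.fft.rfft(x)
psd = np.abs(X) ** 2 / (fs * n)              # two-sided scaling, f >= 0 bins
psd[1:-1] *= 2                               # fold negative frequencies in
df = fs / n
rms_correct = np.sqrt(np.sum(psd) * df)      # integrate; no extra doubling
rms_wrong = np.sqrt(2.0 * np.sum(psd) * df)  # doubling again: sqrt(2) high
```

By Parseval's theorem, rms_correct matches the time-domain RMS exactly, while rms_wrong is sqrt(2) times larger, as Chengquan observed.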
Regards,
Elizabeth K.
National Instruments | Applications Engineer | www.ni.com/support -
All,
I'd like to know how to model a time series table to submit it to Oracle Data Mining. I've already tried a reverse pivot, but is there something different?
Regards,
Paulo de Tarso Costa de Sousa
If you are trying to do something like create a demand forecasting model like ARIMA, ODM does not have explicit support for this type of modeling. Data mining is usually more about creating a "prediction" rather than a forecast. You may want to look at Oracle's OLAP capabilities for this.
If you are trying to include variables that contain the element of "time", such as "blood pressure before" and "blood pressure after", you can include these as variables (attributes) in the model. ODM has no real limit on the number of variables it can include in the model, so you don't have to worry about creating too many of them (usually).
You may want to "clump" the data so as to create a set of variables at certain check points in time like the "before" and "after" approach above. Rather than entering for example the measurement off an instrument ever 10 seconds (which would ordinarily create new variables for each time period), you may want to only detect "events". That is, only record the amount of time between events--sort of Mean Time Between Failure (MTBF) type of modeling.
Hope this helps with your thinking about how to approach your problem -
How to do an average on time series data?
I need to generate average hold times for various stock of companies as follows:
The data looks like:
stock timestamp (sec) quantity
GOOG 12459.6 -100 <-- SALE
GOOG 12634.0 +100 <-- PURCHASE
GOOG 12636.2 +200
GOOG 12464.8 -100
GOOG 12568.3 -300
GOOG 12678.0 +200
The rules are
1. begin and end day with balance 0
2. can short sell, i.e. can sell shares even if balance is currently 0
3. hold time is defined as number of seconds stock was held before it was sold
4. first stock purchased are sold first
I need to generate the average hold times seconds per share. I'd prefer to do this using SQL and NOT a procedure.
Any tips on how to go about calculating this? I have looked at various analytic functions, but still not sure.
Thank you.
I'm afraid you might be after something like below:
This is a simplified scenario where the quantity balance always reaches 0 before changing sign (not very probable in real life).
"Simple examples are reserved for the lecturer" was a pretty common phrase in my university times.
I don't know how to deal with the general case yet.
select * from trade_0 order by position,time
TIME  POSITION  DIRECTION  QUANTITY
   8  GOOG      S               100
  13  GOOG      B                20
  16  GOOG      B                30
  17  GOOG      B                30
  19  GOOG      B                20
  22  GOOG      B                20
  25  GOOG      B                30
  26  GOOG      B                20
  30  GOOG      B                30
  34  GOOG      B                20
  38  GOOG      B                30
  41  GOOG      S               150
   7  YHOO      S                10
  12  YHOO      S                20
  15  YHOO      S                30
  16  YHOO      S                40
  18  YHOO      S                60
  21  YHOO      S                30
  24  YHOO      S                10
  25  YHOO      B               100
  29  YHOO      B               300
  33  YHOO      S               100
  37  YHOO      S                80
  40  YHOO      S                20
Your condition 4 (first stock purchased is sold first) requires a procedural solution, so the MODEL clause must be used if you want to do it in SQL.
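For comparison, here is what that procedural FIFO matching looks like outside SQL (a hypothetical Python sketch; the names and the short-sale handling are my assumptions, not part of the MODEL-clause answer below):

```python
from collections import deque

def average_hold_time(trades):
    """FIFO-match buys against sells (condition 4) and return the average
    hold time in seconds per share. `trades` is a list of (timestamp,
    quantity) sorted by timestamp, with quantity > 0 for a purchase and
    quantity < 0 for a sale; short sales are matched against the later
    purchases that cover them."""
    longs, shorts = deque(), deque()      # open lots: (timestamp, shares)
    total_secs = 0.0
    total_shares = 0
    for ts, qty in trades:
        book, opp = (longs, shorts) if qty > 0 else (shorts, longs)
        qty = abs(qty)
        # close out opposite-side lots first, oldest first (FIFO)
        while qty and opp:
            t0, q0 = opp[0]
            matched = min(qty, q0)
            total_secs += matched * (ts - t0)
            total_shares += matched
            qty -= matched
            if matched == q0:
                opp.popleft()
            else:
                opp[0] = (t0, q0 - matched)
        if qty:                           # the remainder opens a new lot
            book.append((ts, qty))
    return total_secs / total_shares if total_shares else 0.0
```

With the GOOG sample from the question (sorted by timestamp), this yields an average hold of 126.62 seconds per share.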
Model Men, bear with me, please !
select m.*,
       avg(abs(x_time - decode(kind,'B',time_b,time_s))) over (partition by position
                                                               order by rn rows between unbounded preceding
                                                                                    and unbounded following
                                                              ) average
  from (select *
          from (select nvl(b.position,s.position) position,
                       nvl(b.rn,s.rn) rn,
                       nvl(b.cnt,0) cnt_b,
                       nvl(s.cnt,0) cnt_s,
                       b.time time_b,
                       s.time time_s,
                       b.quantity qty_b,
                       s.quantity qty_s
                  from (select time,position,quantity,
                               row_number() over (partition by position order by time) rn,
                               count(*) over (partition by position) cnt
                          from trade_0
                         where direction = 'B'
                       ) b
                  full outer join
                       (select time,position,quantity,
                               row_number() over (partition by position order by time) rn,
                               count(*) over (partition by position) cnt
                          from trade_0
                         where direction = 'S'
                       ) s
                    on b.position = s.position
                   and b.rn = s.rn
               )
         model
         partition by (position)
         dimension by (rn)
         measures (0 loc,
                   case when cnt_b >= cnt_s then 'B' else 'S' end kind,
                   time_b,
                   time_s,
                   qty_b,
                   qty_s,
                   0 qty_left,
                   0 x_time
                  )
         rules iterate (1000000) until (loc[iteration_number] is null)
         (loc[0] = nvl2(loc[0],loc[0],1),
          qty_left[loc[0]] = case when iteration_number > 0
                                  then qty_left[loc[0]] + case when kind[iteration_number] = 'B'
                                                               then qty_b[iteration_number]
                                                               else qty_s[iteration_number]
                                                          end
                                  else 0
                             end,
          x_time[iteration_number] = case when kind[iteration_number] = 'B'
                                          then time_s[loc[0]]
                                          else time_b[loc[0]]
                                     end,
          loc[0] = loc[0] + case when qty_left[loc[0]] = case when kind[iteration_number] = 'B'
                                                              then qty_s[loc[0]]
                                                              else qty_b[loc[0]]
                                                         end
                                 then 1
                                 else 0
                            end
         )
       ) m
 where kind is not null
 order by position,rn
POSITION  RN  LOC  KIND  TIME_B  TIME_S  QTY_B  QTY_S  QTY_LEFT  X_TIME  AVERAGE
GOOG       1    0  B         13       8     20    100       100       8     10.4
GOOG       2    0  B         16      41     30    150       150       8     10.4
GOOG       3    0  B         17             30                        8     10.4
GOOG       4    0  B         19             20                0       8     10.4
GOOG       5    0  B         22             20                0      41     10.4
GOOG       6    0  B         25             30                0      41     10.4
GOOG       7    0  (output truncated)
-
Raw time series with power spectrum
I want to generate another VI with a raw time series and its power spectrum. Use soundgen.vi with a sampling frequency of 1 kHz and a data size of 2000 samples, so that it covers 2 s.
How do I display the raw time series data and a power spectrum of it?
Find the rest of the peaks and identify them. Why are the peaks where they are?
Attachments:
soundgen.vi 15 KB
To start, do a search in the examples that came with LV for FFT. This will show you a good cross-section of what LV has to offer in terms of analysis capabilities. The thing to remember is that you can perform this analysis on data directly from a device or post-process data that you read from a data file. After going through the examples, if you have specific questions, we'll be able to give more specific answers...
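The post-processing step Mike describes can be sketched outside LV as well (a hypothetical NumPy stand-in: the two-tone signal is my assumption, since the soundgen.vi output isn't shown, but the parameters match the question: fs = 1 kHz, 2000 samples, 2 s):

```python
import numpy as np

fs = 1000.0
n = 2000
t = np.arange(n) / fs
# assumed stand-in for the soundgen.vi output: tones at 150 Hz and 300 Hz
x = np.sin(2 * np.pi * 150.0 * t) + 0.5 * np.sin(2 * np.pi * 300.0 * t)

freqs = np.fft.rfftfreq(n, d=1.0 / fs)       # frequency axis, 0..500 Hz
power = np.abs(np.fft.rfft(x)) ** 2 / n ** 2 # power per bin
power[1:-1] *= 2                             # one-sided power spectrum
peak_hz = freqs[np.argmax(power)]            # location of the strongest peak
```

Plot x against t for the raw time series and power against freqs for the spectrum; peaks land at the tone frequencies, with heights proportional to amplitude squared, which is why the peaks are where they are.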
Mike...
Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion
"... after all, He's not a tame lion..."
Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps -
Warning, CR Newbie here so this may be a stupid question. I am evaluating the trial version of CR to see if it will be a good fit for an upcoming project. I've seen some related posts in the SCN, but no answers that quite fit.
I'm looking to create a line chart (or a scatter chart) with time-series data. My dataset includes a time stamp field (yyyy-MM-dd hh:mm:ss) and some floating-point temperature values like this:
2014-05-01 08:00:00, 123.4, 115.1, 109.2
2014-05-01 08:00:10, 123.6, 116.0, 109.8
The desired outcome has the date / time along the X-axis with data points spaced proportionally in the X dimension and plotted in the Y-dimension according to the temperature. The interval between the time stamps is not always the same, so numerical scaling is required on both axes. The desired chart would show a temperature scale along the vertical axis, three trend lines for the three series of temperature data and times shown on the X axis label.
I've played with several options in an attempt to make this work. On the data tab, it would seem I would want to select "on change of" and then my time-stamp field. However, with this selection, I can only use summary values and end up with a chart with a single data point for each series. I don't need or want any summary calculations carried out on the data, I just want to plot it so I can look at a trend over time. I can get trend lines if I select "for each record" on the data tab of the wizard, but then my X-axis is meaningless and the horizontal scaling is misleading unless the interval between my samples is constant.
I would welcome any suggestions on how best to accomplish this with Crystal Reports.
Thanks for reading.
Jamie,
Thanks for continuing to reply. I am getting close, but still no success.
Here is the procedure I've followed and problem:
Put chart in RF section
Start Chart Expert
Chart Type = Numeric Axes, subtype = Date axis line chart
Data tab
On change of datetime field
Order... ascending, printed for each second
Values avg of my data fields (must select summary when on change of is used)
Right-click on X-axis label, select Group (X) Axis Settings
Scales tab: base unit, major unit and minor unit can only be set to days, months or years
I cannot set the minimum and maximum date with resolution other than day
Right-click Chart, select Chart Options...Axes tab: show group axes set to show time scale
No matter the setting I use, I can't find a way to adjust the resolution of the time scale lower than days.
I tried using a formula to extract only the time portion of my datetime field. I used that as my "on change" data series, hoping maybe CR would automatically recognize I was looking at a fraction of a day if I did that. No good - now it gives me a date scale with the dates showing up as the beginning of the epoch, but I can still only get resolution of integer days.
Thanks for your patience and persistence.
- Max -
Very large time series database
Hi,
I am planning to use BDB-JE to store time series data.
I plan to store 1 month worth of data in a single record.
My key consists of the following parts: id,year_and_month,day_in_month
My data is an array of 31 doubles (One slot per day)
For example, a data record for May 10, 2008 will be stored as follows
Data Record: item_1, 20080510, 22
Key will be: 1, 200805, 9
data will be: double[31], and the 10th slot (index 9) will be populated with 22
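That key layout can be restated as a small helper (a hypothetical sketch, just re-deriving the example values from the post):

```python
def make_key(item_id, date_yyyymmdd):
    """Build the (id, year_and_month, day_in_month) key described above,
    with day_in_month zero-based: item 1 on 2008-05-10 -> (1, 200805, 9)."""
    year_month = date_yyyymmdd // 100      # 20080510 -> 200805
    day_slot = date_yyyymmdd % 100 - 1     # day 10 -> slot 9 (zero-based)
    return (item_id, year_month, day_slot)

key = make_key(1, 20080510)
values = [None] * 31                       # the record: one slot per day
values[key[2]] = 22.0                      # May 10th fills the 10th slot
```

One record per month keeps a month of history to a single read, at the cost of read-modify-write on each daily update.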
Expected volume:
6,000,000 records/per day
Usage pattern:
1) Access pattern is random (random ids). May be per id, I have to
retrieve multiple records depending on how much history I need to
retrieve
2) Updates happen simultaneously
3) Wrt ACID properties, only durability is important
(data overwrites are very rare)
I built a few prototypes using BDB-JE and BDB versions. As per my estimates,
with the data I have currently, my database size will be 300GB and the growth
rate will be 4GB per month. This is huge database and access pattern is random.
In order to scale, I plan to distribute the data to multiple nodes (the database on
each node will have certain range of ids) and process each request in parallel.
However, I have to live with only 1GB RAM for every 20GB BDB-JE database.
I have a few questions:
1) Since the data cannot fit in memory, and I am looking for ~5ms response time,
is BDB/BDB-JE right solution?
2) I read about the architectural differences between BDB-JE and BDB
(Log based Vs Page based). Which is better fit for this kind of app?
3) Besides distributing the data to multiple nodes and do parallel processing,
is there anything I can do to improve throughput & scalability?
4) When do you plan to release Replication API for BDB-JE?
Thanks in advance,
Sashi
Sashi,
Thanks for taking the time to sketch out your application. It's still
hard to provide concise answers to your questions though, because so much is
specific to each application, and there can be so many factors.
1) Since the data cannot fit in memory, and I am looking for ~5ms
response time, is BDB/BDB-JE right solution?
2) I read about the architectural differences between BDB-JE and BDB
(Log based Vs Page based). Which is a better fit for this kind of app?
There are certainly applications based on BDB-JE and BDB that have
very stringent response times requirements. The BDB products try to
have lower overhead and are often good matches for applications that
need good response time. But in the end, you have do some experimentation
and some estimation to translate your platform capabilities and
application access pattern into a guess of what you might end up seeing.
For example, it sounds like a typical request might require multiple
reads and then a write operation. It sounds like you expect all these
accesses to incur I/O. As a rule of thumb,
you can think of a typical disk seek as being on the order of 10 ms, so to
have a response time of around 5ms, your data accesses need to be mainly
cached.
That doesn't mean your whole data set has to fit in memory, it means
your working set has to mostly fit. In the end, most application access
isn't purely random either, and there is some kind of working set.
BDB-C has better key-based locality of data on disk, and stores data
more compactly on disk and in memory. Whether that helps your
application depends on how much locality of reference you have in the
app -- perhaps the multiple database operations you're making per
request are clustered by key. BDB-JE usually has better concurrency
and better write performance. How much that impacts your application
is a function of what degree of data collision you see.
For both products, some general principles, such as reducing the size
of your key as much as possible will help. For BDB-JE, you should also
experiment with options such as setting je.evictor.lruOnly to false,
which may give better performance. Also for JE, tuning garbage collection
to use a concurrent low-pause collector can provide smoother response times.
But that's all secondary to what you could do in the application, which
is to make the cache as efficient as possible by reducing the size of the
record and clustering accesses as much as possible.
4) When do you plan to release Replication API for BDB-JE?
Sorry, Oracle is very firm about not announcing release estimates.
Linda -
Hi, I've got the unenviable task of rewriting the data storage back end for a very complex legacy system which analyses time series data for a range of different data sets. What I want to do is bring this data kicking and screaming into the 21st century by putting it into a database. While I have worked with databases for many years, I've never really had to put large amounts of data into one, and certainly never had to make sure I can get large chunks of that data back very quickly.
The data is shaped like this: multiple data sets (about 10 normally) each with up to 100k rows with each row containing up to 300 data points (grand total of about 300,000,000 data points). In each data set all rows contain the same number of points but not all data sets will contain the same number of points as each other. I will typically need to access a whole data set at a time but I need to be able to address individual points (or at least rows) as well.
My current thinking is that storing each data point separately, while great from a access point of view, probably isn't practical from a speed point of view. Combined with the fact that most operations are performed on a whole row at a time I think row based storage is probably the best option.
Of the row based storage solutions I think I have two options: multiple columns and array based. I'm favouring a single column holding an array of data points as it fits well with the requirement that different data sets can have different numbers of points. If I have separate columns I'm probably into multiple tables for the data and dynamic table / column creation.
To make sure this solution is fast I was thinking of using hibernate with caching turned on. Alternatively I've used JBoss Cache with great results in the past.
Does this sound like a solution that will fly? Have I missed anything obvious? I'm hoping someone might help me check over my thinking before I commit serious amounts of time to this...
Hi,
Time Series Key Figure:
Basically, a time series key figure is used in Demand Planning only. Whenever you create a key figure and add it to a DP planning area, it is automatically converted into a time series key figure. Whenever you activate the planning area, you activate each key figure of the planning area with the time series planning version.
There is one more type of key figure, i.e. an order series key figure, which is mainly used in an SNP planning area.
Storage Bucket profile:
SBP is used to create space in liveCache for a periodicity, e.g. from 2003 to 2010. Whenever you create an SBP, it occupies space in liveCache for the respective periodicity, which the planning area can use to store its data. So the storage bucket profile is used for storing the data of the planning area.
Time/Planning bucket profile:
Basically, TBP is used to define the periodicity of the data view. If you want to see the data view in yearly, monthly, weekly and daily buckets, you have to define that in the TBP.
Hope this will help you.
Regards
Sujay -
Time-series / temporal database - design advice for DWH/OLAP???
I am faced with the task of designing a DWH as effectively as it can be, for time series data analysis. Are there special design advices or best practices available, or can the ordinary DWH/OLAP design concepts be used? I ask this because I have seen the term 'time series database' in the academic literature (but without further references), and I have also heard the term 'temporal database' (as far as I have heard, it is not just a matter of logging data changes etc.)
So it would be very nice if someone could give me some hints about this type of design problem.
Hi Frank,
Thanks for that - after 8 years of working with Oracle Forms and afterwards the same again with ADF, I still find it hard sometimes when using ADF to understand the best approach to a particular problem - there are so many different ways of doing things / where to put the code / how to call it etc.! Things seemed so much simpler back in the Forms days!
Chandra - thanks for the information but this doesn't suit my requirements - I originally went down that path thinking/expecting it to be the holy grail but ran into all sorts of problems as it means that the dates are always being converted into users timezone regardless of whether or not they are creating the transaction or viewing an earlier one. I need the correct "date" to be stored in the database when a user creates/updates a record (for example in California) and this needs to be preserved for other users in different timezones. For example, when a management user in London views that record, the date has got to remain the date that the user entered, and not what the date was in London at the time (eg user entered 14th Feb (23:00) - when London user views it, it must still say 14th Feb even though it was the 15th in London at the time). Global settings like you are using in the adf-config file made this difficult. This is why I went back to stripping all timezone settings back out of the ADF application and relied on database session timezones instead - and when displaying a default date to the user, use the timestamp from the database to ensure the users "date" is displayed.
Cheers,
Brent -
Time series and Order series questions
Hi guys - need some help in understanding/visualizing some basic APO concepts. I do not want to move further without understanding these concepts completely. I did read SAP help and a couple of APO books, but none gave me a complete understanding of this very basic concept.
1. Data is stored in liveCache in 3 different ways: time series, order series and ATP time series. For now I am concentrating on just time series and order series. Can someone help me understand, with an example, how data is stored in time series and how it is stored in order series? I read that data which is not order related is called time series data and data which is order related is called order series data.
My query is: even in DP time series data, data is stored with respect to product and location, and that is transferred to SNP. In SNP too, data is processed with respect to product and location. So what is the difference between time series data and order series data?
2. What are time series key figures and what are order series key figures? I read that safety stock, for example, is a time series key figure. Why is it not an order series key figure? What makes a key figure time series or order series? Can someone explain this in detail with an example or numbers?
3. There is a stock category group in the SNP tab of location master (LOC3). Stock category should be product related, right? How is this related to location, and what does this field mean in the location master?
Thanks a lot for your help in advance. Please let me know if I am not clear in any of the questions.
Hi,
Time series: Data is stored in buckets with no reference to orders. (If you place the mouse on time series data and right-click for display details, you will not find any order information.)
Suitable for tactical planning and aggregated planning. Usually in Demand Planning.
Prerequisites: 1. You need to create time series objects for the planning area.
2. When creating the planning area, you should not make any entries for the key figure in the fields InfoCube, category and category group.
3. When creating the planning area, any entry you make in the field Key figure semantics is prefixed with TS. (Optional entry)
Order series: Data is stored in buckets with reference to orders. (If you place the cursor on order series data and right-click for display details, you will find the order details.)
Useful for operative planning.
You will have real-time integration with R/3.
Prerequisites: 1. You need to create time series objects for the planning area (though you are creating order series).
2. When creating a planning area, specify a category or category group, or enter a key figure semantics with prefix LC.
3. When creating the planning area, you should not make an entry for the key figure in the field InfoCube.
Thanks,
nandha -
Hi,
In the SNC system, the supplier is getting the following error when trying to update the planned receipt quantity.
Due to that error, the ASN can't be created and sent to the ECC system.
Time series error in class /SCF/CL_ICHDM_DATAAXS method /SCF/IF_ICHDMAXS_2_CNTL~SAVE
Please give your inputs as to how to resolve this error.
Regards,
Shivali
Hi Shivali,
This is not related to a time series data issue.
The ASN (ASN number 122593) XML may have failed because no purchase order (reference order) exists for supplier 0000104466 as referenced in the failed XML (see the DespatchedDeliveryNotification_In XML and check the <PurchaseOrderReference> tag value under the <Item> tag).
Log in as supplier 0000104466 and search for the purchase order (or replenishment order); this PO (or RO) won't be there for supplier 0000104466.
That's why the ASN failed.
Regards,
Nikhil