Drawback of Time series related to cache
Hi
Can anyone tell me the drawback of time series related to cache (10.1.3.4.1)?
Thanks and Regards
Ananth
<p>
OK, first I created a table:
</p>
<p>
CREATE TABLE TEMP1234( <br>
location VARCHAR2(6),<br>
date_start DATE<br>
);<br>
</p>
<p>
ALTER SESSION SET NLS_DATE_FORMAT='YYYY/MM/DD';
</p>
<p>
INSERT into TEMP1234 values ('l1', '2006/10/01');<br>
INSERT into TEMP1234 values ('l1', '2006/10/20');<br>
INSERT into TEMP1234 values ('l1', '2006/11/01');<br>
INSERT into TEMP1234 values ('l2', '2006/11/03');<br>
INSERT into TEMP1234 values ('l2', '2006/11/19');<br>
INSERT into TEMP1234 values ('l1', '2006/11/28');<br>
INSERT into TEMP1234 values ('l1', '2006/12/10');<br>
<br>
COMMIT;
</p>
<p>
Once that is done, issue the following SQL.
</p>
<p>
SELECT location, date_start, (MAX(date_start) OVER (ORDER BY date_start ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)) AS date_end<br>
FROM TEMP1234<br>
ORDER BY date_start<br>
</p>
<p>
Result will be as follows:<br>
</p>
<p>
LOCATI DATE_START DATE_END<br>
------ ---------- ----------<br>
l1 2006/10/01 2006/10/20<br>
l1 2006/10/20 2006/11/01<br>
l1 2006/11/01 2006/11/03<br>
l2 2006/11/03 2006/11/19<br>
l2 2006/11/19 2006/11/28<br>
l1 2006/11/28 2006/12/10<br>
l1 2006/12/10 2006/12/10<br>
</p><br>
<p>
7 rows selected.<br>
</p>
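The same next-date logic can be sketched outside the database in plain Python (illustrative only; the data is the sample set above):

```python
# Mimic MAX(date_start) OVER (ORDER BY date_start ROWS BETWEEN CURRENT ROW
# AND 1 FOLLOWING): for each row, date_end is the later of the current and
# next date_start; the last row pairs with itself.
rows = [
    ("l1", "2006/10/01"), ("l1", "2006/10/20"), ("l1", "2006/11/01"),
    ("l2", "2006/11/03"), ("l2", "2006/11/19"), ("l1", "2006/11/28"),
    ("l1", "2006/12/10"),
]
rows.sort(key=lambda r: r[1])  # ORDER BY date_start (YYYY/MM/DD sorts correctly)

result = []
for i, (loc, start) in enumerate(rows):
    # window = current row plus 1 following (if it exists)
    window = [start] + ([rows[i + 1][1]] if i + 1 < len(rows) else [])
    result.append((loc, start, max(window)))

for loc, start, end in result:
    print(loc, start, end)
```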
<p>
I hope this will solve your problem.
</p>
<p>
Rajs
</p>
Similar Messages
-
hi all,
I have a physical table (as a fact) with one date column. Can I use the date column to create a times view in the physical layer, so that this view can be used as a time dimension?
thanks
Hi,
Yes, I did the same for the OE (Order Entry) schema, which is in the Oracle 10g demo schemas.
Wait for the experts' answers on whether there is an alternative way to do this,
or else alias the fact table as a time dimension and extract year, month and date columns using functions in the BMM layer.
Regards
Naresh -
Hello
Got some queries please. I tried to search the forum but not able to find the solution.
1) What is the concept of time series in liveCache? I have read the documentation and learned that there are 3 types - time series, order cache and ATP order cache.
2) What happens when we select the planning area and use the transaction 'Create time series'?
Thank you.
Regards
KK
Hi
In layman's terms, in APO we have different dimensions. We can have product and location, and we need to attach the time dimension, which shows the data in liveCache for a particular time.
Order series key figures are mainly used for APO SNP, in which all the data is stored based on order categories; for example, a forecast key figure will be stored under the FA and FC order series categories.
When you select the planning area and create time series, it attaches the respective time horizon to the planning area.
Thanks
Amol -
Warning, CR Newbie here so this may be a stupid question. I am evaluating the trial version of CR to see if it will be a good fit for an upcoming project. I've seen some related posts in the SCN, but no answers that quite fit.
I'm looking to create a line chart (or a scatter chart) with time-series data. My dataset includes a time stamp field (yyyy-MM-dd hh:mm:ss) and some floating-point temperature values like this:
2014-05-01 08:00:00, 123.4, 115.1, 109.2
2014-05-01 08:00:10, 123.6, 116.0, 109.8
The desired outcome has the date / time along the X-axis with data points spaced proportionally in the X dimension and plotted in the Y-dimension according to the temperature. The interval between the time stamps is not always the same, so numerical scaling is required on both axes. The desired chart would show a temperature scale along the vertical axis, three trend lines for the three series of temperature data and times shown on the X axis label.
I've played with several options in an attempt to make this work. On the data tab, it would seem I would want to select "on change of" and then my time-stamp field. However, with this selection, I can only use summary values and end up with a chart with a single data point for each series. I don't need or want any summary calculations carried out on the data, I just want to plot it so I can look at a trend over time. I can get trend lines if I select "for each record" on the data tab of the wizard, but then my X-axis is meaningless and the horizontal scaling is misleading unless the interval between my samples is constant.
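What "spaced proportionally" means can be sketched independently of Crystal Reports (Python, illustrative only; the third sample row is made up to show an uneven interval):

```python
from datetime import datetime

# Sample rows as in the post: timestamp plus three temperature series.
rows = [
    ("2014-05-01 08:00:00", 123.4, 115.1, 109.2),
    ("2014-05-01 08:00:10", 123.6, 116.0, 109.8),
    ("2014-05-01 08:00:25", 123.9, 116.4, 110.1),  # made-up, uneven interval
]

t0 = datetime.strptime(rows[0][0], "%Y-%m-%d %H:%M:%S")
# x positions proportional to elapsed seconds - what a numeric/date axis
# gives you, as opposed to "one equal slot per record"
x = [(datetime.strptime(r[0], "%Y-%m-%d %H:%M:%S") - t0).total_seconds()
     for r in rows]
# transpose the value columns into three y-series, one per sensor
series = list(zip(*[r[1:] for r in rows]))

print(x)  # [0.0, 10.0, 25.0] -> unequal spacing preserved
```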
I would welcome any suggestions on how best to accomplish this with Crystal Reports.
Thanks for reading.
Jamie,
Thanks for continuing to reply. I am getting close, but still no success.
Here is the procedure I've followed and problem:
Put chart in RF section
Start Chart Expert
Chart Type = Numeric Axes, subtype = Date axis line chart
Data tab
On change of datetime field
Order... ascending, printed for each second
Values avg of my data fields (must select summary when on change of is used)
Right-click on X-axis label, select Group (X) Axis Settings
Scales tab: base unit, major unit and minor unit can only be set to days, months or years
I cannot set the minimum and maximum date with resolution other than day
Right-click Chart, select Chart Options...Axes tab: show group axes set to show time scale
No matter the setting I use, I can't find a way to adjust the resolution of the time scale lower than days.
I tried using a formula to extract only the time portion of my datetime field. I used that as my "on change" data series, hoping maybe CR would automatically recognize I was looking at a fraction of a day if I did that. No good - now it gives me a date scale with the dates showing up as the beginning of the epoch, but I can still only get resolution of integer days.
Thanks for your patience and persistence.
- Max -
Very large time series database
Hi,
I am planning to use BDB-JE to store time series data.
I plan to store 1 month worth of data in a single record.
My key consists of the following parts: id,year_and_month,day_in_month
My data is an array of 31 doubles (One slot per day)
For example, a data record for May 10, 2008 will be stored as follows
Data Record: item_1, 20080510, 22
Key will be: 1, 200805, 9
data will be: double[31] and the 10th slot will be populated with 22
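A sketch of this month-bucket layout in Python (the packing format and helper names are assumptions for illustration, not BDB-JE APIs):

```python
import struct

# One record per (id, year_month); value = 31 doubles, one slot per day.
def make_key(item_id: int, year: int, month: int) -> bytes:
    # big-endian: 4-byte id + 4-byte yyyymm (encoding choice is assumed)
    return struct.pack(">II", item_id, year * 100 + month)

def put_day(value: bytes, day: int, reading: float) -> bytes:
    slots = list(struct.unpack(">31d", value))
    slots[day - 1] = reading          # day 10 -> slot index 9
    return struct.pack(">31d", *slots)

empty = struct.pack(">31d", *([0.0] * 31))
key = make_key(1, 2008, 5)            # item_1, May 2008
val = put_day(empty, 10, 22.0)        # reading for May 10

print(struct.unpack(">31d", val)[9])  # 22.0
```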
Expected volume:
6,000,000 records/per day
Usage pattern:
1) Access pattern is random (random ids). For each id, I may have to
retrieve multiple records depending on how much history I need to
retrieve.
2) Updates happen simultaneously
3) Wrt ACID properties, only durability is important
(data overwrites are very rare)
I built a few prototypes using BDB-JE and BDB versions. As per my estimates,
with the data I have currently, my database size will be 300GB and the growth
rate will be 4GB per month. This is a huge database and the access pattern is random.
In order to scale, I plan to distribute the data to multiple nodes (the database on
each node will have certain range of ids) and process each request in parallel.
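The id-range routing can be sketched as follows (node count and range size are made-up illustration values):

```python
# Route each id to one of N nodes so all records for an id live in one
# database and requests can be processed in parallel across nodes.
NODES = 4
RANGE_SIZE = 1_500_000            # ids per node, sized for ~6M ids

def node_for(item_id: int) -> int:
    # ids 0..1499999 -> node 0, 1500000..2999999 -> node 1, ...
    return min(item_id // RANGE_SIZE, NODES - 1)

print(node_for(42))         # 0
print(node_for(5_999_999))  # 3
```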
However, I have to live with only 1GB RAM for every 20GB BDB-JE database.
I have a few questions:
1) Since the data cannot fit in memory, and I am looking for ~5ms response time,
is BDB/BDB-JE right solution?
2) I read about the architectural differences between BDB-JE and BDB
(Log based Vs Page based). Which is better fit for this kind of app?
3) Besides distributing the data to multiple nodes and doing parallel processing,
is there anything I can do to improve throughput & scalability?
4) When do you plan to release Replication API for BDB-JE?
Thanks in advance,
Sashi
Sashi,
Thanks for taking the time to sketch out your application. It's still
hard to provide concise answers to your questions though, because so much is
specific to each application, and there can be so many factors.
1) Since the data cannot fit in memory, and I am looking for ~5ms
response time, is BDB/BDB-JE right solution?
2) I read about the architectural differences between BDB-JE and BDB
(Log based vs Page based). Which is a better fit for this kind of app?
There are certainly applications based on BDB-JE and BDB that have
very stringent response times requirements. The BDB products try to
have lower overhead and are often good matches for applications that
need good response time. But in the end, you have to do some experimentation
and some estimation to translate your platform capabilities and
application access pattern into a guess of what you might end up seeing.
For example, it sounds like a typical request might require multiple
reads and then a write operation. It sounds like you expect all these
accesses to incur I/O. As a rule of thumb,
you can think of a typical disk seek as being on the order of 10 ms, so to
have a response time of around 5ms, your data accesses need to be mainly
cached.
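That rule of thumb can be turned into a quick back-of-envelope calculation (a sketch; the 10 ms seek and 0.1 ms cache-hit costs are rough assumptions):

```python
# avg latency = hit_ratio * hit_cost + (1 - hit_ratio) * seek_cost
SEEK_MS = 10.0     # rough disk seek, per the rule of thumb above
HIT_MS = 0.1       # rough in-cache access cost (assumed)

def avg_latency_ms(hit_ratio: float, ops_per_request: int = 1) -> float:
    per_op = hit_ratio * HIT_MS + (1 - hit_ratio) * SEEK_MS
    return per_op * ops_per_request

# Even for a single access, averaging ~5 ms needs roughly half the
# accesses to come from cache:
print(round(avg_latency_ms(0.5), 2))  # 5.05
# With several operations per request (reads + a write), the required
# hit ratio is much higher:
print(round(avg_latency_ms(0.95, ops_per_request=3), 2))
```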
That doesn't mean your whole data set has to fit in memory, it means
your working set has to mostly fit. In the end, most application access
isn't purely random either, and there is some kind of working set.
BDB-C has better key-based locality of data on disk, and stores data
more compactly on disk and in memory. Whether that helps your
application depends on how much locality of reference you have in the
app -- perhaps the multiple database operations you're making per
request are clustered by key. BDB-JE usually has better concurrency
and better write performance. How much that impacts your application
is a function of what degree of data collision you see.
For both products, some general principles, such as reducing the size
of your key as much as possible, will help. For BDB-JE, you should also
consider tuning options: experimenting with setting je.evictor.lruOnly to
false may give better performance. Also for JE, tuning garbage collection
to use a concurrent low-pause collector can provide smoother response times.
But that's all secondary to what you could do in the application, which
is to make the cache as efficient as possible by reducing the size of the
record and clustering accesses as much as possible.
4) When do you plan to release Replication API for
BDB-JE?
Sorry, Oracle is very firm about not announcing release estimates.
Linda -
Hi, I've got the unenviable task of rewriting the data storage back end for a very complex legacy system which analyses time series data for a range of different data sets. What I want to do is bring this data kicking and screaming into the 21st century by putting it into a database. While I have worked with databases for many years, I've never really had to put large amounts of data into one, and certainly never had to make sure I can get large chunks of that data very quickly.
The data is shaped like this: multiple data sets (about 10 normally) each with up to 100k rows with each row containing up to 300 data points (grand total of about 300,000,000 data points). In each data set all rows contain the same number of points but not all data sets will contain the same number of points as each other. I will typically need to access a whole data set at a time but I need to be able to address individual points (or at least rows) as well.
My current thinking is that storing each data point separately, while great from an access point of view, probably isn't practical from a speed point of view. Combined with the fact that most operations are performed on a whole row at a time, I think row-based storage is probably the best option.
Of the row based storage solutions I think I have two options: multiple columns and array based. I'm favouring a single column holding an array of data points as it fits well with the requirement that different data sets can have different numbers of points. If I have separate columns I'm probably into multiple tables for the data and dynamic table / column creation.
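The "array in a single column" option can be sketched in Python (illustrative only; the packing format is an assumption):

```python
import struct

# Pack a row of doubles into one BLOB so rows of different widths can
# share a single table and column.
def pack_row(points):
    return struct.pack(f">{len(points)}d", *points)

def unpack_row(blob):
    n = len(blob) // 8                      # 8 bytes per double
    return list(struct.unpack(f">{n}d", blob))

row = [float(i) for i in range(300)]        # one 300-point row
blob = pack_row(row)
print(len(blob))  # 2400 bytes per row
```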
To make sure this solution is fast I was thinking of using hibernate with caching turned on. Alternatively I've used JBoss Cache with great results in the past.
Does this sound like a solution that will fly? Have I missed anything obvious? I'm hoping someone might help me check over my thinking before I commit serious amounts of time to this...
Hi,
Time Series Key Figure:
Basically, a time series key figure is used in Demand Planning only. Whenever you create a key figure and add it to a DP planning area, it is automatically converted into a time series key figure. Whenever you activate the planning area, you activate each key figure of the planning area with the time series planning version.
There is one more type of key figure, i.e. the order series key figure, which is mainly used in an SNP planning area.
Storage Bucket profile:
The SBP is used to create space in liveCache for a periodicity, e.g. from 2003 to 2010. Whenever you create an SBP, it occupies space in liveCache for the respective periodicity, which the planning area uses to store its data. So the storage bucket profile is used for storing the data of the planning area.
Time/Planning bucket profile:
Basically, the TBP is used to define the periodicity of the data view. If you want to see the data view in yearly, monthly, weekly and daily buckets, you have to define that in the TBP.
Hope this will help you.
Regards
Sujay -
How can you build time series measures in OBIEE without using TODATE and AGO functions?
How can you build time series measures in OBIEE without using TODATE and AGO function?
Please provide steps to build time series measures in OBIEE without using the TODATE and AGO functions. Dashboard results are not stored in the cache when using TODATE and AGO; even though the results are cached, users' queries do not hit the cache, because the queries don't match the exact date and time when TODATE and AGO are used. So I want to build queries using SYSDATE and some simple calculations. Please send your inputs/ideas for my questions.
Thanks in advance
This can be done using the MSUM function in Answers. Use the following formula; here Dollars is my metric. Change the formula based on your metric.
Msum("Sales Measures".Dollars ,2) - "Sales Measures".Dollars
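Why this works: a moving sum over a 2-row window is "current + previous", so subtracting the current value leaves the prior period's value. A Python sketch of the same arithmetic (illustrative values):

```python
# MSUM(x, 2) - x == previous period's value (0 for the first row)
def msum(values, n):
    # moving sum over the current row and the n-1 preceding rows
    return [sum(values[max(0, i - n + 1): i + 1]) for i in range(len(values))]

dollars = [100, 120, 90, 150]
prior = [m - v for m, v in zip(msum(dollars, 2), dollars)]
print(prior)  # [0, 100, 120, 90]
```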
The report will be cached and will perform better compared with the time series functions. Check it.
- Madan Thota -
Comparing AGO vs TODATE - few time-series questions
Hi All,
I just thought that someone might actually shed some light on the following situation.
I'm using AGO function for reporting CY, PY, PY-1 - etc.
So, AGO is essentially showing the value (or aggregated value) of the metrics at the same time period - and I usually use year, not months in the function.
Now, my understanding is that when I have CYTD and PYTD, I must use the TODATE function. Please correct me if I'm wrong. For example, if I need to show % variance over different time periods (previous year to current year), the current year data MUST be compared to the same period in the previous year: if we are in January of the current year, the comparison must be between Oct-Jan of the current year and Oct-Jan of the previous year, not the full previous year.
Here, I should probably use the month level in this situation, correct?
Does TODATE get the current system date and time, or do I need to create a dynamic variable for the current month?
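The "same period, prior year" comparison can be sketched as follows (Python, illustrative; the October fiscal start is taken from the Oct-Jan example above, and leap-day handling is ignored):

```python
from datetime import date

# CYTD window and the matching PYTD window for a fiscal year starting
# in October.
FISCAL_START_MONTH = 10

def ytd_windows(today: date):
    fy_year = today.year if today.month >= FISCAL_START_MONTH else today.year - 1
    cytd = (date(fy_year, FISCAL_START_MONTH, 1), today)
    pytd = (date(fy_year - 1, FISCAL_START_MONTH, 1),
            date(today.year - 1, today.month, today.day))
    return cytd, pytd

cytd, pytd = ytd_windows(date(2008, 1, 15))
print(cytd)  # Oct 1 2007 .. Jan 15 2008
print(pytd)  # Oct 1 2006 .. Jan 15 2007
```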
UPD:
http://oraclebizint.wordpress.com/2007/11/05/oracle-bi-ee-101332-understanding-todate-and-ago-achieving-ytd-qtd-and-mtd/
this is very helpful actually
Message was edited by:
wildmight
This can be done using the MSUM function in Answers. Use the following formula; here Dollars is my metric. Change the formula based on your metric.
Msum("Sales Measures".Dollars ,2) - "Sales Measures".Dollars
The report will be cached and will perform better compared with the time series functions. Check it.
- Madan Thota -
Greetings All,
I created two time series measures in a fact table using AGO and TODATE - e.g., last month sales and year-to-date sales.
In Answer when I select one of these two fields the data returned are correct. However, when I select both fields in a report I am getting error: column does not exist in this table.
Is selecting two or more time series measures not allowed?
Here is the entire error msg:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] Odbc driver returned an error (SQLExecDirectW).
Error Details
Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 59014] The requested column does not exist in this table. (HY000)
SQL Issued: SELECT Calendar."Calendar Month Name" saw_0, "Sales Facts"."Amount Sold" saw_1, "Sales Facts"."Last Month Sales" saw_2, "Sales Facts"."Change From Last Month Sales" saw_3, "Sales Facts"."Month To Date Sales" saw_4 FROM SH WHERE Calendar."Calendar Year" = 2001 ORDER BY saw_0
Thanks for your help.
Thanks for your response.
Yes, I have Calendar Month Name in Month level and it is indeed not unique. How do I remove it?
OBIEE version 10.1.3.3.1
I am using the tables from SH schema for testing.
The chronological key is Times Id which to the best of my knowledge is correct.
I tried the following:
Highlight Calendar Month Level > right click > Display Related > Logical Key > Edit > unchecked Use for drilldown.
Moved Calendar Month Name under Times Detail
After This change the Times dim levels are as follows:
Time Total
Year
Calendar Year
Calendar Year ID
Quarter
CalendarQuarter Desc
Calendar Quarter Id
Month
Calendar Month Desc
Calendar Month Id
Times Detail
Time Id
Calendar Month Name
After this change,
(1) I can select Calendar Month Desc, Last Month Sales and Month to Date sales and the results are correct.
(2) However, when I add Amount Sold to the query in (1), I am getting error with following msg:
Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 16001] ODBC error state: S0002 code: 942 message: [Oracle][ODBC][Ora]ORA-00942: table or view does not exist. [nQSError: 16001] ODBC error state: S0002 code: 942 message: [Oracle][ODBC][Ora]ORA-00942: table or view does not exist. [nQSError: 16015] SQL statement execution failed. (HY000)
SQL Issued: SELECT Calendar."Calendar Month Desc" saw_0, "Sales Facts"."Amount Sold" saw_1, "Sales Facts"."Last Month Sales" saw_2, "Sales Facts"."Month To Date Sales" saw_3 FROM SH WHERE Calendar."Calendar Year" = 2001 ORDER BY saw_0
(3) In (1), replacing Calendar Month Desc with Calendar Month Name gives incorrect numbers for Month To Date Sales.
Any suggestion?
Thanks.
Message was edited by:
rxshah -
Time series and Order series questions
Hi guys - I need some help understanding/visualizing some basic APO concepts. I do not want to move further without understanding these concepts completely. I did read SAP Help and a couple of APO books, but none gave me a complete understanding of this very basic concept.
1. Data is stored in liveCache in 3 different ways: time series, order series and ATP time series. For now I am concentrating on just time series and order series. Can someone help me understand, with an example, how data is stored in time series and how it is stored in order series? I read that data which is not order related is called time series data, and data which is order related is called order series data.
My query is: even for DP time series data, data is stored with respect to product and location and is transferred to SNP. In SNP too, data is processed with respect to product and location. So what is the difference between time series data and order series data?
2. What are time series key figures and what are order series key figures? I read that safety stock, for example, is a time series key figure. Why is it not an order series key figure? What makes a key figure time series or order series? Can someone explain this in detail with an example or numbers?
3. There is a stock category group in the SNP tab of the location master (LOC3). Stock category should be product related, right? How is this related to location, and what does this field mean in the location master?
Thanks a lot for your help in advance. Please let me know if I am not clear in any of the questions.
Hi,
Time series: Data is stored in buckets with no reference to orders. (If you place the mouse on time series data and right-click for
display details, you will not find any order information.)
Suitable for tactical planning and aggregated planning. Usually used in Demand Planning.
Prerequisites: 1. You need to create time series objects for the planning area.
2. When creating the planning area, you should not make any entries for the key figure in the fields InfoCube, category
and category group.
3. When creating the planning area, any entry you make in the field Key figure semantics is prefixed with TS.
(Optional entry)
Order series: Data is stored in buckets with reference to orders. (If you place the cursor on order series data and right-click
the mouse for display details, you will find information on the order details.)
Useful for operative planning.
You will have real-time integration with R/3.
Prerequisites: 1. You need to create time series objects for the planning area (even though you are creating order series).
2. When creating the planning area, specify a category or category group, or enter a key figure semantics value with the prefix
LC.
3. When creating the planning area, you should not make an entry for the key figure in the field InfoCube.
Thanks,
nandha -
Hi,
In SNC system, the supplier is getting the following error when he is trying to update the planned receipt quantity.
Due to that error, the ASN can't be created and sent to the ECC system.
Time series error in class /SCF/CL_ICHDM_DATAAXS method /SCF/IF_ICHDMAXS_2_CNTL~SAVE
Please give your inputs as to how to resolve this error.
Regards,
Shivali
Hi Shivali,
This is not related to a time series data issue.
The ASN XML (ASN number 122593) may have failed because no purchase order (reference order) exists for supplier 0000104466; the reference will be in the failed XML (see the DespatchedDeliveryNotification_In XML and check the <PurchaseOrderReference> tag value under the <Item> tag).
Log in as supplier 0000104466 and search for the purchase order (or replenishment order); this PO (or RO) won't be there for supplier 0000104466.
That's why the ASN failed.
Regards,
Nikhil -
Hello,
I want to check the data in my time series liveCache for my planning area. What is the transaction to check it?
Thank you
Steve
Hello Steve,
Here are my answers:
For Q1: No, I don't think it's because you are in the 10th month of the year. The package size (i.e. the number of rows in each package) and the number of packets depend on a few factors: a) how much data is in your planning area b) on whether you implemented BADI /SAPAPO/SDP_EXTRACT c) the parameters that you placed in the "data records/calls" and "display extr. calls" fields.
For Q2: It is included because key figures with units/currencies (e.g. amounts and currencies) do need UOM/BUOM/Currency information and that's why it is also part of the output. You can check what unit characteristic a certain KF uses in transaction RSD1.
For Q3: Yes, you can but you need to do more than what I mentioned before. Here are some ways to do that:
A) Generate an export datasource. If you are in SCM < 5.0, connect that to an InfoSource and then to a cube. If you are in SCM 5.0, connect that to an InfoCube using a transformation rule. You can then load data from the planning area to the InfoCube. After that, you can then use transaction /SAPAPO/RTSCUBE to load data from the cube to the PA.
B) You can opt to create a custom ABAP that reads data from the DataSource, performs some processing and then write the data to target planning area using function module /SAPAPO/TS_DM_SET or the planning book BAPI.
Hope this helps. -
SAP HANA One and Predictive Analysis Desktop - Time Series Algorithms
I have been working on a Proof-of-Concept project linking the SAP Predictive Analysis Desktop application to the SAP HANA One environment.
I have modeled that data using SAP HANA Studio -- created Analytic views, Hierarchies, etc. -- following the HANA Academy videos. This has worked very well in order to perform the historical analysis and reporting through the Desktop Application.
However, I cannot get the Predictive Analysis algorithms -- specifically the Time Series algorithms -- to work appropriately using the Desktop tool. It always errors out and points to the IndexTrace for more information, but it is difficult to pinpoint the exact cause of the issue. The HANA Academy only has videos on Time Series Algorithms using SQL statements which will not work for my user community since they will have to constantly tweak the data and algorithm configuration.
In my experience so far with Predictive Analysis desktop and the Predictive Algorithms, there is a drastic difference between working with Local .CSV / Excel files and connecting to a HANA instance. The configuration options for using the Time Series Algorithms are different depending upon the data source, which seems to be causing the issue. For instance, when working with a local file, the Triple Exponential Smoothing configuration allows for the specification of which Date field to use for the calculation. Once the data source is switched to HANA, it no longer allows for the Date field to be specified. Using the exact same data set, the Algorithm using the local file works but the HANA one fails.
From my research thus far, everyone seems to be using PA with local files or running the predictive algorithms directly in HANA using SQL. I cannot find much of anything useful related to combining PA Desktop with HANA.
Does anyone have any experience utilizing the Time Series Algorithms in PA Desktop with a HANA instance? Is there any documentation of how to structure the data in HANA so that it can be properly utilized in PA desktop?
HANA Info:
HANA One Version: Rev 52.1
HANA Version: 1.00.66.382664
Predictive Analysis Desktop Info:
Version: 1.0.11
Build: 708
Thanks in advance --
Brian
Hi,
If you use a CSV or XLS data source, you will be using a Native Algorithm or R
Algorithm in SAP Predictive Analysis.
When you connect to HANA, SAP Predictive Analysis uses a PAL Algorithm, which runs
on the HANA server.
Coming to your question regarding the difference:
In the SAP PA Native Algorithm, we can provide the Date variable, and the algorithm
picks the seasonal information from the Date column. Neither R nor SAP HANA PAL
supports a Date column; we need to configure the seasonal information in the
algorithm properties.
R Properties
1) Period: you need to mention the periodicity of the data.
Monthly: (12)
Quarterly: (4)
Custom: you can use it for weekly, daily or hourly data.
2) Start Year: you need to mention the start year.
The start year is not used by the algorithm for calculating the time series, but it helps
PA generate the visualization (time series chart) by simulating year and
periodicity information.
3) Starting Period:
If your data is quarterly and you have data recordings from Q2, mention 2 as the
start period.
Example:
If the data periodicity is monthly and the data starts from Feb 1979, we need to provide the following information:
Period: 12
Start year: 1979
Start period: 2
PAL properties: the same as the properties defined for R.
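These settings amount to mapping each row index to a (year, period) label; a quick sketch using the Feb 1979 example above (illustrative, not PA/PAL code):

```python
# Map a row index to (year, period) given the R/PAL-style settings:
# period = 12 (monthly), start year = 1979, start period = 2 (Feb).
def bucket(i, period=12, start_year=1979, start_period=2):
    offset = start_period - 1 + i
    return start_year + offset // period, offset % period + 1

print(bucket(0))   # (1979, 2)  -> Feb 1979, the first data row
print(bucket(11))  # (1980, 1)  -> Jan 1980
```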
Thanks
Ashok
[email protected] -
Time Series Objects for a Planning Area
Hi all,
Can anyone let me know why we create time series objects for a planning area?
What is their role and significance?
Regards,.
Vishal.S.Pandya
Time series is usually a Demand Planning concept (it is used in SNP as well, but in SNP it is predominantly order series that plays the main role).
Time series is a general concept in statistics (and forecasting) wherein the value of a key figure is represented in a time bucket.
A time series gives you an idea of the gradual change in values over time and relates the future to the past.
The planning area in APO (and other tools) tries to represent this as a 2-dimensional model, with time on the columns and key figures on the rows. The values that you load into the cells formed by these two are based on the characteristic values you choose from your MPOS, which is linked to the way the values are stored in the planning area.
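The two-dimensional model described here can be sketched as follows (key figure, bucket and characteristic names are illustrative):

```python
from collections import defaultdict

# One cell per (characteristic combination, key figure, time bucket);
# unset cells default to 0.0, like an empty planning grid.
cells = defaultdict(float)

def set_cell(combo, kf, bucket, value):
    cells[(combo, kf, bucket)] = value

def row(kf, combo, buckets):
    # one planning-book row: a key figure across the time buckets
    return [cells[(combo, kf, b)] for b in buckets]

buckets = ["2008-W01", "2008-W02", "2008-W03"]
set_cell(("PROD1", "LOC1"), "FORECAST", "2008-W02", 150.0)
print(row("FORECAST", ("PROD1", "LOC1"), buckets))  # [0.0, 150.0, 0.0]
```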
The planning area stores data for each key figure in the smallest unit of time (technical storage buckets) and at the lowest level of characteristic value combination. -
Relevance of Time series data in PPDS
Hi, we have an SNP planning area with some time series key figures in it. If we delete the time series for the planning area, what impact would it have on PP/DS?
Or, what is the relevance of time series data in PP/DS? Kindly explain.
The only relation of time series data to PP/DS I know of is time-dependent safety stock planning in PP/DS.
Safety and Target Stock Level Planning in PP/DS
In Customizing for PP/DS, define the following:
· Make SNP key figures available
You define which SNP key figures PP/DS should use for safety stock or days' supply. (Make SNP Key Figures Available)
· Define planning area
You specify an SNP planning area that contains the key figures for the time-dependent safety stock/days' supply levels. (Global Settings → Maintain Global Parameters and Defaults)
If you did not use that planning area and key figure, there should be no influence.
Frank