What else is stored in the database buffer cache?
What else is stored in the database buffer cache besides the data blocks read from datafiles?
That is a good idea.
SQL> desc v$BH;
Name Null? Type
FILE# NUMBER
BLOCK# NUMBER
CLASS# NUMBER
STATUS VARCHAR2(10)
XNC NUMBER
FORCED_READS NUMBER
FORCED_WRITES NUMBER
LOCK_ELEMENT_ADDR RAW(4)
LOCK_ELEMENT_NAME NUMBER
LOCK_ELEMENT_CLASS NUMBER
DIRTY VARCHAR2(1)
TEMP VARCHAR2(1)
PING VARCHAR2(1)
STALE VARCHAR2(1)
DIRECT VARCHAR2(1)
NEW CHAR(1)
OBJD NUMBER
TS# NUMBER
From the V$BH column documentation, the flag columns mean:
TEMP VARCHAR2(1) Y - temporary block
PING VARCHAR2(1) Y - block pinged
STALE VARCHAR2(1) Y - block is stale
DIRECT VARCHAR2(1) Y - direct block
My question is what are temporary block and direct block?
Is it true that some blocks in temp tablespace are stored in the data buffer?
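One way to explore this yourself is to group the cached buffers by class. A hedged sketch (the exact CLASS# numbering varies by Oracle version; CLASS# 1 is ordinary data blocks, higher classes cover segment headers, undo, and other block types):

```sql
-- Rough sketch: what kinds of blocks does the cache hold right now?
SELECT class#, status, COUNT(*) AS buffers
FROM   v$bh
GROUP  BY class#, status
ORDER  BY class#, status;
```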
Similar Messages
-
Wanted to know what actions are taken on the database
Dear Sir
I wanted to find out what actions are performed explicitly on the database. For example, when I apply a patch to the database, or create, drop, insert into, or update an object, etc., does Oracle maintain any log of such actions?
I wanted to find out what actions are performed on the database on a daily basis. Can we maintain a log of it?
I'd appreciate your help on the above.
Regards
[email protected] wrote:
Dear Sir
But I wanted to know: if anybody fires any DML or DDL operation in the database, can Oracle maintain an audit of that?
Regards
People don't perform commands at the database level; they perform them on the tables, so you would actually be setting the auditing on the objects. Not to nitpick, but the terminology should be used correctly, as it may cause confusion later on. Depending on what you need, that type of auditing can be set on the table.
Also make a habit to post your db version and o/s for sure in all the posts.
HTH
Aman.... -
Hi,
We seem to get this error through SCOM every couple of weeks. It doesn't correlate with the AV updates, so I'm not sure what's eating up the memory. The server has been patched to the latest roll up and service pack. The mailbox servers
have been provisioned with more than enough memory. Currently they just slow down until the databases activate on another mailbox server.
A significant portion of the database buffer cache has been written out to the system paging file.
Any ideas?
I've seen this with properly sized servers running very little Exchange load. It could be a number of different things. Here are some items to check:
Confirm that the server hardware has the latest BIOS, drivers, firmware, etc
Confirm that the Windows OS is running the recommended hotfixes. Here is an older post that might still apply to you
http://blogs.technet.com/b/dblanch/archive/2012/02/27/a-few-hotfixes-to-consider.aspx
http://support.microsoft.com/kb/2699780/en-us
Set up perfmon to capture data from the server. Look for disk performance issues, excessive paging, CPU/processor spikes, and more. Use the PAL tool to collect and analyze the perf data -
http://pal.codeplex.com/
Include looking for other applications or processes that might be consuming system resources (AV, Backup, security, etc)
Be sure that the disks are properly aligned -
http://blogs.technet.com/b/mikelag/archive/2011/02/09/how-fragmentation-on-incorrectly-formatted-ntfs-volumes-affects-exchange.aspx
Check that the network is properly configured for Exchange server. You might be surprised how the network config can cause perf & SCOM alerts.
Make sure that you did not (improperly) statically set msExchESEParamCacheSizeMax and msExchESEParamCacheSizeMin attributes in Active Directory -
http://technet.microsoft.com/en-us/library/ee832793(v=exchg.141).aspx
Be sure that hyperthreading is NOT enabled -
http://technet.microsoft.com/en-us/library/dd346699(v=exchg.141).aspx#Hyper
Check that there are no hardware issues on the server (RAM, CPU, etc). You might need to run some vendor specific utilities/tools to validate.
Proper paging file configuration should be considered for Exchange servers. You can use the perfmon to see just how much paging is occurring.
These will usually lead you in the right direction. Good Luck! -
This was discussed here, with no resolution
http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/bb073c59-b88f-471b-a209-d7b5d9e5aa28?prof=required
I have the same issue. This is a single-purpose physical mailbox server with 320 users and 72GB of RAM. That should be plenty. I've checked and there are no manual settings for the database cache. There are no other problems with
the server, nothing reported in the logs, except for the aforementioned error (see below).
The server is sluggish. A reboot will clear up the problem temporarily. The only processes using any significant amount of memory are store.exe (using 53 GB), regsvc (using 5 GB), and W3 and Monitoringhost.exe using 1 GB each. Does anyone have any ideas on this?
Warning ESE Event ID 906.
Information Store (1497076) A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation. See help link for complete details of possible causes. Resident cache
has fallen by 213107 buffers (or 11%) in the last 207168 seconds. Current Total Percent Resident: 79% (1574197 of 1969409 buffers)
Brian,
We had this event log entry as well which SCOM picked up on, and 10 seconds before it the Forefront Protection 2010 for Exchange updated all of its engines.
We are running Exchange 2010 SP2 RU3 with no file system antivirus (the boxes are restricted and have UAC turned on as mitigations). We are running the servers primarily as Hub Transport servers with 16GB of RAM, but they do have the mailbox role installed
for the sole purpose of serving as our public folder servers.
So we theorized the STORE process was just grabbing a ton of RAM, and occasionally it was told to dump the memory so the other processes could grab some, thus generating the alert. Up until last night we thought nothing of it, but ~25 seconds after the cache flush to the paging file, we got the following alert:
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:08:14 AM
Event ID: 17012
Task Category: Storage
Level: Error
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: The database could not allocate memory. Please close some applications to make sure you have enough memory for Exchange Server. The exception is Microsoft.Exchange.Isam.IsamOutOfMemoryException: Out of Memory (-1011)
at Microsoft.Exchange.Isam.JetInterop.CallW(Int32 errFn)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, String connect, MJET_GRBIT grbit, MJET_WRN& wrn)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, MJET_GRBIT grbit)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file)
at Microsoft.Exchange.Isam.Interop.MJetOpenDatabase(MJET_SESID sesid, String file)
at Microsoft.Exchange.Transport.Storage.DataConnection..ctor(MJET_INSTANCE instance, DataSource source).
Followed by:
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:08:15 AM
Event ID: 17106
Task Category: Storage
Level: Information
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: MSExchangeTransport has detected a critical storage error, updated the registry key (SOFTWARE\Microsoft\ExchangeServer\v14\Transport\QueueDatabase) and as a result, will attempt self-healing after process restart.
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:13:50 AM
Event ID: 17102
Task Category: Storage
Level: Warning
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: MSExchangeTransport has detected a critical storage error and has taken an automated recovery action. This recovery action will not be repeated until the target folders are renamed or deleted. Directory path:E:\EXCHSRVR\TransportRoles\Data\Queue
is moved to directory path:E:\EXCHSRVR\TransportRoles\Data\Queue\Queue.old.
So it seems as if Forefront Protection 2010 for Exchange inadvertently triggered the cache flush, which didn't appear to happen quickly or thoroughly enough for the transport service to do what it needed to do, so it freaked out and performed the subsequent actions.
Do you have any ideas on how to prevent this 906 warning, which cascaded into a transport service outage?
Thanks! -
Hello -
We have 3 x EX2010 SP3 RU5 nodes in a cross-site DAG.
Multi-role servers with 18 GB RAM [increased from 16 GB in an attempt to clear this warning without success].
We run nightly backups on both nodes at the Primary Site.
Node 1 backup covers all mailbox databases [active & passive].
Node 2 backup covers the Public Folders database.
The backups for each database are timed so they do not overlap.
During each backup we get several of these event log warnings:
Log Name: Application
Source: ESE
Date: 23/04/2014 00:47:22
Event ID: 906
Task Category: Performance
Level: Warning
Keywords: Classic
User: N/A
Computer: EX1.xxx.com
Description:
Information Store (5012) A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation.
See help link for complete details of possible causes.
Resident cache has fallen by 42523 buffers (or 27%) in the last 903 seconds.
Current Total Percent Resident: 26% (110122 of 421303 buffers)
We've rescheduled the backups, and the warning message occurrences just move with the backup schedules.
We're not aware of perceived end-user performance degradation, overnight backups in this time zone coincide with the business day for mailbox users in SEA.
I raised a call with the Microsoft Enterprise Support folks, they had a look at BPA output and from their diagnostics tool. We have enough RAM and no major issues detected.
They suggested McAfee AV could be the root of our problems, but we have v8.8 with EX2010 exceptions configured.
Backup software is Asigra V12.2 with latest hotfixes.
We're trying to clear up these warnings as they're throwing SCOM alerts and making a mess of availability reporting.
Any suggestions please?
Thanks in advance
Having said all that, a colleague has suggested we just limit the amount of RAM available for the EX2010 DB cache.
Then it won't have to start releasing RAM when the backup runs, and won't throw SCOM alerts
This attribute should do it...
msExchESEParamCacheSizeMax
http://technet.microsoft.com/en-us/library/ee832793.aspx
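One caution before setting that attribute: msExchESEParamCacheSizeMax is a count of ESE pages, not bytes. A hedged helper for the conversion (this assumes 32 KB database pages, as in Exchange 2010; verify the page size for your version before changing anything):

```python
# Hedged sketch: convert a desired cache cap in GiB to an ESE page
# count for msExchESEParamCacheSizeMax. Assumes Exchange 2010's
# 32 KB database page size -- verify for your version.

PAGE_SIZE = 32 * 1024  # bytes per ESE page (Exchange 2010 assumption)

def cache_pages(target_gib: float, page_size: int = PAGE_SIZE) -> int:
    """Number of pages needed to cap the ESE cache at ~target_gib."""
    return int(target_gib * 1024**3 // page_size)

print(cache_pages(12))  # cap the cache at ~12 GiB -> 393216 pages
```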
Give me a shout if this is a bad idea
Thanks -
Will Oracle look into the database buffer cache in this scenario?
hi guys,
say I have a table with a million rows, there are no indexes on it, and I did a
select * from t where t.id = 522000;
About 5 minutes later (while that particular (call it blockA) block is still in the database buffer cache) I do a
select * from t where t.id > 400000 and t.id < 600000;
Would Oracle still pick blockA up from the database buffer cache? if so, how? How would it know that that block is part of our query?
thanks
Without an index, Oracle would have done a FullTableScan for the first query. The blocks would be aged out of the buffer cache very quickly, as they were retrieved for an FTS on a large table. It is unlikely that block 'A' would still be in the buffer cache after 5 minutes.
However, assuming that block 'A' is still in the buffer cache, how does Oracle know that records for the second query are in block 'A'? It doesn't. Oracle will attempt another FullTableScan for the second query -- even if, as in the first query, the result set returned is only 1 row.
Now, if the table were indexed and rows were being retrieved via the Index, Oracle would use the ROWID to get the "DBA" (DataBlockAddress) and get the hash value of that DBA to identify the 'cache buffers chain' where the block is likely to be found. Oracle will make a read request if the block is not present in the expected location.
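Hemant's hash-lookup description can be sketched as a toy model (illustrative only; bucket counts, latching, and the real hash function are Oracle internals, and the names below are made up for the sketch):

```python
# Illustrative sketch (not Oracle's actual internals): a block is
# located in the cache by hashing its data block address (file# plus
# block#) to a "cache buffers chains" bucket, then walking that chain.

N_BUCKETS = 1024  # illustrative; Oracle sizes this internally

def dba_hash(file_no: int, block_no: int, n_buckets: int = N_BUCKETS) -> int:
    """Map a (file#, block#) pair to a hash bucket index."""
    return (file_no * 4_000_000 + block_no) % n_buckets

# A toy cache: bucket index -> list of (file#, block#, buffer) entries
buckets = {i: [] for i in range(N_BUCKETS)}

def cache_put(file_no, block_no, buf):
    buckets[dba_hash(file_no, block_no)].append((file_no, block_no, buf))

def cache_get(file_no, block_no):
    """Return the cached buffer, or None -> caller must read from disk."""
    for f, b, buf in buckets[dba_hash(file_no, block_no)]:
        if (f, b) == (file_no, block_no):
            return buf
    return None

cache_put(4, 522000, "block A contents")
print(cache_get(4, 522000))   # found on its chain -> logical read
print(cache_get(4, 522001))   # None -> physical read needed
```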
Hemant K Chitale
http://hemantoracledba.blogspot.com -
What values are stored in the alignment region of an image and how can they be set?
I am calling a number of C++ functions with different variable sized LabView images at rates in excess of 20 images per second. I need to tell the C++ code developer what values to expect in the alignment region of the image. Right now I am creating an image with a zero border that is 32 byte aligned, so there is no alignment region and everything works fine.
I would like to move to using normal LabView images, as it saves several steps and allows me to use a combination of LabView and C++ operations. I do not want to re-write all the C++ functions to be aware of the LabView alignment and border areas. I just want the alignment area and border area to be zero and process them like they were part of the image.
I can set the border region to zero using Fill Image but I am not clear as to what the values will be in the alignment region, or if I can set them. Does Fill Image also fill the alignment region? Since the C++ code is being developed on a system without LabView, and I do not have the means to debug it on my LabView system, it is tricky to know what is in this region.
In the ideal world, I would like it to be zero or to be able to set it to zero.
Thanks in advance.
Andrew
Hi Andrew,
The function IMAQ Fill Image allows you to set the border and all or part of your image to a certain pixel value that you define. One of the inputs to Fill Image is "Image Mask" which you can use to specify which pixels in your original image will be modified. This help document describes the Fill Image VI in detail and can provide some good information for you.
Essentially, the locations of any non-zero pixels in your Image Mask are where the new pixel value will be set in your original Image. Does that make sense? So if you know where your alignment region is then you can use an image mask with Fill Image to set the alignment region and the border to zero. If you don't use an Image Mask, the Fill Image VI will assign the new pixel value to the entire original image.
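The fill-where-mask-is-non-zero rule Daniel describes can be sketched outside LabVIEW. A minimal plain-Python stand-in (illustrative only; the real IMAQ Fill Image VI operates on IMAQ image references, with the border and alignment handling described in its help document):

```python
# Sketch of the "fill where mask is non-zero" rule: a stand-in for
# IMAQ Fill Image using nested lists as 2-D images.

def fill_image(image, new_value, mask=None):
    """Return a copy of `image` with `new_value` written wherever
    `mask` is non-zero; with no mask, the whole image is filled."""
    if mask is None:
        return [[new_value for _ in row] for row in image]
    return [
        [new_value if m else px for px, m in zip(irow, mrow)]
        for irow, mrow in zip(image, mask)
    ]

img  = [[7, 7, 7],
        [7, 7, 7]]
mask = [[1, 0, 1],   # non-zero marks pixels to overwrite,
        [0, 0, 1]]   # e.g. an alignment/border region

print(fill_image(img, 0, mask))  # -> [[0, 7, 0], [7, 7, 0]]
```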
Regards,
Daniel H.
Customer Education Product Support Engineer
National Instruments
Certified LabVIEW Developer -
How do we check what events are set in the database?
Hi All,
I have set an event dynamically at the system level on the database.
Where can I check if it is set? Is there any table?
Thanks,
Jiger
I have set an event to check library cache locks:
event 4020, at the system level.
The database version is 9i.
Thanks,
Jiger
Message was edited by:
Jiger -
What information is brought into the database buffer cache?
Hi,
What information is brought into the database buffer cache when a user performs any operation such as "insert", "update", "delete", or "select"?
Is only the data block to be modified brought into the cache, or are all the data blocks of the table brought in while doing the operations I mentioned above?
What is the purpose of the SQL Area? What information is brought into the SQL Area?
Please explain the logic behind the questions I asked above.
thanks in advance,
nvseenu
Documentation is your friend. Why not start by reading the Memory Architecture chapter:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm
Message was edited by:
orafad
Hi orafad,
I have read the Memory Architecture chapter.
In that documentation, the following explanation is given:
The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles.
But I would like to know whether all or only some data blocks are brought into the cache.
thanks in advance,
nvseenu -
Running reports stored in the database
Hi guys,
Is it possible to have Reports 6i produce reports that are
stored in the database? Looking at the documentation, it would
appear the report definition files (RDF) must be stored in the
filesystem. I don't really want to be handing out filesystem access (and managing it) to people - is there a way around this?
Thanks,
denty.
user5780461 wrote:
Hi,
Would anyone have any recommendations regarding running reports based on data stored in an Oracle database? We will soon go live with a project using an Oracle DB. We do not wish to run reports directly from the production DB as it will impact performance. Have you actually measured the impact? If not, you are just making assumptions that -- lacking any other information -- appear to be invalid.
Is there any recommended method/architecture for reporting suggested by oracle for this type of scenario?
You make it sound like some unusual scenario. What you've described so far (running reports against an OLTP database) is just bread and butter operations for Oracle.
One idea is to run some scripts that will write the data to another database, and from there we will run our reporting queries. Are there any other potential solutions?
And how does reading the data to insert into some other database cause less contention than reading data for a report?
>
Many thanks,
Ro -
Where is the enjoylogo.gif stored in the database
Can anyone tell me where these type of files are stored in the database? Are they in a table?
If you have 2 lists, List A contains custom names, List B has a lookup field that points to List A. I go into list B and select a client name ("Contoso"), enter some info, and save it. On the back end, the list item will contain a value of something like:
34;#Contoso
In this scenario the 34 is the SPListItem.ID value of that item in List A. They start at 1 in each list and increment by 1.
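SharePoint stores lookup values in the `ID;#Value` form, which is easy to split programmatically. A minimal sketch (plain Python; in real code you would use the SharePoint object model's SPFieldLookupValue rather than parsing by hand):

```python
# Sketch: split a SharePoint lookup field's raw value ("ID;#Value")
# into the referenced item's integer ID and its display text.

def parse_lookup(raw: str):
    item_id, _, text = raw.partition(";#")
    return int(item_id), text

print(parse_lookup("34;#Contoso"))  # -> (34, 'Contoso')
```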
Also, running queries against the database directly is not supported and can take your database out of Microsoft's supportability.
Dimitri Ayrapetov (MCSE: SharePoint) -
LRU and CKPTQ in database buffer cache
Hi experts out here,
This functionality will work out in Database buffer cache of Oracle 10.2 or greater.
Sources:OTN forums and Concepts 11.2 guide
As per my reading: to improve its functionality, the database buffer cache is divided into several areas called workareas. Zooming in further, each workarea holds multiple lists that track the buffers inside the database buffer cache.
Each workarea can have one or more lists to maintain the ordering of work there. The lists each workarea has are the LRU list and the CKPTQ list. The LRU list is a list of pinned, free, and dirty buffers, and the CKPTQ is a list of dirty buffers. We can say the CKPTQ is a bundle of dirty buffers, kept in low-RBA order and ready to be flushed from cache to disk.
The CKPTQ list is maintained in low-RBA order.
Being a novice, let me first get clear about the low RBA and high RBA:
The RBA is stored in the block header and tells us when the block was changed and how many times it has been changed.
Low RBA: the address of the redo for the first change that was applied to the block since it was last clean.
High RBA: the address of the redo for the most recent change to have been applied to the block.
Now Back to CKPTQ
It can be like this (Pathetic diagram of CKPTQ)
lowRBA==================================High RBA
(Head Of CKPTQ) (Tail Of CKPTQ)
The CKPTQ is a list of dirty buffers. As per the RBA concept, the most recently modified buffer is at the tail of the CKPTQ.
Now an Oracle process starts and tries to get a buffer from the DB cache. If it gets the buffer, it puts the buffer at the MRU end of the LRU list, and the buffer becomes the most recently used.
If the process can't find the required buffer, it first tries to find a free buffer on the LRU. If it finds one, it's done: it places the data block read from the datafile where the free buffer was sitting. (Good enough.)
If the process can't find a free buffer on the LRU, then its first step is to take some dirty buffers from the LRU end of the LRU list and place them on the CKPTQ (remember, arranged on the CKPT queue in low-RBA order). Now the process can take the required buffer and place it at the MRU end of the LRU list (because space has been reclaimed by moving dirty buffers to the CKPTQ).
I am sure that from the CKPTQ the buffers (to be more accurate, the dirty buffers) move to the datafiles, and that all the buffers are lined up on the CKPTQ lowest-RBA first. But how, in what manner, and on what event are they flushed to the datafiles?
This is what I understand after the last three days of flicking through blogs, forums, and the Concepts guide. Please tell me what I am missing. Apart from that, I can't link the following functionalities into this flow:
1) How does the incremental checkpoint work with the CKPTQ?
2) What is that 3-second timeout?
(Every 3 seconds the DBWR process wakes and sees if there is anything to write to the datafiles; for this, DBWR only checks the CKPTQ.)
3) Apart from the 3-second mechanism, when are the buffers on the CKPTQ moved? (Is it the moment when a process can't find any space on the CKPTQ for buffers from the LRU that buffers from the CKPTQ are moved to disk?)
4) Can you please relate when the control file is updated with the checkpoint so it can reduce recovery time?
Too many questions, but I am trying to build up in my mind the whole picture of how this works. I may be wrong at any step of any phase; please correct me and take me to the end of the flow.
THANKS
Kamesh
Hi Aman sir,
So I'm back with my bunch of questions. Again I can't ask just a single one because, you know, it's a flow, so I can't end up with a single doubt. Thanks for your last reply.
Yes, Aman sir, the first doubt is cleared, which was that the buffer is inserted at the midpoint. For this I found one nice document (PDF) named "All about Oracle's touch count algorithm" by Craig A. Shallahamer. That was quite a nice PDF, all about hot and cold buffers and buffer movements inside the LRU list. I am pretty much clear on that point, thank you. And I read about the incremental checkpoint from a PPT by Harald van Breederode, a person from Oracle; you shared it on one of your threads, and it was a nice reference.
Flicking through threads I came across the term REPL and its variation REPL-AUX (the thread was for Oracle 9i). Is the REPL-AUX variation deprecated in 10g? So, if I am not wrong, for each workarea only two main lists exist, the LRU and the CKPTQ, and no other types?
For a non-RAC database, is a thread checkpoint a full checkpoint?
I read about incremental checkpointing; here it is in my words, in brief. An incremental checkpoint means writing only some selected buffers from the CKPTQ to the datafiles. From the CKPTQ a few of the lowest-RBA buffers are selected and checkpointed (buffers are checkpointed under many conditions), and when the next checkpoint occurs those buffers are flushed to disk. This (checkpointing a few buffers and flushing them to disk) can happen multiple times within three seconds, so after 3 seconds (this is the 3-second mechanism I was asking about at the start of the thread -- can this time be changed, and if yes, with which parameter?) the checkpoint RBA, i.e. the checkpoint (the point up to which the database buffers have been flushed to disk), is updated in the control file header (and the datafile headers) by the CKPT process. That checkpoint is then used for instance recovery, which can dramatically bring down instance recovery time.
Every 3 seconds the control file is updated with the checkpoint, and that checkpoint is the point from which we have to start the recovery process from the redo log. I am aware that incremental checkpointing is controlled by the FAST_START_MTTR_TARGET parameter, and that it is auto-tuned in 10.2 and later, but the smaller the value I keep, the less time my instance will take to recover.
Are the above two paragraphs right as I understood them? If wrong, correct me.
What I understand is that after three seconds it takes some buffers from the CKPTQ (from the low-RBA end) and flushes them to disk. Apart from this, there are many other conditions under which data is flushed to disk:
1) the CKPTQ is full;
2) a process can't find a free buffer on the LRU;
3) DBWR writes to advance the checkpoint.
Correct me if I'm wrong.
THANKS
Kamesh
Edited by: Kamy on May 2, 2011 10:55 PM -
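The two-list flow debated in this thread can be condensed into a toy model (a sketch only: real working sets, touch counts, latching, and DBWR batching are all omitted, and every name below is illustrative, not Oracle's):

```python
# Toy model of the two lists discussed above: buffers live on an LRU
# replacement list; when a buffer is dirtied it is also linked onto
# the checkpoint queue (CKPTQ) in low-RBA order, and DBWR writes from
# the head (lowest RBA) of that queue.
from collections import OrderedDict
import heapq

lru = OrderedDict()      # buffer -> contents; rightmost = MRU end
ckptq = []               # min-heap of (low_rba, buffer); head = lowest RBA
next_rba = 0

def touch(buf, contents=None):
    """Get/load a buffer and move it to the MRU end of the LRU list."""
    if contents is not None:
        lru[buf] = contents
    lru.move_to_end(buf)

def dirty(buf, contents):
    """Modify a buffer; the first change since it was last clean
    fixes its low RBA and links it onto the CKPTQ once."""
    global next_rba
    touch(buf, contents)
    next_rba += 1
    if buf not in {b for _, b in ckptq}:
        heapq.heappush(ckptq, (next_rba, buf))

def dbwr_write(n=1):
    """DBWR: flush up to n buffers from the low-RBA head of the CKPTQ."""
    written = []
    for _ in range(min(n, len(ckptq))):
        rba, buf = heapq.heappop(ckptq)
        written.append(buf)   # "write to datafile"
    return written

dirty("A", "v1"); dirty("B", "v1"); dirty("A", "v2")
print(dbwr_write(2))  # lowest-RBA buffers are written first
```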
Flushing Database Buffer Cache
I am trying out variants of a SQL statement in an attempt to tune it. Each variant involves joins across a different combination of tables, although some tables are common across all variants. In order to be able to do a valid comparison of the TKPROF outputs for the variants, I believe I need to flush the database buffer cache between variants so that the db block gets, consistent gets and physical reads parameters are true for each variant. By doing this, data retrieved for one variant is not already in the buffer cache for the next variant, thus not influencing the above parameters for the next variant.
Is it possible to flush the buffer cache? The shared pool can be flushed with the ALTER SYSTEM FLUSH SHARED_POOL command. I've searched but have not been able to find an equivalent for the buffer cache. The NOCACHE option to the ALTER TABLE command only pushes retrieved data to the LRU list in the buffer cache, but does not remove it from the buffer cache.
I'm hoping to be able to do this without bouncing the database between variants. It is a development instance, and I have it to myself after hours.
Hi,
I never tried this before, but if you want to make a test you can try corrupting the block IDs returned by one of the queries below:
Try corrupting the ID of the block containing the segment header:
select dbms_rowid.rowid_block_number(rowid) from hr.regions;
Try corrupting one of the blocks returned by this query, which shows the ID of the block where each row is located:
select s.owner,t.ts#,s.header_file,s.header_block
from
v$tablespace t, dba_segments s
where
s.segment_name='REGIONS' and
owner='HR' and
t.name = s.tablespace_name;
Legatti
Cheers -
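For completeness: from Oracle 10g onward there is a documented command that does exactly what the original poster asked for (development use only; flushing ruins cache warmth for everyone on the instance):

```sql
-- 10g and later:
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- 9i equivalent (support-only, event-based form):
-- ALTER SESSION SET EVENTS 'immediate trace name flush_cache';
```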
Swapping and Database Buffer Cache size
I've read that setting the database buffer cache size too large can cause swapping and paging. Why is this the case? More memory for sql data would seem to not be a problem. Unless it is the proportion of the database buffer to the rest of the SGA that matters.
Well, I am always a defender of a large DB buffer cache. Setting a bigger DB buffer cache alone will not in any way hurt Oracle performance.
However, as the buffer cache grows, the time to determine which blocks need to be cleaned increases. Therefore, at a certain point the benefit of a larger cache is offset by the time needed to keep it synced to disk. After that point, increasing the buffer cache size can actually hurt performance. That's the reason why Oracle has checkpoints.
A checkpoint performs the following three operations:
1. Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the datablocks in the buffer cache with the datafiles on disk.
It's the DBWR that writes all modified databaseblocks back to the datafiles.
2. The latest SCN is written (updated) into the datafile header.
3. The latest SCN is also written to the controlfiles.
The following events trigger a checkpoint.
1. Redo log switch
2. LOG_CHECKPOINT_TIMEOUT has expired
3. LOG_CHECKPOINT_INTERVAL has been reached
4. DBA requires so (alter system checkpoint) -
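The trigger list above can be sketched as a small decision function (illustrative only; the real triggering logic is internal to Oracle, and the default values below are placeholders, not documented defaults):

```python
# Sketch of the checkpoint triggers summarized above: a checkpoint
# fires on a log switch, on LOG_CHECKPOINT_TIMEOUT expiry, on
# LOG_CHECKPOINT_INTERVAL redo blocks written, or on demand.

def checkpoint_due(log_switch: bool,
                   secs_since_ckpt: int,
                   redo_blocks_written: int,
                   manual: bool,
                   timeout: int = 1800,      # LOG_CHECKPOINT_TIMEOUT (secs)
                   interval: int = 10000):   # LOG_CHECKPOINT_INTERVAL (redo blocks)
    """Return the reason a checkpoint fires, or None."""
    if log_switch:
        return "redo log switch"
    if secs_since_ckpt >= timeout:
        return "LOG_CHECKPOINT_TIMEOUT expired"
    if redo_blocks_written >= interval:
        return "LOG_CHECKPOINT_INTERVAL reached"
    if manual:
        return "ALTER SYSTEM CHECKPOINT"
    return None

print(checkpoint_due(False, 42, 500, manual=True))
```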
How the Payload Message and Logs are stored in the B1i Database Table: BZSTDOC
I would appreciate it if someone could provide any documentation regarding further maintenance of the B1i database.
for example:
I want to know how the payload message and logs are stored in the table BZSTDOC, and how can we retrieve the payload message directly from the column DOCDATA.
As described in B1iSNGuide05 3.2 LogGarbageCollection:
To avoid overloading the B1i database, I set the Backup Buffer to 90 days, so message logs from the last 90 days will always be available. But is there some way to save those old messages to disk so that I can retrieve the payload message at any time?
In addition, let's assume the worst: the B1iSN server or the B1i database is damaged. Can we simply restore the B1i database from the latest backup, and will everything work automatically once the B1iSN server is up and running again?
BR/Jim
Dear SAP,
Two weeks have passed and I still haven't received any feedback from you.
Could you please have a look at my question?
How is this question going? Is it Untouched / Solving / Reassigned?