Backup takes hours; bug appeared long after 2.0.1

The backup was a bit long with 2.0 on my new 3G, but it was OK. 2.0.1 solved this and was really better.
I had to reinstall my Mac with a full Time Machine restore. The restore was successful, but now it takes almost an hour to back up my phone, every single time!
I know I can stop the backup process, but hey, I want it working as before!

Yes, 2.0.1 also updates the modem firmware to 01.48.02. Luckily for me, it actually improved my 3G/EDGE reception. At first, though, EDGE reception actually got worse. The next day it went back to full strength (better than before the update). That leads me to believe it was on the AT&T side... maybe new or updated protocol/negotiation settings that needed updating after the new modem firmware.
Try turning your phone off for the night and see what happens in the morning. Network updates are done automatically when a phone is reset like that.

Similar Messages

  • HT1338 I downloaded the available upgrades. The installation took a long time after re-initiating the system, and I had to turn the machine off. After this I could not get it to turn on again; it just stays on the gray screen with the Apple logo in the middle. What's wrong?

    Sorry to hear of your problem, but any hardware in a computer can fail - that includes the hard drive, the logic board, etc. Fortunately, it usually doesn't happen that soon; however, I've read of hard drives failing within 6 months.
    So, since you did not purchase the extended AppleCare Protection Plan (3 years of coverage), you will need to decide whether to repair it or sell it as is. I would always purchase 3-year coverage for an all-in-one or a laptop because repairs are expensive, and at least I'd be covered for 3 years.
    We can't speculate on whether there should be a recall or not; these forums are for user-to-user assistance with technical problems. In your case, there is nothing we can help with since you've already gotten a diagnosis.

  • Time Machine Problem - New Internal Hard Drive, Now Backup Takes Hours

    Need some help!
    Background.
    Almost 4-year-old 24" aluminum iMac; 4 GB RAM; 500 GB hard drive; OS 10.6.8.
    About 2 months after AppleCare expired, the internal hard drive failed (on New Year's Eve, of all times!). Everything else is working great.
    Had a Mac technician I know install a new 750 GB drive for me. I cloned back from SuperDuper and was ready to go. The SuperDuper external is 1.5 TB, partitioned - one half TM, the other half SuperDuper.
    All apps, internet, mail - fine.
    I hooked up my existing Time Machine drive (probably not a good idea), and the next backup failed.
    Did some research, and since I have plenty of archived backups anyway, decided that the best thing to do would be to erase the TM external and start fresh.
    Did that, and as expected, the first backup took a couple of hours.
    The Problem.
    Now, every incremental backup takes 4 to 5 hours, even if no data has changed.
    "Indexing" lasts for 45 minues; "Backing Up" reads 600,000 plus files, which also goes on for 30 or 40 minutes (approx.), and wants to Calculate changes etc forever; then the actual backup starts out at 10 kb, and in 20 minutes is at 30 kb and then after about an hour I start getting into gbs and the blue bar progresses pretty fast after that. Then "Finishing Up" takes approximately 2 - 3 hours.
    Not right.  :~(
    What I've Done to Troubleshoot.
    First, I re-erased the TM external.
    Then reset PRAM and did an SMC reset on the iMac.
    Then backed up to TM again. The same problem persisted.
    So next, I repeated that process (other than backing up), and I also deleted com.apple.timemachine.plist from the root Library/Preferences and com.apple.finder.plist from Home/Library/Preferences. Then rebooted the iMac. Then I did a Disk Utility verify of the empty external - all was/is fine. Then I ran DiskWarrior on the iMac and all was fine; repaired permissions on the iMac from DiskWarrior as well. Then I did another TM backup to the external, which took a couple of hours since it was backing up to an empty drive.
    The next 3 incremental backups still take 4 to 5 hours, even if nothing has changed on the iMac.
    The Request.
    If anyone can give me some easy-to-understand, step-by-step options I might try to get TM working correctly again, it will be most appreciated.
    If I've left out any details that might assist, please also let me know.
    The problem computer is at home and I'm at work, so if any logs are needed, I can post those later today (but in that event, please tell me exactly which log to post and how to isolate it from System Profiler or Console (as there are many logs to choose from and I want to provide the proper info if someone needs to see a log to assist)).
    Thanks in advance!

    Pondini,
    Did every step in D2 (green box), except the Combo Updater reinstall, which I did late last night.
    After that, did a TM backup - started at 6:15.
    All apps were off.
    Sequence of events and facts:
    At start, 434.03 of 500.1 GB available on the external TM
    Last backup shown - 7:58 AM; Oldest Backup - Yesterday at 9:56 (that's the 1st backup after I reinitialized the external for the 2nd or 3rd time trying to get this thing figured out)
    Upon starting and doing the backup -
    Preparing 250,000 (approx.) items (too quick to get the final number) - Parenthetically, this AM it was over 600,000; probably has to do with turning off Spotlight per your D2 instructions
    Next message -
    Backing Up 6.96 GB
    Hung between 2 KB and 90 KB for 20 minutes before progressing
    Ultimately, at 1.78 GB the progress bar started moving rapidly (as it should) - so this isn't a cable connection issue - the Backup phase took 22 minutes, but from 1.78 GB to 6.96 GB took only a few seconds; I have seen this exact same sequence repeatedly the past 2 days
    Next - Backing Up (the candy cane blue and white progress bar)  - Preparing 140,000 items - took 12 minutes
    Now shows 426.95 GB available on the external
    Next - Backing Up (no count of items shown) - started at 7:05 and went until 7:45 (40 minutes)
    Next - Finishing Backup (blue/white candy cane progress bar; no count of items shown) - started at 7:45 - went until sometime after 8:15 (can't give a precise time because I need to sit down and eat :~)  )
    Backup completed (between bites of chicken) at 8:23.
    Over 2 hours - something isn't right.
    Also, TM Prefs tells me that the Latest Backup was at 7:44 - odd.
    Anyway - to take a step further, here's the TM Buddy widget info you might need:
    Successful Backup: 02:06:32.
    There is no log entry or data showing.
    Just as I sent this, a new backup started, and it appears to be going through the same routine, from Preparing  ....
    Gotta eat.
    Please post any further instructions and I'll do what I can tonight.
    Thanks!
    What is the problem? How do I speed this mother up?
    Thanks!

  • Delete Index in Process Chain Takes long time after SAP BI 7.0 SP 27

    After upgrading to SAP BI 7.0 SP 27, the Delete Index and Create Index processes in the process chain take a long time.
    For example: Delete Index for 0SD_C03 takes around 55 minutes.
    Before the SP upgrade it took around 2 minutes to delete the index from 0SD_C03.
    Regards
    Madhu P Menon

    Hi,
    Normally, index creation or deletion can take a long time if your database statistics are not updated properly, so check the statistics after your data load is completed and index generation is done, and re-create the database statistics.
    Then try to recheck...
    Regards,
    Satya

  • Time Machine backup on external drive no longer working after upgrade to Yosemite

    Can anyone explain why, after upgrading to Yosemite, Time Machine is no longer working? The backup to the external drive stops a few seconds after it has started. What can I do?

    Welcome to Apple Support Communities
    Try running a DVD lens cleaner disc through it; a little bit of dust or dirt is all it takes to throw them out, and it might just be coincidental with the upgrade.

  • Query Prediction takes long time - After upgrade DB 9i to 10g

    Hi all, thanks for all your help.
    We've got an issue in Discoverer: we are using Discoverer 10g (10.1.2.2) with APPS, and recently we upgraded the Oracle database from 9i to 10g.
    After the database upgrade, when we try to run reports in Discoverer Plus, query prediction takes much longer than it used to (double/triple), and only after that does the query itself run.
    Has anyone seen this kind of issue before? Could you share your ideas/thoughts so I can ask the DBA or sysadmin to change any settings on the Discoverer server side?
    Thanks in advance
    skat

    Hi skat
    Did you also upgrade your Discoverer from 9i to 10g or did you always have 10g?
    If you weren't always on 10g, take a look inside the EUL5_QPP_STATS table by running SELECT COUNT(*) FROM EUL5_QPP_STATS on both the old and new systems
    I suspect you may well find that there are far more records in the old system than the new one. What this table stores is the statistics for the queries that have been run before. Using those statistics is how Discoverer can estimate how long queries will take to run. If you have few statistics then for some time Discoverer will not know how long previous queries will take. Also, the statistics table used by 9i is incompatible with the one used by 10g so you can't just copy them over, just in case you were thinking about it.
    Personally, unless you absolutely rely on it, I would turn the query predictor off. You do this by editing your PREF.TXT (located on the middle-tier server at $ORACLE_HOME\Discoverer\util) and changing the value of QPPEnable to 0. After you have done this you need to run the Applypreferences script located in the same folder and then stop and start your Discoverer service. From that point on, queries will no longer try to predict how long they will take; they will just start running.
    There is something else to check. Please run a query and look at the SQL. Do you by chance see a database hint called NOREWRITE? If you do, then this will also cause poor performance. Should you see this, let me know and I will tell you how to override it.
    If you have always been on 10g and you have only upgraded your database it could be that you have not generated your database statistics for the tables that Discoverer is using. You will need to speak with your DBA to see about having the statistics generated. Without statistics, the query predictor will be very, very slow.
    Best wishes
    Michael
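
    A minimal sketch of the PREF.TXT change described in the reply above, written in Python as an illustration only. The file path below is hypothetical, and the simple "name = value" line format is an assumption; check your own pref.txt first, keep a backup, and still run the Applypreferences script and restart Discoverer afterwards.

    import re
    from pathlib import Path

    # Hypothetical middle-tier location; substitute your own $ORACLE_HOME path.
    PREF = Path(r"C:\oracle\BIHome\discoverer\util\pref.txt")

    def disable_query_prediction(pref_path: Path) -> None:
        text = pref_path.read_text()
        # Turn e.g. "QPPEnable = 1" into "QPPEnable = 0", preserving spacing.
        new_text, count = re.subn(r"(?mi)^(\s*QPPEnable\s*=\s*)\d+", r"\g<1>0", text)
        if count == 0:
            raise SystemExit("QPPEnable not found - pref.txt format differs from this sketch")
        pref_path.with_name(pref_path.name + ".bak").write_text(text)  # keep a backup copy
        pref_path.write_text(new_text)

    if __name__ == "__main__":
        disable_query_prediction(PREF)
        # Remember: run Applypreferences and restart the Discoverer service afterwards.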

  • Non-self-contained movie huge & takes a very very long time to export

    This is for FCE 4.
    I have a 1.5 hour movie, and when I export it to quicktime (Not quicktime conversion) it takes WAY too long.
    I have "make self-contained movie" unchecked, so I thought creating a reference movie would be very quick. Why is it taking so long and why is the resulting file huge?
    My source file is DV, and only minor editing and effects were used. However, the entire movie was cropped & resized. Is that why?

    Ah I think I figured it out.
    I did another test project with the same source DV file, but didn't do ANY editing to it.
    It took 2 minutes to export a 70mb reference mov file from a 7 minute DV sequence.
    Then I cropped & resized the sequence by changing parameters in the "motion" tab.
    Then it took 20 minutes to export a much larger reference mov file from the same 7 minute DV sequence.
    Then I clicked "render all" after selecting every render option in the drop-down menu.
    Then it took less than 1 minute to export a 70mb reference mov file from the same 7 minute DV sequence.
    So I guess I was having problems because I didn't render "everything", because I didn't select "FULL" in the drop-down menu for rendering. I thought only RED segments in the timeline need to be rendered prior to export, but apparently the "FULL" render must be selected as well, so that the entire timeline is nice and purple (or some shade of blue).
    Also, a cropped and resized video is very large in size (GB) if it is exported as a reference file only without prior rendering. * Can someone else confirm that this is the normal behavior in FCE? *
    Oh well, lesson learned.. This movie is going to take a very long time to export, and result in a very large file. But I can't use it without cropping & resizing it!
    Message was edited by: Yongwon Lee

  • Restore takes a really really long time

    Last night I replaced the hard drive in our iMac and started the process of restoring from a Time Machine backup. The restore has been working now for nearly 20 hours and estimates that it will be finished in approximately 52 more.
    Is this kind of restore time typical? I'm guessing the 320GB drive that we replaced was at least 75% full.

    Wes Plate wrote:
    Last night I replaced the hard drive in our iMac and started the process of restoring from a Time Machine backup. The restore has been working now for nearly 20 hours and estimates that it will be finished in approximately 52 more.
    Is this kind of restore time typical? I'm guessing the 320GB drive that we replaced was at least 75% full.
    First, the estimates are often quite inaccurate. Second, a lot depends on your Mac, the drive, and how they're connected.
    If you recall how full your Mac was when you did your first TM backup, and how long it took, you can make a rough estimate. In my experience, a full restore takes roughly 60% as long as a full backup.
    That's on a much smaller PPC system, but ought to be roughly similar, percentage-wise.

  • Update 3.1.3 Vault backup takes an inordinate amount of time...

    Installed the update, 3.1.3. Now, clicking the Vault icon to update the Vault, the update takes an inordinate amount of time. In addition, after the update the Vault icon, lower left, remains red.
    A normal Vault update goes to the server; prior to the update it ran quite fast, then the icon turned black. As a test I created a new Vault on my Desktop (deleted the original); again the update takes an inordinate amount of time, and still the Vault icon is red.
    MacBook Pro, 8 GB RAM, 10.6.8, 2.4 GHz i5.

    Hi shutterp33d,
    That speed answer is a yes and no. To the local drive, yes, speed is almost where it was prior to the 3.1.3 update. Over to the network drive it is quite slow. Even with a direct connection to an external drive, the vault update is much slower than before the 3.1.3 update.
    My network is an AirPort Extreme to which I have a Pogoplug connected via RJ45; a Seagate 1.5 TB external drive is connected to the Pogoplug via USB 2.0.
    At present the Aperture vault update has been running for forty-five minutes and is one third of the way through. So I shut down the Pogoplug and made a direct cable connection to the Seagate from the laptop; the vault update is still taking longer than before the update.
    This time the icon has changed from red to black. Then I disconnected the direct USB connection and reconnected the external drive to the Pogoplug. Maybe this is a Pogoplug hit, maybe not.
    Thank you for listening.

  • Writing to a large file takes an increasingly long time.

    We are acquiring a large amount of data and streaming it to disk. We have noticed that when the file gets to be a certain size it takes an increasingly longer time to complete the write operation. Of course, this means that during these times our DAQ backlog grows large, and, although we can process any backlog quickly enough, when the write operation takes a sufficiently long time, we will overwrite our buffer and the DAQ will fail. We have looked at numerous examples of high-speed DAQ and feel that we are following the examples as given. This behavior happens on a variety of computers, under different programming strategies (data as 1D WF, Raw, etc.). On one system (h/w & s/w) we can get to almost 1.5 GB flawlessly before our write speed drops off severely and affects our DAQ, while on another (much more capable) system we can reach 20 GB before starting a decline in write speed. We've implemented a workaround by limiting our file size and writing to a new file when the limit is reached (multiple 10 GB files as an example), then reworking the data files during postprocessing. We would like to know why this is happening. I do not believe this is a G issue, as the info I have is that you can open a file and write to it with "position current" as many bytes as you like, then close it when done, and I have read that you can do this "until your disk is full". I have searched the NI knowledgebase without any relevant info on this behavior, and the MS KBase with the same results.
    Here is a little detail about our setup. PXI chassis with 4472 cards acquiring data at 102.4 kS/s. One system has XP Pro, a controller in slot 1 (128 MB RAM, 1.3 GHz CPU) and two 4472 cards (call it Junior); another at the high end has XP Pro, four 4472 cards, and an MXI-4 connection to a computer with 2 GB RAM, dual AMD Opterons @ 2.0 GHz, and a 400 GB RAID (call it Senior). All systems run XP Pro SP2, LabVIEW 7.1.1, NI-DAQ 7.4. The programming methodologies used follow the many high-speed data logger examples found in the KBase, and it works flawlessly up until the file reaches a critical size that is different for systems of differing capabilities (Junior and Senior) (the rate of performance degradation is different also). Obviously we are using a high sample rate on a lot of channels for a long time, but we do not see an obvious increase in memory usage until we pass our "critical" file size, so I am pretty confident that our program is OK, and that LabVIEW is also behaving itself. I am suspicious of WinXP, but I have no good information that reliably points to it as the culprit.
    If you can shed some light on this issue, I would appreciate it. I know it seems odd, even to me, that being unable to write a 50-60 GB file should be a concern; it wasn't that long ago that I thought 500 MB files were huge, but the things that our engineers want to be able to do these days would stun me if I took the time to think about them instead of solving them. Thanks for the efforts!

    The OS is probably reallocating space for the file from time to time. Say it had space for 10 GB at location 1. When the file size exceeds 10 GB then the OS goes looking for a location 2 which is bigger and then rewrites the file. Or it may begin fragmenting the files, but it has extra overhead keeping track of all the fragments. If you can allocate a space bigger than the largest file you expect to produce at the file creation time, you may avoid the slowdown.
    I have not used files this big and do not use Windows, so these comments are generic based on things I have heard from others and have observed in other systems.
    Lynn
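
    A minimal sketch of the pre-allocation idea in the reply above, written in Python rather than LabVIEW and using a hypothetical file name and demo sizes: extend the file to its expected final size once, then stream the acquired blocks into the already-reserved space. Whether this actually avoids the slowdown depends on the filesystem, so treat it as something to test, not a guaranteed fix.

    CHUNK = 4 * 1024 * 1024              # 4 MB per write, roughly one DAQ buffer
    EXPECTED_SIZE = 256 * 1024 * 1024    # demo value; use the real expected maximum size

    def preallocate(path, size):
        """Create the file and extend it to its final size before any streaming."""
        with open(path, "wb") as f:
            f.truncate(size)             # ask the OS to reserve the space up front

    def stream(path, blocks):
        """Overwrite the pre-allocated file in place with acquired data blocks."""
        with open(path, "r+b") as f:
            for block in blocks:         # e.g. buffers handed back by the DAQ driver
                f.write(block)

    if __name__ == "__main__":
        preallocate("daq_run_001.bin", EXPECTED_SIZE)
        fake_blocks = (bytes(CHUNK) for _ in range(4))   # stand-in for real DAQ reads
        stream("daq_run_001.bin", fake_blocks)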

  • Ø DB Time takes a long time after migration

    Hi Gurus,
    I have a timeout error in my background job. I see in ST03 that the DB time is very high:
    Trans/Rep. : ZFISLRR009
    Background Job     Z_ZFISLRR009_TPJOB_DAILY
    1. Steps     1
    T Response Time     3,740     
    Ø Time       3,739,819.0
    T CPU~     5
    Ø CPU~     5,340.0
    T DB Time      1,933
    Ø DB Time 1,932,983.0
    But when I run this job again in the morning it is fine. This error has occurred since I migrated my system from Solaris/Oracle 9 to HP-UX/Oracle 10. On the old system, with lower-spec hardware, everything was OK.
    Any idea why this happened, or what database configuration I should consider after the migration?
    Any help will be appreciated,
    Thanks,
    Aswin

    Hello Andrea,
    > I see the status in st03 the DB Time take big time
    Yes you are correct, that is your problem.
    > But when I run this job again in the morning it is fine
    Maybe your I/O subsystem is very slow and most of the needed buffers are still in the buffer cache, or dynamic sampling is done, etc...
    > This error occurred since i migrate my system from Solaris/Oracle 9 to HP-UX/Oracle 10. In the old system with lower spec hardware everything was okay.
    I am quite sure, that the execution plan of the SQL statement(s) has changed. This is not unusual if you upgrade to a newer database release.
    > Any idea why it's happened or what database configuration should i consider after migration?
    Oracle 10g Parameter: Sapnote #830576
    Oracle 10g Patches: Sapnote #1137346
    Oracle 10g Statistics: Sapnote #838725
    I think that these notes are the important ones. If you have done all these things, then you have to go into deeper analysis (and as this is a Z report, you won't get support from SAP).
    Regards
    Stefan

  • Mail sometimes takes a very, very long time to open messages

    This is similar to some other messages I've seen here, but I perhaps have some new observations.
    Often, but not always, Mail takes up to 120 seconds to open each message in my Inbox. This happens with messages that have already been downloaded from the server, and even with messages that have already been read previously. While opening a message, Mail is otherwise unresponsive and spinning the ol' beachball cursor.
    When this happens, the system.log on the Console displays these lines. Mail is process 4502 in this example.
    Jan 19 08:20:23: --- last message repeated 1 time ---
    Jan 19 08:20:23 Juliette /usr/sbin/spindump[4552]: process 4502 is being monitored
    Jan 19 08:20:28 Juliette kernel[0]: disk0s2: I/O error.
    Jan 19 08:20:28 Juliette kernel[0]:
    Jan 19 08:20:37: --- last message repeated 1 time ---
    Jan 19 08:20:37 Juliette kernel[0]: disk0s2: I/O error.
    Jan 19 08:20:37 Juliette kernel[0]:
    The last three lines are then repeated every 10 seconds until the message is finally displayed, at which time spindump reports that it is no longer monitoring process 4502 (or whatever number Mail happens to be). The time required to sort itself out is variable, from 30 to 120 seconds.
    I ran Verify on my disk and there were no errors of any kind. I have also checked and corrected permissions on the disk a couple of times.
    If I run Mail>Mailbox>Rebuild the problem is cleared up (for a while), but eventually returns. The problem is cleared up only on the Inbox on which I ran Rebuild. There may be a global rebuild somewhere, but I haven't found it.
    I have also noticed that after running Rebuild when the problem is occurring, a mailbox will contain one more unopened message than it did before Rebuild-ing, i.e. If there were no unopened messages, there will now be 1, or if there were 6 unopened messages, there will now be 7. And the new unopened message is at some seemingly random place in the message list. I can't say that this happens every time, but I have not seen it not happen.
    I'm not really qualified to make a diagnosis of this, but, nevertheless, I will venture a guess that my local mail database is somehow getting corrupted, or possibly is permanently corrupted, in some minor way. Assuming that might be true, is there a way to rebuild my entire mail database? It's fairly large, about 600 Mbytes.
    Thanks in advance for any help or advice.

    Update: I found out how to re-index my mail database (in the Mail Help, duh) and this may have fixed the problem. At least I haven't seen it since. I'll post a final report at the end of the day.

  • Crash report runs a long time after startup

    The crash report runs for about 1-1/2 hours after startup.

    Did you check how much time the entire report takes to execute (even though the first 25 rows come up quickly)? I suspect it is > 30 mins.
    OBIEE is not meant as a data dump tool and there is little that can be done (except better hardware).

  • Backup in progress for an extraordinarily long time

    Dear All,
    We have two versions of SAP: ECC 6.0 with SQL Server 2005 as the database, and ECC 4.6 with SQL Server 2000 as the database.
    Now I have scheduled the backup for ECC 4.6 development, but it is not taking place. When I checked the log, it says no history information found; it might still be running.
    When I see in DB12 I found the following
    Last successful backups :
    Full R/3 backup            29 Jun 2008 04:17:08
    Differential R/3 backup    Not found during past 1 month
    Transaction log backup     30 Jun 2008 10:32:22
    Full master backup         27 May 2008 02:21:36
    Full MSDB backup           27 May 2008 02:25:43
    Normally it takes 10-11 hours to take a full backup via cartridge. But why has this job been running for so long? Can anyone suggest what went wrong?
    Regards,
    Ashutosh

    Dear Juan,
    This is the log I am getting. Can you suggest what can be done?
    Date     Source     Message
    2008-06-28 15:41:47.00      server     Microsoft SQL Server  2000 - 8.00.760 (Intel X86) ...
    2008-06-28 15:41:47.03      server     Logging SQL Server messages in file 'h:\MSSQL\log\ERRORLOG'.
    2008-06-28 15:41:47.03      server     Server Process ID is 888.
    2008-06-28 15:41:47.03      server     All rights reserved.
    2008-06-28 15:41:47.03      server     Copyright (C) 1988-2002 Microsoft Corporation.
    2008-06-28 15:41:47.07      server     SQL Server is starting at priority class 'normal'(1 CPU detected).
    2008-06-28 15:41:48.17      server     SQL Server configured for thread mode processing.
    2008-06-28 15:41:48.17      server     Using dynamic lock allocation. [2500] Lock Blocks, [5000] Lock Owner Blocks.
    2008-06-28 15:41:48.67      server     Attempting to initialize Distributed Transaction Coordinator.
    2008-06-28 15:41:51.15      server     Failed to obtain TransactionDispenserInterface: Result Code = 0x8004d01b
    2008-06-28 15:41:51.28      spid3     Starting up database 'master'.
    2008-06-28 15:41:52.35      spid3     Recovery is checkpointing database 'master' (1)
    2008-06-28 15:41:52.35      spid3     0 transactions rolled back in database 'master' (1).
    2008-06-28 15:41:52.67      spid5     Starting up database 'model'.
    2008-06-28 15:41:52.67      server     Using 'SSNETLIB.DLL' version '8.0.766'.
    2008-06-28 15:41:52.67      spid3     Server name is 'SAPSERVER'.
    2008-06-28 15:41:52.79      spid9     Starting up database 'pubs'.
    2008-06-28 15:41:52.79      spid8     Starting up database 'msdb'.
    2008-06-28 15:41:52.81      spid11     Starting up database 'test'.
    2008-06-28 15:41:52.81      spid10     Starting up database 'Northwind'.
    2008-06-28 15:41:53.14      server     SQL server listening on 127.0.0.1: 1433.
    2008-06-28 15:41:53.14      server     SQL server listening on 10.1.1.27: 1433.
    2008-06-28 15:41:53.25      spid5     Clearing tempdb database.
    2008-06-28 15:41:53.50      spid10     Starting up database 'DEV'.
    2008-06-28 15:41:56.37      spid8     1003 transactions rolled forward in database 'msdb' (4).
    2008-06-28 15:41:56.67      spid8     0 transactions rolled back in database 'msdb' (4).
    2008-06-28 15:41:56.73      spid8     Recovery is checkpointing database 'msdb' (4)
    2008-06-28 15:41:57.06      server     SQL Server is ready for client connections
    2008-06-28 15:41:57.06      server     SQL server listening on TCP, Shared Memory, Named Pipes.
    2008-06-28 15:41:57.29      spid5     Starting up database 'tempdb'.
    2008-06-28 15:42:47.31      spid10     343 transactions rolled forward in database 'DEV' (8).
    2008-06-28 15:42:47.73      spid10     0 transactions rolled back in database 'DEV' (8).
    2008-06-28 15:42:47.76      spid10     Recovery is checkpointing database 'DEV' (8)
    2008-06-28 15:42:48.03      spid3     SQL global counter collection task is created.
    2008-06-28 15:42:48.03      spid3     Recovery complete.
    2008-06-28 15:42:52.62      spid51     Using 'xpsqlbot.dll' version '2000.80.194' to execute extended stored procedure
    2008-06-28 15:43:43.93      spid53     Using 'xpstar.dll' version '2000.80.760' to execute extended stored procedure '
    2008-06-28 15:46:48.87      backup     BACKUP failed to complete the command BACKUP LOG [DEV] TO  DISK = N'G:\tlog bac
    2008-06-28 15:46:48.87      spid53     Internal I/O request 0x46E4B618: Op: Write, pBuffer: 0x10AF0000, Size: 983040,
    2008-06-28 15:46:48.87      spid53     BackupMedium::ReportIoError: write failure on backup device 'G:\tlog backup\280
    2008-06-28 15:54:14.20      backup     BACKUP failed to complete the command BACKUP LOG [DEV] TO  DISK = N'H:\tlog bac
    2008-06-28 15:54:14.20      spid53     BackupDiskFile::CreateMedia: Backup device 'H:\tlog backup\28June.bak' failed t
    2008-06-28 16:21:35.89      backup     Log backed up: Database: DEV, creation date(time): 2006/04/21(11:21:11), first
    2008-06-28 16:25:30.98      spid87     Using 'xplog70.dll' version '2000.80.760' to execute extended stored procedure
    2008-06-29 00:02:54.82      spid101     Tape 'CD19S' (Family ID: 0x2728ff2e, sequence 1) mounted on tape drive 'TAPE1'.
    2008-06-29 04:17:08.71      spid101     Tape 'CD19S' (Family ID: 0x2728ff2e, sequence 1) dismounted from tape drive 'TA
    2008-06-29 04:17:12.50      backup     Database backed up: Database: DEV, creation date(time): 2006/04/21(11:21:11), p
    2008-06-29 04:23:30.04      spid100     Tape 'CD19S' (Family ID: 0x2728ff2e, sequence 1) mounted on tape drive 'TAPE1'.
    2008-06-29 07:35:39.32      spid100     Tape 'CD19S' (Family ID: 0x2728ff2e, sequence 1) dismounted from tape drive 'TA
    2008-06-29 07:35:55.28      backup     BACKUP failed to complete the command BACKUP DATABASE [DEV] TO TAPE1 WITH  NOIN
    2008-06-29 07:36:47.59      spid100     BackupTapeFile::RequestDurable: WriteTapemark failure on backup device '
    .\Tap
    2008-06-30 10:32:23.21      backup     Log backed up: Database: DEV, creation date(time): 2006/04/21(11:21:11), first
    Regards,
    Ashutosh
    Edited by: ashutosh singh on Jun 30, 2008 2:03 PM
    Edited by: ashutosh singh on Jun 30, 2008 2:09 PM

  • Cache entry created a long time after the report runs

    We have a report that results in more than 300,000 records. We get the results for the report quickly, but the cache entry gets created only after some time, say around 30 minutes later or so. Any idea why this delay? Is it that the report caches 25 records at a time (the default number of rows per page) and shows them quickly, while the rest of the records are cached in the background? Is there a way we can optimize this?

    Did you check how much time the entire report takes to execute (even though the first 25 rows come up quickly)? I suspect it is > 30 mins.
    OBIEE is not meant as a data dump tool and there is little that can be done (except better hardware).
