Flashback files suddenly increased

Hi All,
I have two databases, ora1 and ora2, and all parameters are the same in both. Normally the total flashback size is about 40 GB in each. But suddenly the flashback files of the ora1 database have grown to 139 GB, while ora2's flashback files are still at 40 GB. The flashback retention target is 1 day in both databases, yet ora1 has flashback logs 9 to 10 days old. We also execute a daily RMAN backup on both databases. Please let me know why the flashback area has grown so large, and why it holds flashback logs over a week old when the flashback retention target is 1 day.
Oracle version: 10g
OS: Linux

Hi Ashif,
Please find the output:
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/10.2.0/dbs/snapcf_ora1.f'; # default
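For what it's worth, a common cause of flashback logs outliving db_flashback_retention_target is a guaranteed restore point, which forces Oracle to keep every flashback log created since the restore point. A hedged sketch of the checks to run on ora1, using standard 10gR2 views (nothing below comes from the output above):

-- Guaranteed restore points pin flashback logs indefinitely
SELECT name, guarantee_flashback_database, time FROM v$restore_point;

-- How far back flashback currently reaches and how much space it uses
SELECT oldest_flashback_time, retention_target, flashback_size
FROM v$flashback_database_log;

-- What is consuming the flash recovery area
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM v$flash_recovery_area_usage;

If a guaranteed restore point shows up on ora1, dropping it (DROP RESTORE POINT <name>;) should let the old flashback logs age out.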

Similar Messages

• Index file increase with no corresponding increase in block numbers or PAG file size

    Hi All,
    Just wondering if anyone else has experienced this issue and/or can help explain why it is happening....
    I have a BSO cube fronted by a Hyperion Planning app, in version 11.1.2.1.000
The cube is in its infancy, but already contains 24M blocks, with a PAG file size of 12GB.  We expect this to grow fairly rapidly over the next 12 months or so.
After performing a simple Agg (aggregating the sparse dimensions), the Index file sits at 1.6GB.
    When I then perform a dense restructure, the index file reduces to 0.6GB.  The PAG file remains around 12GB (a minor reduction of 0.4GB occurs).  The number of blocks remains exactly the same.
    If I then run the Agg script again, the number of blocks again remains exactly the same, the PAG file increases by about 0.4GB, but the index file size leaps back to 1.6GB.
    If I then immediately re-run the Agg script, the # blocks still remains the same, the PAG file increases marginally (less than 0.1GB) and the Index remains exactly the same at 1.6GB.
    Subsequent passes of the Agg script have the same effect - a slight increase in the PAG file only.
    Performing another dense restructure reverts the Index file to 0.6GB (exactly the same number of bytes as before).
    I have tried running the Aggs using parallel calcs, and also as in series (ie single thread) and get exactly the same results.
    I figured there must be some kind of fragmentation happening on the Index, but can't think of a way to prove it.  At all stages of the above test, the Average Clustering Ratio remains at 1.00, but I believe this just relates to the data, rather than the Index.
    After a bit of research, it seems older versions of Essbase used to suffer from this Index 'leakage', but that it was fixed way before 11.1.2.1. 
    I also found the following thread which indicates that the Index tags may be duplicated during a calc to allow a read of the data during the calc;
    http://www.network54.com/Forum/58296/thread/1038502076/1038565646/index+file+size+grows+with+same+data+-
    However, even if all the Index tags are duplicated, I would expect the maximum growth of the Index file to be 100%, right?  But I am getting more than 160% growth (1.6GB / 0.6GB).
And what I haven't mentioned is that I am only aggregating a subset of the database, as my Agg script fixes on only certain members of my non-aggregating sparse dimensions (i.e. only 1 Scenario & Version).
    The Index file growth in itself is not a problem.  But the knock-on effect is that calc times increase - if I run back-to-back Aggs as above, the 2nd Agg calc takes 20% longer than the 1st.  And with the expected growth of the model, this will likely get much worse.
    Anyone have any explanation as to what is occurring, and how to prevent it...?
    Happy to add any other details that might help with troubleshooting, but thought I'd see if I get any bites first.
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.

    alan.d wrote:
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.
I haven't tried Direct I/O for quite a while, but I never got it to work properly. Not exactly the same issue that you have, but it would spawn tons of .pag files in the past. You might try duplicating your cube, changing it to buffered I/O, and running the same processes to see if it does the same thing.
    Sabrina

• I am having issues suddenly exporting files. It reads "error exporting 25 files". As I attempt to choose another destination folder, the folders show a black square where the folder icon previously was. I am in my busy season and this has created a huge delay!

I am having issues suddenly exporting files. It reads "error exporting 25 files". As I attempt to choose another destination folder, the folders show a black square where the folder icon previously was. I am in my busy season and this has created a huge delay!!! HELP!!!!

Oh, and if you use a Creative Cloud version of Lightroom, it could also be that the logon to the cloud is messed up. Logging out and back in from Preferences in the Creative Cloud app will fix that. Due to the release of Lightroom CC it appears that Adobe's servers have been overwhelmed a bit, and many people have strange problems that are solved by logging out and back in.

  • Why size of archive log file increasing in merge clause

My database is running in archivelog mode.
Someone is running an Oracle MERGE statement; it is still running.
He will issue a commit after the operation.
During this period the redo log files keep growing.
My question is: why is the archive log volume increasing along with the redo log files?
I thought archive logs should only be generated after the commit (maybe that is wrong).
Please suggest.

    855516 wrote:
My database is running in archivelog mode.
Someone is running an Oracle MERGE statement; it is still running.
He will issue a commit after the operation.
During this period the redo log files keep growing.
My question is: why is the archive log volume increasing along with the redo log files?
I thought archive logs should only be generated after the commit (maybe that is wrong). No, this is not correct; archive logs are not generated only after a commit. A MERGE statement causes an insert (if the data is not already present) or an update (if it is). Obviously these operations will generate a lot of redo if the amount of data being processed is high.
If you feel that this operation is causing excessive redo, then a root cause analysis should be done.
For that, use LogMiner (an excellent tool that provides a segment-level breakdown of redo size). V$LOGMNR_CONTENTS has columns for the redo block and redo byte address associated with each redo change.
There are some guidelines for reducing redo (which may vary by environment); see the sketch after this list:
1) Check whether unwanted indexes exist on the tables referenced in the MERGE. If yes, removing them could bring down the redo.
2) Use global temporary tables to reduce redo (if the data only needs to be kept temporarily in a session).
3) Use NOLOGGING if possible (but consider its implications).
    Hope this helps
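A hedged sketch of the LogMiner approach from point above (the archived log path is a placeholder, not from this thread; the API is the standard DBMS_LOGMNR package):

-- Add an archived log generated during the MERGE, then start LogMiner
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/1_123_456.arc', options => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Segment-level breakdown of the redo
SELECT seg_owner, seg_name, COUNT(*) AS redo_records
FROM v$logmnr_contents
GROUP BY seg_owner, seg_name
ORDER BY redo_records DESC;

EXECUTE DBMS_LOGMNR.END_LOGMNR;

And for point 2, a global temporary table generates far less redo, since redo is written only for its undo (the table and column names below are hypothetical):

CREATE GLOBAL TEMPORARY TABLE stage_tmp (id NUMBER, payload VARCHAR2(100))
ON COMMIT DELETE ROWS;  -- session-scoped staging, rows vanish at commit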

  • Oracle's management of Flashback files

    Hi there
I have db_recovery_file_dest_size = 150GB, which I now need to reduce by nearly half due to SAN constraints.
    In my flash recovery area 100GB is currently being used - as follows:
55 GB - flashback files
40 GB - backup sets
5 GB - archive logs
    My flashback retention policy is 2 days.
    However there are flashback files going back nearly 2 months!
    So the logical place to gain disk space here is to reduce the number of flashback files being retained beyond the 2 day period I need.
    Now I understand that I cannot simply delete the files as they are oracle managed.
    My question is this.
If I try to reduce db_recovery_file_dest_size to, say, 80GB, how does Oracle manage this? My new limit will be less than the space currently in use. Will Oracle simply delete the unnecessary flashback files? And will it start to curtail the number of files being retained?
    Thanks
    Niall

Even after files in the fast recovery area are obsolete, they are typically not deleted until space is needed for new files (http://docs.oracle.com/cd/E11882_01/server.112/e10897/backrest.htm#CHDEFIHF). The restore point issue is mentioned in MOS "Space issue in Flash Recovery Area (FRA)" [ID 829755.1], but the answer to your question seems curiously absent in the docs, not that I looked particularly hard.
    I've never shrunk an FRA, so I'm also curious what will happen. I suspect you might want to manually delete some backups with the associated crosscheck and delete obsolete (or backup to elsewhere to accomplish the same thing), so you don't get into an extreme space pressure situation that kills archiving, then shrink the FRA and do some backup operation to set off cleanup operations.
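A hedged sketch of that sequence (the 80G value is taken from the question above; everything else is standard RMAN/SQL*Plus):

RMAN> CROSSCHECK BACKUP;
RMAN> DELETE OBSOLETE;   -- frees FRA space per the configured retention policy

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 80G SCOPE=BOTH;

With normal (non-guaranteed) flashback logs, Oracle should then start deleting the oldest ones itself to honour the new limit.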

• As I was using my iPod touch (2nd gen), there was a sudden and drastic increase in temperature on the rear (metal) portion, and since then it is not working. Has anyone come across a similar problem? Any suggestions?

As I was using my iPod touch (2nd gen), there was a sudden and drastic increase in temperature on the rear (metal) portion, and since then it is not working. Has anyone come across a similar problem? Any suggestions?

    Sounds to me like the battery has failed.
    You can take it to an Apple Store.  They MAY be able to do something..  Or, you can find parts and "How-To" repair guides on the internet.  eBay and Google are your friends...

  • Files increase sixfold or more when importing as "original size"

Simple question really - why does a 4GB .MTS file from my camera end up as a 35GB .MOV file in my Events folder when choosing the "keep original size" option on import? And why does the same file increase from 4GB to 9GB when I choose the lighter "large" size, which converts the movie from HD to SD?
    Is there a way around this problem?
Background info - I'm using a Panasonic HDC-SD10 camera and either importing straight into iMovie '09 Events, or importing as an archive and then importing later. As I've only got an 8GB SD card for my camera, I have to get the files off it and onto my MacBook on a daily basis while I'm on holiday (as I am right now).
Unfortunately the large size of the eventual files in iMovie means that I have run out of disk space on my MacBook, and I have had to resort to just copying the files onto my hard disk rather than importing them into iMovie. So I'm wondering if there's an intermediate stage I could introduce that would stop the files growing in size so much?

    This isn't a problem. Video on your camera is in a highly compressed format and as such can't be edited without decompressing it first. Your import routine is decompressing your files and copying them to your hard drive so that iMovie can edit it.

• Suddenly PDF files report: "%1 er ikke et gyldigt win 32 program" ("%1 is not a valid Win32 program"). What shall I do?

Suddenly PDF files report: "%1 er ikke et gyldigt win 32 program" ("%1 is not a valid Win32 program"). What shall I do?

Hi ib 1991,
    Are you receiving that error when you try to create PDF files? Or when you open or try to install Acrobat? If you're getting that error when you try to open Acrobat, it could be that one of the application files is damaged, and you'll need to reinstall Acrobat.
    Best,
    Sara

  • Any news on the problem of large CHC files increasing size of roaming profiles?

Any news on the problem of large CHC files increasing the size of roaming profiles? Thank you.

    I talked to the product manager.  Here's what he said.
This was addressed in our last release.  CHC 4.0 changes the location of the locally stored documents so that they are no longer included when users synchronize their roaming profiles.  The CHC will even move previous files to the new location when users upgrade.

  • Trying to find mysterious 23GB file increase

While I was out at work today, my Library seems to have increased from 45GB to 68GB, meaning that suddenly I'm bumping up against the limit of my available disk space (though I thought I had way more than 20GB left to play with). I can't even Time Machine my way out of this, as I don't have enough free disk space to restore the older, smaller Library.
    So: how do I search to find out what has ballooned so grotesquely in my Library? I spent quite a while opening individual files in it (800KB, 300KB etc) before realising that I could keep doing that all night without necessarily hitting the answer.
    I set up a Finder Find window and told it to look for files modified within the last day, making sure that System files were included. All it found were a couple of innocuous downloaded documents from this morning before I went to work, and I've verified that they're not problematic. It reckons no System files have changed, though the size of the Library suggests that that's not accurate.
    (I ran Clamxav last night, which told me that my machine contains five viruses -- BUT they're all safely quarantined in old unopened emails, and it recommended leaving them well alone.)
    How do I find out which file(s) grew or multiplied in my absence, if the Finder search that I described failed to do so? I'm really at a loss here.
    Thanks in advance for any advice that anyone can give me.

This was extremely useful and I have bookmarked it, although in the event I didn't need its more in-depth advice for this issue. I didn't previously know about Show View Options > Calculate All Sizes, and as soon as that was switched on the problem was revealed: a glitch in Mac Mail that was creating bogus "recovered messages" at 51.4MB a time. I've deleted them and my disk space is restored. Thanks for the tip!

• File not deleted from original location, and file size increases when copied

    hi !
I have 2 EXEs, both 1.15MB, which I want to move to another folder. The files are copied to the folder, but the size of each EXE increases to around 350MB. What can be the problem?
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(filename));
int o = bis.read();
do { bos.write(o); } while (o != -1);
boolean del = this.fpara.delete();
System.out.println(del);
Also, the file is not deleted from the original location even though I have used the delete function. The last line produces null output.

And do follow the coding conventions:
http://java.sun.com/docs/codeconv/html/CodeConvTOC.doc.html
If your original source is all jammed up like the snippet you posted, I'm not surprised that you can't spot your mistakes.
    db
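For the record, a minimal corrected sketch of the snippet above (assuming bis is a BufferedInputStream over the source file and fpara is the source File, as the original implies). The original reads one byte once and then writes that same byte forever, which is why the copy balloons to 350MB, and it never closes either stream, which is why delete() fails:

import java.io.*;

public class FileMover {
    // Copies src to dst one byte at a time, closes both streams, then deletes src.
    static boolean move(File src, File dst) throws IOException {
        BufferedInputStream bis = new BufferedInputStream(new FileInputStream(src));
        BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(dst));
        int o;
        while ((o = bis.read()) != -1) { // re-read on every iteration; stop at end of file
            bos.write(o);
        }
        bos.close(); // an unclosed stream keeps the file locked, so delete() returns false
        bis.close();
        return src.delete(); // true once nothing holds the file open
    }
}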

  • All of a sudden (video) files added to my iTunes library after a certain date aren't playable on other devices.

    hello!
    All of a sudden I have a weird issue with playing video-files added to my iTunes library after a specific date.
    This is the situation so far:
I have encoded video for iTunes with the same settings for the last 3 years. I know these vids play on all my devices: Apple TV & iPad. All of a sudden, videos added to my library after the 8th of September aren't playable on my Apple TV and iPad. I can play them directly in iTunes and by accessing them directly via the Finder and QuickTime.
    The following errors occur:
    When I pull the video from iTunes on my AppleTV.. the video simply doesn't play.
    When I pull the video from iTunes on my iPad I get a message that the file can't be found.
When I push the video from iTunes to the Apple TV using AirPlay, I get a message that an error occurred while connecting to the AirPlay device "Apple TV": the necessary file could not be found. In addition, when the Apple TV is switched off it doesn't turn on when doing this, suggesting that something is missing on my Mac itself.
None of the above errors pop up when I choose a video that was put in my library before the 9th of September. Then everything works as expected.
    This is what I tried:
    Turning off and on home sharing on all devices including the mac.
Logging in with a different Apple ID, though also using the same ID as before.
I created a new iTunes library and populated it with one video that was added after the 8th of September and one that was added before the 9th of September. Surprisingly, the older one played and the newer one didn't, even though both used the same encoding settings. The new library was local on my Mac, whereas my original library is on a NAS.
    I encode my videos with Quicktime 7 Pro using the following settings:
    H.264 (current frame rate, key-frames:automatic, re-order frames, automatic bitsize)
    Medium Quality
    Multi-pass
    current resolution (which is always 1280 X 720)
    AAC sound
    44.100 kHz
    128kbps
I'm using Mountain Lion 10.8.2 and iTunes 10.7. Mountain Lion was updated a few days after the 19th of September, and I think iTunes as well; both versions were released after this issue seemed to pop up.
I also have two video files, encoded on the same day with the same settings, that I dragged into the iTunes library on the 9th of September. One works, the other doesn't.
    So I wonder if there is somebody out there that can tell me what might have happened here and how I could fix this issue.
    Thank you in advance.

    Hi Seraphiel07.
    I have a very similar issue. Hardware involved:
    Apple MacMini (Mid 2011) - OSX 10.8.2. iTunes 10.7. Home Sharing enabled.
    Apple TV (3rd Generation). Home Sharing (access) enabled.
    Apple MacBook Pro (15" Mid-2009). OSX 10.8.2. iTunes 10.7. Home Sharing enabled.
    Apple iPad 2 (Wifi only). iOS 6
    Apple iPhone 4S. iOS 6
    All connected through:
Apple Time Capsule 802.11n (4th Gen)
    I have an iTunes library on the MacBook Pro, shared over "Home Sharing" and another (duplicate content, different library name) iTunes library on my MacMini, which is also accessed through the Apple TV.
    All software is current (as of Sep 26th).
    All movies are iTunes purchases. 80 in total.
    All files are purchased (and home shared) under one account.
My last two iTunes purchases, both HD movies (09/17/2012 and 09/05/2012), refuse to appear on any device accessing either of the two libraries over Home Sharing. It should be noted that every other movie (the other 78) shows up and plays correctly. I therefore know it is not an issue with connectivity; otherwise I would see zero movies available.
    I have:
1. Removed all content from each library and re-added it.
2. Upgraded the OS on the MacMini to 10.8.2.
3. Tested viewing the two missing movies on other Apple devices, with no luck.
4. Turned Home Sharing off and on within iTunes and on each device.
    It would appear to me that the issue is with the files themselves, rather than the devices as different devices and different versions have been tested.
    All comments welcome.

• Help! SQL Server database log file increasing enormously

I have 5 SSIS jobs running in SQL Server Agent, and some of them pull transactional data into our database at 4-hour intervals. The problem is that the log file of our database is growing rapidly: it eats up 160GB of disk space in a day. Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE; even so, the log consumes more than 160GB a day. Because the disk fills up, the scheduled jobs often fail. Temporarily I am using the DETACH approach to clean up the log.
FYI: the SSIS packages in the jobs use transactions on some tasks, e.g. a Sequence Container.
I want a permanent solution that keeps the log file within a particular size limit, and as I said earlier, I don't need the log data for future point-in-time recovery, so there is no need to take log backups at all.
And one more problem: the transactional table in our database has 10 million records in it and some master tables have over 1000 records, but our MDF file is now about 50 GB. I don't believe 10 million records should account for 50GB of disk consumption. What's the problem here?
Help me with these issues. Thanks in advance.

And one more problem: the transactional table in our database has 10 million records in it and some master tables have over 1000 records, but our MDF file is now about 50 GB. I don't believe 10 million records should account for 50GB of disk consumption. What's the problem here?
Help me with these issues.
For the SSIS part of the question it would be better to ask in the SSIS forum, although nothing is going to change about the logging behavior. You can add some space to the log file, and you should also batch your transactions, as already suggested.
Regarding the memory question about SQL Server: once it takes memory, it will not release it unless the Windows OS faces memory pressure and SQLOS asks SQL Server to trim its consumption. So if you have set max server memory to somewhere near 50GB, SQL Server will eventually utilize that much memory; what you are seeing is totally normal. Releasing and re-acquiring memory is a costly task for SQL Server, so it avoids it by caching as much as possible, which also avoids costly physical reads.
When the log file is getting full, what does the query below return?
select log_reuse_wait_desc from sys.databases where name='db_name'
Can you manually introduce a CHECKPOINT in the ETL query? Try this; it might help (see the sketch below).
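A hedged sketch of those checks ('YourDb' and the logical log file name are placeholders, not from this thread):

-- Why is the log not being reused right now?
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'YourDb';

-- In SIMPLE recovery the log truncates at checkpoints, so force one, then shrink
USE YourDb;
CHECKPOINT;
DBCC SHRINKFILE (YourDb_log, 2048);  -- logical log file name, target size in MB

If log_reuse_wait_desc reports ACTIVE_TRANSACTION, the SSIS Sequence Container transactions are holding the log active; batching them, as suggested above, is the real fix.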

  • Pdf file increase on print command

When I go to print a PDF file, I have noticed that a 7-megabyte file will increase to 100 megabytes, which overwhelms my printer's buffer, and the print time goes from less than a minute to fifteen minutes.  Why? Is there a fix?  Thanks

    Hi Leonard,
Thank you very much for your prompt reply. The answer has not solved the problem yet. My users don't want to merge the files manually; they want it to happen automatically.
The application that prints the documents is a simple MS Access database, available on each user's desktop. As soon as the user opens MS Access, it is programmed in such a way that the Access database runs as a program. It uses a DSN to connect to the Sybase database and gets the details of the documents that are pending for print.
At this point the MS Access database starts an iterative process: it fetches the data from Sybase, calls the correct report template in MS Access, populates it, and prints the report using the default printer of the machine on which the database resides, printing the documents one by one. By setting the default printer to Adobe PDF converter, we were successful in creating PDF documents rather than printing to paper, but the disadvantage was that each document got printed to its own file.
The whole process is coded as a VBA module in MS Access. My question is: by using an API from the Adobe SDK, will I be able to change the VBA code of the MS Access database so that all files requested for print come out as a single PDF file?

  • Sudden MTS File Issues

    I have been editing MTS files (AVCHD) from a Panasonic GH2 (Some hacked some stock firmware) for over 2 years with little to no issues. I am using the CS5.5 Master Collection, Premiere Pro in this case, on a 3 year old Sager laptop (64-bit, i-7 720qm, 8GB RAM, 2 internal HDDs, nVidia GeForce GTX 285M). I am not sure if I am having hardware issues or a corrupt file but something is not right.
I am now in the middle of a project and all of a sudden my computer is VERY sluggish: it takes minutes before things update (such as scrolling in my timeline to a different frame) and sometimes it just freezes up altogether. There is also an issue (which could be related) with the second set of spanned clips from this project, detailed below.
The timeline is DV 24P widescreen since it is going to DVD, and it also uses footage shot on a DVX100b for cutaways. The GH2 footage was all shot with stock firmware 1.1 at 24H, all on one day; the first recording was just shy of an hour, then about 20 minutes, then another of about an hour. This produced 3 MTS files for the first section, 1 file for the short section, and 4 files for the last long section.  I copied the whole card to my D drive without altering the folder structure, like a good boy. I brought the clips into the project window and had no issues editing the first hour-long section. I will say that I did not use the Media Browser, which I have recently learned is the preferred way to import these clips. For the last 2 years I have dragged and dropped from the Windows Explorer window and have not had any issues until now. I have dealt with spanned clips, a music video that required over ten video tracks, and much higher bit rates, so I am not sure what the trigger is for these issues. What is odd about the second longer section is that when the first clip (00004) is dragged into the timeline, it brings only that clip, not the others that should be attached. The first section worked but not this one. And all of the files that follow this clip are numbered differently but look identical to 00004 when dragged into the timeline. They all appear normal in Explorer (smaller clips that should be spanned together), so the footage is intact, just not in PP.
I experimented a little and copied clip 00005 into its own folder, dragged it in, and it then displayed properly (not a copy of 00004). The sluggishness is still there, so I am not ready to just grin and bear it with this workaround. I also experimented by creating a brand new project, copying the original footage from my archive drive (never imported into a timeline, so no XMP files) into a new folder, and using the Media Browser to bring in the footage. It didn't show clips 00005 and up, since the Media Browser is only supposed to show the first of the spanned clips, but the result was still only 24 minutes long and obviously not including the rest of the clips. The Media Browser recognizes that these clips are supposed to be together, but PP is seeing them all as copies of 00005.
I will greatly appreciate any opinions, but bear in mind that I have used this same workflow on this same system for over 2 years. The only variable is that these spanned clips may be slightly longer than those I have used in the past, but not by much. I really don't know if a piece of hardware is malfunctioning, if the clip is somehow corrupt, or if I am missing something obvious.
What could cause this sudden change in performance? Would editing with a proxy help? Should I convert this troublesome clip to a different format? Am I asking too much of my system? Are there ways to check whether my computer is functioning properly?
Thanks in advance for any insight, and for reading this rather lengthy explanation.
    Cole

I wanted to give a quick update on this MTS file issue. I was able to get my system back to normal by isolating the trouble files in their own folders outside of the "Private" folder (the actual source files, not the ones in the Premiere project). I copied the first clip of the group into a folder by itself and the last 3 clips into another folder. I deleted the originals from Premiere and imported the isolated files (dragging and dropping from Windows, not using the Media Browser), and they worked fine. Obviously they were now 4 independent files that I had to place side by side in the timeline, but they lined up, there is no more lag in the system, and there are no more duplicate files.
    Far from conventional but it has me editing again and I didn't have to buy a new computer to be back in business. I can only guess that something glitched when it was conforming and it wouldn't recognize the spanned clips.
    A special thanks to Eric at ADK for offering some suggestions that I fortunately didn't have to try.
