Dealing with a KeyStore larger than available memory

Hi,
I'm manipulating a JCEKS/SunJCE KeyStore that has grown larger than the available memory on the machine. I need to be able to quickly look up, create, and sometimes delete keys. Splitting it into multiple KeyStores and loading/unloading them based on which one a particular request needs isn't ideal.
Can anyone recommend a file-backed KeyStore that doesn't need to load the entire file into memory to be usable? Or perhaps a different way of using the existing framework?
Thanks,
Niall

You might check the different providers (and ask their developers) to see if you can find one; they would have to use BER rather than DER encoding for the ASN.1 structures. With BER, a provider can read the file incrementally and parse through to the target entry on demand, but lookups then walk a "pile" of entries, so your performance pays for it. Any provider offering this would need some caching and other enhancements in its KeyStore implementation so that random searches don't crawl.
Start your tests with the Bouncy Castle provider. I also remember that, back in 2001, certificates generated by the JCSI security provider (later Wedgetail, now part of Quest [Vintela]) were BER encoded. That doesn't necessarily mean they use BER for all constructs now, nor that their KeyStore implementation supports partial loading.
Finally, if none of them matches your needs, you can write a security provider yourself. Read the current KeyStore once (you hopefully have the passwords for all entries), write the entries to a new KeyStore file in BER format, then add logic (probably with caching) to offer transparent partial loading in your KeyStore implementation. Drop me a line if you need more details or commercial consulting services on this.
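To make that last suggestion concrete, here is a minimal sketch of the partial-load idea, assuming an invented record layout (UTF alias followed by length-prefixed entry bytes); no existing provider uses this exact format, and a real solution would wrap something like it in a KeyStoreSpi subclass and handle per-entry protection. One sequential pass builds an alias-to-offset index, so memory use is proportional to the number of aliases rather than the total size of the entries, and entry bytes are read only on demand:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

// Hypothetical partial-load store: only the alias -> offset index lives in memory.
public class IndexedEntryFile {
    private final RandomAccessFile file;
    private final Map<String, Long> index = new HashMap<String, Long>();

    public IndexedEntryFile(String path) throws IOException {
        file = new RandomAccessFile(path, "rw");
        long pos = 0;
        while (pos < file.length()) {       // index every entry; skip over its bytes
            file.seek(pos);
            String alias = file.readUTF();
            int len = file.readInt();
            index.put(alias, pos);
            pos = file.getFilePointer() + len;
        }
    }

    // Random-access lookup of a single entry's bytes.
    public byte[] read(String alias) throws IOException {
        Long pos = index.get(alias);
        if (pos == null) return null;
        file.seek(pos);
        file.readUTF();                     // skip the alias
        byte[] entry = new byte[file.readInt()];
        file.readFully(entry);
        return entry;
    }

    // Append-only create; deletes would need a compaction pass.
    public void write(String alias, byte[] entry) throws IOException {
        long pos = file.length();
        file.seek(pos);
        file.writeUTF(alias);
        file.writeInt(entry.length);
        file.write(entry);
        index.put(alias, pos);
    }
}

Lookups and appends then touch only a few disk pages each, which is exactly what the stock JKS/JCEKS implementations (which parse the whole file in engineLoad) cannot give you.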

Similar Messages

  • PCI 5112 and fetch more than available memory

Would anyone confirm that the PCI-5112 can run the "fetch more than available memory" VI with a large number of records (more than onboard memory)? Mine appears unable to. Thanks.

See the attached screenshot; the error message is different from that in the cited article.
    Attachments:
    niscope_error.jpg (86 KB)

  • 'flash:c2600-js-mz.122-34.bin' is too large for available memory??

Guys, I upgraded the IOS on my 2621. The IOS requirement is 16/48 and my router has 16/48 as well, but every time the router boots there is an error message. What does it mean?
    "%SYS-3-IMAGE_TOO_BIG: 'flash:c2600-js-mz.122-34.bin' is too large for available memory (7016856 bytes).boot of "c2600-js-mz.122-34.bin" using boot helper "flash:c2600-js-mz.122-34.bin" failed
    error returned: File read failed -- Not enough space
    loadprog: error - on file open
    boot: cannot load "c2600-js-mz.122-34.bin"
    ++++ BOOT MSG HERE ++++
    program load complete, entry point: 0x80008000, size: 0xb8d40c
    Self decompressing the image : ################################################################################################################################################################################################################################ [OK]
    Smart Init is enabled
    smart init is sizing iomem
    ID MEMORY_REQ TYPE
    0000A2 0X0010A400 C2600 Dual Fast Ethernet
    0X000F3BB0 public buffer pools
    0X00211000 public particle pools
    TOTAL: 0X0040EFB0
    If any of the above Memory Requirements are
    "UNKNOWN", you may be using an unsupported
    configuration or there is a software problem and
    system operation may be compromised.
    Rounded IOMEM up to: 5Mb.
    Using 10 percent iomem. [5Mb/48Mb]
    %SYS-3-IMAGE_TOO_BIG: 'flash:c2600-js-mz.122-34.bin' is too large for available memory (7016856 bytes).boot of "c2600-js-mz.122-34.bin" using boot helper "flash:c2600-js-mz.122-34.bin" failed
    error returned: File read failed -- Not enough space
    loadprog: error - on file open
    boot: cannot load "c2600-js-mz.122-34.bin"
    System Bootstrap, Version 11.3(2)XA4, RELEASE SOFTWARE (fc1)
    Copyright (c) 1999 by cisco Systems, Inc.
    TAC:Home:SW:IOS:Specials for info
    C2600 platform with 49152 Kbytes of main memory
    program load complete, entry point: 0x80008000, size: 0xb8d40c
    Self decompressing the image : ################################################################################################################################################################################################################################ [OK]
    Smart Init is enabled
    smart init is sizing iomem
    ID MEMORY_REQ TYPE
    0000A2 0X0010A400 C2600 Dual Fast Ethernet
    0X000F3BB0 public buffer pools
    0X00211000 public particle pools
    TOTAL: 0X0040EFB0
    If any of the above Memory Requirements are
    "UNKNOWN", you may be using an unsupported
    configuration or there is a software problem and
    system operation may be compromised.
    Rounded IOMEM up to: 5Mb.
    Using 10 percent iomem. [5Mb/48Mb]
    Restricted Rights Legend
    Use, duplication, or disclosure by the Government is
    subject to restrictions as set forth in subparagraph
    (c) of the Commercial Computer Software - Restricted
    Rights clause at FAR sec. 52.227-19 and subparagraph
    (c) (1) (ii) of the Rights in Technical Data and Computer
    Software clause at DFARS sec. 252.227-7013.
    cisco Systems, Inc.
    170 West Tasman Drive
    San Jose, California 95134-1706
    +++++++++++++++++

    ++++ Show Version ++++
    Router#sh version
    Cisco Internetwork Operating System Software
    IOS (tm) C2600 Software (C2600-JS-M), Version 12.2(34), RELEASE SOFTWARE (fc1)
    Copyright (c) 1986-2006 by cisco Systems, Inc.
    Compiled Wed 01-Mar-06 20:31 by pwade
    Image text-base: 0x8000808C, data-base: 0x814718E4
    ROM: System Bootstrap, Version 11.3(2)XA4, RELEASE SOFTWARE (fc1)
    Router uptime is 7 minutes
    System returned to ROM by reload
    System image file is "flash:c2600-js-mz.122-34.bin"
    cisco 2621 (MPC860) processor (revision 0x102) with 44032K/5120K bytes of memory.
    Processor board ID JAD04290DBX (302381003)
    M860 processor: part number 0, mask 49
    Bridging software.
    X.25 software, Version 3.0.0.
    SuperLAT software (copyright 1990 by Meridian Technology Corp).
    TN3270 Emulation software.
    2 FastEthernet/IEEE 802.3 interface(s)
    2 Serial(sync/async) network interface(s)
    32K bytes of non-volatile configuration memory.
    16384K bytes of processor board System flash (Read/Write)
    Configuration register is 0x2102
    +++++++++++++++++++++++++++++++++++++++++++

  • Numbers 09 is slow dealing with relatively large files

As some of you might know, Excel 2008 is painfully slow when opening relatively large files (about 12000 rows and 28 columns) that have simple 2D x-y charts. FYI, Excel 2004 doesn't have the same problem (and Excel 2003 in the XP world is even better). I purchased iWork '09 hoping that Numbers '09 would help, but unfortunately I have the same problem. iWork '09 takes more than 5 minutes to open the file, something the older versions of Excel could do in seconds. When the file opens up, it is impossible to manipulate it. I have a MacBook with a 2.4 GHz Intel Core 2 Duo and 4 GB of RAM running OS X 10.5.6.
    Has anybody else experienced the same problem? If so, is there a bug in iWork '09, or is it because it isn't meant to deal with large files?
    I appreciate your response.

Numbers '08 was very slow.
    Numbers '09 is not so slow, but it's still not running like old-fashioned spreadsheets.
    I continue to think that it's a side effect of using XML to describe the document.
    We may hope that the developers will discover programming tricks to speed up the beast.
    Once again, I really don't understand why users buy an application before testing it with the FREE 30-day demo available from Apple's web page.
    Yvan KOENIG (from FRANCE dimanche 8 mars 2009 13:13:18)

  • Get-MailboxFolderStatistics for mailbox with items larger than 25MB

    Hi all,
I'm trying to get mailbox folder statistics for all mailboxes that contain items larger than 25 MB with the following command:
    Get-mailbox -Resultsize Unlimited | get-mailboxfolderstatistics -includeanalysis -FolderScope All
     {Size -gt '25MB'}| Select-Object name,itemsinfolder,TopSubject,TopSubjectSize,topsubjectCount,topsubjectPath | export-csv c:\25mb.csv
But I get this error:
    "The input object cannot be bound to any parameters for the command, either because the command does not take pipeline input or the input and its properties do not match any of the parameters that take pipeline input."
    Any ideas? I'm not that good at PowerShell.
    thanks!
    Please mark as helpful if you find my contribution useful or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you. Thank you! Off2work

    Hi all,
I have tried all the commands above, but when I checked, they didn't work that well. What I ended up with is the following script:
$MBX = Get-Mailbox -ResultSize Unlimited
    $MBX | foreach {New-MailboxExportRequest -Mailbox $_.Identity -ContentFilter {Size -gt '25MB'} -FilePath ('\\server01\pst\' + $_.Alias + '(' + $_.DisplayName + ').PST')}
The downside of this script is that it exports all mailboxes to PST files, organized inside a per-user folder structure.
    The upside is that it only exports mail items that are larger than 25 MB, but you will need to create a search folder to find items larger than 25 MB.
    Most of the exported PST files are less than 2 MB in size, so we sorted them by size and deleted everything less than 25 MB.
    We then ended up with all the mailboxes that have items larger than 25 MB.
    So if you have over 5000 mailboxes this won't be recommended. We had over 1200 and the export went fine overnight.
    Amit's recommendation probably works too, but I haven't tested that one.
    Thanks all for your help!
    Please mark as helpful if you find my contribution useful or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you. Thank you! Off2work

  • Peering with AS larger than 65535

    Hi,
I have an oldish 7200-G2 in the lab that I need to set up with a test peering to an AS larger than 65535. It does not accept asdot notation (i.e. it throws an error when I enter the converted AS; it doesn't like the ".").
    Is there any work-around to this? (Aside from IOS upgrade)
    Cheers.

    Hello John,
if your objective is to test an eBGP peering with a 32-bit AS peer and the C7200-G2 has to play that role, you need an IOS upgrade.
    Releases 12.0(32)S11, 12.0(33)S, 12.0(32)SY
    Cisco 7200 Series
    To build an eBGP session between the C7200 and another 32-bit-AS-capable device, there is a special 16-bit AS number for backward compatibility:
    the newly reserved AS_TRANS, AS 23456, for interoperability between 4-byte-ASN-capable and non-capable BGP speakers.
    see
    http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/border-gateway-protocol-bgp/data_sheet_C78-521821.html
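As an aside, the asdot notation the router rejects is just base-65536 shorthand defined in RFC 5396: "X.Y" corresponds to asplain X * 65536 + Y (e.g. "1.10" is 65546). A tiny illustrative converter (my own helper, not anything Cisco ships):

    public class AsdotConverter {
        // asdot "X.Y" -> asplain, e.g. "1.10" -> 65546
        public static long toAsplain(String asdot) {
            String[] parts = asdot.split("\\.");
            return Long.parseLong(parts[0]) * 65536L + Long.parseLong(parts[1]);
        }
        // asplain -> asdot, e.g. 65546 -> "1.10"
        public static String toAsdot(long asplain) {
            if (asplain < 65536) return Long.toString(asplain); // small ASNs have no dot form
            return (asplain >> 16) + "." + (asplain & 0xFFFF);
        }
    }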
    Hope to help
    Giuseppe

  • Loading very large files (larger than available RAM) causes system to bog down

    My System:
    Quad Core (Q6600) @ 2.4Ghz
    8GB RAM
    OS Drive (7200 RPM)
    Audio Source Drive (10k RPM)
    Temp Drive (7200 RPM)
    (all drives are SATA and internal)
    Vista 64 SP1
    My most common scenario for using Audition 3 is to load multitrack recordings of concerts and musicals for mixdown. This generally means 12-24 tracks of 24bit/48kHz audio (about 500MB/hour and recordings are usually 1.5 hours - so a 16 track, 1.5 hour recording would mean about 12GB of data to load into the session).
What I see is that everything loads very quickly at first, but my available physical memory is also being consumed very quickly. Once my physical memory is consumed, everything grinds to a halt as the system now has to swap out to disk to free memory before loading the next chunk of audio. During this time, the entire system is pretty much unusable. Once everything is loaded (sometimes after a _long_ wait) Audition functions normally, but as the physical memory on the system is still all consumed, overall performance of the system, and Audition, noticeably suffers due to the constant swapping between RAM and disk whenever a new operation is needed. Only on exiting Audition does the physical memory get released.
    What is interesting is that Audition is a 32bit program and should not be able to consume this amount of memory, and this is borne out by the Working Set being as small as 20MB at some points, and the commit size under 300MB. I've been told that what I am seeing may be Vista caching those files for the application (I saw this on XP as well for the record), and that the only way to stop it is for the application to load the files with an instruction that they not be cached. From MSDN, I find the OpenFileById function that takes a DWORD dwFlags as a parameter, and in there one of the flags is FILE_FLAG_NO_BUFFERING 0x20000000 which looks like it does what I'm thinking of but brings along a number of restrictions.
    Anyways, what I'm looking for is:
1) Are there any settings I can use to mitigate this issue? I haven't found anything yet.
    2) Are there changes to my system that would help? I could move to a system with 32GB RAM and never see this again (until I go for even bigger projects!) but that's a bit impractical - though it appeals to the geek in me. I have moved my swap file to the temp drive already and disabled SuperFetch.
    3) Any chance this could be addressed in future versions of the product or am I so far out on a limb here that there's no hope for me?
    4) I've tried breaking the recordings into chunks to work on separately, but this has always been a clunky experience that's generally more trouble than it's worth. Are there other workflow methods I should try - what do you do with large recordings like this?
    Thanks!
    Aleks

Aleks, you are pushing this harder than anyone I can think of. These are great questions and I hope somebody who understands what's happening "under the hood" can answer them.
    I have run a rare 90-100 minute session with track counts along those lines, and more, but not necessarily with wave blocks that are continuous for the length of the entire session. The system resources were definitely challenged, but I did not see the behavior that you're experiencing. And that's with far less RAM.
    My unofficial take is that Vista 64 is behaving very differently than XP32, which is what I use. In other words, I am presuming that if you were on XP Home or Pro, this RAM-swapfile issue wouldn't be happening. Audition is compatible with Vista only to the extent that Microsoft was forthcoming with information that the developers needed, and if you've been reading here for long, you might have seen that things could have been better in that area. More importantly, Audition is "certified" for 32-bit OS use only, though you're not the first to try it or want it in a 64-bit environment.

  • How to Deal with a Large Form

Hello all. I have created a very large form using LiveCycle, which utilizes quite a bit of scripting. Unfortunately, as more and more data is added, the form becomes slower and slower, to the point where a user spends more time waiting than actually filling out the form. This is clearly unacceptable, so I'm looking to remedy it. One thought I had was to split the form up into several sections (the format of the form allows this without trouble), but I have a few concerns related to that.

    First, for the people processing the form on the receiving end, processing several forms instead of one is several times as much work and several times as much of a hassle. Second, it is clearly less convenient to distribute several forms than one, so if I were to do this I would be looking for a way to bundle them together (perhaps a PDF Portfolio?). Third, for ease of use I would want some way for a user to simply click on a link (or button or whatever) to move on to the second form after finishing the first.

    If there were some way to combine the separate parts back together after they had been filled out and submit them as one entity, that would seem to solve the first problem, but I have no idea how one might go about doing that. As I mentioned, perhaps a PDF Portfolio is the solution to my second concern, but I've never worked with those and don't know if they would be suitable. Of course, if there were a way to speed up this form directly, that would solve all of these problems in one fell swoop. If anyone is willing to take a look at this form I'd be glad to e-mail it to them (I don't feel comfortable posting it in a public location at the moment).
    Thank you all very much.
    ===========================================
    Update: I just read a different post below about the "remerge" command causing a form to slow down. I used xfa.form.recalculate(1) all over the place in my form. Is it possible this is causing the slowdown?

Send the form to LiveCycle8@gmail.com and I will have a look when I get a chance.
    Paul

  • Best data structure for dealing with very large CSV files

Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler: operations like copy column, eliminate rows, perform some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading one into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
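To make the streaming alternative concrete: if the operations are of the form "apply an equation to every value in a column", you can stream the file and hold only one row at a time. A minimal sketch (the file name and column index are made up, and the naive split ignores quoted fields and header rows):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CsvColumnSum {
        public static void main(String[] args) throws IOException {
            double sum = 0;
            BufferedReader in = new BufferedReader(new FileReader("data.csv"));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] cells = line.split(",");    // naive split; real CSV needs quoting rules
                    sum += Double.parseDouble(cells[2]); // the column of interest
                }
            } finally {
                in.close();
            }
            System.out.println("sum of column 2 = " + sum);
        }
    }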

  • Problem with blobs larger than (about) 2kb

    Hello,
I'm trying to insert binary files into tables in my 9i database with Java (JDBC). Small files of e.g. 500 bytes are no problem, but files of 3 or more kilobytes are not inserted. I get no SQLException; it simply seems not to be written to my table.
    This is the SQL I used to create the table:
    create table tblblob2 (
    name varchar2(30),
    content blob)
    lob (content) store as glob_store (
    tablespace LOBSPACE_1
    storage (initial 100k next 100k pctincrease 0)
    chunk 4
    pctversion 10
    INDEX glob_index (
    tablespace raw_index))
    TABLESPACE LOBSPACE_1
    storage (initial 1m next 1m pctincrease 0);
    insert into tblblob2 (name) values ('Test')
Before this I had already tried a very simple CREATE statement, but it wouldn't work either. That's why I tried this LOB storage clause...
    And then I used java to update this row:
DbConnectionHandling dbConnection = new DbConnectionHandling();
    dbConnection.connect();
    FileInputStream fis = null;
    Connection cn = dbConnection.connection;
    PreparedStatement st = null;
    try {
        File fl = new File( FILE ); // imgFile
        fis = new FileInputStream( fl );
        st = cn.prepareStatement( "update tblblob2 set content = ? where (name = 'Test')" );
        st.setBinaryStream( 1, fis, (int)fl.length() ); // imgFile
        st.executeUpdate();
        System.out.println( fl.length() + " Bytes successfully loaded." );
    } catch( SQLException ex ) {
        System.out.println("0: " + ex.getMessage());
    } catch( IOException ex ) {
        System.out.println("0: " + ex.getMessage());
    } finally {
        try {
            if( null != st ) st.close();
        } catch( Exception ex ) {
            System.out.println("1: " + ex.getMessage());
        }
        try {
            if( null != cn ) cn.close();
        } catch( Exception ex ) {
            System.out.println("2: " + ex.getMessage());
        }
        try {
            if( null != fis ) fis.close();
        } catch( Exception ex ) {
            System.out.println("3: " + ex.getMessage());
        }
    }
With small files it works, but what's wrong with "larger" files?
    I need help fast...
    Thanks, Nick

No ideas? Might it perhaps be a restriction set by the DB admin?
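Not certain this is the cause, but a common workaround with 9i-era Oracle drivers is to write large BLOBs through the LOB locator instead of setBinaryStream on the update: create the row with empty_blob(), re-select it FOR UPDATE in the same transaction, and stream into the locator. A sketch against the table from the post; note that with the old classes12 driver you may need oracle.sql.BLOB.getBinaryOutputStream() in place of the JDBC 3.0 Blob.setBinaryStream used here:

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.sql.Blob;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BlobUpload {
        public static void upload(Connection cn, String file) throws Exception {
            cn.setAutoCommit(false); // the locator is only valid inside one transaction
            PreparedStatement ins = cn.prepareStatement(
                "update tblblob2 set content = empty_blob() where name = 'Test'");
            ins.executeUpdate();
            ins.close();
            PreparedStatement sel = cn.prepareStatement(
                "select content from tblblob2 where name = 'Test' for update");
            ResultSet rs = sel.executeQuery();
            rs.next();
            Blob blob = rs.getBlob(1);
            OutputStream out = blob.setBinaryStream(1L); // write through the locator
            InputStream in = new FileInputStream(file);
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0; ) out.write(buf, 0, n);
            in.close();
            out.close();
            rs.close();
            sel.close();
            cn.commit();
        }
    }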

  • Keychain services API and dealing with Keychains other than the default

    Hello,
So I've been doing some C work with the Keychain Services API, but I can't figure out how to point a function at a keychain other than the default, which is selected by passing NULL.
    For example,
status = SecKeychainAddGenericPassword (
        NULL,           // default keychain
        10,             // length of service name
        "SurfWriter",   // service name
        10,             // length of account name
        "MyUserAcct",   // account name
        passwordLength, // length of password
        password,       // pointer to password data
        NULL            // the item reference
    );
    This function is defined here:
http://developer.apple.com/documentation/Security/Reference/keychainservices/Reference/reference.html#//apple_ref/c/func/SecKeychainAddGenericPassword
    and the data type for the keychain is here:
    http://developer.apple.com/documentation/Security/Reference/keychainservices/Reference/reference.html#//apple_ref/doc/c_ref/SecKeychainRef
    But neither helps me at all for being able to edit a keychain that isn't the default Keychain.
    I appreciate any help, thanks.
    Message was edited by: Smerky

http://developer.apple.com/documentation/Security/Reference/keychainservices/Reference/reference.html#//apple_ref/c/func/SecKeychainOpen

  • How to deal with a large project (perhaps using the Daisy Trail Approach)?

    Hi,
My initial problem was that I needed the screen to pause so that the user could interact with the spry menus and image maps. In other threads, I noticed that you added click boxes to prevent the screen from moving on. However, whenever I clicked on the spry menus or image maps, the screen moved on without giving me the chance to interact further, and it faded away to white. I decided to use the timeline and make each slide last at least 60 seconds, but whenever I tried to do this for the entire project it crashed, repeatedly. The project has become too big: it is 164 slides.
    I read in another thread that you can break the project up using the "Daisy Trail" approach: when one chunk of the project finishes, it executes the next SWF file, and so on until they have all opened in sequence. I am wondering how exactly this approach works, and whether I will still be able to use spry menus, i.e. insert links to the other parts of the project. Or does this approach mean that I must create a web page in order to add the links to these separate SWF files, and will this also make my spry menus and design defunct?
    Any help on this would be greatly appreciated.
    Many thanks.

    Hi,
If I can prevent the screen from fading away whenever it pauses, I think that would be a good start (as you suggested). I tried this before, having read other threads, but am having the same problem: it still fades away. I have done the following:
    In Preferences > Defaults, I have set objects to display for the rest of the slide, and the effect to have no transition.
    In Preferences > Start and End, I have deselected the fade on the first and last slides.
    When I right-clicked on the actual slide and tried to select Transition > No Transition, nothing happened. It wasn't even possible to select this option or the others from that menu; no tick was shown to indicate it was selected.
    Is there anything else I can do, or something I'm doing wrong? Thanks for any help!

  • Dealing with large files, again

    Ok, so I've looked into using BufferedReaders and can't get my head round them; or more specifically, I can't work out how to apply them to my code.
I have inserted a section of my code below and want to change it so that I can read in large files (of over 5 million lines of text). I am reading the data into different arrays and then processing them. Obviously, when reading in such large files, my arrays fill up and the program fails.
    Can anyone suggest how to read the file into a buffer, deal with a set amount of data, process it, empty the arrays, then read in the next lot?
    Any ideas?
void readV2(){
        String line;
        int i = 0, lineNo = 0;
        try {
            // Create input stream
            FileReader fr = new FileReader(inputFile);
            BufferedReader buff = new BufferedReader(fr);
            while ((line = buff.readLine()) != null) {
                if (line.substring(0, 2).equals("V2")) {
                    lineNo = lineNo + 1;
                    IL[i] = Integer.parseInt(line.substring(8, 15).trim());
                    // Other processing here
                    NoOfPairs = NoOfPairs + 1;
                }//end if
                else {
                    break;
                }//end else
            }//end while
            buff.close();
            fr.close();
        }//end try
        catch (IOException e) {
            log.append("IOException error in readESSOV2XY" + e + newline);
            proceed = false;
        }//end catch IOException
        catch (ArrayIndexOutOfBoundsException e) {
            arrayIndexOutOfBoundsError(lineNo);
        }//end catch ArrayIndexOutOfBoundsException
        catch (StringIndexOutOfBoundsException e) {
            stringIndexOutOfBoundsError(e.getMessage(), lineNo);
        }//end catch StringIndexOutOfBoundsException
    }//end readV2

    Many thanks for any help!
    Tim

Yeah, ok, so that seems simple enough. But once I have read part of the file into my program, I need to call another method to deal with the data I have read in and write it out to an output file. How do I get my file reader to "remember" where I am up to in the file I'm reading?
    An obvious way, but possibly not too good technically, would be to set a counter and, when I go back to the file reader, skip that number of lines in the input file. This just doesn't seem too efficient, which is critical when it comes to dealing with such large files (i.e. several million lines long).

    I think you might need to change the way you are thinking about streams. The objective of a stream is to read and process data at the same time.
    I would recommend that you re-think your algorithm: instead of reading the whole file and then doing your processing, think about how you could read a line and process a line, then read the next line, etc...
    By working on just the pieces of data that you have just read, you can process huge files with almost no memory requirements.
    As a rule of thumb, if you ever find yourself creating huge arrays to hold data from a file, chances are pretty good that there is a better way. Sometimes you need to buffer things, but very rarely do you need to buffer such huge pieces.
    - K
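To sketch what K describes using the code from the original post: read a line, process it, write the result, and let the reader itself remember the position, so no counters or big arrays are needed (the output file name is invented; the "V2" filter and substring bounds are from the post):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class StreamProcess {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(new FileReader("input.txt"));
            PrintWriter out = new PrintWriter(new FileWriter("output.txt"));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("V2")) {                   // same filter as readV2()
                        int il = Integer.parseInt(line.substring(8, 15).trim());
                        out.println(il);                           // process and write immediately
                    }
                }
            } finally {
                in.close();
                out.close();
            }
        }
    }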

  • iPad available memory not consistent with available memory shown in iTunes, and available memory reduced after a connected sync... doesn't make sense.

I was looking through my pictures in iPhoto on my iPad 2 and noticed that some of my albums had duplicate pictures, but iPhoto on my iMac didn't. I had also noticed that my 64GB iPad 2 had only a little over 14GB of available memory left. I decided to connect my iPad 2 to my iMac and sync it with iTunes, but this time I unchecked Sync Photos. I was asked to remove photos, which I agreed to. My intent was to re-install my photos in the hope that I would no longer see the duplicate photos and quite possibly free up some memory. In short, none of that worked: the pictures remained on my iPad 2 and my available memory dropped to 956MB. I then tried selecting "Selected Events, Faces, Albums..." in the Photo section of iTunes, but I did not select any albums, events, or faces. The sync ended and I still had all of the pictures on my iPad 2, with 7GB of available memory showing on my iPad 2 but 12GB showing in iTunes on the iMac.
    I then tried to do a sync again, syncing all photos and the sync cancelled. I tried it again and it went through. This sync ended with 14GB of available memory showing on iTunes on my iMac, BUT on my iPad2 it showed 7.1GB.
    None of this makes any sense. I guess I could have left it alone and viewed duplicate photos but if duplicate photos were causing available memory to be reduced, I wanted the memory to come back up for future use.
Also, I noticed that the number of photos being synced has randomly changed on a couple of these occasions, as if I had more or fewer photos to sync, and I haven't. I'm not sure what would cause this either.
Where it says "Other" in the memory bar in iTunes on my iMac, it shows 26.34 GB. Before I did the first sync it was a little over 7GB. What would cause "Other" to consume so much memory, and what is "Other"? Why did it fluctuate so drastically when I've made no changes to my iPad?
    I hope I can get some help to this.
    Thanks!

    Brock_DB wrote:
Where it says "Other" in the memory bar in iTunes on my iMac, it shows 26.34 GB. Before I did the first sync it was a little over 7GB. What would cause "Other" to consume so much memory, and what is "Other"? Why did it fluctuate so drastically when I've made no changes to my iPad?
    "Other" can be the cause of this storage capacity snafu that you have going on with your iPad. "Other" is pretty much what it sounds like it should be with regard to what is stored on your iPad. It is "stuff" associated with apps but not the apps themselves or any other media. It consists of Safari Bookmarks and history, Notes, text messages, email messages, contacts, photos associated with contacts and items of that nature. It should not be anywhere near 26 GB and the fact that you have much is a little bit of a problem.
    Sometimes the only way to rid your device of that bloated "other" is by restoring as a new device without using the backup. There could be something corrupt on your iPad and if you restore from a corrupted backup, you will just end up back where you started. Also, sometimes this issue is caused by syncing photos and while I don't know this for a fact, it might have something to do with the metadata associated with the photos that gets corrupted and then that might cause this "other" storage problem.
    Take a look at some of these and see if something in one of them helps.
https://www.google.com/search?q=What%20is%20%22Other%22%20on%20my%20iPad%20and%20how%20do%20I%20get%20rid%20of%20it?#q=What%20is%20%22Other%22%20on%20my%20iPad%20and%20how%20do%20I%20get%20rid%20of%20it%3F

  • Compressor setting so it won't output a video larger than source?

    Hello.
I am sending a whole library of videos (mostly Apple ProRes, but with varying video dimensions depending on the project) through Compressor 4, and I'm having an issue where, for some of the videos (with smaller dimensions), the settings I'm choosing result in the transcoded videos being larger (in terms of dimensions) than the source (i.e. if the source is 720p, the resulting video is 1080p).
    I am using the following two settings:
    - HD for Apple Devices (5 Mbps)
    - HD 1080p for Video Sharing
Example 1 - I fed a 640 x 360 (Apple ProRes) file into Compressor 4 and selected the "HD for Apple Devices (5 Mbps)" preset. The resulting transcoded video was larger than 640 x 360 (I can't remember the actual size anymore, sorry). There is no option available, that I can see, to modify the preset NOT to create a file larger than the source.
    Example 2 - I fed a 720p (Apple Animation) file into Compressor 4 and selected the "HD 1080p for Video Sharing" preset. The resulting transcoded video was 1080p. I thought that because the preset is marked "Up to 1920 x 1080" it wouldn't go beyond 1080p (i.e. it would scale down anything larger than 1080p), but I also thought it would maintain the source dimensions and not enlarge a 720p source to 1080p.
    Maybe this is how Compressor is supposed to work (i.e. making resulting videos larger than the source), but I'm hoping there's a way to modify the presets not to do this (i.e. an option to not create a video with dimensions larger than the source). Is this possible?
    Thanks,
    Kristin.

    Apple devices have a standard frame size they encode to, so yes, this is how it works. 
