Unnecessary Space Consumption

OK, so when you plug in your iPod, it comes up under Devices, and you click on it to view it, the bar that shows how much your iPod holds and which categories are taking up what amount of space shows that I have 10 GB being taken up under the category of "Other", and I have no idea what this "Other" is. I've checked contacts, TV shows, podcasts, anything and everything that isn't music, videos, and photos, and I can't find anything on the iPod either. Does anyone know what's taking up this space?

I figured it out, guys.

Similar Messages

  • Space Consumption by Schemas

    Hi All,
    How do I determine space consumption of each schema present in my DB's ? Is there any tool or any Query that can perform this task ?
    Thanks

    907490 wrote:
    Hi All,
    How do I determine space consumption of each schema present in my DB's ? Is there any tool or any Query that can perform this task ?
    Thanks
    Query DBA_EXTENTS.
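    A quick sketch of that kind of query, summing segment sizes per owner (DBA_SEGMENTS rolls up the extents listed in DBA_EXTENTS; requires DBA privileges):

    ```sql
    -- Space consumed per schema, largest first
    SELECT owner,
           ROUND(SUM(bytes) / 1024 / 1024, 2) AS size_mb
    FROM   dba_segments
    GROUP  BY owner
    ORDER  BY size_mb DESC;
    ```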

  • ARCHIVELOG space consumption

    Hi!
    I am new to the DBA world and would like to ask how much space is consumed when an Oracle XE database is in ARCHIVELOG mode. That is, if XE is limited to 5 GB of data, how much of that space is meant for the archive log segment? Furthermore, what exactly (besides data) is consuming those 5 GB of available space (database objects, data, ...)?
    Thank you in advance,
    Marinero

    Thank you C.
    I wasn't sure whether the archive log files were part of that 5 GB of DATA space. I was thinking that if you have, let's say, 2 of 5 GB of space consumed, and the last backup was made when there was only 1 GB of data in the DB, then the actual consumption would be 3 GB (2 GB for data and 1 GB for the archive log). Thank you once again for the clarification.
    Regards,
    Marinero
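    To see what the archived logs are actually consuming, one sketch (assuming the logs go to the flash recovery area, as XE does by default) is:

    ```sql
    -- Breakdown of flash recovery area usage by file type
    SELECT file_type,
           percent_space_used,
           percent_space_reclaimable
    FROM   v$flash_recovery_area_usage;
    ```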

  • Sql server log shipping space consumption

    I have implemented SQL Server log shipping from the HQ to the DR server.
    The secondary databases are in standby mode.
    The issue is that after configuring it, my DR server is running out of space very rapidly.
    I have checked the log shipping folders where the .trn files reside, and they are of a very decent size, and the retention is configured for twenty-four hours.
    I checked the secondary databases, and their size is exactly the same as that of the corresponding primary databases.
    Could you guys please help me out in identifying the reason behind this odd space increase?
    I would be grateful if you could point me to some online resources that explain this matter in depth.

    The retention is happening. I have checked the folders, and they do not have files older than 24 hours.
    I don't know, maybe it's because there is no full backup job running on the secondary (DR) server, and that is why the LDF file is getting bigger and bigger. But as far as my understanding goes, we cannot take a full database backup while the database is in standby mode.
    The TLog files of log-shipped DBs on the secondary will be the same size as those on the primary. The only way to shrink the TLog files on the secondary (I am not advising you to do this) is to shrink them on the primary, force-start the TLog backup job, then the copy job, then the restore job on the secondary, which will sync the size of the TLog file on the secondary.
    If you have allocated the same-sized disk on both primary and secondary for TLog files, then check if Volume Shadow Copy is consuming the space on the secondary.
    Satish Kartan www.sqlfood.com
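    As a first diagnostic on the DR instance, a T-SQL sketch like this shows what each database is waiting on before its log space can be reused:

    ```sql
    -- Why log space is not being reclaimed, per database
    SELECT name,
           state_desc,
           log_reuse_wait_desc
    FROM   sys.databases;
    ```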

  • Cluster and space consumption

    I'm confused about something in regard to clusters. I have about 1 GB worth of data when it is not in a cluster, just in a normal table. If I create a cluster and insert the data into this cluster, it consumes more space, and that was fully expected. When I tried the first few times, it took 13 GB of space. The tablespace was 30 GB large and contained a few other tables. Then I cleaned out the tablespace so only 2% of the space was used, added another 30 GB to the tablespace, and ran the test once more. Now it is consuming more than 40 GB and is still not complete. So my question is: does the size the cluster consumes depend on the size and available space of the tablespace it is in? Is there a way to limit the size it is allowed to use, other than putting it in a separate tablespace?

    Hi Marius,
    First, by "cluster" you mean sorted cluster tables, right?
    http://www.dba-oracle.com/t_sorted_hash_clusters.htm
    "So my question is does the size the cluster consume depend on the size and available space of the tablespace it is in?"
    These are "hash" clusters, and you govern the range of hash cluster keys, and hence, the range where Oracle will store the rows.
    http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10739/hash.htm
    Oracle Database uses a hash function to generate a distribution of numeric values, called hash values, that are based on specific cluster key values. The key of a hash cluster, like the key of an index cluster, can be a single column or composite key (multiple column key). To find or store a row in a hash cluster, the database applies the hash function to the cluster key value of the row. The resulting hash value corresponds to a data block in the cluster, which the database then reads or writes on behalf of the issued statement.
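    In other words, the space a hash cluster pre-allocates is driven by the SIZE and HASHKEYS parameters you declare at creation time, not by how much free space the tablespace happens to have. A minimal sketch (table, column, and tablespace names are illustrative):

    ```sql
    -- Pre-allocates on the order of SIZE * HASHKEYS bytes up front
    CREATE CLUSTER orders_cluster (order_id NUMBER)
      SIZE 512            -- bytes reserved per cluster key value
      HASHKEYS 1000000    -- number of distinct hash values
      TABLESPACE users;
    ```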

  • ITunes Space Consumption

    I have about 3,700 songs on ITunes:
    Does it make sense that it would consume close to 16GB's?
    If I have downloaded album graphics (which I assume also consume space), how can I turn that off and delete the existing ones to free up space?

    "Does it make sense that it would consume close to 16GB's?"
    Yes. It makes perfect sense. That would mean an average of 4.25 MB or so per song. At the highest quality, they would take up more space than that.
    "If I have downloaded album graphics (which I assume also consume space), how can I turn that off and delete the existing ones to free up space?"
    Doubtful that takes up much space at all.

  • Log file disk space consumption

    Do I need to take care of any log files (excluding website access logs), like mail logs or anything else that could grow indefinitely and consume much disk space?
    How about the Bayesian filter of SpamAssassin: does it have a maximum size limit, or does it grow forever?
    Any other logs, files, or whatever to keep an eye on?
    Currently I have made these settings on SA:
    FTP: Log anything (all checkboxes marked)
    Mail: Log anything at level "Information"; archive every one day
    Firewall: Log max. of 1000 packets
    DNS: Log enabled, level "Information"

    I looked into /ezc/weekly, and it seems that this already does what I want, doesn't it? It seems that the log files are cleaned up there, so I feel that everything is okay as it currently is...
    Especially since I don't have a real idea which log files are missing there (and finding out their paths is the hardest task).
    I just want to prevent such undeleted logs from eating my disk space slowly but steadily.

  • ASM space consumption

    Hi,
    We have 2 node RAC (10.2.0.3 db) hosted in ASM in AIX 5.3. Our db size is currently 1.8TB. We have purging policy to hold only 3 months data + current month data. Some tables use XML Blobs which take most of the spaces. We do purge from this table as well.
    What I believe is that after this purging is complete, a rebuild of the indexes involved in the purged tables will reclaim the space (extents, for that matter), which can then be used by incoming data, so the size of the table will not grow further. This was the case with the application's old version (which was 9.2.0.6, HACMP clustered). But now, in this 10.2.0.3 ASM database, this is not happening: the purged space is not being reclaimed, and only new space from ASM is being utilized, increasing the ASM space usage.
    Is this how ASM is supposed to behave or any way to make Oracle use the purged space back. Comments are welcome.
    Thanks

    v$asm_disk will show the free space in the whole disk that has not been allocated by any segment. Since deleting some LOB data doesn't release the space from the segment, you won't be able to see that freed-up space in the v$asm_disk view. You need to check the free space in the tablespace using DBA_FREE_SPACE and, importantly, check the free space in the segment itself using the DBMS_SPACE package. After the purge, only once you shrink the LOB segment will that free space be released by the segment to the tablespace, and then you should be able to see that space in DBA_FREE_SPACE; again, not in V$ASM_DISK, because the free space is part of the tablespace and its datafiles, and since we never shrink datafiles, that free space will not be visible in v$asm_disk. But before shrinking the LOB, read Metalink doc 386341.1; it will be really helpful.
    Truncate should have deallocated all the space by default, so check DBA_FREE_SPACE to find out total free space you have.
    Thanks
    Daljit Singh
    Edited by: Daljit on Jul 9, 2009 11:50 AM
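    The shrink step Daljit describes could look like the following sketch (the table and LOB column names are illustrative; review Metalink note 386341.1 before shrinking LOB segments):

    ```sql
    -- Release purged LOB space back to the tablespace
    ALTER TABLE xml_docs ENABLE ROW MOVEMENT;
    ALTER TABLE xml_docs MODIFY LOB (xml_blob) (SHRINK SPACE);
    ALTER TABLE xml_docs SHRINK SPACE CASCADE;
    ```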

  • Disk Space Consumption - iTunes TV Videos

    I am in the process of buying a 60 GB video iPod. I am curious how much disk storage is consumed by, say, a 42-minute "Desperate Housewives" TV file. Also, when you purchase a video clip on iTunes, do you have the option to set the bit rate to modulate the quality vs. size tradeoff like you do with music files? Thanks in advance for the responses.

    It will all depend on the data rate at which your movies (or purchases) are encoded. A 25-minute episode of Doctor Who targeted as a 100 MB file has a typical rate of 540-560 kbit/sec. Using a 2-pass encode, the rendering on the iPod remains very sharp and crisp, quite viewable on an SD TV. Thus, you should expect to put about 5 such shows on your iPod per 1 GB of space used. You can, of course, double or triple this rate, but the increase in quality will be minimal as far as the iPod screen is concerned. Hope this is of some help.

  • Request Space consumption

    Dear All,
    I am pulling data into a cube from an ODS through an InfoSource. The requests for data in this case always begin with "ODSR". What I want to know is: do these requests take up hard disk space? I do know that ODS requests that involve pulling data from the source system consume space, but what about requests pulling into a cube from an ODS?
    Yours truly,
    Abhijit

    Hi,
    Even if the request ID consumes space, it would be very small. To create some space in your BW system, try deleting the old PSA data. Also make sure that you delete PSA data for master data attributes that are loaded on a daily basis.
    Regards
    Srini

  • Insufficient free space in the database during upgrademodule General checks

    we are upgrading SAP R/3 Enterprise 470 110 to ECC6 on Oracle/Windows.
    During PREPARE module: General checks the file CHECKS.LOG show the following information:
       #====================================================#
    Requests and information for module General checks #
       #====================================================#
    INFO> The following values may be preliminary because of
          free space consumption during productive operation and
          additional free space requests derived in a later stage.
          Conversions of modified tables can require additional space.
          Please use the largest free space request printed,
          which are the values at the very end of this file.
    ERROR> Insufficient free space in the database as follows:
    Create TABLESPACE PSAPDIMD             with 102 MB
    Create TABLESPACE PSAPDIMI             with 102 MB
    Create TABLESPACE PSAPODSD             with 112 MB
    Create TABLESPACE PSAPODSI             with 112 MB
    Create TABLESPACE PSAPFACTD            with 102 MB
    Create TABLESPACE PSAPFACTI            with 102 MB
    INFO> To adjust the size of your tablespaces, you may use the commands
          in file 'C:\usr\sap\put\log\ORATBSXT.LST' using the 'brspace' utility.
          Please copy the file before making changes, as it may be
          overwritten in subsequent upgrade phases.
    INFO> During the upgrade, the new SAP kernel
          will be installed. All files and subdirectories
          in directory C:\usr\sap\CTD\SYS\exe\run which are not used
          in Release 700 will be removed.
          The files from "dbclient.lst" in the kernel directory are kept.
          Files and subdirectories can be protected from deletion
          if they appear in a file "protect.lst" in the same directory
          (each protected name in a separate line).
          For security reasons, directory C:\usr\sap\CTD\SYS\exe\run
          should be saved in a backup.
    INFO> You already installed kernel extensions.
          Please unpack the archive(s)
          RFC.CAR
          after the upgrade has finished.
          You can find these archives on the CD "Presentation".
          Do  n o t  unpack the archives now, the software
          is only compatible with the new SAP kernel.
    INFO> It is possible to upgrade the frontend software before
          you start SAPup!
       #===========================================================#
    PREPARE module General checks finished with status failed #
       #===========================================================#
    #====================================================
    Execution of all selected PREPARE modules finished.
    #====================================================
    Now, we have the new tablespace layout, and as the upgrade general note (Note 819655 - Add. info.: Upgrade to SAP NW 2004s ABAP ORACLE) says, we do not have to create the tablespaces stated in PREPARE; so I have adjusted tables DDART, TAORA, and IAORA with the entries
    DDIM     STD     Dimension Tables in BW
    DFACT    STD     Facts Table in BW
    DODS     STD     ODS Tables in BW
    Then I repeated the module GENERAL CHECK, but in the file CHECKS.LOG I noted the same error as before.
    Any help?
    Edited by: Raffaele Pezone on Dec 1, 2009 4:25 PM
    Edited by: Raffaele Pezone on Dec 1, 2009 4:34 PM

    As mentioned in sap note: Note 541542 - Upgrade phase INIT_CNTRANS: Container inconsistency
    we don't have standard layout:
    Standard layout: TABART: TAORA-TABSPACE, IAORA-TABSPACE
    SSDEF:  PSAPES<rel>D,   PSAPES<rel>I
    SSEXC:  PSAPES<rel>D,   PSAPES<rel>I
    SLDEF:  PSAPEL<rel>D,   PSAPEL<rel>I
    SLEXC:  PSAPEL<rel>D,   PSAPEL<rel>I
    APPL0:  PSAPSTABD,      PSAPSTABI
    USER :  PSAPUSER1D,     PSAPUSER1I
    <...>:  PSAP<.....>D,   PSAP<.....>I
    but
    MCOD Layout (new layout)
    SSDEF:  PSAP<sid><rel>, PSAP<sid><rel>
    SSEXC:  PSAP<sid><rel>, PSAP<sid><rel>
    SLDEF:  PSAP<sid><rel>, PSAP<sid><rel>
    SLEXC:  PSAP<sid><rel>, PSAP<sid><rel>
    APPL0:  PSAP<sid>USR,   PSAP<sid>USR
    USER :  PSAP<sid>USR,   PSAP<sid>USR
    <...>:  PSAP<sid>, PSAP<sid>
    but we don't use Multiple Components in One Database (MCOD). We have only one instance in one DB.
    in fact we have following tablespaces:
    PSAPSID   
    PSAPSID620
    PSAPSID700
    PSAPSIDUSR
    PSAPTEMP  
    PSAPUNDO  
    SYSAUX    
    SYSTEM
    I have just created PSAPSID700 as  PREPARE says.
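    To double-check what PREPARE sees, a sketch of a per-tablespace free-space query (run as a DBA user):

    ```sql
    -- Free space currently available in each tablespace
    SELECT tablespace_name,
           ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
    FROM   dba_free_space
    GROUP  BY tablespace_name
    ORDER  BY tablespace_name;
    ```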

  • Shrinking HD space on startup disk.

    Greetings.
    I've been using Final Cut Pro at work for quite a while now, but I've been having a problem I haven't been able to fix yet, and I'm not even 100% sure if Final Cut is the culprit.
    Basically, it seems that my space on the Macintosh HD drive keeps disappearing. I'm constantly freeing up space, and something seems to be taking it, and I get the "startup disk is almost full" message. Just today, for example, I got it, freed 6 GB of space by copying some old videos to an external HD, went back home for lunch, came back 2 hours later, and it was filled again. A couple of weeks ago I managed to get over 20 GB free, and since then it has filled up again, and I have no clue what's taking it.
    One thing I've noticed is that if I select the whole contents of the HD and click on More Information, it tells me I have about 150 GB of files... but the drive itself tells me I have over 230 GB used, and the cycle keeps repeating: me deleting stuff, space filling back up.
    This computer has Final Cut Pro 5 open most of the time (what I use for work), with Photoshop CS and After Effects 6.5 also open most of the time (though much less frequently than FCP), and DVD Studio Pro sometimes.
    I have FCP's scratch disk set to an external drive, and DVD Studio Pro set to use the same drive as the project, so no idea what's being saved there. FCP's autosave is set to the startup drive, but it runs every 20 minutes and the files are so small that I doubt they could account for such fast space consumption. The autosave folder is about 30 GB in size now, but hasn't really grown lately, so it can't be that either. Also, I once found a log file (while searching for files over 1 GB) that was 12 GB in size, but I haven't found any other big one since then, and the folder it was in isn't even 100 MB in size after that, so I'm clueless.

    Use Disk Inventory X to find out what is taking your free space. http://www.derlien.com/
    It might be that a corrupted log file is running amok and writing to itself repeatedly. Track it down and delete it. The first suspect would be that 12 GB file you mentioned. Log files are typically a few hundred KB in size. It will not actually harm your system to delete all the log files; they just provide a useful record for when something goes wrong.
    Why is your AutoSave folder at 30 GB? You only need those as a fall back in case you mess up the current working project. So long as you are saving your regular project file during and at the end of a session, you don't need the AutoSaves. Even less so those for projects that are long since finished.

  • Disk space in 11i Application server - Admin Node

    Hi Hussein:
    I noticed that the APPL_TOP file system has touched 98%. I have always seen that it never increased above 95% for a very long time. All of a sudden, there is a spike in space consumption. While trying to identify unwanted files in the log locations, I noticed that the "bne" directory had an "upload" subdirectory. These are actually the XML files used by WebADI uploads. I am not sure if we have a program to purge and delete these files from this location. I will get at least 2 GB back if I can delete these files.
    BNE Directory location : $APPL_TOP/bne/11.5.0/upload
    Regards,
    Bala
    Edited by: BMP on Nov 15, 2010 2:24 PM

    Bala,
    AFAIK, there is no concurrent program to purge the files under this directory and you will have to purge/clean the files manually -- Please log a SR to confirm this with Oracle support.
    Thanks,
    Hussein

  • Managing Time Machine Disk Space

    I have a NAS server that I use to back up all my computers (3 PCs and 2 Macs).  For various reasons that I won't bore you with, I cannot partition that storage into multiple partitions.  Is there a way to control how much disk space Time Machine uses before it starts dropping old backups?  I do not want it to use up unnecessary space keeping backups longer than needed.  Is there a way to control the amount of disk space available to TM, or control how long to keep backups?
    Thanks.

    Thanks for the reply but this link did not give me anything new that I did not know.  Good overview of Time Machine but did not answer the question.

  • Archive Infostructure Database consumption

    Hi all,
    Does anyone have any idea on the database space consumption of archive info structures?
    We're on the verge of implementing AS in relation to data extraction for BW, but we would also want to consider the disk space that will be eaten up by the transparent tables created for the information structures.
    Best regards to all

    Hi,
    Archiving object MM_EKPO is not found in the standard archiving object lists (Check in SARA).
    If you don't find the standard info structure for your archiving object, then you can create your own custom info structure using transaction code SARJ.
    But before the creation of custom info structure, it is better to have an analysis of the write program of the archiving object in order to have an idea of tables relationship or mapping that can be used in the creation of custom field catalogue.
    You can follow the below steps to create your own info structure:
    1. go to transaction code SARJ
    2. click on Environment->field catalogs
    3. Click on New Entries, then enter the inputs for the columns needed to create the field catalog
    4. Maintain the table relationships using "Other Source Fields"
    Once you create the field catalog, create the info structure using the t.code SARJ
    Enter the info structure name and click on create, enter the archiving object name, field catalog.
    Then activate the info structure using the activate button.
    Run the write and delete programs for archiving data..!!!
    Go to transaction code SARE, enter your archiving object, and in the drop-down box you can find your newly created info structure.
    This should solve your issue.
    Thanks,
    Shamim
