Managing Large Archives

Hello everyone.
I'm looking for some insight on an issue that has come to our attention with some users' archives: some of them have managed to end up with a single archive that has grown to over 10GB.
I was wondering if anyone out there has come across a tool I could use to break these archives up into smaller ones, without having to go through the manual process of managing archived emails and creating new archives by hand based on dates.
What I want in the end is 4 x 2GB archives, but I know that to do that I would have to go through the manual process of pulling the archive back into the mailbox, searching and filtering by dates, moving the results to a new archive, and so on... until I have them split.
Anyone? Any ideas?
Much appreciated.
Miguel

miguelvelizven wrote on 11/26/2014 11:06 PM:
> Issue comes when the third party archive tool tries to archive it. It
> errors out because of the archive size.
You have an archive tool archiving archives? :-D
Which one? Wouldn't it be easier to archive directly from the mailbox using that tool?
Anyway, there's no way to "split" an archive. You'll have to unarchive items until the archive shrinks to a size that works. I'd then copy it away and start a new archive. Then again, if you have the archive archived somewhere else, you probably don't need the old one in the file system any longer.
Uwe
Novell Knowledge Associate
Please don't send me support related e-mail unless I ask you to do so.

Similar Messages

  • Problem about space management of archived log files

    Dear friends,
    I have a problem with space management of archived log files.
    My database is Oracle 10g Release 1 running in archivelog mode. I use OEM (web-based) to configure all the backup and recovery settings.
    I configured the "Flash Recovery Area" to do backup and recovery automatically. My daily backup is scheduled for every night at 2:00am, and my backup setting is "disk settings" -> "compressed backup set". The following is the RMAN script:
    Daily Script:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    }
    The retention policy is the second choice, that is, "Retain backups that are necessary for a recovery to any time within the specified number of days (point-in-time recovery)". The recovery window is 1 day.
    I assigned enough space for the flash recovery area: my database size is about 2G, and I assigned 20G as the flash recovery area.
    Now here is the problem. According to the Oracle online manual, Oracle can manage the flash recovery area automatically; that is, when the space is full, it can delete the obsolete archived log files. But in fact, it never works! Whenever the space is full, the database hangs! Besides, the status of the archived log files is very strange; for example, the "obsolete" status can change from "yes" to "no", and then from "no" back to "yes". I really have no idea about this! I know Oracle usually keeps archived files for somewhat longer than the retention policy requires, but I don't know why the obsolete status can change automatically. Although I could write a scheduled job to delete obsolete archived files every day, I just want to know the reason. My goal is to back up all the files on disk and let Oracle manage them automatically.
    There is also another problem about archive mode. I have two Oracle 10g (Release 1) databases: db1 is more than 20G, and db2 is about 2G. Both of them have the same backup and recovery policy, except that I assigned a larger flash recovery area to db1. Both are in archive mode, and almost nobody accesses either of them except for the scheduled backup job and my occasional administration through OEM. The strange thing is that the number of archived log files of the smaller database, db2, is much bigger than that of the bigger database. The same goes for the size of the flashback logs for point-in-time recovery. (I enabled flashback logging for fast database point-in-time recovery; the flashback retention time is 24 hours.) I also found that the memory utilization of the smaller database is higher than that of the bigger one: nearly all the time the smaller database's memory utilization stays above 99%, while the bigger one's stays around 97%. (I enabled "Automatic Shared Memory Management" on both databases.) But both databases' CPU and queue figures are very low, and I'm nearly sure no one has hacked the databases. So I really have no idea why the same backup and recovery policy produces such different results, especially why the smaller database produces more redo logs than the bigger one. Does anyone happen to know the reason, or how I should go about finding it?
    By the way, I found that the web-based OEM can't reflect the correct database status when the database shuts down abnormally. For example, if the database hangs because it has run out of flash recovery area space, then after I assign more space and restart the database, OEM usually still can't reflect the correct status; I must restart OEM manually for it to show the current database status. Does anyone know in which situations I should restart OEM so that it reflects the correct database status?
    Sorry for the long message; I just wanted to describe things in detail to ease diagnosis.
    Any hint will be greatly appreciated!
    Sammy

    Thank you very much. In fact, my site's Oracle never managed archive files automatically, although I tried my best. In the end, I made a job that runs daily to check the archive files and delete them.
    thanks again.

  • Spanning 8mm tapes for a large archive

    I am looking for some insight on how to span tapes for very large archives. I am using Solaris 8. The tape drive commands seem very limited, as do the error messages returned upon a failure.

    Christa,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • Creating a Management plug-in archive

    Hello Friends,
    I want to create a Management plug-in archive that includes Java source code. Actually, when I copied the source code and all the XML files manually, it worked fine, but I want to import my code using OEM. Please help me: how can I create one archive containing my source code plus all the XML files?

    OK, I used the emctl command with the monitor binary to copy my source code JAR, and it goes to the C:\OracleHomes\agent10g\sysman\admin\scripts\emx folder. But while running my target it gives a "class not found" error, as my Java class file is in that JAR. I then set the classpath in C:\OracleHomes\agent10g\sysman\config\classpath.in, and after that I was able to get my metrics data. Is there any other way to declare a classpath?
    2. I want to use the metrics data in my Java code without using JDBC or a SQL query. Can I pass the metrics data back to my Java code? How can I do it? Please help me.

  • Managing large numbers of SPAx102s

    What are service providers using to manage large numbers of 2102s or 3102s?  How are firmware versions controlled and rolled out?  What about configuration & password management?
    Thanks!

    Hello,
    this is what "provisioning" is for. The SPA-xxxx devices have the ability to download (over HTTP or TFTP) a settings file on a regular basis, check it for changes, deploy the settings, and reboot. The file is loaded based on the device's MAC address, so every device will really load only its own file.
    Provisioning can set the adapter password, and can also fill in a rule for firmware updates when needed: the SPA-xxxx adapter will check the firmware file's header and, if it is newer than (or simply different from) the one running in the adapter, it will download the firmware, flash it during idle time, and then reboot.

  • How to manage large pages

    Just wondering... what is a good way to manage large pages?
    Let's say a page with a panelTabbed with 7 or 8 tabs. Each tab shows a table that is bound to a different VO. Most of the tabs also have add, delete, and edit buttons, and I'm using popups for those.
    So as you can see, there will be lots of bindings and lots of components on such a page.
    If I create all this on a single page, I think it will be a few thousand lines, which is not really nice...
    Should I create page fragments, or create task flows per tab, or something like that?
    Currently I have created the page with just the panelTabbed, and then for each tab I created a task flow and dropped that inside the showDetailItem. For each popup I also created a task flow, so I could reuse it later when I need the same popup on other pages.
    I'm wondering... what is a correct approach for such large pages? Are there any guidelines for this?

    Hi,
    we decided to use dynamic regions (11g) for our application.
    This means we have only one jspx for the whole application and exchange the content at runtime.
    For each "block" (e.g. a table, a tab, or a popup) we have a single page fragment and task flow.
    One page fragment normally consists of only one view object.
    With this concept we can reuse, e.g., the same (or a similar) table on different pages too.
    Hope this helps.
    regards
    Peter

  • Managing large iTunes library across multiple storage devices

    Greetings,
    I'm trying to find a way to manage a very large library of about 1 TB, which obviously won't fit on my MBP hard drive.  I want my movies and TV shows on an external drive, but would like my music, podcasts, etc. on my internal drive.  I would like the ability to see all items when in iTunes, and the ability to sync any of the items when syncing my iPods, iPhone, or iPad.
    I know I can go into the preferences and select the advanced tab and deselect the Copy files to iTunes Media folder when adding to library and then put my media on an external storage device.
    My question is this... when I download new movie or tv shows from iTunes, where will it put them?  Will it put them in my iTunes media folder, or will it put them on the external drive, or someplace else?
    Wouldn't it be easier if Apple would allow you to designate where you wanted to put your video versus your audio content?  It just seems so "unintuitive" for a company that prides itself on being easy to use and, of course, "just working".
    Any help would be appreciated.
    Thanks in advance... Scott

    iTunes hasn't really caught up with Apple's seeming trend towards devices that are smaller and smaller and the glut of media that requires large storage.  I guess in the end it might be expected one goes to cloud storage of some sort.
    I have never bought anything from iTunes Store but I guess it will still download to someplace indicated in preferences for your media folder location even if you have turned off copy to for other methods of adding media.  Two options then.  Either make that place your internal drive and move files to the external as you download them, or make it the external drive and work with special handling of music files on the internal.  You can override iTunes copying items if you hold down the option key while dragging the item to itunes.

  • How to manage large database records in enterprise application

    Hi All,
    I am working on a large enterprise application relating to capital markets, in Java and its related technologies. I am facing one critical problem that needs a solution from your side. I have a database table that contains more than 5 million records, and I want to display the records with proper pagination. I am using Hibernate for the database-related work, with a join query to load the records. After the query, the filtered records come to approximately 80,000. I am unable to implement proper pagination: every time, for the next or previous set of 10 records, I hit the database, which is a time-consuming affair. I do not know what to do; should I cache the data for pagination? Every time I load more than 80,000 records. Consider a web-based application where the number of users is 5,000: how do I manage that? I need a core-Java-level solution, not one at the JSP level. Please help me in this regard.

    > After the query the filtered records come to approximately 80,000. I am unable to make proper pagination
    Just a thought. If you display 50 per page, that's 1,600 pages. Say it takes the user 15 seconds to read each page: that's 400 minutes = 6 2/3 hours in total, and probably a bad case of RSI.
    The proper pagination would possibly be no pagination.
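    The arithmetic in the reply above generalizes: with offset-based pagination (what Hibernate exposes through Query.setFirstResult()/setMaxResults()), each page is just a (first row, row count) window over the filtered result set. A minimal sketch of that windowing, in Python for brevity; the function names are illustrative, not from any framework:

```python
def page_window(page_number, page_size):
    """Translate a 1-based page number into the (first_result, max_results)
    pair expected by an offset-based query API such as Hibernate's
    Query.setFirstResult()/setMaxResults()."""
    if page_number < 1 or page_size < 1:
        raise ValueError("page_number and page_size must both be >= 1")
    return ((page_number - 1) * page_size, page_size)

def page_count(total_rows, page_size):
    """How many pages a result set occupies (ceiling division)."""
    return -(-total_rows // page_size)

# The reply's numbers: 80,000 filtered rows at 50 per page is 1,600 pages.
print(page_count(80_000, 50))   # -> 1600
print(page_window(3, 10))       # -> (20, 10), i.e. rows 20..29
```

    Hitting the database once per page is normal; the expensive part is re-running the join each time. Caching the filtered row IDs, or letting the database do the windowing with these offset/limit values, is the usual fix rather than loading all 80,000 rows per request.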

  • Noob needs help using/managing large font collection in os10

    Hope this is the correct forum for this question. I'm an os10 noob starting at a new, small, design-oriented startup (6-7 Macs) that needs to manage fonts - hundreds, over a thousand. Other places I've worked used Font Reserve, and I'm familiar with that program, but I never knew if it came with os10 or you had to go buy it, and whether you needed to buy a separate copy for each computer.
    I'm trying to help get this business set up, but I know NOTHING about font usage/setup in os10. I've always used ATM Deluxe 4.6 (os9) and have over 2000 fonts nicely arranged in sets (took me an entire day, way back when). I would love to transfer these arranged sets into os10, if possible.
    Is there built-in font management in os10? Is it any good for large collections? Is Font Reserve a good program for large font collections?
    I don't even know where os10 stores fonts, or the right questions to ask. Please point me in the right direction!
    TIA

    Fontbook is ok for dealing with a handful of fonts on an occasional basis, but for pro design work you should run, don't walk, to: http://www.linotype.com/fontexplorerX/
    FontExplorerX beats the pants off any other font management tool I've used on the Mac after many years of ATM, Suitcase, and Font Agent Pro. FEX looks and feels like an "iTunes for Fonts" - it's fast, stable, clean, and simple to use, and best of all for a small biz (like me) - it's FREE! :O

  • GL acct change to Open Item management (with archived docu.)

    Dear all,
      When we use program ZRFSEPA02 (copied from RFSEPA02) to switch on open item management for a G/L account, the program reports that the account has documents that were archived and will NOT allow execution. I know SAP announced that RFSEPA02 has not been usable since 4.5A, but users insist on keeping the old G/L account; we asked SAP OSS, and they replied that it is a consulting issue.
    So is there any ways to solve this problem?
    BR
    Regina

    Regina
    As you would not get to see the old docs even if the program did work then you should go the recommended route of creating a new G/L Account (with open item management switched on) and redirect your users and any system settings to it.
    Regards
    H

  • Managing large libraries

    I have upwards of 40,000+ images in my referenced Aperture library.  I create a folder for each year and a project for each shoot.  I'm fairly diligent about adding keywords and keeping my files clean, but as the cameras keep getting better, the megapixels grow larger and the internal hard drive space gets gobbled up.  I had just about filled a 1TB hard drive with photos dating back to 2004.  I decided to get a 3TB; it formatted beautifully, and I used SuperClone to move files from the 1TB to the 3TB.  I then removed the 1TB, put the 3TB in its bay, and changed the name of the 3TB to "Pictures" ... just like the old 1TB.  I thought this strategy would avoid Aperture "losing" the referenced files.  It didn't.  I know how to reconnect the referenced files to Aperture, and my method of organization makes it doable, but it will be somewhat tedious.  I can do large batches, so that should be fine.
    But all this makes me wonder if I should be doing something different in terms of organization.  I'd be curious to hear from others who have similar numbers of photos: how they organize them, as well as any views about moving into an increasingly large-megapixel world and what that means in terms of computers.  I read somewhere that I can probably put a 3TB drive in each bay of my aging Mac Pro (2.66, Dual Core Intel Xeon), but should I really move towards that, or do I need to think about some kind of external storage arrangement that can be connected to the stunning new iMacs that try to seduce me every time I go to the Apple Store?
    Oh, and if there is a more clever way to reacquaint the referenced files and Aperture please let me know. 
    Thanks very much.

    Your set-up is both good and standard.
    Once you commit a file to management by Aperture, you must use Aperture and only Aperture to manage that file.  When you need to move your Referenced Masters, hook up both drives (if moving from one drive to another), and use "File→Relocate Masters".
    For 1 TB of data, this will take some time.
    Since you used another program to move your Masters (you used SuperClone), you must then re-connect your Images to their masters.  There shouldn't be anything tedious to _you_ about this, and you shouldn't have to do anything more than one very big batch.  Just select all your Images and run "File→Locate Referenced Masters".  It, too, will take some time to run.  (Added: after it's done, confirm that no files got left out.  Filter "Photos" with the Rule "File Status" set to "missing".)
    I recommend buying four identical drives for each large Library:
    - one for the Library
    - one for the Referenced Masters
    - one to back up the Library (I use SuperDuper!)
    - one to back up the Referenced Masters (also SuperDuper!)
    (All drives FW800 or faster, 7200 rpm, large caches.)
    Having each of these on its own external drive makes administering back-ups easy (and in making them easy, they are more likely to be done).  I tell people that a pair of 1 TB drives holds about one photographer-year of work, at a cost of $200 US.  Imho, this is a bargain. 

  • How to manage large partitioned table

    Dear all,
    we have a large partitioned table with 126 columns and 380G, not indexed. Can anyone tell me how to manage it? The queries are now taking more than 5 days.
    looking forward for your reply
    thank you

    Hi,
    You can store partitioned tables in separate tablespaces. This does the following:
    - Reduces the possibility of data corruption in multiple partitions
    - Lets you back up and recover each partition independently
    - Controls the mapping of partitions to disk drives (important for balancing I/O load)
    - Improves manageability, availability, and performance
    Remember, as the doc states:
    The maximum number of partitions or subpartitions that a table may have is 1024K-1.
    Lastly you can use SQL*Loader and the import and export utilities to load or unload data stored in partitioned tables. These utilities are all partition and subpartition aware.
    Document Reference:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/partiti.htm
    Adith

  • Best way to manage large library with AAC and MP3 versions of files

    I have a large library of music. I listen to music through iTunes and iPod/iPhones, but also other devices which only support MP3, and not AAC. I typically convert iTunes purchases to MP3, but I'm left with two copies of each song in the same folder. I'm looking for recommendations on the best way to manage what would essentially be a duplicated music library - a high quality version I use with my iTunes/ipods, and a lower quality MP3 version for the devices that only support that format.

    I have had a similar problem. I have all my music residing on a NAS where I access it from iTunes (for managing my iPods/iPhones) and a Tivo PVR (which is connected to my house stereo system.) The problem is that Tivo does not recognize AAC. So I've used iTunes to create mp3 versions of all my purchased AAC music. Because the NAS is set as my iTunes media folder location, iTunes puts the mp3 copies back into the existing folder structure (which it keeps organised) on the NAS. Tivo accesses the NAS directly, over the network. But iTunes also puts the additional copy into the iTunes library which leaves me with multiple copies of the same songs on iTunes AND my iPods. Having multiple copies on the NAS (mp3 for Tivo and AAC for iPod) is OK: I've got plenty of space on the NAS and the Tivo doesn't 'see' the AAC files.
    The solution I'm planning to implement is to delete the mp3 entries from the library as soon as I create them (leaving the files in the media folder.) But what I'm looking for is a way to automatically select the mp3 copies (for deletion) from the existing duplicates list. Any ideas?
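    For what it's worth, the "select the mp3 copies" step can be automated outside iTunes by scanning the media folder for basenames that exist in both AAC (.m4a) and MP3 form. A hedged sketch in Python - the folder layout and file extensions are assumptions about how iTunes laid out this particular library, and it only reports candidates rather than deleting anything:

```python
from pathlib import Path

def mp3_duplicates(media_root):
    """Return the .mp3 files whose basename also exists as a .m4a file
    in the same folder - i.e. the MP3 copies made from AAC originals."""
    root = Path(media_root)
    dupes = []
    for mp3 in root.rglob("*.mp3"):
        # An AAC sibling with the same name marks this mp3 as a conversion copy.
        if mp3.with_suffix(".m4a").exists():
            dupes.append(mp3)
    return sorted(dupes)
```

    Review the returned list before removing anything, and remember that deleting the files on disk does not remove their entries from the iTunes library; those still have to be cleared in iTunes itself.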

  • General practices - managing large amounts of data

    I have a basic question regarding data management in Java. Right now, I'm working on a program that deals with hundreds of "pages" of content, each with images, HTML, and some state data. I maintain info on each page in a ContentItem object. Similarly, there can be many categories of content, each of which I maintain in a ContentCategory object.
    At this point, I am controlling access to data using manager classes. For example, I have a ContentManager class that contains a global, static list of content and has the methods you'd expect such a class to have (getContent, addContent, deleteContent, etc.). Similarly, I have a CategoryManager class.
    Basically, any time my program has to deal with large amounts of data or objects, I sort of fall back on these manager classes. I'm wondering if this is a reasonable practice or not. If not, perhaps some of the more experienced developers here could recommend a design pattern that fits these situations.

    Thanks for the reply. I do have a sort of ad-hoc database that I'm using. I have a class called ContentReference which holds just the basic state information about a ContentItem. The actual HTML and image data are stored on disk and retrieved as needed. The files for each content item are just serialized copies of ContentItem objects. Each content item has a unique ID value which is passed to the ContentManager's getContent method. The getContent method retrieves the object from disk and returns the ContentItem object. The filenames for all the content items are based on the ID value, so the getContent method doesn't have to search through all the files until it finds the right one.
    In doing this, I don't have to keep all the HTML and image data in memory. Only the ContentReference objects are kept in memory. ContentItems are loaded as needed. It seems a little messy to have two objects that refer to the same thing, but I didn't see any other way of doing it.
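    The scheme the reply describes - lightweight references in memory, full items serialized to disk under filenames derived from their IDs - is essentially a small key-value store, and it is a reasonable pattern. A compact sketch of the idea in Python (pickle standing in for Java serialization; the class and method names simply mirror the thread and are not from any library):

```python
import pickle
from pathlib import Path

class ContentManager:
    """Keeps only small per-item metadata in memory; the heavy payload
    (HTML, images) lives on disk and is deserialized on demand."""

    def __init__(self, storage_dir):
        self.storage_dir = Path(storage_dir)
        self.refs = {}  # item_id -> lightweight metadata (the ContentReference role)

    def add_content(self, item_id, metadata, payload):
        self.refs[item_id] = metadata
        # Filename is derived from the ID, so lookups never scan the directory.
        with open(self.storage_dir / f"{item_id}.bin", "wb") as fh:
            pickle.dump(payload, fh)

    def get_content(self, item_id):
        # Load the full item from disk only when it is actually requested.
        with open(self.storage_dir / f"{item_id}.bin", "rb") as fh:
            return pickle.load(fh)

    def delete_content(self, item_id):
        del self.refs[item_id]
        (self.storage_dir / f"{item_id}.bin").unlink()
```

    In Java the same shape falls out of an ObjectOutputStream per item plus a Map from ID to ContentReference. Past a certain scale, an embedded database would replace the flat files, but the two-object split (reference in memory, item on disk) is not messy - it is the standard way to keep the working set small.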
