Best approach to reduce size of 400GB PO (yikes!)

Hi fellow Groupwise gurus,
Am taking a position with a new company that has just informed me they have a 400GB PO message store. I informed them that, uh, yeah, this is a bit of a problem. So, I am starting to consider the best way(s) to deal with this. They are on NetWare 6.5 with GroupWise 7.03. My inclination is to take them to GW 8, because we can do an in-place upgrade and leave GroupWise on NetWare for the time being (they say they can't do the 2012 upgrade and Linux migration until next year). I would rather have them on GW 8 for stability reasons and better support, 7.03 being so old now.

I know we could (and perhaps should) create another PO and move users to reduce the size of the main PO. Not sure yet, however, if we have the hardware resources to set up another server (although I guess we could just load a 2nd instance of the POA at, say, port 1678). Retention is obviously a problem here, as it is likely that nobody has deleted or purged anything for many, many years (and there are quite possibly fewer than 50 users, I believe, on this PO!). I would love to get them on Retain, but again, that is not an option until next year. We could have them start archiving using the GroupWise client archiving feature, but that does create another set of files outside of the PO that needs to be backed up reliably. And finally, we could have people delete, but I am not sure that is a viable option.

Any suggestions? FYI - they are having to restart the POA on a regular basis due to POA instability and unresponsiveness. Plus, backups take forever, of course. Thanks.
Don

On 04.05.2012 02:26, djhess wrote:
>
> Hi fellow Groupwise gurus,
> Am taking a position with a new company that has just informed me they
> have a 400GB PO message store. I informed them that, uh yea, this is a
> bit of a problem.
And that is a problem why?
The biggest PO I currently support is 2.5TB. It runs without a hitch.
> So, I am starting to consider best way(s) to deal
> with this.
Leave it alone, and reconsider your stance on what counts as a "problem" for
GroupWise. ;)
> Not sure yet, however, if we have hardware
> resources to setup another server (although I guess we could just load a
> 2nd instance of the POA at, say, Port 1678).
And the latter, running on the same server, would only further increase
the total amount of data on that box without having any other positive effect.
> Any suggestions? FYI - they are having to
> restart the POA on a regular basis due to POA instability and
> unresponsiveness.
Which is almost certainly unrelated to its size. Before I'd make any
changes, I would want to *know* the cause of those problems. Post some
details of what exactly happens, and we'll be able to help with that.
CU,
Massimo Rosen
Novell Knowledge Partner
No emails please!
http://www.cfc-it.de

Similar Messages

  • Is this the best approach to reduce memory?

    Hi -
    I have been given a task to reduce HEAP memory so that the system can support a larger number of users. I have used various suggestions given in this forum to find out the size of the object in memory, and I have reached a point where I think I have an approximate size for the object in memory (not 100% exact).
    I basically have some objects of some other class which are created when this object is created. The intent was to initialize the nested objects once and use them in the main object. I saw a significant reduction in the size of the object when I create these objects local to the methods which use them.
    Before moving the objects to method level
    class A {
        Object b = new Object();   // "Object" stands in here for the poster's real nested class
        Object c = new Object();
        Object d = new Object();

        public void method1() { b.someMethod(); }
        public void method2() { b.someMethod(); }
        public void method3() { c.someMethod(); }
        public void method4() { c.someMethod(); }
        public void method5() { d.someMethod(); }
        public void method6() { d.someMethod(); }
    }
    After moving the objects to method level
    class A {
        public void method1() { Object b = new Object(); b.someMethod(); }
        public void method2() { Object b = new Object(); b.someMethod(); }
        public void method3() { Object c = new Object(); c.someMethod(); }
        public void method4() { Object c = new Object(); c.someMethod(); }
        public void method5() { Object d = new Object(); d.someMethod(); }
        public void method6() { Object d = new Object(); d.someMethod(); }
    }
    Note: this object remains in the HTTP session for at least 2 hours. I cannot change the session timeout.
    Is this the better approach to reduce the heap size? What are the side effects of creating all the objects in the local methods, which will be on the stack?
    Thanks in advance

    The point is not that the objects are on the stack - they aren't, all objects are in heap, but that they have a much shorter life. They'll become unreachable as soon as the method exits, rather than surviving until the session times out. And the garbage collector will probably recycle them pretty promptly, because they remain in "Eden space".
    (In future versions of the JVM Sun is hoping to use "escape analysis" to reclaim such objects even faster).
    Of course some objects might have a significant creation overhead, in which case you might want to consider creating some kind of pool of them from which one could get borrowed for the duration of the call. With simple objects, though, the overheads of pooling are likely to be higher.
    Are these objects modified during use? If not then you might simply be able to create one instance of each for the whole application, and simply change the fields in the original class to static. The decision depends on thread safety.
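    A minimal sketch of that "change the fields to static" idea, purely for illustration: the class name Helper is made up here as a stand-in for the poster's real nested class, and this only makes sense if the helpers are stateless or immutable and therefore safe to share between threads.

    class Helper {
        void someMethod() {
            // stateless work only - no per-session or per-request fields
        }
    }

    class A {
        private static final Helper b = new Helper(); // one instance shared by the whole application
        private static final Helper c = new Helper();
        private static final Helper d = new Helper();

        public void method1() { b.someMethod(); }
        public void method3() { c.someMethod(); }
        public void method5() { d.someMethod(); }
    }

    If the helpers do carry per-call state, the method-local version the poster already tried is the safer option, since those short-lived instances become unreachable as soon as each method returns.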

  • Best practice to reduce size of BIA trace files

    Hi,
    I saw an alert on the BIA monitor that says 'check size of trace files'. Most of my trace files are above 20MB. I clicked on details and it says "Check the size of your trace files. Remove or move the trace files with the memory usage that is too high or trace files that are no longer needed."
    I would like to reduce these trace files but am not sure what the safest way to do it is. Any suggestions would be appreciated!
    Thanks.
    Mimosa

    Mimosa,
    Let's be clear here first. The tracing set via SM50 is for tracing on the ABAP side of BI, not the BIA.
    Yes, it is safe to move/delete TrexAlertServer.trc, TrexIndexServer.trc, etc. at the OS level. You can also right-click the individual trace under the "Trace" tab in the TREX Admin Tool (python); I believe there are options to delete them there, but it is certainly OK to do this at the OS level. They are simply recreated when new traces are generated.
    I would recommend that you simply zip the files and move the .zip files to another folder in case SAP support needs them to analyze an issue. As long as they aren't huge, and if hard disk space permits, this shouldn't be a problem. After this you will then need to delete the trace files. Note that if a trace file has an open handle registered to it, you won't be able to delete or move it, so it might be a good idea to do this task when system activity is low or non-existent. (A rough sketch of the zip-and-delete step follows this reply.)
    2 things also to check:
    1. Make sure the python trace is not on.
    2. In the python TREXAdmin Tool, check the Alerts tab and click "Alert Server Configuration". Make sure the trace level is set to "error".
    Hope that helps. As always check the TOM for any concerns:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/46e11c10-0e01-0010-1182-b02db2e8bafb
    Edited by: Mike Bestvina on Apr 1, 2008 3:59 AM - revised some statements to be more clear
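    For illustration only, here is one way the zip-and-delete step could be scripted. The trace directory and backup path are made-up placeholders, not real TREX locations, and in practice a plain OS-level zip/move would do the same job:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class ArchiveTrexTraces {
        public static void main(String[] args) throws IOException {
            Path traceDir  = Paths.get("/usr/sap/TRX/trace");        // hypothetical trace directory
            Path backupZip = Paths.get("/backup/trex-traces.zip");   // hypothetical backup location

            try (ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(backupZip));
                 DirectoryStream<Path> traces = Files.newDirectoryStream(traceDir, "*.trc")) {
                for (Path trace : traces) {
                    zip.putNextEntry(new ZipEntry(trace.getFileName().toString()));
                    Files.copy(trace, zip);   // stream the trace file contents into the zip entry
                    zip.closeEntry();
                    Files.delete(trace);      // may fail if something still holds the file open,
                                              // which is exactly the open-handle caveat above
                }
            }
        }
    }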

  • Best Strategy to reduce the Database Size

    Hi Everyone,
    In our client's landscape, SAP systems have been upgraded to newer versions, but the client wants to retain one copy of each of the older production systems:
    1) SAP R/3 4.6C system (database size approx. 2TB)
    2) SAP BW 3.0 (database size approx. 2TB)
    Now the client wants us to reduce the database size via reorganization, because archiving of IDocs & links has already been done.
    The client has recommended:
    1) Oracle export/import: only an Oracle DBA can do this (ignore this one)
    2) Database reorganization: we have tried reorganization via BRtools but found it very tedious (ignore this one)
    3) SAP export/import: this is the way we want to reduce the database size
    Can anybody tell us how much free space we need at OS level to store the database export of two databases totalling around 4TB, and what would be the best strategy for reducing the database size? Approximately how much would the database size be reduced via SAP export/import?
    Thanks & Regards
    Deepak Gosain

    Hi,
    >Can anybody tell us how much free space we need at OS level to store the database export
    >of two databases totalling around 4TB, and what would be the best strategy for reducing the database size?
    The only realistic way to know is to do a system copy of the production system on a testbed system and to test the Database Export.
    If you really want to decrease the database size you will have to archive a lot more than the IDOC archiving object.
    Regards,
    Olivier

  • 40k file size, best approach ?

    Hello,
    I'm just dipping my toes into the Flash water and I'm wondering about the best approach for a project... that may not even involve Flash at all? Basically my station wants to put some of our commercials on our site, but they need to be 40k. I use CS4 After Effects, so naturally I imported the spot and then exported it as a Flash movie with the new dimensions 300x100... down from 1920x1080. Incredibly, I got it down to like 200k and it looked pretty darn good. So my question: what's the best way to do this type of thing? Do I need Flash, and how can I get it down to 40k and still look good... or should I piss off 'cause I'm asking the wrong group :)
    Thanks for any help or pointers !!

    explain please :)
    So let's say I have a :30 spot that's a QuickTime and our web folks need it at 300x150, 40k. What should my steps be... generally? You don't have to go into great detail here, but I'm pretty blind on this topic.
    Thanks for the help

  • R/3 4.7 to ECC 6.0 Upgrade - Best Approach?

    Hi,
    We have to upgrade R/3 4.7 to ECC 6.0
    We have to do the DB, Unicode and R/3 upgrades. I want to know what the best approaches available are and what risks are associated with each approach.
    We have been considering the following approaches (but need to understand the risk for each approach).
    1) DB and Unicode in the 1st step and then the R/3 upgrade after 2-3 months
    I want to understand: if we have about 700 include programs changing as part of the Unicode conversion, how much functional testing is required for this?
    2) DB in the 1st step and then Unicode and R/3 together after 2-3 months
    Does it make sense to combine Unicode and R/3, as both require similar testing? Is it possible to do it in one weekend with minimum downtime? We have about 2 terabytes of data and will be using two systems for import and export during the Unicode conversion.
    3) DB and R/3 in the 1st step and then Unicode much later
    We had a discussion with SAP and they say there is a disclaimer about not doing Unicode. But I also understand that this disclaimer does not apply if we are on a single code page. Can someone please let us know if this is correct, and also whether doing Unicode later will bring any key challenges apart from certain language characters not being available?
    We are on single code page 1100 and the database size is about 2 terabytes.
    Thanks in advance
    Regards
    Rahul

    Hi Rahul
    regarding your "Unicode doubt", some ideas:
    1) The Upgrade Master Guide SAP ERP 6.0 and the Master Guide SAP ERP 6.0 include introductory information. Among other things, these guides reference the SAP Service Marketplace location http://service.sap.com/unicode@sap.
    2) At Unicode@SAP you can find several (content-rich) FAQs.
    Conclusion from the FAQs: first of all, your strategy needs to follow your business model (which we cannot see from here):
    Example. The "Upgrade to mySAP ERP 2005"-FAQ includes interesting remarks in section "DO CUSTOMERS NEED TO CONVERT TO A UNICODE-COMPLIANT ENVIRONMENT?"
    "...The Unicode conversion depends on the customer situation....
    ... - If your organization runs a single code page system prior to the upgrade to mySAP ERP 2005, then the use of Unicode is not mandatory. ..... However, using Unicode is recommended if the system is deployed globally to facilitate interfaces and connections.
    - If your organization uses Multiple Display Multiple Processing (MDMP) .... the use of Unicode is mandatory for the mySAP ERP 2005 upgrade....."
    In the Technical Unicode FAQ you read under "What are the advantages of Unicode ...", that "Proper usage of JAVA is only possible with Unicode systems (for example, ESS/MSS or interfaces to Enterprise Portal). ....
    => Depending on whether your systems support global processes, and on your use of Java applications, your strategy might need to look different.
    3) In particular in view of your 3rd option, I recommend you take a look at these FAQs, if you have not already done so.
    Remark: mySAP ERP 2005 is the former name of the application, which is now named SAP ERP 6.0.
    regards, and HTH, Andreas R

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users (we can store only the relevant data in a particular schema, and the number of records per table can be reduced by replicating tables, but then we have to maintain all the data in another schema as well, so we would need to update two schemas in a given session: a separate schema per user plus another schema holding all the data, which may cause update problems).
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be many more records than in the previous case.
    Which is the best option?
    Please give your valuable ideas.

    That may be true, but "I want a solution from you all" is like saying "I want you to tell me how to fix my friend's car" - there isn't enough detail here to give one.

  • Best approach to create an RTF template having more than 50 tables.

    Hi All,
    Need your help. I am new to BI Publisher. Currently we are using BIP 11g.
    I want to develop an .rtf template with a lot of layouts and images.
    Data is coming from different tables (for example, pulling from around 40 tables). When I tried to pull data from 5 tables by joining them, it took a long time using a data model in BI Publisher 11g, saved as XML and used in the Word doc.
    Could you please suggest the best approach: whether I should develop the .rtf template via a data model or via a query to generate the report?
    Please also suggest / guide me.
    Regards & thanks in advance.

    This is very specific to your requirements.
    First of all, it relates to the logic behind the report: for example, are the 50 tables related, or are they 50 independent tables, or maybe 5 related and the rest independent?
    Based on the relations between the tables you create your SQL statement(s).
    How many SQL statement(s) you end up with leads you to identify the way to get the data, for example via a package or trigger etc.
    Keep in mind the size of the resulting select statement(s): if the result is, say, 1MB it should be fast to get the report, but for 1000MB it can consume a lot of time.
    Also keep in mind that the time is spent not only selecting the data but also merging the data with the template.
    It looks like experimenting, and knowing the full logic of the report, is the only way to get the needed output in terms of data and time.

  • What does 'Save as Reduced size PDF' do compared to 'Save as Optimised PDF'?

    (Acrobat X Pro)
    So, both "Save as > Reduced size PDF" and "Save as > Optimised PDF" allow you to reduce the file size of a PDF. "Optimise" offers a variety of options, "Reduce" just gets on with it.
    What is the difference between what they actually do under the hood? Does "Reduce" essentially run the "Optimise" process with certain default settings? Or is it a wholly different process?
    What effect does "Reduce" have in terms of what elements of the PDF are compressed and removed?
    In my experience, "Reduce" seems to be less prone to crashing, and often does as good a job as "optimise". It preserves things like hyperlinks, which optimise may or may not do depending on settings. I'd like to know more about what is actually happening under the hood so I can better judge which is the best tool under what circumstances.

    Personally, I always recommend using Optimize over Reduce Size.
    There are no settings to choose from in Reduce Size. In Optimize you can choose Audit Space Usage so you can get some idea of where the size is coming from. In Optimize you can save settings so you can use them again.
    I have never had a crash from using Optimize on my Mac computers.

  • About save as reduced size pdf

    I noticed that saving as a reduced size PDF shrinks my PDF files a lot.
    What does it do, precisely? Is there a way, with Acrobat X or some other software, to do the same to a large collection of files in a few steps?
    Are there any compatibility problems reading that kind of PDF on mobile devices (BlackBerry, Android, iPhone, WP6 or WP7)?
    Bye
    Dario

    I don't think that's the right explanation: incremental updates are removed with a simple "Save as" action.
    In fact, the size of my files (not modified after first creation) doesn't change if I open them and do a simple "Save as".
    The size does change with Save as > Reduced size PDF, so it must also be doing something else related to version compatibility: I noticed that setting a higher value in the box asking for the Acrobat compatibility level impacts the final size of the file.
    Keeping the current level saves, on the test file I'm using, about 4 MB; setting it to the 9.0 compatibility level obtains the 4 MB file. The quality of the output PDF is lower than the original, so it's also doing some kind of compression.
    An explanation could be: it compresses the file using the best algorithm available for the version selected?
    PS
    Is there a way to get statistics on PDF files, like the space used for incremental updates?
    Bye,

  • Best approach for BI Rollout

    Hi Gurus,
    I am trying to understand how an SAP BI rollout works.
    Scenario 1:
    A BI system has been implemented, e.g. in England. Now a rollout should be done in France, Spain, etc.
    1. Should all key figures and characteristics be global?
    2. What about the case where each country has specific key figures and characteristics?
    3. If they are using common key figures and characteristics, what about the master data? Each country has a different language.
    Scenario 2:
    All objects which have been implemented for the first country will be reimplemented for each following country.
    Could someone please explain how this works, or what the best approach is? If you also have a document, that would be great.
    Thanks
    Pat

    I normally prefer saving images in LocalFolder and save file name in database table. I prefer this because saving just file name will keep size of SQLite database small so will load faster.
    Gaurav Khanna | Microsoft .NET MVP | Microsoft Community Contributor

  • Best approach - several infotypes requested to be extracted/modeled in SAP BI

    Hello Gurus!
    The business wants to pull the data from about 150 HR infotypes (including 9-series) into SAP BI. The requirement has been blessed and it is a go (maybe the total number might increase over 150 but not decrease)!! The main premise behind such a requirement is to be able to create ad-hoc reports from BI (relatively quickly) on these tables.
    Now, has anyone of you heard of such a requirement - if so, what is 'best practice' here? Do we just blindly keep modeling/creating 150 DSOs in BI, or is there a better way to do this? Can we at least reduce the workload of creating models for all these infotypes somehow (maybe by creating views, etc.)?
    Any kind of response is appreciated. Thank you in advance. Points await sensible responses.
    Regards,
    Pranav.

    Personally, I'd say the best approach for this would be not to extract the Infotypes at all and use Crystal Reports to generate ad-hoc queries directly against the Infotypes in your R3/ECC environment. This would eliminate the need to extract and maintain all of that data in BW and unless you have SAP Business Objects Enterprise installed on top of SAP BW, "relatively quick" ad-hoc queries in BW is not what I would call a common theme (BEx Analyzer, which is what I'm assuming is your reporting UI solution for this, isn't exactly user-friendly).
    If you must bring all of these Infotypes into SAP BW, creating views of Infotype data may be your best bet. It would definitely reduce the number of repositories in your BW environment, thereby reducing your maintenance time and costs.

  • What would be the best approach to migrate millions of records from an on-premises SQL Server to Azure SQL DB?

    Team,
    In our project, we have a requirement for data migration. We have the following scenario, and I would really appreciate any suggestions from you all on the implementation part of it.
    Scenario:
    We have millions of records to be migrated to the destination SQL database after some transformation.
    The source SQL Server is on premises in the partner's domain and the destination server is in Azure.
    Can you please suggest what the best approach would be?
    thanks,
    Bishnu
    Bishnupriya Pradhan

    You can use SSIS itself for this.
    Have batch logic that identifies data batches within the source, and then include data flow tasks to do the data transfer to Azure. The batch size should be chosen according to buffer memory availability, the number of parallel tasks executing, etc. (A rough sketch of the batching idea, in plain JDBC rather than SSIS, follows this reply.)
    You can use an ODBC or ADO.NET connection to connect to Azure.
    http://visakhm.blogspot.in/2013/09/connecting-to-azure-instance-using-ssis.html
    Please Mark This As Answer if it solved your issue
    Please Vote This As Helpful if it helps to solve your issue
    Visakh
    My Wiki User Page
    My MSDN Page
    My Personal Blog
    My Facebook Page
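    Not the SSIS answer itself, but a rough sketch of the same "read in batches, write in batches" idea shown with plain JDBC, just to illustrate the shape of it. All connection strings, table and column names and the batch size are made-up placeholders, and the Microsoft SQL Server JDBC driver is assumed to be on the classpath:

    import java.sql.*;

    public class BatchMigration {
        public static void main(String[] args) throws SQLException {
            final int BATCH = 5000; // hypothetical batch size - tune to available memory

            try (Connection src = DriverManager.getConnection(
                     "jdbc:sqlserver://onprem-host;databaseName=SourceDb", "user", "pass");
                 Connection dst = DriverManager.getConnection(
                     "jdbc:sqlserver://example.database.windows.net;databaseName=TargetDb", "user", "pass");
                 Statement read = src.createStatement();
                 PreparedStatement write = dst.prepareStatement(
                     "INSERT INTO Target (Id, Payload) VALUES (?, ?)")) {

                dst.setAutoCommit(false);
                try (ResultSet rs = read.executeQuery("SELECT Id, Payload FROM Source")) {
                    int inBatch = 0;
                    while (rs.next()) {
                        write.setLong(1, rs.getLong("Id"));
                        write.setString(2, rs.getString("Payload")); // any transformation goes here
                        write.addBatch();
                        if (++inBatch == BATCH) {      // flush one full batch to the Azure side
                            write.executeBatch();
                            dst.commit();
                            inBatch = 0;
                        }
                    }
                    if (inBatch > 0) {                 // flush the final partial batch
                        write.executeBatch();
                        dst.commit();
                    }
                }
            }
        }
    }

    SSIS data flow tasks do essentially the same thing with buffers instead of explicit loops, which is why the buffer memory and parallelism remarks above drive the choice of batch size.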

  • Best approach for roll over in BPC?

    Hi All,
    We are currently looking for the best approach in SAP BPC for the roll forward of closing balances into opening balances from the previous year to the current period.
    Our current approach takes the closing balance account lines from the previous year and copies them into specific opening year members (f_init and ob_init) using business transformation rules. Then every month there are business transformation rules which take these values in local and base currency to calculate the FX gain/loss and also copy over the closing balance at the historic rate into the opening balance of the current period. This approach takes both input data and journal data into account.
    We also now need to take into account the fact that we need to pull through any journals which were posted to adjustment companies and some (but not all) legal entities for traditional lines which do not have typical opening balance accounts (e.g. cash, stock, accruals etc.). The approach above can work but we need to add the relevant opening balance accounts.
    Please could you advise whether there is a better approach than this?
    Kind Regards,
    Fiona


  • Best approach for uploading documents using a custom web part - Client OM or REST API

    Hi,
    Am using my custom upload Visual web part for uploading documents into my document library with a lot of metadata.
    These columns include single line of text, drop-down list, lookup columns and also managed metadata (taxonomy) columns,
    so I would like to know which is the best approach for uploading.
    Currently I am trying to use the traditional SSOM (server object model). I would like to know which is the best approach for uploading files into document libraries.
    I have hundreds of sub sites with 30+ document libraries within those sub sites. Currently it takes a few minutes to upload the files in my dev environment; I am just wondering what would happen if the number of sub sites reaches a hundred!
    I am looking at this from the performance perspective.
    My thought process is:
    1) Implement Client OM
    2) REST API
    Has anyone tried these approaches before, and which approach provides better performance?
    If anyone has sample source code or links, please provide them.
    Also, are there any restrictions on the size of the file uploaded?
    Any suggestions are appreciated!

    Try below:
    http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
    http://stackoverflow.com/questions/9847935/upload-a-document-to-a-sharepoint-list-from-client-side-object-model
    http://www.codeproject.com/Articles/103503/How-to-upload-download-a-document-in-SharePoint
    public void UploadDocument(string siteURL, string documentListName,
        string documentListURL, string documentName, byte[] documentStream)
    {
        using (ClientContext clientContext = new ClientContext(siteURL))
        {
            // Get the document library
            List documentsList = clientContext.Web.Lists.GetByTitle(documentListName);
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content byte[], i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of the document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = siteURL + documentListURL + documentName;
            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            // Update the metadata for a field named "DocType"
            uploadFile.ListItemAllFields["DocType"] = "Favourites";
            uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }
    }
    If this helped you resolve your issue, please mark it Answered
