Open Source releasing best practices?

Hello,
When creating open source software, I don't really know the best way to release it (what kind of makefile, what versioning scheme, and so on). It's also hard to find this information online. Is there any good online resource or book about this?
Here are some of my questions...
-How many operating systems should your makefile support? Should I make special cases for every single Linux distro and other OS in the makefile, or can I just put a generic "g++ *.cpp" in the makefile, and let each OS and distro's own package managers take care of tailoring it to their OS?
-For makefile complexity, I guess there is a scale ranging from a hack like just typing "g++ *.cpp" in it, through having nice sections, groups of files and definitions like "CFLAGS", all the way up to projects which have 20 different makefiles in them like "Makefile.in", "Makefile.pandora", etc.... Where on that scale should you ideally be?
-Makefiles of many projects look incredibly complex, why?
-What versioning system to use? When to make a 1.0.0? When to append "rc" at the end?
-When to tag stable versions? And when you change something in head, do you need to change version number every single time?
-When creating a dynamic library: you tagged a stable version, and you then change something in head. Should the version number in the library's name in the makefile be changed to something? If so, should it be changed to the next minor version, or to something with "-rc" at the end?
-What names should be used for tags of versions?
-Does there need to be both a zipped version of the source code and one under VCS, and if so why is that zipped version needed?
-Are there any naming conventions for output binaries and libraries?
-Are you supposed to let your makefile clean up .o files after compilation or not?
-Are there any conventions for makefiles for names of sections and variables in it? E.g. is it a good idea to have a "clean:" in your makefile to remove everything?
-When depending on another library which is hosted somewhere else, how do you handle that? And what about when depending on it statically?
-Any other things I should know?
Thanks!
Last edited by aardwolf (2013-05-07 12:32:57)

aardwolf wrote: Hello,
Many of your questions have no single correct response. I'll reply with my own opinions and experiences, based on my release of two open source projects (GPT fdisk and rEFInd).
-How many operating systems should your makefile support? Should I make special cases for every single Linux distro and other OS in the makefile, or can I just put a generic "g++ *.cpp" in the makefile, and let each OS and distro's own package managers take care of tailoring it to their OS?
Ideally, a Makefile should build a package under every OS on the planet. In practice, this isn't always feasible. Many developers use programs like Autotools to create Makefiles that are suited to a particular build environment. Other developers (myself included) create a handful of Makefiles for different environments -- for instance, my GPT fdisk has Makefiles for Linux, FreeBSD, OS X, and Windows. My rEFInd officially supports building only under Linux, although it supports two EFI toolkits (GNU-EFI and TianoCore EDK II) via a cascading set of Makefiles. Any of these Makefiles can require changes depending on the distribution and development environment in use, but that's not really my concern.
If a distribution requires changes, that type of change is generally best left to a build system like Autotools or to the person who builds or packages the program. IMHO, it's unreasonable to ask a developer to make minor tweaks to a static Makefile to support every minor Linux variant on the planet.
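For example, here is a minimal sketch of a single Makefile that branches on the output of "uname -s" instead of shipping one file per platform. The flags and the library below are hypothetical, chosen only to show the mechanism:

UNAME_S := $(shell uname -s)
CXX      = g++
CXXFLAGS = -O2 -Wall

ifeq ($(UNAME_S),Linux)
  LDLIBS += -luuid                   # hypothetical Linux-only dependency
endif
ifeq ($(UNAME_S),Darwin)
  CXXFLAGS += -DUSE_COREFOUNDATION   # hypothetical OS X-only define
endif

myprog: main.o
	$(CXX) $(CXXFLAGS) -o $@ $^ $(LDLIBS)

This keeps per-OS differences in one place while leaving distribution-specific tweaks to the packagers, as described above.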
-For makefile complexity, I guess there is a scale ranging from a hack like just typing "g++ *.cpp" in it, through having nice sections, groups of files and definitions like "CFLAGS", all the way up to projects which have 20 different makefiles in them like "Makefile.in", "Makefile.pandora", etc.... Where on that scale should you ideally be?
This is very much a matter of personal preference and project complexity. Autotools or something similar will make it easy for users and distribution maintainers, but can be tricky to use for the developer. If your program is a simple single-file C program, you might forego a Makefile completely; but for something on the scale of the Linux kernel, a Makefile (or something equivalent) is absolutely required.
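For reference, the middle of that scale might look something like the following sketch: explicit variables and a pattern rule, but no generated files. The program and file names are placeholders:

CXX      = g++
CXXFLAGS = -O2 -Wall -Wextra
SRCS     = $(wildcard *.cpp)
OBJS     = $(SRCS:.cpp=.o)

myprog: $(OBJS)
	$(CXX) $(CXXFLAGS) -o $@ $(OBJS)

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

That is usually enough for a small or medium project; the Makefile.in style of file only appears once a generator like Autotools enters the picture.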
-Makefiles of many projects look incredibly complex, why?
Some projects are very complex, as in the Linux kernel itself. Other times, the Makefiles generated by automated systems like Autotools can be more complex than they might be if they were hand-crafted. In still other cases the developers like complexity or are barely competent at creating Makefiles and so create something that's more complex than it needs to be.
-What versioning system to use? When to make a 1.0.0? When to append "rc" at the end?
AFAIK, there are no standards on this. A 1.0 release denotes that something has moved beyond "beta test" status -- in other words, you think it's stable and usable for the masses. Open source software authors tend to be conservative in making that judgment, so pre-1.0 releases in the open source world are often as good as post-1.0 releases of commercial software. The bottom line, though, is that it is a judgment call -- what I consider "1.0" software you might consider well beyond that point and someone else might consider pre-beta.
As to release candidate (RC) designations, not all projects use them. They seem to me to be more common among large projects as they approach major release milestones, to denote something that is close to being finalized, but not quite -- essentially a sort of very late beta stage, even if the initial 1.0 release was made some time before.
-When to tag stable versions? And when you change something in head, do you need to change version number every single time?
If the code changes, you should definitely change the version number. Most developers accumulate several changes before making a new official release, though. Personally, I make full releases with three-digit numbers (like 0.8.6 or 0.6.10), and I upload minor changes to my project's git repository with four-digit numbers (like 0.8.6.1 or 0.6.10.2), but don't do full releases with tarballs and RPMs and whatnot for these, except in a limited way if I want specific people to test a recent change because they filed a bug report. Others have other systems.
-When creating a dynamic library: you tagged a stable version, and you then change something in head. Should the version number in the library's name in the makefile be changed to something? If so, should it be changed to the next minor version, or to something with "-rc" at the end?
The key difference with dynamic libraries is that the interfaces should not change with minor changes. IIRC, the second digit (like "2" in 1.2.3) is the cutoff point. In other words, a program that uses library version 1.2.3 should continue to work without changes or recompilation with library 1.2.4 or 1.2.2 (assuming no bugs). This enables users to upgrade the library (from 1.2.3 to 1.2.4 or the like) without upgrading every binary that relies on it. With version 1.3.0, though, the interface to the library might change in a way that would require recompilation of the program or even changes to the source code. Thus, changing the library from 1.2.4 to 1.3.0 will require the user to upgrade all the programs that use that dynamic library (or keep the old version around along with the new one). Note that I've never created a publicly-released library, and it's been a while since I've read up on this, so I might be a little off on these details.
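To make the mechanics concrete, here is a sketch of how a versioned shared library is commonly built on Linux. The library name and version numbers are invented, and note that in the usual Linux scheme the soname embedded in the file pins only the first digit:

# The full name carries the release; the soname pins the interface version,
# so 1.2.4 can replace 1.2.3 without relinking dependent programs.
libdemo.so.1.2.3: demo.o
	g++ -shared -Wl,-soname,libdemo.so.1 -o $@ $^

demo.o: demo.cpp
	g++ -fPIC -O2 -c demo.cpp

At install time, the symlinks libdemo.so.1 -> libdemo.so.1.2.3 (used by the runtime linker) and libdemo.so -> libdemo.so.1 (used at link time) complete the scheme; ldconfig maintains the former automatically.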
-What names should be used for tags of versions?
I'm not sure what you mean by this.
-Does there need to be both a zipped version of the source code and one under VCS, and if so why is that zipped version needed?
You can do it any way you want; but as a general rule, you should provide source code in a tarball or .zip file because that's easier to download. Some package systems, such as RPM, require that a source package filename be specified, and so not providing source in such a package just complicates matters for packagers and therefore makes it less likely that they'll bother packaging your program at all. This in turn makes it harder for your users to use the program.
Note that most Linux programs' source code is provided as tarballs rather than as .zip files. Some cross-platform programs can be exceptions to this rule. For instance, I used .zip for rEFInd (a boot loader) because .zip is a little more common in Windows -- although I'm sure either would have worked fine, in practice.
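One low-effort way to produce such a tarball straight from the VCS is a "dist" target in the Makefile. A sketch assuming git, with a hypothetical project name and an existing tag v1.0.0:

VERSION = 1.0.0
PKG     = myprog-$(VERSION)

.PHONY: dist
dist:
	git archive --format=tar.gz --prefix=$(PKG)/ -o $(PKG).tar.gz v$(VERSION)

This also guarantees the tarball matches the tagged state of the repository exactly.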
You should probably provide binary builds of your software -- although in some cases this can be tricky because a binary built for Distribution A may not work on Distribution B because of library differences. The OpenSUSE Build Service (OBS) can help with this, although it's a bit of a pain to use.
-Are there any naming conventions for output binaries and libraries?
Not AFAIK, except of course for filename extensions like .so and .a.
-Are you supposed to let your makefile clean up .o files after compilation or not?
No, except for the "clean" target and anything else that's supposed to do this.
-Are there any conventions for makefiles for names of sections and variables in it? E.g. is it a good idea to have a "clean:" in your makefile to remove everything?
The "all" target builds everything, "clean" cleans up, "install" installs everything, and "uninstall" uninstalls everything. There's no law that says you have to have all of these, but they're common, particularly with big projects.
-When depending on another library which is hosted somewhere else, how do you handle that? And what about when depending on it statically?
This type of thing is generally handled by packaging programs (pacman, rpm, dpkg, etc.), not by developers' Makefiles. That said, Makefile builders like Autotools should check for the relevant development libraries and stop if they aren't present. That will handle the static linking issue, as well as other problems. On another level, when using RPM, a source RPM will include dependencies on the relevant development libraries, and Debian source files have a similar feature. Putting these files together is the responsibility of distribution maintainers, not of program authors.
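Short of full Autotools, a common lightweight check is pkg-config, which both fails early when a dependency's development files are absent and supplies the right compiler flags. A sketch, with libdemo as a made-up dependency name:

ifeq ($(shell pkg-config --exists libdemo && echo yes),)
  $(error libdemo development files not found; please install them)
endif
CXXFLAGS += $(shell pkg-config --cflags libdemo)
LDLIBS   += $(shell pkg-config --libs libdemo)

# For static linking, pkg-config --static --libs libdemo also pulls in
# the flags for libdemo's own dependencies.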
-Any other things I should know?
There's a huge range of acceptable practices on these issues. As a general rule, though, the smaller the package the more likely you are to find a simple Makefile that builds the whole project. Bigger projects are more likely to rely on multiple Makefiles, Autotools, or other complex pre-build software. More standardization emerges at the distribution level, in the form of source and binary RPMs, Debian packages, etc. You shouldn't need to worry too much about that. So long as your package builds with few or no changes on a variety of distributions, the distribution packagers can handle the rest. Build systems always support patches so that minor changes to Makefiles or whatnot can be incorporated. This frees you up to worry about other things rather than trying to support every minor variant distribution in existence.

Similar Messages

  • Open source driver for wrt1900ac?

    It is not possible to compile a fully working (including WiFi) OpenWrt build for the WRT1900AC. At this point anyway.
    For those who want to use OpenWRT, will the use of a closed source component be required indefinitely or is work happening to make a full open source release?

    The best way to get immediate information would be to contact Linksys directly on the phone and talk to someone at level 2 or higher support to see whether any information will be given. Company intentions are not usually disclosed, since that's probably company confidential. Linksys, like most manufacturers, never posts directly in forums unless it's something specific, and even then forum admins post this information. They may or may not be watching the forums as well. You could ask the forum admins here to see if they can get some information. Most users here don't have this kind of information and can only speculate. The best place for correct information would be directly with Linksys or with OpenWRT.
    Good Luck.

  • Can we install Best Practice for Fabricated Metals V1.604 on ECC 6.0 EHP5

    Hi Expert Team,
    We are planning to implement Best Practice for Fabricated Metals V1.604 on ECC 6.0 environment.
    We are on ECC 6.0 EHP5, but the SAP installation document mentions to install Fabricated Metals V1.604 on ECC 6.0 with EHP4.
    Also, I don't think SAP has released Best Practice for Fabricated Metals for EHP5.
    I would appreciate your confirmation of whether we can still install Best Practice for Fabricated Metals V1.604 on ECC 6.0 EHP5 and whether EHP5 supports BP V1.604. Please let us know if there will be any impact on Fabricated Metals V1.604 if we install it on EHP5.
    I didn't find any SAP note or document answering my question. Please provide some reference for the same.
    Looking forward to your response.
    Thank you!
    Guru
    Edited by: Gururaj Srinivasa on Jan 31, 2012 12:41 PM

    Dear Srinivasa,
    Did you find a solution for this issue? If you did, please mention it, since I have the same problem.
    Thanks in advance.

  • Best practice CO cost centres?

    Hi all,
    Please can you tell me where I can find a source for best practice CO cost centres to set up for my client? Would it be somewhere in ASAP, for example?
    Thanks very much..
    Mike

    Location and functions can be a determining factor justifying the creation of exclusive cost centers.
    The cost centers set up shall facilitate the main tasks and goals of advanced cost management.
    The following principles shall guide the setting up of cost centers.
    Management Overhead Cost Transparency:
    - resource consumption through processes
    - parallel quantity and value flow
    Increasing Efficiency:
    - resource leverage in overhead areas
    - continuous efficiency control of internal processes
    Fair Cost Calculation:
    - source-appropriate cost assignment of internal activities
    - cost of complexity
    - cost of product and process change
    Information for Strategic Decision Making:
    - reduce overhead cost through process optimisation
    - increase profitability through identification of non-profitable products and customers

  • New Library? Best Practices?

    I have a large photo library within Aperture and I would like to move a handful of projects that I rarely look at onto an external harddrive to lighten the load on my MacBook Pro. I am open to suggestions/best practices. Thank you for your time...

    I have exported a Folder that has multiple Projects in there. Afterwards I removed the hard drive and made some adjustments to a referenced picture (?) and it allowed me to do so?
    Then your images probably are not yet referenced but still managed. Did you use "File -> Export -> Master" or "File -> Relocate Master" to turn your images into referenced images? Export will just create copies, you need to relocate the masters.
    To check if your images are relocated, you can either turn on Badge overlays, then you will see arrow badges on the referenced images (see: How Badge Overlays Appear in Aperture: 
    http://documentation.apple.com/en/aperture/usermanual/index.html#chapter=11%26section=9%26tasks=true
    ) or create a smart album with the rule: "File status is: referenced".
    This album will collect any referenced file.
    Regards
    Léonie

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most App. Servers can generate the required classes while deploying the EJBs in the
    application i.e. at install time. While some ( BEA Weblogic and IBM Websphere are
    two that we are aware of ) allow these classes to be generated before the installation
    time and the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways
    in which the classes can be generated. The first is using 'ejbc' (assume we are
    using BEA Weblogic ), which generates the stub, skeleton and additional classes for
    the application and returns the file, say, "Deployable_myapp.ear" containing all
    the necessary classes and files. This file is the one that is then installed. The
    other option is to install the file "myapp.ear" and let the Weblogic App. server
    itself, generate the required classes at the installation time.
    If the first way, of 'pre-generating' the stubs is followed, does it require us to
    separately generate the stubs for each versions of the App. Server that we support
    ? i.e. if we generate a deployable file having the required classes using the 'ejbc'
    of Weblogic Ver5.1, can the same file be installed on Weblogic Ver6.1 or do we
    have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the
    nature/magnitude of the risk that we are taking in terms of the failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    EVEN if you just want to distribute schemas, you may want to distribute the entire database, and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs -- though not if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This causes downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine, and while it is not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no "best practice" for this. There are multiple features, each offering its own advantages and drawbacks, which allow them to align with different business requirements.

  • Composite Release Roles Best Practice

    I have a question in regards to best practice for utilizing composite release roles.
    We had an issue recently where Purchasing Doc Type (M_BEST_BSA - BSART), Release Code (M_EINK_FRG - FRGCO) and Release Group (M_EINK_FRG - FRGCO), which are maintained at the task role, were overwritten with blanks when derived from the template role. The template role has these three fields maintained as blanks. All other data is consistent from the template role to the task role, with the exception of the Organizational Levels (i.e. Plant, Purchasing Org, Purchasing Group). We then have a variety of task roles that make up the composite.
    Would it make sense to maintain these three fields as Org Level data in the task role?
    What are our other options?
    Thanks for your assistance.

    We do have DEV, QA, PRD, Training and Sandbox environments. Our standard practice is to develop in DEV (200), roll out to the other DEV clients, and then transport to QA for UAT. I have come across occasions where the roles are not consistent across all DEV clients, and if development work were completed on a role in DEV that was not consistent with the production role, we would be fubar. This did occur a few weeks back; however, it was caught in time.
    Chain of events went as follows
    1. Request submitted to remove a plant value
    2. Dev work completed and moved to QA.  Based on screen shots of UAT we can see that the three fields were yellow at this point (blank values)
    3. End user did not recognize the caution flags as they were only looking at org value to ensure plant was removed.
    4. Developer failed to highlight the unmaintained fields
    5. Roles moved to production which halted purchasing teams
    This whole thing is very confusing.
    My only guess is that the development work was completed on an old role in the wrong dev client. But that opens up another issue: why was there an old role at all, when standard practice is to move the new roles to all dev clients once completed?

  • What's the Best Open Source DB for use with Kodo?

    Hi everyone,
    In terms of ease of setup and use, tools to view info in the database, and least difficulty in
    running with Kodo, what is the best open source database to use? I'm used to using Oracle and
    SQLPlus. I need to use an open source DB for a learning environment, and I'd like your informed
    opinion.
    Thanks,
    David Ezzio

    I have been using postgresql 7.1 with Kodo for a while with mostly positive results, and currently
    have it deployed with Kodo 2.2.3. I prefer it to mysql because its feature set is a little richer,
    and supports transactions natively. My experience with mysql (without jdo - I haven't tried it with)
    is good, but there were little things missing in mysql 3.x, e.g. the ability to do a join in a
    DELETE statement.
    BTW, there are some severe problems somewhere in the 2.2.4 release with postgresql if you intend to
    eventually deploy on it. It also apparently has some problems invoking postgresql's indices, making
    it a little less than optimal. But as a learning environment, it's what I'd recommend.
    -Mike
    Marc Prud'hommeaux wrote:
    David-
    Here is the rundown of the databases I have experience with:
    MySQL: Fairly simple to install (especially if you run Debian Linux), but
    configuration, especially adding users, can be a pain. A separate open
    source project called "mysql-navigator" makes it fairly easy to do
    simple queries, inserts, etc. In most of my tests it outperforms
    PostgreSQL, but your mileage will vary. It has a sane CLI that supports
    modern features like line history, etc (unlike the horrific SQLPlus).
    PostgreSQL: People often say that it is a more "academically
    correct" database than MySQL. I've also found it to be quite a bit
    easier to set up. Their GUI (called "pgaccess") is simplistic, but does
    most things you need. Has a CLI similar to MySQL's.
    HypersonicSQL: By far the easiest to set up (just drop the jar in your
    CLASSPATH), but is java only, can be rather slow, and has no GUI tools
    available that I know of (except various free vanilla-JDBC GUI tools).
    We feel that Kodo works quite well with all these DBs. The MySQL JDBC
    driver seems a bit less buggy than Postgres', but their transaction
    support is very recent and not very well tested. If I had to pick one or
    the other, I would probably go with MySQL.
    David Ezzio <[email protected]> wrote:
    Hi everyone,
    In terms of ease of setup and use, tools to view info in the database, and least difficulty in
    running with Kodo, what is the best open source database to use? I'm used to using Oracle and
    SQLPlus. I need to use an open source DB for a learning environment, and I'd like your informed
    opinion.
    Thanks,
    David Ezzio
    --
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com
    Kodo Java Data Objects Full featured JDO: eliminate the SQL from your code
    --
    Mike Bridge

  • [ANN] XINS 2.1 open source Web Services framework release

    XINS 2.1 Web Services Framework has been released.
    XINS is an open source Web Services Framework based on simple specifications of the Web Service in XML and
    generation of code and documentation from the specification.
    The generation includes Client JAR with its Javadoc, Server side template with its Javadoc, documentation in OpenDocument Format,
    documentation in HTML including the test forms, WSDL file, unit tests (JUnit) and stubs.
    The Web Services accept several protocols including REST, SOAP, XML-RPC, XML, JSON Yahoo! and JSON-RPC.
    What's new:
    * Start the API with java -jar <api name>.war
    * Improved generated specification in OpenDocument Format
    * Include/exclude calling convention with ACLs
    * New calling convention that maps SOAP request and response as the wsdl2api command mapping.
    * Smaller generated build.xml
    * Added possibility to include other runtime properties files
    * The runtime property location can be a URL
    * Swing Graphical User Interface
    * New tools: emma, glean, webstart
    * New target: javadoc-test-<api name>, javadoc-apis
    * Bug fixes and small RFEs
    Download XINS 2.1:
    Windows installer: http://prdownloads.sf.net/xins/xins-2.1.exe?download
    TAR GZ archive: http://prdownloads.sf.net/xins/xins-2.1.tgz?download
    Resources:
    Web site: http://xins.sourceforge.net/
    XINS demos: http://xins.sourceforge.net/demo.html
    Documentation: http://xins.sourceforge.net/documentation.html
    User guide: http://xins.sourceforge.net/docs/index.html

    I recommend you implement your web service with JAX-WS 2.0.
    Axis (both versions) is good, but why use something that is not included in the JEE API when Java provides the same thing with better performance?
    Personally, I try to avoid non-standard technologies, even though they can sometimes be better than the core Java implementation.
    I don't know Xfire.
    The good:
    - JAX-WS performance is better than Axis's,
    - you can create your web service simply with annotations (write a class, then easily expose it as a service)
    - it supports every kind of service invocation (callback, asynchronous, ...)
    - the architecture is nice (you can operate at the SOAP level)
    The bad:
    - it is JEE 5 or JSE 6 dependent.
    - there is a serious lack of documentation and examples for it on Java web sites and the internet.

  • What is the best practice in securing deployed source files

    hi guys,
    Just yesterday, I developed a simple image cropper using ajax
    and flash. After compiling the package, I noticed the
    package/installer delivers the exact same source files as
    developed to the installed folder.
    This didn't concern me much at first, but come to think of
    it, this question keeps coming back to me:
    "What is the best practice in securing deployed source
    files?"
    How do we secure an application's installed source files from
    being tampered with, especially after installation? E.g. modifying
    spraydata.js files can be done easily with an editor.

    Hi,
    You could compute a SHA or MD5 hash of your source files on
    first run and save these hashes to EncryptedLocalStore.
    On startup, recompute and verify. (This, of course, fails to
    address when the main app's swf / swc / html itself is
    decompiled)

  • What is the best practice for package source locations?

    I have several remote servers (about 16) that are being utilized as file servers that have many binaries on them to be used by users and remote site admins for content. Can I have SCCM just use these pre-existing locations as package sources, or is this
    not considered best practice? 
    Or
    Should I create just one package source within close proximity to the Site Server, or on the Site Server itself?
    Thanks

    The primary site server is responsible for grabbing the source data and turning it into packages for distribution points. So while you can use ANY UNC path as a source location for content, you should be aware of where that content exists in regards to your primary site server. If your source content is in Montana but your primary server is in California ... there's going to be a WAN hit ... even if the DP it's destined for is also in Montana.
    Second, I strongly recommend locking down your source UNC path so that only the servers and SCCM admins can access it. This will prevent side-loading of content as well as any "accidental changing" of folder structure that could cause your applications/packages to go crazy.
    Put the two together, and I typically recommend you create a DSL (distributed source library) share and slowly migrate all your content into it as you create your packages/applications. You can then safely create batch installers, manage content versions, and do other things without fear of someone running something out of context.

  • Best practice for opening/closing JDBC conection

    I've written a program that accesses a database, and I'd like to know the best practice for opening and closing that connection. For example, should I use a try{} finally {} block,
    Connection con = null;   // driverClassName, url, user, pass and sql defined elsewhere
    try {
        // load driver (optional with JDBC 4+ drivers)
        Class.forName(driverClassName);
        // create connection
        con = DriverManager.getConnection(url, user, pass);
        // create statement object
        Statement st = con.createStatement();
        // execute the SQL statement
        st.executeUpdate(sql);
        st.close();
    } finally {
        // close connection
        if (con != null) con.close();
    }
    Or should I split the code into separate methods, maybe an init() method that loads the driver and makes the connection, an execute() method that creates the statement and executes it, and finally a cleanUp() method that closes the connection.
    So, which would be the best way to do this?

    Hello,
    your idea seems OK to me. However, there are a couple of points to consider:
    1. Do you just want to execute one SQL query? Or will your program execute several? If the latter case is possible, then you do not want to close your connection between queries. Opening and closing a connection to a database takes 'a lot of time', relatively speaking. In this case, it is better to save the connection and reuse it every time that you need it.
    2. Do not forget to close the statements and result sets that you create or get back from JDBC. Depending on the database (e.g. Oracle) that you are using, you can run out of cursors, and your application will stop.

  • Best practice for keeping a mail session open in web application?

    Hello,
    We have a webmail like application where users login with their IMAP credentials, then are taken to an authenticated area of the site where they can manage different things about their email account.
    Right now the application is opening and closing a mail store connection (via a new javax.mail.Session) for each page load based on the current logged in user credentials. To me this seems like it would be a bad practice to keep opening and closing a connection each page load.
    Are there any best practices for this situation? It would be nice to be able to open the connection to the mail server on login, then keep that connection open until the person logs out, session expires, etc.
    I can probably put the javax.mail.Session into the HTTP session, but that seems like it would break any clustering functionality of Tomcat. This would be fine if the machine the user is on never failed, but I'd assume that if they failed over to another machine, the mail session would be gone. Maybe keeping the mail session in the HTTP session, checking for a connection, and then first attempting to reconnect with the logged-in credentials before giving up would be a possibility?
    Any pointers would be appreciated

    If you keep the connection open across pages, you're going to need to deal with
    timeouts - from the http session and from the mail server.
    If you don't keep the connection open, you're going to need to "resynchronize"
    your view of the store/folder with each operation, in case the folder is modified
    by another session.
    The former is easier in the common cases, especially if you don't care how gracefully
    you handle failures. The latter is more difficult in the common cases, but handles
    failure better, and in particular handles clustering better. You'll need to measure it to
    see if it meets your performance and scalability requirements. You may need to mix
    the two approaches to get acceptable performance.

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are best practices to reduce downtime for database releases on 10.2.0.3? What DB changes can be rolling and what can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers at one or the other database. When you want to upgrade, you point all the middle tier servers at database A except one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you configure all the app servers to point at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to its normal state of balancing between databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices or more specifically, how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.
