Build specifications best practice

I am just wondering what other users do when building Installers, to find the most efficient method.
I used to include the LVRT Engine (LabVIEW Runtime) in the Installer Build
PROS:
1) Only one Installer is required to install everything on the Target Computer, making installation much easier.
CONS:
1) The Installer is very large - hundreds of megabytes - making it harder to distribute upgrades (e.g. through e-mail).
2) Each re-build of the Installer takes a VERY long time, because it has to include the LVRT.
3) If you change to a different LV version you then have to re-build using the new LVRT and the application EXE.
So I decided to split the installation of the LVRT from the Application EXE.
PROS:
1) The Executable is now very small - say 10 megabytes even for a large application - making it easier to send upgrades.
2) Very quick to build a new Installer.
3) For each version of the LVRT installed, there might be several upgrades to the EXE, making this approach more efficient.
CONS:
1) I now have to build & run an Installer for the Application EXE AND run the LVRT Installer to install the application on the Target Computer.
Assuming we use the second approach, what is the best way of building and installing new and/or upgrading EXE applications?
In my application I build a new Executable for each new software change and then create an Installer (with the same version number) to install the new executable as shown below.
However, each time I build a new Installer a new Upgrade Code is generated - {45D6510F-EB72-4DA2-A8E6-A4CB2363129A} in the Version Information window. When I run the new Installer, if there is an existing installation present it just installs over it - BUT when you look in the Add/Remove Programs list there is a NEW entry each time a new Installer is run.
Now to overcome this I could just use one common Installer that I upgrade each time with the latest EXE build as follows...
But then I don't have separate Installers for each version, and if I want to go back a version I have to re-build the Installer.
So what is the best method?
Chris

Norbert_B wrote:
For long-term support, I suggest you split the installers apart: one containing all components required to execute the application (drivers, LV RTE), another one for the application and its support files (ReadMe, config, ...). Let's call the first installer the "Framework Installer", the second the "Application Installer".
One correction in terminology, btw:
Since LV RT is the abbreviation for LabVIEW Real-Time, we use LV RTE (LabVIEW RunTime Engine) as the abbreviation for the components required to execute compiled LV code.
So why split up?
First, the big Framework Installer is not going to change much. It has to be updated when you update LV or some of the drivers, but otherwise it can stay the same.
Second, updates for the application can be supplied in small packages. Most developers like to use a simple "copy & replace" mechanism for this, which works fine as long as you update only a few files; but once you have to update significant things (several files, changes in the expected directory structure), I recommend using an installer.
Additional note on installers:
Each installer includes something called an "upgrade code". This is the key the installer uses to register the application on the system. When another installer sharing this key is run, the OS treats it as an update of the existing component. This is the recommended way to replace things with a newer version of the application; manual copying and replacing might introduce errors...
just my 5 cents,
Norbert
I also wonder about the upgrade code. Would you use the same code for the framework installer and the application installer? If so, wouldn't the application installer remove the framework again?
When using different upgrade codes, the user will have 2 entries in Control Panel > Add/Remove Programs. In that case, how would installing a new framework deal with the files modified by another installer (the EXE installer)?
Also, will uninstalling both "products" leave a clean system (independent of the uninstallation order)?

Similar Messages

  • Activate Scenarios with Solution Builder CRM Best Practices V1.2007

    Hi,
I finished all the steps in the Quickguide for CRM Best Practices V1.2007, right to the end.
    All worked fine without any problem.
    Now I want to activate a scenario.
1. In the field Workbench I get a list of 15 Requests/Tasks; I'm only able to select one.
2. In the field Customizing I do not get any values.
3. How do I maintain these fields?
4. Do I have to create a customizing request?
    Can anybody tell me how to proceed with this step? I copied the standard solution to my favorite Solution and marked seven scenarios.
Perhaps there is other documentation than Solution_Builder_Quick_Start_V4?
    Regards
    Andreas

    Hi Andreas,
In the same popup window, at the bottom, you will find options to create workbench and customizing requests.
You can assign only one workbench request and one customizing request for all the activities of Solution Builder.
    If you do not have an existing customizing request, choose the option to create one.
    Regards,
    Padma

• Building a best practice web application using ColdFusion and Java EE

I've been tasked with rewriting an application using ColdFusion.  I cannot seem to find a lot of information on best practice development in ColdFusion.  I am an experienced Java developer who has never used ColdFusion before.  I want to build this application using a synergy of ColdFusion and Java EE technologies.  Can someone recommend a book that outlines how to develop in ColdFusion?  Ideally the book assumes the reader is an experienced developer with no exposure to ColdFusion, and the methods it outlines are still "best practice" methods.

    jaisheela wrote:
    Hello Friends,
    I am also in the same situation.
I am building a new web application using JSF and AJAX.
The requirement is that I need to use the IBM versions of DOJO and JSF, but I need to develop the whole application using Eclipse 3.3.2 and Tomcat 5.5.
With the IBM versions of DOJO and JSF, will Eclipse and Tomcat help to speed up the development, or do you suggest I go for Rational Application Developer and WebSphere Application Server?
If I need to go with RAD and WAS: I am new to both, so is it easy to use them for this kind of application and to implement the web application fast?
Any feedback will be a great help.

Those don't sound like requirements of the system to me. They sound more like someone wants to improve their CV/resume.
    From what I've read recently, if it's just fast you want, look at Ruby on Rails

• Building Block & Best Practice

    Dear All,
I have a query. I have been searching on the net for quite some time but I am unable to find the building block configuration and Best Practices for E-Recruitment.
Can anyone please help me out with it? In case SAP does not offer them, what is the next best option I have?
Your feedback and help would be highly appreciated.
    points will be awarded as well.
    Regards

    hi
Just keep in mind that before you do any configuration, there has to be a transition program from the manual system to the automated system.
Document every step in the AS-IS stage and then let them know what is and is not possible in the E-Rec module.
Do not ever agree to major 'Z' (custom) development; that is more important.
    Regards
    Sameer

  • Building tables - Best practices?

    Hi all,
I hope this is a good place to ask a general question like this. I'm trying to improve myself as a DB designer/programmer and am wondering what the current practices are for deploying a database that must be kept running at the highest possible performance (as regards selecting data and keeping the database clean).
    Basically, here are the specific topics of concern for me:
    - table sizing
    - index sizing
    - oracle parameter tuning
    - maintenance work required to be done on tables/indexes
The things I've studied were all based on Oracle 8i, and I'm wondering if much has changed in 9i and/or 10g.
    Thanks.
    Peter

Actually I'm not very new to database work now, but I still consider myself not quite proficient in certain aspects of a typical DBA's job. For that reason, I'm trying to keep my questions very general, as though I'm learning them afresh.
    It does seem that I'm trying to ask something that is too broad to bring up in forum discussions... I'll go back and do some independent studies then come back to the forum with better questions. :)
When looking through the 10g bug reports in Metalink, it made me uncomfortable about some issues people have been running into (it's been a while since I did the initial evaluation, and I forgot which specific issues I looked at). I realized that Oracle 10g provides a lot of conveniences with its new web-based EM and EM Server (I'm especially interested in the new reports and built-in automation that Oracle provides), and also in grid deployments for high-availability systems, but we've been held back by many reasons from going forward with 10g at this time. Having said that, moving to 10g is still planned for the future, so I am continuing the evaluation in several aspects that are specific to our design, to determine what we can use and/or abandon in our existing deployment processes.
    Thanks for everyone's time, best wishes.
    Peter

  • Best practice for intervlan routing?

Are there some best practices for inter-vlan routing?
I've been reading a lot and I have seen these scenarios:
router on a stick
inter-vlan routing at the core layer
inter-vlan routing at the distribution layer
Or is inter-vlan routing needed at all if the switches will do the routing?
I've done all of the above, but I just want to know what's current.

The simple answer is it depends, because there is no one right solution for everyone.
So there are no specific best practices. For example, in a small setup where you may only need a couple of vlans, you could use an L2 switch connected to a router or firewall using subinterfaces to route between the vlans, as sketched below.
But that is not a scalable solution. The commonest approach in any network with multiple vlans is to use L3 switches to do this. This could be a pair of switches interconnected and using HSRP/GLBP/VRRP for the vlans, or it could be stacked switches/VSS etc. You would then dual-connect your access layer switches to them.
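For illustration, the router-on-a-stick setup is just one 802.1Q subinterface per vlan on the router's trunk port (a generic sketch; interface names and addresses are placeholders only):

    ! one subinterface per vlan; the router is the default gateway for each
    interface GigabitEthernet0/0.10
     encapsulation dot1Q 10
     ip address 10.0.10.1 255.255.255.0
    !
    interface GigabitEthernet0/0.20
     encapsulation dot1Q 20
     ip address 10.0.20.1 255.255.255.0

The L2 switch simply trunks vlans 10 and 20 up to that router port.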
In terms of core/distribution/access layers, in general if you have separate switches performing each function you would have the inter-vlan routing done on the distribution switches for all the vlans on the access layer switches. The core switches would be used to route between the distribution switches and other devices, e.g. WAN routers, firewalls, maybe other distribution switch pairs.
Again, generally speaking, you may well not need vlans on the core switches at all, i.e. you can simply use routed links between the core switches and everything else.
The above is quite a common setup, but there are variations, e.g.:
1) a collapsed core design, where the core and distribution switches are the same pair. For a single building with maybe a WAN connection plus internet this is quite a common design, because having a completely separate core is usually quite hard to justify in terms of cost etc.
2) a routed access layer. Here the access layer switches are L3 and the vlans are routed at the access layer. In this instance you may not even need vlans on the distribution switches, although servers are often deployed onto those switches to save cost, so you may.
    So a lot of it comes down to the size of the network and the budget involved as to which solution you go with.
All of the above is really concerned with non-DC environments.
In the DC, the traditional core/distribution (or aggregation)/access layer design was also used and is still widely deployed, but in relatively recent times new designs and technologies are changing the environment, which could have a big impact on vlans.
It's mainly to do with network virtualisation: where the vlans are defined, and where they are not only routed but where network services such as firewalling, load balancing etc. are performed.
It's quite a big subject, so I didn't want to confuse the general answer by going into it, but feel free to ask if you want more details.
    Jon

  • SQL Server 2012 Infrastructure Best Practice

    Hi,
I would welcome some pointers (direct advice or pointers to good web sites) on setting up a hosted infrastructure for SQL Server 2012. I am limited to using VMs on a hosted site. I currently have a single 2012 instance with the DB, SSIS and SSAS on the same server.
I currently RDP onto another server which holds the BI tools (VS2012, SSMS, TFS etc), and from here I can create projects and connect to SQL Server.
Up to now, I have been heavily restricted by the (shared tenancy) host environment due to security issues, and have had to use various local accounts on each server. I need to put forward a preferred environment that we can strive towards, which is relatively scalable and allows me to separate Dev/Test/Live operations and utilise Windows Authentication throughout.
    Any help in creating a straw man would be appreciated.
    Some of the things I have been thinking through are:
    1. Separate server for Live Database, and another server for Dev/Test databases
    2. Separate server for SSIS (for all 3 environments)
    3. Separate server for SSAS (not currently using cubes, but this is a future requirement. Perhaps do not need dedicated server?)
4. Separate server for Development (holding VS2012, TFS2012, SSMS etc). Is it worth having a local SQL Server DB on this machine? I was unsure where SQL Server Agent jobs are best run from, i.e. from the Live DB only, from another SQL Server instance, or utilising SQL Server Agent on all (Live, Test and Dev) SQL Server DB instances. Running from one place would allow me to have everything executable from one place, with centralised package reporting etc. I would also benefit from some licence cost reductions (Kingsway tools).
    5. Separate server to hold SSRS, Tableau Server and SharePoint?
    6. Separate Terminal Server or integrated onto Development Server?
7. I need a server to hold file (import and extract) folders for use by SSIS packages, which will be accessible by different users.
I know (and apologise that) I have given little info about the requirement. I have an opportunity to put forward my requirements for x months into the future, and there is a mass of info out there which is not distilled in a way I can utilise. It would be helpful to know what I should aim for in terms of separate servers for the different services and/or environments (Dev/Test/Live), and specifically best practice for where SQL Server Agent jobs should be run from, and perhaps a little info on how best to control deployment/change control. (Note my main interest is not in application development; it is in setting up packages to load/refresh data marts for reporting purposes.)
    Many thanks,
    Ken

    Hello,
In all cases, consider that having a separate server may increase licensing or hosting costs.
Please allow me to recommend Windows Azure for cloud services.
Answers:
1. This is always a best practice.
2. Having SSIS on a separate server allows you to isolate import/export packages, but may increase network traffic between servers. I don't know if your provider charges money for incoming or outgoing traffic.
3. SSAS on a separate server is certainly a best practice too. It contributes to better performance and scalability.
4. SQL Server Developer Edition costs only about $50. Are you talking about centralizing job scheduling on an on-premises computer rather than having jobs enabled on a cloud service? Consider PowerShell to automate tasks.
5. If you will use Reporting Services in SharePoint integrated mode, you should install Reporting Services on the same server where SharePoint is located.
6. SQL Server can coexist with Terminal Services, with the exception of clustered environments.
7. SSIS packages may be competing with users for access to files. Maybe copying the files to a disk resource available to the SSIS server would be a better solution.
    A few more things to consider:
Performance of the storage subsystem on the cloud service.
How many cores? How much RAM?
Creating a Domain Controller or using Active Directory services.
    These resources may be useful.
    http://www.iis.net/learn/web-hosting/configuring-servers-in-the-windows-web-platform/sql-2008-for-hosters
    http://azure.microsoft.com/blog/2013/02/14/choosing-between-sql-server-in-windows-azure-vm-windows-azure-sql-database/
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • What is the best practice way of stopping a sub-domain from being indexed?

    Hi there
    I notice that a client site is being indexed as both xxx.com.au [their primary domain] as well as xxx.PARTNERDOMAIN.com.au.
I have Googled quite a bit on the subject and have browsed the forums, but can't seem to find any specific best practice approach to having only the primary domain indexed.
The method that seems to be the most recommended is having a second robots.txt file for the sub-domain xxx.PARTNERDOMAIN.com.au with Disallow: /
    Does anyone have a definitive recommendation?
    Many thanks
    Gavin

Sorry, I assumed they were two different sites; they are the same "content", just two different URLs?
Canonical links will help, but they won't stop you being indexed or remove you from the index; they only add higher index weight to the canonically linked URL. Plus, only search engines that support that meta tag will honour it.
You essentially need two robots.txt files to do this effectively, or add the META tag if you can split the sites somehow.
There is a more complex way: you could host the second domain somewhere else and use .htaccess or similar to do a reverse proxy to the main site, pulling the contents in realtime - all except the robots.txt file. That way you have two sites with only one to update, but still two robots.txt files.
http://en.wikipedia.org/wiki/Reverse_proxy
I've done this for a few sites; you are essentially adding a middle man. It will be a tad slower depending on how far apart the two servers are, but it is like having a CNAME domain with total control.
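For reference, the robots.txt served on the sub-domain would just be the standard disallow-everything file (nothing site-specific about it):

    User-agent: *
    Disallow: /

The primary domain keeps its normal robots.txt (or none at all), so only the partner-domain copy of the site is blocked from indexing.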

  • SCM Best Practice Scenarios - can't find them on Service Marketplace

    I'd like to run Best Practice SCM scenarios such as:
         S24 Vendor-Managed Inventory
         S41 Supplier-Managed Inventory
         S51 Available-to-Promise with Product Allocation
    but can't find them anywhere.
In SCM 5.0 they could be located in the Best Practice area, but the links are missing now. When I go to the Industry Specific Best Practices (i.e. for Vendor-Managed Inventory), all I see is scenarios related to ECC 6.0.
    Thanks in advance for any help locating these.

    Chris,
Small world.  I live out in Chester County.
BPPs can be lengthy and sometimes take a while to load.  They are in MS Word format, so you must have a way to read MS Word docs.  Also, our friends in Walldorf don't always have their servers up.  Sometimes it helps to try again later, or during a period of low activity.  I usually have good luck during late afternoon and early evening, EST time zone.
Here are some links.  I haven't tried to load all of them, but I was successful with the first four.  Give it a shot.
    http://help.sap.com/bp_scmv250/documentation/S24_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S35_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S37_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S38_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S39_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S40_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S41_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S45_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S46_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S47_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S48_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S51_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S52_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S53_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S66_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S68_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S70_BPP_EN_DE.doc
    http://help.sap.com/bp_scmv250/documentation/S72_BPP_EN_DE.doc
    Rgds,
    DB49

  • Best Practice / Solutions for using 11g DB+x86 or Small Computer to build iaas/paas?

My customer wants to build their own IaaS/PaaS using Oracle 11g DB, plus x86 or other small computers, running a Linux, Solaris or Unix OS.
Oracle Exadata is not feasible for them to use currently.
The customer wants to know whether other customers have implemented their cloud solutions based on these or not.
If yes, they would like them to share their experience, presentation slides, best practices etc.
    Is there an Oracle email DL for asking this kind of question?
    Thanks,
    Boris

Like Rick, I'm not aware of a specific "cloud implementors forum". Internally, Oracle has lots of material on implementing cloud, using any platform at all, although obviously we feel Engineered Systems are the most cost-effective solution for many customers.
Are you interested in IaaS, i.e. virtualised hardware, or PaaS, i.e. DBaaS? They should not be confused, and neither is required for the other; in fact, using IaaS to implement "DBaaS", as the OpenStack Trove API attempts to do, is probably the most counter-productive way to go about it.
Define the business-visible services you will be offering, and then design the most efficient means of supporting them. That way you gain from economies of scale, and set up appropriate management systems that address issues like patching, security, database virtualisation and so on.

  • Best Practices for Packaging and Deploying Server-Specific Configurations

We have some server-specific properties that vary for each server. We'd like to have these properties collected together in their own properties file (either .properties or .xml is fine).
What is the best-practices way to package and deploy an application (as an ear file), where each server needs some specific properties?
We'd kind of like to have the server-specific properties file be stored external to the ear on the server itself, so that the production folks can configure each server's properties at the server. But it appears that an application can't access a file external to the ear, or at least we can't figure out the magic to do it. If there is a way to do this, please let me know how.
Or do we have to build a unique ear for each server? This is possible, of course, but we'd prefer to build one deployment package (ear), and then ship that off to each server that is already configured for its specific environment. We have some audit requirements where we need to ensure that an ear that has been tested by QA is the very same ear that has been deployed, but if we have to build one for each server, this is not possible.
Any help or pointers would be most appreciated. If this is an old issue, my apologies; would you please point me to any previous material to read? I didn't see anything after searching through this group's archives.
Thanks much in advance,
Paul
Paul Hodgetts -- Principal Consultant
Agile Logic -- www.agilelogic.com
Consulting, Coaching, Training -- On-Site & Out-Sourced Development
Java, J2EE, C++, OOA/D -- Agile Methods/XP/Scrum, Use Cases, UI/IA

The one drawback to this is that you have to go all the way back to Ant and the build system to make changes. You really want these env variables to be late-binding.
cheers
mbg
    "Sai S Prasad" <[email protected]> wrote in message
    news:[email protected]...
    >
    Paul,
    I have a similar situation in our project and I don't create ear filesspecific
    to the environment. I do the following:
    1) Create .properties file for every environment with the same attributename
    but different values in it. For example, I have phoneix.properties.NT,phoenix.properties.DEV,
    phoenix.properties.QA, phoenix.properties.PROD.
    2) Use Ant to compile, package and deploy the ear file
    I have a .bat file in NT and .sh for Solaris that in turn calls theant.bat or
    ant.sh respectively. For the wrapper batch file or shell script, you canpass
    the name of the environment. The wrapper batch file will copy theappropriate
    properties file to "phonenix.properties". In the ant build.xml, I alwaysrefer
    to phonenix.properties which is available all the time depending on theenvironment.
    >
    It works great and I can't think of any other flexible way. Hope thathelps.
    >
    Paul Hodgetts <[email protected]> wrote:
    We have some server-specific properties that vary for each server. We'd
    like to have these properties collected together in their own properties
    file (either .properties or .xml is fine).
    What is the best-practices way to package and deploy an application (as
    an
    ear file), where each server needs some specific properties?
    We'd kind of like to have the server-specific properties file be stored
    external to the ear on the server itself, so that the production folks
    can
    configure each server's properties at the server. But it appears that
    an
    application can't access a file external to the ear, or at least we can't
    figure out the magic to do it. If there is a way to do this, please
    let me
    know how.
    Or do we have to build a unique ear for each server? This is possible,
    of
    course, but we'd prefer to build one deployment package (ear), and then
    ship that off to each server that is already configured for its specific
    environment. We have some audit requirements where we need to ensure
    that
    an ear that has been tested by QA is the very same ear that has been
    deployed, but if we have to build one for each server, this is not
    possible.
    Any help or pointers would be most appreciated. If this is an old issue,
    my apologies, would you please point me to any previous material to read?
    I didn't see anything after searching through this group's archives.
    Thanks much in advance,
    Paul
    Paul Hodgetts -- Principal Consultant
    Agile Logic -- www.agilelogic.com
    Consulting, Coaching, Training -- On-Site & Out-Sourced Development
    Java, J2EE, C++, OOA/D -- Agile Methods/XP/Scrum, Use Cases, UI/IA
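For what it's worth, one common way around "the application can't access a file external to the ear" is to pass the file's location to the JVM as a system property in each server's startup script, so the identical ear can ship to every environment. A minimal sketch, assuming a made-up property name app.config and path (neither is from this thread):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public final class ServerConfig {
        private static final Properties PROPS = new Properties();

        static {
            // path supplied at JVM startup, e.g. -Dapp.config=/opt/myapp/server.properties
            String path = System.getProperty("app.config");
            if (path != null) {
                try (InputStream in = new FileInputStream(path)) {
                    PROPS.load(in);
                } catch (IOException e) {
                    throw new ExceptionInInitializerError(e);
                }
            }
        }

        private ServerConfig() {}

        public static String get(String key) {
            return PROPS.getProperty(key);
        }
    }

Each server's start script sets its own -Dapp.config value, so QA and production run the very same ear and only the external file differs.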

  • Where to find best practice building blocks

    Hello,
Does anyone know where to find the best practice building blocks? I found them some time ago, but cannot find them now.
It is a link to the full documentation list for a specific industry (not the SAP Library).
    Thank you!

    http://help.sap.com/bp_bblibrary/600/BBlibrary_start_newlook.htm
Exclusively for retail preconfigured scenarios (best practices) - http://help.sap.com/bp_retail603/Retail_US/HTML/index.htm
For upcoming SAP Best Practices versions for SAP ERP 6.0, see http://service.sap.com/bestpractices
    Enjoy!

  • Best practices for building menus using resource bundles?

    Greetings; I am curious to find out what the current best practices people are using to build menus/menu bars using resource bundles, specifically ListResourceBundle.
    What I am trying to figure out is how best to write my Swing application so it does not need to know what menu items it needs to grab from the resource bundle.
    The only idea I have come up with is this:
class MyBundle extends ListResourceBundle {
    private Object[][] contents = {
        { "menubar", new Object[][] { { "menu.file.item", "blah" } /* , ... */ } }
    };

    protected Object[][] getContents() {
        return contents;
    }
}
Inside the GUI class:
Object[][] menubar = (Object[][]) resourceBundle.getObject("menubar");
I would then iterate over the menu bar items and build the menu. I would have to use a naming scheme and then parse appropriately to know when to start a new menu, when a submenu occurs, etc.
    Is this the common practice, or does anyone know of a more clever way of doing this? I've searched various FAQs and googled about, but I have yet to come across any sort of tutorial or page that covers this.
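For what it's worth, here is a minimal sketch of the iterate-and-build step described above (the key scheme - a menu title followed by an array of item keys - is made up for illustration, not a standard):

    import java.util.ResourceBundle;
    import javax.swing.JMenu;
    import javax.swing.JMenuBar;
    import javax.swing.JMenuItem;

    public class MenuBuilder {
        public static JMenuBar build(ResourceBundle bundle) {
            JMenuBar bar = new JMenuBar();
            // each "menubar" row: { menu title, array of item keys to look up }
            Object[][] menus = (Object[][]) bundle.getObject("menubar");
            for (Object[] row : menus) {
                JMenu menu = new JMenu((String) row[0]);
                for (String itemKey : (String[]) row[1]) {
                    // the label for each item also lives in the bundle
                    menu.add(new JMenuItem(bundle.getString(itemKey)));
                }
                bar.add(menu);
            }
            return bar;
        }
    }

Submenus would need a recursive variant of the same walk, which is where the naming scheme would come in.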

Anyone have any input on this? Am I close to the solution people are using out in real production environments?

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
I usually copy containers as instructed by the docs.
I then set the source system parameters.
I then generate the needed parameters and assemble the copied container for ALL subject areas present in the container.
I then build an execution plan JUST FOR THE 4 SUBJECT AREAS I need, and set whatever is required before running it.
QUESTION - When I copy the container, should I delete all the subject areas I don't need out of it, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake and have it contain just a few subject areas, rather than wait until I build the execution plan and then focus on the few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas and then just don't include them in the execution plan. Otherwise you have the possibility of running into the situation where you need to include another subject area in the future and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt

  • Best Practice Guide for object management in Integration Builder

    Hi All,
I'm looking for a best practice guideline, or a strategy pattern, for the management and maintenance of Integration Builder objects.
Is there such a document, or does someone have experience with a huge number of objects (200-300 interfaces)?
I was thinking about a folder/sub-folder strategy, but don't feel comfortable with that solution:
--> Root folders (business relationship, e.g. "Customer", "Supplier", "Bank" etc.)
--> Sub-folders (e.g. "AB Customer", "ZX Supplier", etc.)
    Regards
    Oleg

Hi,
I was in the same confusion one year back. We developed 200 interfaces in XI 3.0, and every interface has a minimum of 10 mapping programs, so it's difficult to manage all the scenarios.
That is why I created software components per region, like SA (South America), North America, Europe and Asia Pacific; we developed interfaces for the above regions, which is why the software components were created like that.
Then I created namespaces inside, for outbound and inbound, to make them easy to differentiate... that is how I did it.
If you are on PI 7.1, it is better to go for folders: create folders and move the objects into them.
If my answer does not satisfy you, please don't mind. :)
Regards,
Raj
