Best Practices: BIP Infrastructure and Multiple Installations/Environments

Hi all,
We are in the process of implementing BI Publisher as the main reporting tool to replace Oracle Reports for a number of Oracle Forms applications within our organization. Almost all of our Forms environments are (or will be) SSO enabled.
We have done a server install of BIP (AS 10gR3) and enabled BIP with SSO (test), and everything seems in order for this one dev/test environment. I was hoping to find out how others out there are dealing with some of the following issues regarding multiple environments/installs (and licensing):
Is it better to have one production BIP server or as many BIP servers as there are middle-tier Forms servers? (Keeping in mind all of these need to be SSO enabled.) Multiple installs would mean higher maintenance/resource costs, but is there any significant gain from the autonomy of giving each application its own BIP install?
Can we get away with stand-alone installations for dev/test environments? If so, how do we implement/migrate reports to production if the BIP server is accessible only to DBAs in production (and even in a true UAT environment, where the developer needs to script the work for migration)? A packaging sketch follows this post. In general, what is the best way to handle security when it comes to administration/development?
I have looked at the Oracle iStore for some figures, but this last question is perhaps one for the Oracle sales people; just in case anybody knows: how is licensing affected by multiple installations? Do we pay per installation or per user? Do production and test/dev cost the same? Is the cost of a stand-alone environment different?
I would appreciate it if you could share your thoughts/experiences on any of the above topics. Thank you in advance for your time.
Regards,
Yahya
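
One low-tech way to handle the report-migration question above, assuming the reports live as plain files (.xdo definitions, .rtf templates) in a repository directory on the stand-alone dev/test box, is to package a report folder into an archive that a production DBA can unpack on the server side. A minimal Python sketch; the paths are hypothetical:

import shutil
from pathlib import Path

# Hypothetical paths; adjust to your BIP repository layout.
DEV_REPO = Path("/u01/app/bip/XMLP/Reports/HR")
STAGING = Path("/tmp/bip_migration")

def package_reports(src: Path, dest_dir: Path) -> Path:
    """Zip a report folder so a production DBA can unpack it as-is."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    archive = shutil.make_archive(str(dest_dir / src.name), "zip", root_dir=src)
    return Path(archive)

if __name__ == "__main__":
    print("Created", package_reports(DEV_REPO, STAGING))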

Your data is bigger than what I run, but what I have done in the past is to restrict those accounts to a separate datafile and limit its size to the maximum I want them to use, creating their objects in a tablespace restricted to that location.

Similar Messages

  • Best Practice for SUP and WSUS Installation on Same Server

    Hi Folks,
    I have a question. I am in the process of deploying SCCM 2012 R2, and I was deploying the Software Update Point on SCCM with one of the existing WSUS servers installed on a separate server from SCCM.
    A debate has started with one of my colleagues, who says that using a remote WSUS server is recommended by Microsoft for scalability and security: WSUS downloads the updates from Microsoft, and SCCM works as a downstream server fetching updates from the WSUS server.
    But as I understand it, the recommendation is to install WSUS on the same server where SCCM is installed; that is, WSUS goes on a site system, and you can use the SCCM server itself to deploy WSUS.
    Please advise me on the best practices for deploying SCCM and WSUS: does Microsoft say WSUS should be installed on the same server as SCCM, or on a separate server?
    Awaiting your advice :)
    Regards, Owais

    Hi Don,
    thanks for the information, another quick one...
    is the above-mentioned configuration I did correct in terms of planning and best practices?
    I agree with Jorgen, it's ok to have WSUS/SUP on the same server as your site server, or you can have WSUS/SUP on a dedicated server if you wish.
    The "best practice" is whatever suits your environment, and is a supported-by-MS way of doing it.
    One thing to note, is that if WSUS ever becomes "corrupt" it can be difficult to repair and sometimes it's simplest to rebuild the WSUS Windows OS. If this is on your site server, that's a big deal.
    Sometimes WSUS goes wrong (not because of ConfigMgr).
    Note that if you have a very large estate, or multiple primary site servers, you might have a CAS, and you would need a SUP on the CAS. (This is not a recommendation for a CAS, just something to be aware of.)
    Don

  • Best practice to move things between various environments in SharePoint 2013

    Hi All SharePoint Gurus!! - I was using the SP Deployment Wizard (spdeploymentwizard.codeplex.com) to move sites/lists/libraries/items etc. in SP 2010. We just upgraded to SP 2013. I have a few lists and libraries that I need to push into the Staging 2013 and Production 2013 environments from the Development 2013 environment. The SP Deployment Wizard is throwing an error right at startup. I checked that SP 2013 provides granular backups, but they are restricted to the list/library level. Could anybody let me know if the SP Deployment Wizard works for 2013? I love that tool. Also, what's the best practice to move things between various environments?
    Regards,
    Khushi

    Hi Khushi,
    I want to let you know that we built a SharePoint migration tool, MetaVis Migrator, that can copy and migrate to and from on-premise or hosted SharePoint sites. The tool can copy entire sites with sub-site hierarchies, content types, fields, lists, list views, documents, and items with attachments, along with look-and-feel elements, permissions, groups and other objects - all together or at any level of granularity (for example, just lists, just list views, or selected items). The tool preserves created/modified properties and all metadata and versions. It looks like Windows Explorer, with copy/paste and drag-and-drop functions, so it is easy to learn. It does not require any server-side installations, so you can do everything from your own computer or any other server. The tool can copy complete sites, individual lists, or even selected items, and it supports incremental or delta copies based on previous migrations. It also includes a Pre-Migration Analysis that helps to identify customizations.
    A free trial is available: http://www.metavistech.com . Feel free to contact us.
    Good luck with your migration project,
    Mark

  • Best Practice for Planning and BI

    What's the best practice for Planning and BI infrastructure - a combined setup on one box, or separate boxes? What are the factors to consider?
    Thanks in advance..

    There is no way that question could be answered with the information that has been provided.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • How to handle multiple site-to-site IPsec VPNs on ASA; any best practice to manage multiple IPsec VPN configurations

    How do you handle multiple site-to-site IPsec VPNs on ASA? Is there any best practice to manage multiple IPsec VPN configurations,
    before version 8.3 and after (8.3, 8.4, 9.x)?

    Hi,
    To my understanding, you should be able to attach the same crypto map to the other "outside" interface, or perhaps alternatively create a new crypto map that you attach only to your new "outside" interface.
    Also, I think you will probably need to route the remote peer IP of the VPN connection towards the gateway IP address of that new "outside" interface, and likewise the remote network found behind the VPN connection.
    If you attempt to use a VPN Client connection instead of an L2L VPN connection with the new "outside" interface, then you will run into routing problems, as naturally you can't have 2 default routes active at the same time (a default route would be required on the new "outside" interface if the VPN Client were used, since you don't know where the VPN Clients will be connecting from).
    Hope this helps
    - Jouni
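
    To make Jouni's suggestion concrete, here is a minimal sketch that renders the kind of ASA lines described above: attaching the existing crypto map to a second outside interface, plus routes for the peer and the remote network. The interface names and addresses are hypothetical, and syntax details differ between pre-8.3 and 8.3+/9.x images, so verify against your version:

    # Hypothetical names and addresses; verify syntax on your ASA version.
    PEER_IP = "203.0.113.10"                # remote VPN peer
    GATEWAY = "198.51.100.1"                # next hop on the second outside interface
    REMOTE_NET = "10.20.0.0 255.255.0.0"    # network behind the remote peer

    config = [
        # attach the existing crypto map to the second outside interface
        "crypto map OUTSIDE_MAP interface outside2",
        # route the peer itself and the remote network via that gateway
        f"route outside2 {PEER_IP} 255.255.255.255 {GATEWAY}",
        f"route outside2 {REMOTE_NET} {GATEWAY}",
    ]
    print("\n".join(config))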

  • SAP Business One Best-Practice System Setup and Sizing

    SAP Business One Best-Practice System Setup and Sizing
    Get recommendations from SAP and hardware specialists on system setup and sizing
    SAP Business One is a single, affordable, and easy-to-implement solution that integrates the entire business across financials, sales, customers, and operations. With SAP Business One, small businesses can streamline their operations, get instant and complete information, and accelerate profitable growth. SAP Business One is designed for companies with less than 100 employees, less than $75 million in annual revenue, and between 1 and 30 system users, referred to as the SAP Business One sweet spot. The sweet spot covers various industries and micro-verticals which have different requirements when it comes to the use of SAP Business One.
    One of the initial steps during the installation and implementation of SAP Business One is the definition of the system landscape and architecture. Numerous factors affect the system landscape that needs to be created to efficiently run SAP Business One.
    The SAP Business One Best-Practice System Setup and Sizing Wiki (http://wiki.sdn.sap.com/wiki/display/B1/BestPractiseSystemSetupand+Sizing) provides recommendations on how to size and configure the system landscape and architecture for SAP Business One based on best practices.

    For such high volume licenses, you may contact the SAP Local Product Experts.
    You may get their contact info from this site
    [https://websmp209.sap-ag.de/~sapidb/011000358700001455542004#India]

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post regarding what is best practice in using and implementing the pref.txt file. We have reached the stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and Viewer look and feel.
    If any of you have been able to add additional lines into the file, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question and the simple answer is - it depends. It depends on whether you want to do the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus available items and Selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com
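
    For illustration, a minimal Python sketch of scripting a pref.txt change such as turning the query predictor off. It assumes the predictor is controlled by a QPPEnable entry, as in the 10g pref.txt, and that you run the applypreferences script afterwards; verify both against your own install:

    from pathlib import Path

    def set_pref(pref_file: Path, key: str, value: str) -> None:
        """Rewrite (or append) a single 'Key = Value' line, keeping all others."""
        out, found = [], False
        for line in pref_file.read_text().splitlines():
            if line.split("=")[0].strip().lower() == key.lower():
                out.append(f"{key} = {value}")
                found = True
            else:
                out.append(line)
        if not found:
            out.append(f"{key} = {value}")
        pref_file.write_text("\n".join(out) + "\n")

    # e.g. turn the query predictor off (back up pref.txt first):
    # set_pref(Path("pref.txt"), "QPPEnable", "0")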

  • How can i get best practice for SD and MM

    Please, can anybody tell me how I can get best practices for SD and MM from a functional approach?
    Thanks
    Utpal

    Hello Utpal,
    I am really surprised that in just 10 minutes you searched that site and found it not useful. Check out my previous reply: "you will not find screen shots in this, but you can add them yourself".
    You will not find a readymade document; you need to adapt this to your requirements.
    By the way, the following link gives you some more links for people new to SAP; this will be helpful. Check out the basic "How to" transactions:
    New to Materials Management / Warehouse Management?
    Hope this helps.
    Regards
    Arif Mansuri

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must have the Category dimension, so you can use that dimension to separate the actual and plan data.
    Hope this helps.
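
    As a toy illustration of the single-application approach (not from the reply above): every record simply carries a Category member, so actual and plan data coexist in one application and can be compared directly:

    # Toy records: one application, with the Category dimension separating the data sets.
    records = [
        {"Entity": "US01", "Account": "REVENUE", "Time": "2008.JAN",
         "Category": "ACTUAL", "Value": 120.0},
        {"Entity": "US01", "Account": "REVENUE", "Time": "2008.JAN",
         "Category": "PLAN", "Value": 150.0},
    ]
    actual = sum(r["Value"] for r in records if r["Category"] == "ACTUAL")
    plan = sum(r["Value"] for r in records if r["Category"] == "PLAN")
    print("Plan vs. actual variance:", plan - actual)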

  • DNS best practices for hub and spoke AD Architecture?

    I have an Active Directory Forest with a forest root such as joe.co and the root domain of the same name, and root DNS servers (Domain Controllers) dns1.joe.co and dns2.joe.co
    I have child domains with names in the form region1.joe.co, region2.joe.co and so on, with DNS servers dns1.region1.joe.co and so on.
    Each region has distributed offices that may have a DC in them, with servers named in the form dns1branch1.region1.joe.co
    Overall my DNS tests out okay, but I want to get the general guidelines for setting up new DCs correct.
    Configuration:
    The root DC/DNS server dns1.joe.co's adapter settings point DNS to itself, then to the two other root-domain DNS/DCs dns2.joe.co and dns3.joe.co.
    The other root-domain DNS/DCs' adapter settings point to root server dns1.joe.co, then to themselves (e.g. dns2.joe.co), and then to 127.0.0.1
    The regional domains have a root DNS server dns1.region1.joe.co whose adapter points to root server dns1.joe.co and then to itself.
    The additional regional domain DNS/DCs' adapter settings point to dns1.region1.joe.co, then to themselves, then to dns1.joe.co
    What would you do to correct this topology (and settings) or improve it?
    Thanks in advance
    just david

    Hi,
    According to your description, my understanding is that you need suggestions about your DNS topology.
    In theory, there is no obvious problem. Besides the namespace and server planning for DNS, zone design also needs consideration. If you place a DNS server in each domain and subdomain, confirm whether the DNS traffic will affect network performance. Fault tolerance and security also need attention.
    We usually recommend that a DC with DNS point to another DNS server as primary and to itself as secondary or tertiary. It should not point to itself as primary, due to various DNS islanding and performance issues that can occur. And when referencing a DNS server on itself, a DNS client should always use a loopback address and not a real IP address (a small ordering check follows this reply). For detailed information you may reference:
    What is Microsoft's best practice for where and how many DNS servers exist? What about for configuring DNS client settings on DC’s and members?
    http://blogs.technet.com/b/askds/archive/2010/07/17/friday-mail-sack-saturday-edition.aspx#dnsbest
    How To Split and Migrate Child Domain DNS Records To a Dedicated DNS Zone
    http://blogs.technet.com/b/askpfeplat/archive/2013/12/02/how-to-split-and-migrate-child-domain-dns-records-to-a-dedicated-dns-zone.aspx
    Best Regards,
    Eve Wang
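
    As a quick illustration of the client-ordering rule above, here is a tiny Python helper that checks an adapter's DNS server list (the addresses are hypothetical):

    def dns_client_order_ok(servers: list[str]) -> bool:
        """True if the list follows the guidance above: a partner DC first,
        the loopback address last, and never first."""
        return (
            len(servers) >= 2
            and servers[0] != "127.0.0.1"
            and servers[-1] == "127.0.0.1"
        )

    # dns1.joe.co pointing to dns2.joe.co (say 192.0.2.12), then loopback:
    print(dns_client_order_ok(["192.0.2.12", "127.0.0.1"]))   # True
    print(dns_client_order_ok(["127.0.0.1", "192.0.2.12"]))   # False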

  • What are best practice for packaging and deploying j2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently, however, we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the first time but may work the second time). Also, we've noticed very occasionally that old versions of classes sometimes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ JSP files. Are people out there using this, or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS's. Are you unregistering the old application and then re-registering the new one? Are you shutting down iAS while doing the deployment?
    3) Deploying ear files can take 5 to 10 minutes; is this normal?
    4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
    Thanks in advance for your replies
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and the runtime layout of your application that make clear where your application is loading its files from.
    If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and KJS start time. Generally this is due to garbage accumulating in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course, you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: <B>BE CAREFUL - BACKUP FIRST</B>
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent previously deployed components but are no longer in the set of active GUIDs. Record these older GUIDs and remove them from ClassImp and ClassDef. Finally, remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize ANT as a build tool. In later versions of the application server, complete ANT scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS specific targets and general J2EE targets. There are iAS specific targets that can be utilized with the 1.3 version. Specialized build targets are not required however to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSPs, however, benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy and then redeploy. However, if you know that you're not changing GUIDs, not recreating an existing application with new GUIDs, and not removing registered components, you may skip the undeploy phase.
    If you shut down the KJS processes during deployment, you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
    3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack but unfortunately you wont see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to redeploy. Simply drop your newer bits into the runtime space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory; a sketch follows this reply.
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.
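
    As a sketch of the "drop your newer bits into the runtime space" shortcut from point 3, assuming a hypothetical APPS layout (ear/war/jar packages are plain zip archives, so Python's zipfile can explode them):

    import zipfile
    from pathlib import Path

    APPS_DIR = Path("/opt/ias6/ias/APPS")   # hypothetical; check your layout

    def explode_package(package: Path, app_name: str) -> None:
        """Unpack an ear/war/jar into the app's runtime directory, mimicking
        the 'drop newer bits in place' shortcut for unchanged descriptors."""
        target = APPS_DIR / app_name
        target.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(package) as zf:
            zf.extractall(target)

    # explode_package(Path("mywebapp.war"), "mywebapp")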

  • PKGBUILD best practice for autotools and missing required files

    I am trying to update one of my packages in the AUR. Upstream uses the GNU automake/autoconf tools, and this has worked just fine for previous versions. This time around, the download from upstream is missing several of the mandatory files required by automake. I am trying to figure out the best way to deal with this.
    1. I can just create them, distribute them with the tarball, and push them into the src directory prior to invoking autoconf.
    or
    2. I can use the --add-missing flag, but that requires running automake multiple times (unless I am confused).
    What is the best practice when files such as NEWS and README are missing?
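
    For option 1, the usual trick is simply to create empty versions of the files that automake's default "gnu" strictness demands before regenerating the build system (for example from a PKGBUILD prepare() step); empty files are enough to satisfy the check. A sketch in Python:

    from pathlib import Path

    # Files required by automake's "gnu" strictness; empty placeholders suffice.
    for name in ("NEWS", "README", "AUTHORS", "ChangeLog"):
        Path(name).touch(exist_ok=True)

    Alternatives are running automake --add-missing --force-missing, or passing the foreign option to AM_INIT_AUTOMAKE so those files are not required at all.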

    I highly recommend you review Brad Hedlund's videos regarding UCS networking here:
    http://bradhedlund.com/2010/06/22/cisco-ucs-networking-best-practices/
    You may want to focus on Part 10 in particular, as this talks about running UCS in end-host mode without vPC or VSS.
    Regards,
    Matt

  • Best Practices regarding AIA and CDP extensions

    Based on the guide "AD CS Step by Step Guide: Two Tier PKI Hierarchy Deployment", I'll have both
    internal and external users (with a CDP in the DMZ) so I have a few questions regarding the configuration of AIA/CDP.
    From here: http://technet.microsoft.com/en-us/library/cc780454(v=ws.10).aspx
    A root CA certificate should have an empty CRL distribution point, because the CRL distribution point is defined by the certificate issuer. Since the root's certificate issuer is the root CA itself, there is no value in including a CRL distribution point for the root CA. In addition, some applications may detect an invalid certificate chain if the root certificate has a CRL distribution point extension set.
    To have an empty CDP do I have to add these lines to the CAPolicy.inf of the Offline Root CA:
    [CRLDistributionPoint]
    Empty = true
    What about the AIA? Should it be empty for the root CA?
    Using only HTTP CDPs seems to be the best practice, but what about the AIA? Should I only use HTTP?
    Since I'll be using only HTTP CDPs, should I use LDAP Publishing? What is the benefit of using it and what is the best practice regarding this?
    If I don't want to use LDAP publishing, should I omit the commands: certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA / certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Thank you,
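
    For comparison (not an authoritative answer): CAPolicy.inf also accepts an [AuthorityInformationAccess] section with Empty=True, mirroring the [CRLDistributionPoint] syntax quoted above, so a root CA can suppress both extensions. A Python sketch that writes such a file; verify the sections against your AD CS version before relying on them:

    from pathlib import Path

    # Assumption to verify: [AuthorityInformationAccess] honours Empty=True
    # the same way [CRLDistributionPoint] does.
    CAPOLICY = """[Version]
    Signature="$Windows NT$"

    [CRLDistributionPoint]
    Empty=True

    [AuthorityInformationAccess]
    Empty=True
    """

    Path("CAPolicy.inf").write_text(CAPOLICY, encoding="utf-8")
    # Place the file in %windir% on the offline root before installing the CA role.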

    Is there any reason why you specified a '2' for the HTTP CDP ("2:http://pki.fabrikam.com/CertEnroll/%1_%3%4.crt")? This will be my only CDP/AIA extension, so isn't it supposed to be '1' in priority?
    I tested the setup of the offline root CA, but after the installation the AIA/CDP extensions were already pre-populated with the default URLs, so I removed all of them.
    The Root Certificate and CRL were already created after ADCS installation in C:\Windows\System32\CertSrv\CertEnroll\ with the default naming convention including the server name (%1_%3%4.crt).
    I guess I could rename it without impact? If someday I have to revoke the root CA certificate, or the certificate has expired, how will I update the root CRL since I have no CDP?
    Based on this guide: http://social.technet.microsoft.com/wiki/contents/articles/15037.ad-cs-step-by-step-guide-two-tier-pki-hierarchy-deployment.aspx,
    the Root certificate and CRL is publish in Active Directory:
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Is it really necessary to publish the Root CRL in my case?
    Instead of using dspublish, isn't it better to deploy the certificates (Root/Intermediate) through GPO, like in the Default Domain Policy?

  • Best practice to maintain code across different environments

    Hi All,
    We have a portal application and we use
    JDEV version: 11.1.1.6
    fusion middleware control 11.1.1.6
    In our application we have created many portlets by using an iframe inside our .jspx files, and a few are in the navigation file as well; the URLs corresponding to these portlets are different across the environments (dev, test and prod). We are using Subversion to maintain our code.
    The problem we are having is that, apart from changing environment details while deploying to test and prod, we also have to change the portlet URLs from the dev URLs to those of the target environment manually.
    So is there any best practice to avoid this cumbersome task? Can we achieve this by creating a deployment profile?
    Thanks
    Kotresh

    Hi.
    Post a sample of the two different URLs. Anyway, you can use an EL expression to get the current host instead of hardcoding it. In addition, you can think about using a common DNS name, mapped in the hosts file, for all environments; a sketch of the idea follows.
    Regards.
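
    A minimal sketch of that idea: resolve per-environment portlet base URLs from the current host instead of hardcoding them in the .jspx files. Host names and URLs here are hypothetical:

    import socket

    # Hypothetical base URLs keyed by the short, lower-cased host name.
    PORTLET_BASE = {
        "devhost":  "http://devhost:7001/portlets",
        "testhost": "http://testhost:7001/portlets",
        "prodhost": "http://portal.example.com/portlets",
    }

    def portlet_url(path: str) -> str:
        host = socket.gethostname().split(".")[0].lower()
        base = PORTLET_BASE.get(host, PORTLET_BASE["devhost"])
        return base + "/" + path.lstrip("/")

    print(portlet_url("newsPortlet"))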

  • Best Practice: SAPGUI Version and Patch Upgrades

    Hello -
    Does anyone have some thoughts/information on best practices relating to SAPGUI version and patch upgrades?
    Obviously, sometimes upgrades are forced upon us (e.g. 7.10 for Vista) and in other cases they may just be considered "nice to have".
    Either way, an upgrade always means regression-test and deployment effort. How do we balance the benefit and the cost?
    Thanks, Steve

    Hi Steve,
    you're right about the first part: yes, we (usually) patch twice a year.
    Now for the rest:
    An uninstall will only happen on release changes (6.20 -> 6.40 -> 7.10), i.e. about every 4-5 years as SAP releases them.
    Patches are applied to the installation server, and the setup on the client will only update changed program parts. For example, upgrading 6.40 -> 7.10 took about 10 minutes (incl. uninstall), and applying patch 1 less than 5 minutes.
    I recommend you read the "SAP Frontend Installation Guide - 7.10", which you will find at SMP alias sapgui. Navigate to Media Library - Literature. It explains setting up the installation server (sounds like a big thing, but it ain't much more than creating a share), creating packages, applying updates, etc.
    Peter
    Points always appreciated

Maybe you are looking for

  • Non-modal JDialog hides its JFrame "owner"

    I'm writing my first significant swing application. It has a JFrame that launches a JDialog. When constructing the JDialog, I have the JFrame as the owner and set it to be non-modal. The dialog is indeed non-modal, since it allows me to click on the

  • Query to find customers who have not purchased anything

    I have a query to find customers who have purchased what we call consumables (using item property) over a given period: SELECT T0.CardCode, T0.CardName, T0.DocDate, T0.DocTotal, T1.ItemCode, T1.Dscription,T1.quantity, T2.ItmsGrpNam FROM OINV T0  INNE

  • How to check if XI installed

    Hi, I know this is a basic question. Our basis guys installed, BI of NW2004s, how to check if they have installed XI. Any links, documents in this direction would help me. Thanks, -Naveen.

  • Order Quote Management using Worklist application

    Hi, I am trying to assess the best option to implement a Order Quote Management Use Case. Use Case: 1. User creates a list of items and create an order 2. User selects 3 (or more) Suppliers and submit the order for a quotation 3. In parallel the Supp

  • Global Input Schedule across Different Applications

    HI, In BPC can I build one Global input Schedule and use it across all applications. Say if I have multiple applications one for G&A and one for Sales, Can I  use one Global Input Schedule to feed multiple applications. In BPS we need to have seperat