Best Practice Regarding Maintaining Business Views/List of Values

Hello all,
I'm still in the learning process of using BOXI to run our Crystal Reports. I wasn't familiar with the BO environment before, but I have recently learned that for every dynamic parameter we create for a report, the Business View/Data Connection/LOV objects are created in the Enterprise Repository the moment the Crystal Report is uploaded.
All of our reports are authored from a SQL Command, and oftentimes different reports will use the same field name from the database. For example, we have several reports that use the field name "LOCATION", which exists in a good number of tables in the database.
Looking at the Repository, I've noticed there are several variations of LOCATION, each of which I'm assuming belongs to one specific report. It can quickly become a nightmare trying to figure out which variation of LOCATION belongs to which report. Sooner or later the Repository will need to be cleaned up, and at the rate we author reports, I foresee a huge headache down the road.
With that in mind, what's the best practice, in a nutshell, for maintaining these repository items? Is it done indirectly on the Crystal Report authoring side, by naming your parameter fields so they are identifiable to a specific report? Or is it done directly on the Repository side?
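(Purely as a hypothetical illustration of that naming idea: a parameter created as {?ARAGING_Location} rather than plain {?Location} would make the generated LOV objects traceable back to a specific AR Aging report in the Repository.)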
Thank you.

Eric, you'll get a faster, qualified response if you post to the Business Objects Enterprise Administration forum, as that forum is monitored by qualified support for BOE.

Similar Messages

  • Best practice for E-business suite 11i or R12 Application backup

    Hi,
    I'm taking an RMAN backup of the database. What would be the best-practice procedure for an E-Business Suite 11i or R12 application backup?
    Right now I'm taking a file-level backup. Please suggest alternatives, if any.
    Thanks

    Please review the following thread; it should be helpful:
    Recommended backup and recovery strategy for EBS
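    As a starting point for the database tier, a minimal RMAN sketch might look like the following (an illustration only; it assumes ARCHIVELOG mode, and the application tier still needs its own file-level backup as discussed in that thread):
    RUN {
      # full database backup plus the archived redo logs
      BACKUP DATABASE PLUS ARCHIVELOG;
      # keep a copy of the current controlfile
      BACKUP CURRENT CONTROLFILE;
    }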

  • Best Practices regarding AIA and CDP extensions

    Based on the guide "AD CS Step by Step Guide: Two Tier PKI Hierarchy Deployment", I'll have both
    internal and external users (with a CDP in the DMZ) so I have a few questions regarding the configuration of AIA/CDP.
    From here: http://technet.microsoft.com/en-us/library/cc780454(v=ws.10).aspx
    A root CA certificate should have an empty CRL distribution point because the CRL distribution point is defined by the certificate issuer. Since the root's certificate issuer is the root CA, there is no value in including a CRL distribution point for the root CA. In addition, some applications may detect an invalid certificate chain if the root certificate has a CRL distribution point extension set.
    To have an empty CDP do I have to add these lines to the CAPolicy.inf of the Offline Root CA:
    [CRLDistributionPoint]
    Empty = true
    What about the AIA? Should it be empty for the root CA?
    Using only HTTP CDPs seems to be the best practice, but what about the AIA? Should I only use HTTP?
    Since I'll be using only HTTP CDPs, should I use LDAP Publishing? What is the benefit of using it and what is the best practice regarding this?
    If I don't want to use LDAP publishing, should I omit these commands?
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Thank you,

    Is there any reason why you specified a '2' for the HTTP CDP ("2:http://pki.fabrikam.com/CertEnroll/%1_%3%4.crt")? This will be my only CDP/AIA extension, so isn't it supposed to be '1' in priority?
    I tested the setup of the offline Root CA, but after the installation the AIA/CDP extensions were already pre-populated with the default URLs, so I removed all of them.
    The Root Certificate and CRL were already created after ADCS installation in C:\Windows\System32\CertSrv\CertEnroll\ with the default naming convention including the server name (%1_%3%4.crt).
    I guess I could rename it without impact? If someday I have to revoke the Root CA certificate, or the certificate has expired, how will I update the Root CRL since I have no CDP?
    Based on this guide: http://social.technet.microsoft.com/wiki/contents/articles/15037.ad-cs-step-by-step-guide-two-tier-pki-hierarchy-deployment.aspx,
    the Root certificate and CRL are published in Active Directory:
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Is it really necessary to publish the Root CRL in my case?
    Instead of using dspublish, isn't it better to deploy the certificates (Root/Intermediate) through GPO, like in the Default Domain Policy?
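    (A side note for later readers, not part of the original exchange: the leading number in those URL strings is a flags value, not a priority. On an issuing CA, HTTP-only publication URLs are typically set with certutil -setreg, along the lines of the sketch below, which reuses pki.fabrikam.com from this thread; double-check the %-token meanings with certutil before use:)
    certutil -setreg CA\CRLPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://pki.fabrikam.com/CertEnroll/%3%8%9.crl"
    certutil -setreg CA\CACertPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pki.fabrikam.com/CertEnroll/%1_%3%4.crt"
    net stop certsvc && net start certsvc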

  • Best Practices regarding program RCOCB004

    Dear Colleagues
    I'd like to discuss the Best Practices regarding the setup of jobs to send Process Messages
    In my company we have a batch job with two steps. Each step contains one variant of program RCOCB004.
    The first step will send messages with Status "To be sent", "To be resubmitted" and "To be resubm. w/warng"
    The second step will send messages with Status "Destination Error", "Terminated", "Incomplete"
    However, this job sometimes fails with error "Preceding job not yet completed (plant US07)"
    I'd like to discuss the best way to set up this job in order to avoid this error and also improve performance.
    Thanks and Regards

    Dear,
    To keep the number of message logs in the system low, proceed as follows:
          1. Check the report variants for report RCOCB004 used in your send jobs. Sending messages in status "Destination error" or "Terminated" is only useful if the error is corrected without manual intervention; for example, with messages of category PI_PHST, if the sequence of the messages or time events was swapped in the first send process.
          2. Regularly delete the logs of the messages that were not sent to the destination PI01, using report RCOCB009 (Transaction CO62).
          3. Check whether it is actually required to send messages to the destination PI01. This is only useful if you want to evaluate the data of these messages by means of the process data evaluation, or if the message data including the logs are to be part of the process data documentation or the batch log. Remove destination PI01 for the message categories to which the above-mentioned criteria do not apply. You can activate destination PI01 again at a later stage.
          4. If you still want to send process messages to destination PI01, carry out regular archiving of your process orders. As a result of the archiving, the message copies and logs in the process message record are also deleted.
          5. If the described measures do not suffice, you can delete the logs using Transaction SLG2.
    Control recipe send = RCOCB006 and you need to set the job to run after event SAP_NEW_CONTROL_RECIPES
    Process message send = RCOCB002 (cross plant) and RCOCB004 (specific plant). You need to create variants for these.
    Check the IMG documentation in PPPI for control recipes and process instructions where there is more information about this. Also standard SAP help is quite good on these points.
    Finally, if you are automatically generating process instructions then you need program RCOCRPVG plus appropriate variants.
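    (To illustrate the event-triggered setup mentioned above, as a sketch only: the control recipe send job is scheduled in SM36 with the start condition "After event" set to SAP_NEW_CONTROL_RECIPES. The system raises this event when new control recipes are created; for testing, it can also be raised manually with the standard function module BP_EVENT_RAISE:)
    CALL FUNCTION 'BP_EVENT_RAISE'
      EXPORTING
        eventid = 'SAP_NEW_CONTROL_RECIPES'.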
    Hope it will help you.
        Regards,
    R.Brahmankar

  • OVD best practices for app-specific views?

    I have a requirement to create app-specific views of joined (OID+AD) LDAP directory data. It occurs to me that logically I could take two approaches to this, laid out below as options 1 and 2, although I'm not sure how to actually build option 2. I've listed the adapters I'd construct, the adapter type, and the name/purpose of each. The end product of each option is two join adapters that present different app-specific views derived from the same source LDAP data. Each join adapter would be consumed by different apps and present different subsets and transformations of that directory data.
    OPTION1:
    1 ldap oid1
    2 ldap ad1
    3 ldap oid2
    4 ldap ad2
    5 join oid1+ad1 (for app1)
    6 join oid2+ad2 (for app2)
    OPTION2:
    1 ldap oid1
    2 ldap ad1
    3 ? oid2 (a transformed subtree derived from oid1)
    4 ? ad2 (a transformed subtree derived from ad1)
    5 ? oid3 (a transformed subtree derived from oid1)
    6 ? ad3 (a transformed subtree derived from ad1)
    7 join oid2+ad2 (for app1)
    8 join oid3+ad3 (for app2)
    With option 1, I would create 2 OID and 2 AD adapters, repeating the connectivity configuration for each; and each adapter, once deployed, is going to establish its own pool of LDAP connections to the source LDAP servers. This is a little clunky as you scale it beyond the initial two app-specific views, and it leaves me a little concerned about how well this model scales, considering each LDAP adapter is going to set up its own pool of connections. I.e., with 5 app-specific views to construct, I'd have 5 OID and 5 AD pools... which seems to somewhat defeat the whole point of pooling.
    Option 2 is predicated on the idea of creating one single LDAP adapter in OVD for OID and another single one for AD, then creating secondary adapters which pull and transform data from those two primary source adapters. No matter how many secondary OID and AD adapters I create, only the two primary adapters actually have pooled connections to OID and AD. The advantage here clearly is in how we manage and limit how many pools we are setting up. But I'm not sure what kind of adapter to use for oid2/3 and ad2/3. I looked at using a join adapter, configured not to actually join anything but rather just pull from a single primary adapter, but I couldn't see any way to change the subtree being pulled from the primary adapter. The alternative might be to create LDAP adapters that connect to oid1 and ad1... a loopback approach... but this gets us into pools on top of pools. Again, a little clunky.
    Any thoughts or recommendations with regard to best practices here?

    I haven't done this, so I haven't solved the problem as such. But the organizations I've seen mention it either just get free apps via this process:
    http://support.apple.com/kb/HT2534
    or use a corporate credit card with the accounts. You can use a single credit card for all the accounts, to the best of my knowledge. There's also a Volume Purchase Plan for businesses which can simplify matters:
    http://www.apple.com/business/vpp/
    I believe that a redemption code obtained through this program can be used to set up an iTunes Store account, but I'm not certain.
    Regards.

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post about what is best practice in using and implementing the pref.txt file. We have reached a stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and Viewer look and feel.
    If any of you have been able to add additional lines into the file, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question and the simple answer is - it depends. It depends on whether you want to do the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus available items and Selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
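    (For illustration, two of the preferences discussed above as they might appear in pref.txt; this is a sketch, and names and defaults vary by version, so verify them against your own file. QPPEnable controls the query predictor, Timeout is the idle timeout in seconds, and edits only take effect after the apply-preferences script is run:)
    QPPEnable = 0
    Timeout = 1800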
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • Best Practice to implement Business Packages

    Hello All,
    Need some clarification -
    What is the best practice for implementing the ESS/MSS Business Package on the Portal?
    (1) Should I just import and configure the content, OR
    (2) Should I import, create a copy, and then configure it? If the latter, are there any points to be kept in mind?
    All blogs/articles that have been published show configuration of the standard content.
    The requirement is to maintain a different prefix/namespace for each piece of Portal content that comes with the business package.
    Thanks,
    Ritu


  • Best Practices for Maintaining SSAS Projects

    We started using SSAS recently, and we maintain one project that we deploy to both DEV and PROD instances by changing the deployment properties. However, this gets messy when we introduce new fact tables into the DEV data warehouse (that are not promoted to the Production data warehouse). While we work on adding new measure groups and calculations (based on the new fact tables in DEV), we are unable to make any changes to the production cube (such as changes to calculations, formatting, etc.) requested by business users. Sorry for the long question, but is there a best practice for managing projects and migrations? Thanks.

     While we work on adding new measure groups and calculations (based on new fact tables in DEV) we are unable to make any changes to production cube (such as changes to calculations, formatting etc) requested by business users.
    Hi Sbc_wisc,
    You can create a new project by importing the metadata from the production cube on the server, using the template "Import from Server (Multidimensional and Data Mining) Project" in SQL Server Data Tools (SSDT), and then make your changes in this project and redeploy it to the production server.
    Reference:
    Import a Data Mining Project using the Analysis Services Import Wizard
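    (A related sketch, not from the answer above: once you maintain a separate project per environment, a build's .asdatabase output can be pushed non-interactively with the Analysis Services Deployment Wizard's command line; the file names here are hypothetical:)
    Microsoft.AnalysisServices.Deployment.exe "bin\MyCube.asdatabase" /s:"deploy.log"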
    Regards,
    Charlie Liao
    TechNet Community Support

  • Best Practice in using Business Packages

    Hi All,
    Are there any Best Practices in the use of Business Package content?   Do you assign the Roles delivered by the Business Package and do you make changes to the original iViews?
    or
    Do you copy the content delivered in the Business Package to a new folder and work with it there?
    These questions are purely at the configuration level and not at the Java coding level. For instance: if I want to turn off the iView tray, or change a parameter such as height, or even remove an iView from a page or role.
    I would like to know the various approaches the SDN community uses and the different challenges and benefits that result in each approach.
    Look forward to hearing from you all
    Paul

    Hi Paul,
    I also build my own roles. The only time I might use the standard roles is for demo purposes early in a project.  You will find that in some cases the business packages like MSS don't always even include standard roles, so you have no choice but to build.
    I never change any of the standard iViews/Pages/Worksets - ever.
    The most contentious issue seems to be whether to do a full or delta link copy of the standard objects.  I tend to initially do a full copy of the objects into a custom folder set in the PCD and modify those. Then I only use delta links from Page to iViews where I need the option of setting different properties for the same iView if it appears in multiple pages.  Delta links can be a bit flakey at times, so I tend to only use them where I have to.  I suspect that I may get to a point where I don't use them at all.
    Just my 2 cents worth....
    Regards,
    John

  • Best Practice in maintaining multiple apps and user logins

    Hi,
    My company is just starting to use APEX, and none of us (the developers) have worked on this before either. It is greatly appreciated if we can get some help here.
    We have developed quite a few applications in the same workspace. Now we are going to set up UAT and PRD environments, and we are also trying to understand the best practice for maintaining multiple apps and user logins.
    Many of you have already worked on APEX environment for sometime, can you please provide some input?
    Should we create multiple apps (projects) for one department, or should we create one app per department?
    Currently we have created multiple apps for one department, but we are not sure if a user can log in once and be able to access all the authenticated apps.
    Thank you,
    LC

    LC,
    I am not sure how much of this applies to your situation - but I will share what I have done.
    I built a single 700+ page application for my department - other areas create separate smaller applications.
    The approach I chose is flexible enough to accommodate both.
    I built a separate access control application (Control) in its own schema.
    We use database authentication for this app; an Oracle account is required.
    We prefer to use LDAP for authentication for the user applications.
    For users for whom LDAP is not an option, an encrypted password is stored, reset via email.
    We use position-based security; privileges are based on job functions.
    We have applications, applications have roles, and roles have access to components (tabs, buttons, unmasked card numbers, etc.).
    We have positions that are granted application roles - they inherit access to the role components.
    Users have a name, a login, a position, and a site.
    We have users on both the East Coast and the West Coast; we use the site in a sys_context
    and views to emulate VPD. We also use the role components, sys_contexts, and views to mask/unmask
    card numbers without rewriting the dependent objects (queries, reports, views, etc.).
    The position-based security has worked well: when someone moves,
    we change the position they are assigned to and they immediately have the privileges they need.
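    (To make the sys_context-plus-views idea concrete, a minimal sketch with hypothetical names, not the actual schema described above:)
    -- application context, populated at login by a trusted package
    CREATE OR REPLACE CONTEXT app_ctx USING app_ctx_pkg;
    -- view that emulates VPD by filtering rows on the user's site
    CREATE OR REPLACE VIEW orders_v AS
      SELECT o.*
        FROM orders o
       WHERE o.site = SYS_CONTEXT('APP_CTX', 'SITE');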
    If you are interested I can provide more detail.
    Bill

  • "best practice to maintain the SAP OM Org Structure"

    Hi SAP Experts,
    My client wants a best practice, or at least a safe process, to update, improve, and maintain their existing SAP HCM organizational structure. In a way, you could say I am doing a process-oriented job.
    Our client's system is not up to date, due to a lack of user awareness and complete knowledge of the system. Because of this, they are unsure of the accuracy of the reports that come out of the system.
    As an HCM functional consultant I can look at this from the technical perspective, but not from this process-oriented role. I need your guidance in this regard: please suggest how I can move ahead and make some really valuable recommendations. I am confused about where to start and how to start. Please help me in this regard.
    Thanks in advance,
    Amar

    The only thing you need to keep in mind is the relationships between the objects in OM.
    Check the Tcode OOVK for relationships, and PP01/PP02 for assigning those objects.
    This thread may help you: Re: Organization Structure
    Let us know if there is anything else.
    Edited by: Sikindar on Dec 4, 2008 9:36 AM

  • Best practice to maintain code across different environments

    Hi All,
    We have a portal application and we use
    JDEV version: 11.1.1.6
    fusion middleware control 11.1.1.6
    In our application we have created many portlets by using iframes inside our jspx files, and a few are in the navigation file as well. The URLs corresponding to these portlets are different
    across the environments (dev, test, and prod). We are using Subversion to maintain our code.
    The problem we are having is that, apart from changing environment details while deploying to test and prod, we also have to change the portlet URLs from the dev URLs to those of the corresponding environment manually.
    So is there any best practice to avoid this cumbersome task? Can we achieve this by creating a deployment profile?
    Thanks
    Kotresh

    Hi.
    Please post a sample of two of the different URLs. In any case, you can use an EL expression to get the current host instead of hardcoding it. In addition, you can consider using a common DNS name mapped in the hosts file of each environment.
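    (A sketch of the EL idea; the iframe id and source path are illustrative, not taken from the original application. The Host request header carries the host the page was served from:)
    <af:inlineFrame id="pf1"
        source="http://#{facesContext.externalContext.requestHeaderMap['Host']}/portlets/app1/"/>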
    Regards.

  • OIM best practice and E-Business

    I have a business requirement to provision different types of users in EBS. There are different applications developed within EBS for which the user provisioning flow may vary slightly.
    What is the best practice with regard to creating resource objects and forms? Should I create a separate RO and set of forms for each set of users?

    EBS (and SAP) implementations with complex and varying approval workflows are clearly among the most challenging applications of OIM. There are a number of design patterns, but without a lot of detail about your specific implementation it is very hard to say which pattern is the most appropriate.
    (Feel free to contact me on [email protected] if you want to discuss this in more detail but don't want to put all the detail in a public forum.)
    Best regards
    /M

  • Best practices for EUL/Business Areas

    What is the best way to handle multiple business areas, even if they have no relation to each other?
    Should I have one EUL with many business areas in it, or many EULs with one business area in each?
    Does anyone know where I can find the advantages and disadvantages of each?
    Thanks,
    Rob

    Hey there (repost of a similar thread in this forum).
    Many clients I've been at have had the exact same discussion, and the approach I push for is to have 1 EUL, let the BIS views have their own business areas, and, like you said, not customize them.
    For all corporate business areas, folders, etc., just create them in the same EUL, but give the business areas a corporate prefix (i.e., Coca-Cola would have a CC_ prefix). That way ALL folders can access the BIS views if wanted (i.e., join across business areas), special setups (i.e., LOVs) can live in a particular business area that ALL folders can access, and updating BIS views with new versions would not be a problem.
    I really don't like multiple EULs, as it's one of those simple things that can be a gotcha for many end users and drives the helpdesk nuts trying to understand why a particular user can't see his info.
    Just my take (and of course, what I think is the 'best practice' 8-) ).
    Russ

  • Best Practice Regarding Large Mobility Groups

    I was reading the WLC Best Practices and was wondering if anyone could put a number to this statement regarding the largest number of APs, end users, and controllers which can be contained in a Mobility Group.
    We would be deploying WiSMs in two geographically dispersed data centers. No voice is being used or is planned.
    "Do not create unnecessarily large mobility groups. A mobility group should only have all controllers that have access points in the area where a client can physically roam, for example all controllers with access points in a building. If you have a scenario where several buildings are separated, they should be broken into several mobility groups. This saves memory and CPU, as controllers do not need to keep large lists of valid clients, rogues and access points inside the group, which would not interact anyway.
    Keep in mind that WLC redundancy is achieved through the mobility groups. So it might be necessary in some situations to increase the mobility group size, including additional controllers for
    redundancy (N+1 topology for example)."
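    (For context, mobility group membership is defined per controller; an AireOS CLI sketch with hypothetical group name, MAC, and IP values follows:)
    config mobility group domain DC-WEST
    config mobility group member add 00:1a:2b:3c:4d:5e 10.10.10.5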
    I would be interested in hearing about scenarios where a Catalyst 6509 with 5 WiSM blades is deployed in data centers which back each other up for cases of disaster recovery.
    Can I have one large Mobility group? This would be easier to manage.
    or
    Would it be better to back up each blade with a blade in the second data center? This would call for smaller Mobility Groups.
    Be glad to elaborate further if anyone has a similar experience and needs more information.
    All responses will be rated.
    Thanks in advance.
    Paul

    Well, that is a large group indeed, and I would say most organizations use nested groups instead of adding these behemoths to the directory as they are quite difficult to work with.  If it's a one-time thing, you could create it manually in bite-sized
    chunks with LDIF or the like, so that FIM only has to do small delta changes afterwards.
    The 5,000 member limit mostly applies to groups prior to the change to linked value storage.  What is your forest functional level, and have you verified that this group is using linked values?
    Steve Kradel, Zetetic LLC
