Best practice for documenting Hyperion BQYs?

I was wondering if anyone had any suggestions on the best way to document a BQY within the BQY itself? I do use comments inside the calculated fields, but I was looking for a way to document the complete BQY. Any thoughts or ideas would be greatly appreciated.
Thanks,

I have used a couple of different approaches. User documentation has been done on a Notes dashboard, and developer documentation in the Document Scripts as a comment section. I often keep a history of technical changes in the Document Scripts as well.
I have also written documentation for the file, printed it to PDF, and then put a link to the document on the dashboard, whether it is hosted on the EPM server as static content or in a document library hosted on a portal like SharePoint. This approach works well if you have use cases, design docs, etc. and want them available.
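For the Document Scripts approach, one convention that keeps everything in one place is a structured comment header at the top of the script. Document Scripts use JavaScript syntax, so // comments apply; every name and field below is purely illustrative:

    // ============================================================
    //  Document : SalesAnalysis.bqy   (illustrative name)
    //  Author   : J. Smith
    //  Purpose  : Monthly sales variance reporting
    //  Sections : Query1 -> Results1 -> Pivot1 -> Dashboard
    //  Data     : OCE = SalesDW.oce; computed fields commented inline
    // ------------------------------------------------------------
    //  Change history
    //  2011-03-02  JS  Added computed field MarginPct to Results1
    //  2011-04-18  JS  New limit on Fiscal_Year, see Query1
    // ============================================================

Because the header lives inside the BQY, it travels with the document wherever the file is deployed.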

Similar Messages

  • Best practice for Documenting SOA Composites

    Hi,
    I am looking for any general guidelines or best practices for creating documentation for composites developed as part of a project.
    Are there any plugins that help export to Visio or another tool?
    I don't see a "create JPEG" button on the composite editor similar to the one in BPEL, so any suggestions for documenting that?
    In general, I would like your opinions/suggestions so we can adopt a process for better documentation.
    Thanks.

    Hi,
    As such, there are no particular guidelines or best practices that are followed for documentation.
    You may plug your source control system into JDeveloper, but that only helps during the coding process.
    We have used OER in our project for maintaining documentation and the relationships among different files (be they XSDs, WSDLs, BPELs, mediators, etc.).
    Thanks

  • Best Practice for documenting projects BW

    Hi people.
    We are currently studying ways of documenting new developments of SAP BW applications.
    Is there a documentation standard recommended by SAP, or a good practice that I can use to create and document objects developed in SAP BW?
    Can someone help me?

    Hi Marques, from my experience this varies from one customer to another. Everywhere they have different practices, document formats, tools, habits, etc.
    What I found very useful in BW is the metadata repository, part of transaction RSA1. You can easily get some nice screenshots, like data flows. Moreover, in BW 7.x you can get documentation of particular transformations, DTPs, etc.: select the object with a left mouse click and then hit the F1 key.
    BR
    m./

  • Anybody has documentation on the SAP Best Practices for BI 7.0

    I am currently looking for SAP Best Practices for BI 7.0 and specifically need documentation on the installation of Business Content. Please email me at bala215 "at" yahoo.com

    Hi,
    There are some more links in the threads below:
    SAP BI Best Practices - Info
    Best Practices for Implementing BI7.0
    Cheers,
    Kedar

  • Best Practices for JMS Service Documentation

    Our software consists of a variety of JMS producers and consumers used to transform and transmit business-to-business messages on a large scale.
    The service nodes do a variety of things, and we've had trouble over the years ensuring every queue-based service clearly identifies the parameters and payload it accepts, so that everyone from programmers to system administrators can easily see what services are available and how they are to be used.
    Some have advocated always adding a web service in front of each message-based service to guarantee interface contracts are all well publicized. I think there are reasons to choose web services and reasons to choose message-based services, and am not convinced this is the right answer to address limitations in expressing the design contract for a message-based service. That said, I really like what we've been able to do with self-documenting web services based on annotations, and wonder if there's a conceptual equivalent in message-based software.
    What are your best practices for ensuring your message-based services are as self-documenting as your modern web services?
    Thanks in advance for your advice!
    Edited by: lsamaha on Apr 16, 2012 11:46 AM

    *bump*
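    One conceptual equivalent in message-based software is to declare the contract in a custom annotation and generate a service catalog from it by reflection, the same way annotations drive self-documenting web services. A minimal sketch, not a definitive implementation; the annotation, class names, and queue/schema values are all invented for illustration:

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;

        // Hypothetical annotation that documents a queue-based service in code.
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.TYPE)
        @interface MessageContract {
            String queue();
            String payloadSchema();                 // e.g. an XSD or JSON Schema reference
            String[] requiredHeaders() default {};
            String description() default "";
        }

        @MessageContract(
            queue = "jms/OrderImportQueue",
            payloadSchema = "schemas/order-v2.xsd",
            requiredHeaders = {"tenantId", "correlationId"},
            description = "Transforms inbound B2B orders and forwards them to ERP.")
        class OrderImportService { /* consumer logic elided */ }

        public class ContractCatalog {
            // Walks the annotated service classes and prints a human-readable
            // catalog; the same data could be rendered to HTML or a wiki page.
            public static void main(String[] args) {
                for (Class<?> svc : new Class<?>[] { OrderImportService.class }) {
                    MessageContract c = svc.getAnnotation(MessageContract.class);
                    if (c == null) continue;
                    System.out.printf("%s%n  queue:   %s%n  payload: %s%n  headers: %s%n  %s%n",
                            svc.getSimpleName(), c.queue(), c.payloadSchema(),
                            String.join(", ", c.requiredHeaders()), c.description());
                }
            }
        }

    Publishing the catalog output from a build step keeps the documentation from drifting, since the contract lives next to the consumer code it describes.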

  • Best Practices for Workshop IDE (Development Workstation Setup)

    Is there any Oracle documentation that describes best practices for setting up Workshop and developing on a workstation that includes Oracle's ODSI, OSB, Portal, and WLI? We are using all these products on a WebLogic server on each developer's machine and experiencing performance and reliability issues. What's the optimal way to use these products on a developer's workstation? Thanks.

    Hi,
    Currently there is no such best-practice site within Workshop,
    but you can work through most issues using the documentation:
    http://docs.oracle.com/cd/E13224_01/wlw/docs103/
    If you need any further assistance, let me know.
    Regards,
    Kal

  • Workflow not completed, is this best practice for PR?

    Hi SAP Workflow experts,
    I am new to workflow and now responsible for supporting an existing PR release workflow.
    The workflow is quite simple and straightforward, but the issue here is that the workflow seems like it will never be completed.
    When the user releases the PR, the next activity is "Requisition released", which uses task TS20000162.
    This sends a work item to the user's (PR creator's) SAP inbox, which, when they double-click it, completes the workflow.
    The thing here is that in our organization users do not access the SAP inbox, hence there are thousands of work items that have never been completed (our procurement system has been running since 2009).
    Our PR creators receive notification of the PR approval in their Outlook mail, handled by a program that is scheduled every 5 minutes.
    Since the documentation is not clear enough, I can't work out why the implementer used this approach.
    May I know whether this is the best practice for a PR workflow or not?
    My idea now is to modify the send-email program to complete the work item after the email has been sent to the user's Outlook mail.
    Not sure whether that is common in the workflow world, though.
    Any help is deeply appreciated.
    Thank you.

    Hello,
    "This will send work item to user (pr creator) sap inbox which when they double click it will complete the workflow."
    It sounds liek they are sending a workitem where an email would be enough. By completing the workitem they are simply acknowledging that they have received notification of the completion of the PR.
    "Our PR creator will receive notification of the PR approval to theirs outlook mail handled by a program that is scheduled every 5 minutes."
    I hope (and assume) that they only receive the email once.
    I would change the workflow to send an email (SendMail step) to the initiator instead of the workitem. That is normally what happens. Either that or there is no email at all - some businesses only send an email if something goes wrong. Of course, the business has to agree to this change.
    Having that final workitem adds nothing to the process. Replace it with an email.
    regards
    Rick Bakker
    hanabi technology
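    As for clearing the existing backlog programmatically (the poster's idea), SAP ships the RFC-enabled function module SAP_WAPI_WORKITEM_COMPLETE for completing work items outside the inbox. A rough sketch of calling it through SAP JCo; the destination name and work-item ID are placeholders, and in practice the IDs would come from the same selection the e-mail program already uses:

        import com.sap.conn.jco.JCoDestination;
        import com.sap.conn.jco.JCoDestinationManager;
        import com.sap.conn.jco.JCoException;
        import com.sap.conn.jco.JCoFunction;

        public class CompleteWorkItem {
            public static void main(String[] args) throws JCoException {
                // "PRD" must match a configured JCo destination on this machine.
                JCoDestination dest = JCoDestinationManager.getDestination("PRD");
                JCoFunction fn = dest.getRepository()
                                     .getFunction("SAP_WAPI_WORKITEM_COMPLETE");
                // Placeholder work-item ID (12 digits).
                fn.getImportParameterList().setValue("WORKITEM_ID", "000012345678");
                fn.execute(dest);
                // RETURN_CODE 0 indicates the work item was completed.
                System.out.println("RETURN_CODE = "
                        + fn.getExportParameterList().getInt("RETURN_CODE"));
            }
        }

    Whether completing the items after the e-mail is sent is acceptable is a business decision; as Rick notes, replacing the work item with a SendMail step avoids creating new backlog in the first place.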

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best-practice document on how to distribute Microsoft SQL Server databases. With other database formats, it's common to distribute them as scripts. That feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script. We're looking to distribute a database several GB in size. We could detach the database or provide a backup, but that has its own disadvantages, since it limits which versions of SQL Server will accept the database.
    What do you recommend, and can you point me to some documentation that covers this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, including the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
    Backing up and restoring your database is by far the most reliable method of distributing it, but it may not be practical in some cases, because you'll need to generate a new backup every time a schema change occurs; that is less of an issue if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This creates downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine, and while not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering its own advantages and drawbacks, which allows them to align with different business requirements.
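    To make the BACKUP/RESTORE route concrete: the T-SQL involved is a one-liner and is easy to automate from any client. A minimal JDBC sketch, assuming the Microsoft JDBC driver (mssql-jdbc) is on the classpath; the server, credentials, database name, and file path are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class BackupForDistribution {
            public static void main(String[] args) throws SQLException {
                // Placeholder connection details; point this at the source instance.
                String url = "jdbc:sqlserver://dbhost;databaseName=master;encrypt=false";
                try (Connection con = DriverManager.getConnection(url, "sa", "changeMe");
                     Statement st = con.createStatement()) {
                    // WITH INIT overwrites any previous backup set in the target file.
                    st.execute("BACKUP DATABASE [SalesDb] "
                             + "TO DISK = N'C:\\backups\\SalesDb.bak' WITH INIT");
                }
            }
        }

    The matching RESTORE on the customer's side is where the version constraint mentioned above bites: a backup restores onto the same or a newer SQL Server version, never an older one.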

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
    I'd like HA, so that a failure in any managed server would not interrupt the system. I can live with at-least-once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
    The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
    I'm using WebLogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes those JMS servers' messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
    For more discussion, see my post today on the topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa-8.1 migration white paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
    Tom
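    For reference, the durable subscriptions on clusters B and C would use the classic JMS 1.x API that WebLogic 8.1 supports. A minimal standalone sketch; the JNDI names, client ID, and subscription name are placeholders (in real MDBs the deployment descriptor supplies them instead):

        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.Session;
        import javax.jms.Topic;
        import javax.jms.TopicConnection;
        import javax.jms.TopicConnectionFactory;
        import javax.jms.TopicSession;
        import javax.jms.TopicSubscriber;
        import javax.naming.InitialContext;

        public class DurableEventSubscriber {
            public static void main(String[] args) throws Exception {
                // Assumes jndi.properties points at the WebLogic cluster.
                InitialContext ctx = new InitialContext();
                TopicConnectionFactory cf =
                        (TopicConnectionFactory) ctx.lookup("jms/BusinessEventCF");
                Topic topic = (Topic) ctx.lookup("jms/BusinessEventTopic");

                TopicConnection con = cf.createTopicConnection();
                con.setClientID("clusterB-subscriber");   // required before the subscription
                TopicSession session = con.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                // The subscription name lets the JMS server retain messages
                // published while this subscriber is offline.
                TopicSubscriber sub = session.createDurableSubscriber(topic, "clusterB-events");
                sub.setMessageListener(new MessageListener() {
                    public void onMessage(Message msg) {
                        // Persist the business event here; at-least-once applies,
                        // so the handler should be idempotent.
                    }
                });
                con.start();   // a real client would then block instead of exiting
            }
        }

    The durability is what gives you at-least-once delivery across subscriber restarts; idempotent handlers are what keep the redundant processing tolerable.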

  • Best practice for RAC connections

    Got a question about what people consider best practice for setting up high-availability connection pools to a RAC cluster. Now that you can specify the failover logic right in the thin connection string, it seems like there are three options:
    A) Use OCI connections and let the failover logic be maintained in the TNSNAMES.ORA file.
    B) Use simple thin connections with multipools and let WebLogic maintain the failover logic.
    C) Use simple thin connections with the failover logic in the connection string.
    Thanks,
    Rodger...

    If you need XA, then follow the WebLogic documentation. If not, then you have much more freedom. The thin driver can be configured to use the tnsnames.ora file if that helps you. WebLogic much prefers the thin driver to the OCI-based one, which can kill a JVM with OCI bugs.
    If you do driver-level failover, each failed connection will cost a test and a replace. If you use multipools, WLS can be configured to flush a whole pool when it finds a bad connection, and also make the failover at the pool level, right then, so application delay is minimized.
    Joe
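    To illustrate option C, the failover logic can be written directly into a thin-driver URL as a TNS-style descriptor. A minimal sketch; the host names, port, service name, and credentials are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class RacThinConnect {
            // FAILOVER=on tries the next ADDRESS if the first is down;
            // LOAD_BALANCE=on spreads connections across the listed nodes.
            private static final String URL =
                "jdbc:oracle:thin:@(DESCRIPTION="
              + "(ADDRESS_LIST=(LOAD_BALANCE=on)(FAILOVER=on)"
              + "(ADDRESS=(PROTOCOL=TCP)(HOST=racnode1)(PORT=1521))"
              + "(ADDRESS=(PROTOCOL=TCP)(HOST=racnode2)(PORT=1521)))"
              + "(CONNECT_DATA=(SERVICE_NAME=orcl)))";

            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(URL, "scott", "tiger")) {
                    System.out.println("Connected to: " + con.getMetaData().getURL());
                }
            }
        }

    Note this descriptor only covers failover at connect time; as Joe points out, a multipool lets WLS flush and fail over at the pool level for connections that are already checked out.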

  • Best practice for server configuration for iTunes U

    Hello all, I'm completely new to iTunes U; I'd never heard of it until now, and we have zero documentation on how to set it up. I was given the task of looking at best practice for setting up the server for iTunes U, and I need your help.
    *My first question:* Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with subcategories like programs/courses, and within those you have things like lecture audio/video files, and students can download/view them in iTunes. So where are these files hosted? Is it on your own server or on Apple's server? Where and how do you manage the content?
    *2nd question:* We have two Xserves sitting in our server room ready to roll; my question is, what is the best method to configure them so they meet our need for high availability in active/active mode, load balancing, and server scaling? Originally I was thinking about using a third-party load-balancing device to meet these needs, but I was told there is no budget for it, so that is not going to happen. I know there is IP failover, but one server has to sit in standby mode, which is a waste. So the most likely scenario is to set up DNS round robin and put both Xserves in active/active. My question now is (this may be related to question 1): say that all the content data like audio/video files are stored by us (we are going to link a portion of our SAN space to the Xserves for storage). If we go with DNS round robin and put the two servers in active/active mode, can both servers access a common shared network space? Or is this not possible, so each server must have its own storage space, and therefore I must use something like rsync to make sure the contents on both servers are identical? Should I use Xsan, or is rsync good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions; any advice and suggestions are most welcome. Thanks!

    Raja Kondar wrote:
    what is the best practice for having server pools, i.e.
    1) having a single large server pool consisting of "n" guest VMs
    2) having multiple small server pools consisting of fewer guest VMs
    I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is official best practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have no more than 20 servers in it: OCFS2 starts to strain after that.

  • Best Practice for trimming content in Sharepoint Hosted Apps?

    Hey there,
    I'm developing a SharePoint 2013 app that is set to be SharePoint-hosted. I have a section within the app that I'd like to be configuration-related, so I would like to allow only certain users or roles to access this content, or even to see that it exists (i.e. an Admin button, if you will). What is the best practice for accomplishing this in SharePoint 2013 apps? Thus far, I've been doing everything using jQuery and the REST API, and I'm hoping there's a standard within this that I should be using.
    Thanks in advance to anyone who can weigh in here.
    Mike

    Hi,
    According to this documentation: “You must configure a new name in Domain Name Services (DNS) to host the apps. To help improve security, the domain name should not be a subdomain of the domain that hosts the SharePoint sites. For example, if the SharePoint sites are at Contoso.com, consider ContosoApps.com instead of App.Contoso.com as the domain name.”
    More information:
    http://technet.microsoft.com/en-us/library/fp161237(v=office.15)
    For production hosting scenarios, you would still have to create a DNS routing strategy within your intranet and optionally configure your firewall.
    The link below will show how to create and configure a production environment for apps for SharePoint:
    http://technet.microsoft.com/en-us/library/fp161232(v=office.15)
    Thanks
    Patrick Liang
    TechNet Community Support

  • Best practice for the test environment & DBA plan activities documents

    Dear all,
    In our company, we did the sizing for the hardware.
    We have three environments (test/development, training, production).
    But the test environment has fewer servers than the production environment.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this? Any PDF files would help me.)
    Also, could I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVIDIA drivers on my OL6.6 system at home and something went bad with one of the libraries. On reboot, the kernel would panic and I couldn't get back into the system to fix anything. I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change, and then recovering if this happens again?
    Would LVM snapshots be a good option? Can I recover a snapshot from a rescue boot?
    Ex: File system snapshots with LVM | Ars Technica -- scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well. I'm just not sure what to do to revert the kernel or the system when installing something goes bad like this.
    Thanks for your attention.

    There is a common misconception here: a snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. An LVM snapshot is a general-purpose solution which can be used, for example, to quickly create a snapshot prior to a system upgrade; then, if you are satisfied with the result, you delete the snapshot.
    The advantage of a snapshot is that it can be used on a live filesystem or volume while changes are written to the snapshot volume; hence it's called "copy on write" (COW), or copy on change if you want. This is necessary for system integrity: it gives a consistent status of all data at a certain point in time while allowing changes to happen, for example in order to perform a filesystem backup. A snapshot is no substitute for disaster recovery in case you lose your storage media. A snapshot takes only seconds and initially does not copy or back up any data, unless data changes. It is therefore important to delete the snapshot when it is no longer required, in order to prevent duplication of data and to restore filesystem performance.
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, which is a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM because it can operate on files or directories, while LVM operates on the whole logical volume.
    Keep in mind, however, that you cannot use LVM or Btrfs for the boot partition, because the GRUB boot loader, which loads the Linux kernel, cannot deal with LVM or Btrfs before the Linux kernel is loaded (catch-22).
    I think the following is an interesting and fun-to-read introduction explaining the basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf

  • Best practice for the Update of SAP GRC CC Rule Set

    Hi GRC experts,
    We have, in a CC production system, an SoD matrix that we would like to modify extensively, basically by activating many permissions.
    What is the best practice for accomplishing our goal?
    Many thanks in advance. Best regards,
      Imanol

    Hi Simon and Amir
    My name is Connie and I work in the Accenture GRC practice (and am a colleague of Imanol's). I have been reading this thread, and I would like to ask you a question related to this topic. We have a case with a Global Rule Set, "Logic System", where we may also need to create a specific rule set. Is there a document (from SAP or from best practices) that indicates the potential impact (regarding risk analysis, system performance, process execution time, etc.) of implementing both types of rule sets in a production environment? Are there any special considerations to be aware of? Have you ever implemented this type of scenario?
    I would really appreciate your help, and if you could point me to specific documentation it would be of great assistance. Thanks in advance and best regards,
    Connie
