OWB Repository Performance, Best Practice

Hi
We are considering installing the OWB repository in its own database, dedicated solely to the design repository, to achieve maximum performance in the Design Center.
Does anyone have knowledge of best practices for setting up the database for an OWB repository (db parameters, block size and so on)?
We are currently using Release 11.1.
BR
Klaus

You can find all of this information in the documentation, right here:
http://download.oracle.com/docs/cd/B31080_01/doc/install.102/b28224/reqs01.htm#sthref48
You will find all the initialization parameters for both the runtime instance and the design instance.
Good luck
Nico

Similar Messages

  • Reflection Performance / Best Practice

    Hi List
    Is reflection best practice in the following situation, or should I head down the factory path? Having read http://forums.sun.com/thread.jspa?forumID=425&threadID=460054 I'm now wondering.
    I have a Web servlet application with a backend database. The servlet currently handles 8 different types of JSON data (there is one JSON data type for each table in the DB).
    Because JSON data is well structured, I have been able to write a simple handler, all using reflection, to dynamically invoke the Data Access Object and CRUD methods. So one class replaces 8 DAOs × 4 CRUD methods = 32 methods, and this will grow as the application grows.
    Works brilliantly. It's also dynamic. I can add a new database table by simply subclassing a new DAO.
    Question is, is this best practice? Is there a better way? There are two sets of Class.forName(), newInstance(), getClass().getMethod(), invoke() calls; one for getting the DAO and one for getting the CRUD method.
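    For reference, a minimal sketch of this reflective dispatch (the package, class and method names are hypothetical):

        import java.lang.reflect.Method;

        public class JsonCrudDispatcher {
            // Dispatches, e.g., entity "Customer" + action "create"
            // to new CustomerDAO().create(json).
            public Object dispatch(String entity, String action, String json) throws Exception {
                Class<?> daoClass = Class.forName("com.example.dao." + entity + "DAO");
                Object dao = daoClass.getDeclaredConstructor().newInstance();
                Method crud = daoClass.getMethod(action, String.class);
                return crud.invoke(dao, json);
            }
        }

    Caching the resolved Class and Method objects in a map would remove most of the per-call reflection cost.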
    What is best practice here? Performance is important.
    Thanks, Len

    bocockli wrote:
    What is best practice here? Performance is important.
    I'm going to ignore the meat of your question (sorry, there are others who probably have better insights there) and focus on this point, because I think it's important.
    A best practice when it comes to performance is: have clear, measurable goals.
    If your only performance-related goal is "it has to be fast", then you never know when you're done. You can always optimize some more. But you almost never need to.
    So you need to have a goal that can be verified. If your goal is "I need to be able to handle 100 update requests for Foo and 100 update requests for Bar and 100 read-only queries for Baz at the same time per second", then you have a definite goal and can check if you reached it (or how far away you are).
    If you don't have such a goal, then you'll be optimizing until the end of time and still won't be "done".
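    To make the "verifiable goal" point concrete, a minimal sketch of a throughput check (the handler and the target number are made up):

        public class ThroughputGoalCheck {
            // Hypothetical goal: 100 update requests per second.
            private static final double GOAL_PER_SECOND = 100.0;

            public static void main(String[] args) {
                final int requests = 10_000;
                long start = System.nanoTime();
                for (int i = 0; i < requests; i++) {
                    handleUpdateRequest(i); // stand-in for the real handler under test
                }
                double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
                double perSecond = requests / seconds;
                System.out.printf("Measured %.1f req/s against a goal of %.1f req/s -> %s%n",
                        perSecond, GOAL_PER_SECOND,
                        perSecond >= GOAL_PER_SECOND ? "PASS" : "FAIL");
            }

            private static void handleUpdateRequest(int i) {
                // placeholder for real work
            }
        }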

  • Can anyone suggest the OBIEE Repository/Answers best practice document?

    Hi,
    I'm looking for the OBIEE repository/answers/dashboard development best practice document. Can you suggest where I can find this document?

    Hi,
    The links below should be helpful for you:
    Oracle BI Applications Installation and Configuration Guide
    http://download.oracle.com/docs/cd/E12104_01/books/AnyInstAdm/AnyInstAdmTOC.html
    Creating a Repository Using the Oracle Business Intelligence Administration Tool
    http://www.oracle.com/technology/obe/obe_bi/bi_ee_1013/bi_admin/biadmin.html
    Creating Interactive Dashboards and Using Oracle Business Intelligence Answers
    http://www.oracle.com/technology/obe/obe_bi/bi_ee_1013/saw/saw.html
    Hope it's helpful for you; please award points.
    Thanks,
    Balaa...

  • CE Benchmark/Performance Best Practice Tips

    We are in the early stages of starting a CE project where we expect a high volume of web service calls per day (e.g. customer master service, material master service, pricing service, order creation service etc).
    Are there any best-practice guidelines which could be taken into account to avoid possible performance problems within the web service "infrastructure"?
    Should master data normally residing in the backend ECC server be duplicated outside ECC?
    e.g. if individual reads of the master data in the backend system take 2 seconds per call, would it be more efficient to duplicate the master data on the SAP AS Java server, or elsewhere, if the master data is expected to be read thousands of times each day?
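    For illustration, a minimal read-through cache sketch in Java (the names are made up; this is not an SAP API):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class MasterDataCache {
            private final Map<String, String> cache = new ConcurrentHashMap<>();

            // Read-through: the slow backend call happens only on a cache miss.
            public String getMaterial(String id) {
                return cache.computeIfAbsent(id, this::readFromBackend);
            }

            private String readFromBackend(String id) {
                // placeholder for the real (roughly 2-second) ECC read
                return "material-" + id;
            }
        }

    The usual trade-off is staleness, so duplicated master data needs an expiry or invalidation strategy.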
    Also, what kind of benchmarking tools (SAP std or 3rd party) are available to assess the performance of the different layers of the infrastructure during integration + volume testing phases?
    I've tried looking for any such documentation on SDN, OSS, help.sap.com, but to no avail.
    Many thanks in advance for any help.
    Ali Crawshaw

    Hi Ali,
    For performance and benchmarking, have you had a look at Wily Introscope?
    The following presentation has some interesting information [Wiley Introscope supports CE 7.1|http://www.google.co.za/url?sa=t&source=web&ct=res&cd=7&ved=0CCEQFjAG&url=http%3A%2F%2Fwww.thenewreality.be%2Fpresentations%2Fpdf%2FDay2Track6%2F265CTAC.pdf&ei=BUGES-yyBNWJ4QaN7KzXAQ&usg=AFQjCNE9qA310z2KKSMk4d42oyjuXJ_TfA&sig2=VD1iQvCUmWZMB5OB-Z4gEQ]
    With regards to best practice guidelines, if you are using PI for service routing, try to stick to asynchronous services as far as possible, asynch with acknowledgments if need be. Make sure your CE Java AS is well tuned according to SAP best practice.
    Will you be using SAP Global Data Types for your service development? If so, the one performance tip I have regarding the use of GDTs is to keep your GDT structures as small (in number of fields) as possible, as large GDT structures have an impact on memory consumption at runtime.
    Cheers
    Phillip

  • XDK - Performance best practices, etc.

    All,
    I am looking for some best practices, with specific emphasis on performance, for the Oracle XDK.
    Can anyone share any such doc or point me to white papers, etc.?
    Thanks

    The following article discusses how to choose the most performant parsing strategy based on your application requirements.
    Parsing XML Efficiently
    http://www.oracle.com/technology/oramag/oracle/03-sep/o53devxml.html
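    As a rough illustration of the streaming strategy the article discusses, a minimal SAX example using the standard JAXP API (not XDK-specific; the element name is a placeholder):

        import java.io.File;
        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.helpers.DefaultHandler;

        public class StreamingCount {
            public static void main(String[] args) throws Exception {
                SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
                final int[] count = {0};
                // SAX streams the document and never builds an in-memory tree,
                // so memory use stays flat even for very large files.
                parser.parse(new File(args[0]), new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String localName, String qName, Attributes attrs) {
                        if ("item".equals(qName)) count[0]++;
                    }
                });
                System.out.println("item elements: " + count[0]);
            }
        }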
    -Blaise

  • OBIA repository customization best practices

    Hello experts,
    I have what I believe to be a very simple question, but it's one that I cannot find a clear answer to. I'm looking to use OBIA 7.9.6.3 prebuilt data models for Financials, Supply Chain, and Order Management. My business would prefer the following customizations to the repository:
    -BMM column name changes
    -BMM description changes
    -BMM logical hierarchy (drill down) changes
    What are the best practices for achieving these changes to be sure that a future upgrade to OBIA will not overwrite my changes?

    Hi,
    Check Doc ID 546999.1 on support.oracle.com
    Thanks,
    Wilson

  • Performance best-practices?

    Does a program run slower if, for each method invoked, I declare and use a bunch of intermediate references/variables inside the method? Or does it run faster if some of those references/variables were declared as members of the class that owns the method?

    Does a program run slower if, for each method invoked, I declare and use a bunch of intermediate references/variables inside the method? Or does it run faster if some of those references/variables were declared as members of the class that owns the method?
    Theoretically, it would make things run faster to declare everything at the class level, since the JVM would not have to allocate/deallocate memory for the temporary variables. However, in practice this method of development can actually be seriously detrimental to your physical health, as:
    1) You tear out your hair (if you have any left), blood pressure skyrockets, etc... as you try to maintain the code in the face of large numbers of static/global variables.
    2) The person who maintains this after you tracks you down and causes you serious bodily injury.
    Unless the method-local variable initialization is expensive (i.e. database connections, large chunks of memory, etc.), the increase is negligible and not worth the fact that promoting method-local variables to class level in general makes your code:
    1) less robust/flexible
    2) not thread safe
    3) less cohesive
    4) harder to maintain
    For the certain items where allocation is expensive, other data structures rather than class-level instances can help out (cf. the flyweight pattern, singleton pattern, and object pooling). Be careful of haphazard and poorly considered scope changes. If you are experiencing performance problems or want to tune your code, chances are the bottleneck is NOT method-local variables. As Donald Knuth says, premature optimization is the root of all evil.
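    A small sketch of the thread-safety point (hypothetical class):

        public class Formatter {
            // Shared scratch buffer: saves an allocation per call, but two threads
            // calling formatShared() concurrently will interleave writes - NOT thread safe.
            private final StringBuilder scratch = new StringBuilder();

            public String formatShared(int value) {
                scratch.setLength(0);
                return scratch.append("value=").append(value).toString();
            }

            // Method-local buffer: each call gets its own, so it is thread safe;
            // the allocation cost of a short-lived object like this is negligible.
            public String formatLocal(int value) {
                return new StringBuilder().append("value=").append(value).toString();
            }
        }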
    - N

  • OVM Repository and VM Guest Backups - Best Practice?

    Hey all,
    Does anybody out there have any tips/best practices on backing up the OVM Repository as well as (of course) the VMs? We are using NFS exclusively and have the ability to take snapshots at the storage level.
    Some of the main things we'd like to do (without using a backup agent within each VM):
    backup/recovery of the entire VM Guest
    single file restore of a file within a VM Guest
    backup/recovery of the entire repository.
    The single file restore is probably the most difficult/manual. The rest can be done manually from the .snapshot directories, but when we're talking about hundreds and hundreds of guests within OVM... this isn't overly appealing to me.
    OVM has this lovely manner of naming its underlying VM directories off of some ambiguous number which has nothing to do with the name of the VM (I've been told this is changing in an upcoming release).
    Brent

    Please find below the response from Oracle Support on that.
    In short:
    - First, "manual" copies of files into the repository are neither recommended nor supported.
    - Second, we have to go back and forth through templates and an HTTP (or FTP) server.
    Note that when creating a template or creating a new VM from a template, we're talking about full copies. No "fast clones" (snapshots) are involved.
    This is ridiculous.
    How to back up a VM:
    1) Create a template from the OVM Manager console.
    Note: Creating a template requires the VM to be stopped (copying the virtual disk while the VM is running would corrupt the data), and the process of creating the template makes changes to the vm.cfg.
    2) Enable Storage Repository Back Ups using the steps here:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-storage-repo-config.html#vmusg-repo-backup
    3) Mount the NFS export created above on another server.
    4) Then create a compressed file (tgz) from the relevant files (cfg + img) on the repository NFS mount.
    Here is an example for a template (create the archive, then verify its contents):
    $ tar czf OVM_EL5U2_X86_64_PVHVM_4GB.tgz OVM_EL5U2_X86_64_PVHVM_4GB/
    $ tar tf OVM_EL5U2_X86_64_PVHVM_4GB.tgz
    OVM_EL5U2_X86_64_PVHVM_4GB/
    OVM_EL5U2_X86_64_PVHVM_4GB/vm.cfg
    OVM_EL5U2_X86_64_PVHVM_4GB/System.img
    OVM_EL5U2_X86_64_PVHVM_4GB/README
    How to restore a VM:
    1) Upload the compressed file (tgz) to an HTTP, HTTPS or FTP server.
    2) Import to the OVM manager using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-repo.html#vmusg-repo-template-import
    3) Clone the Virtual machine from the template imported above using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-vm-clone.html#vmusg-vm-clone-image

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare actual measures side-by-side with their corresponding objectives/targets. Sounds simple, but our objectives are static (not able to be aggregated), multi-dimensional, and multi-level. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions used in the criteria and regardless of the level.
    Here are some more details:
    Example of existing objective table:
    Dimension1 | Dimension2 | Dimension3 | Obj1 | Obj2 | Quarter
    NULL       | NULL       | NULL       | .99  | 1.8  | 1Q13
    DIM1VAL1   | NULL       | NULL       | .99  | 2.4  | 1Q13
    DIM1VAL1   | DIM2VAL1   | NULL       | .98  | 2.41 | 1Q13
    DIM1VAL1   | DIM2VAL1   | DIM3VAL1   | .97  | 2.3  | 1Q13
    DIM1VAL1   | NULL       | DIM3VAL1   | .96  | 1.9  | 1Q13
    NULL       | DIM2VAL1   | NULL       | .97  | 2.2  | 1Q13
    NULL       | DIM2VAL1   | DIM3VAL1   | .95  | 2.0  | 1Q13
    NULL       | NULL       | DIM3VAL1   | .94  | 3.1  | 1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure, they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure, if we were to add a new dimension to the mix, the possible combinations would grow dramatically. (Not flexible.)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi. Yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS Topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering the DB, I mean any chance (by changing config files) of the data not going to that Reports schema and instead going to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS Topic for sensor actions, I see blank data coming in; for some reason or other the data is not getting posted. I have used an ESB with a routing service based on the schema which I am monitoring. Can anyone help?

  • ASM on SAN datafile size best practice for performance?

    Is there a 'Best Practice' for datafile size for performance?
    In our current production, we have 25GB datafiles for all of our tablespaces in ASM on 10gR1, but I was wondering what the difference would be if I used, say, 50GB datafiles. Is 25GB a kind of mid point so the data can be striped across multiple datafiles for better performance?

    We will be using Red Hat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM... not the binaries, though. All of the datafiles we currently have in our production system are 25GB files. We will be using RMAN --> Veritas tape backup and RMAN --> disk backup. I just didn't know if anybody out there was using smallfile tablespaces with 50GB datafiles or not. I can see that one of our tablespaces will probably be close to 4TB.

  • What is the best practice to improve MDIS performance when setting up file aggregation and chunk size?

    Hello Experts,
    In our project we have planned to make some parameter changes to improve MDIS performance, and we want to know the best practice in setting up file aggregation and chunk size when importing large numbers of small files (one file contains one record, and each file would be 2 to 3KB) through the automatic import process.
    Below is the current setting in production:
    Chunk Size = 2000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 5
    Records per Minute Processed = 37
    And we made the below settings in the Development system:
    Chunk Size = 70000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 25
    Records per Minute Processed = 111
    After making the above changes the import process improved, but we want to get an expert opinion before making these changes in production, because there is a huge difference between what is in prod and the changes we made in Dev.
    thanks in advance,
    Regards
    Ajay

    Hi Ajay,
    The SAP default values are as below:
    Chunk Size = 50000
    No. of Chunks Processed in Parallel = 5
    File Aggregation: depends largely on the data; if you have only one or two records being sent at a time, it is better to cluster them together and send them in one shot, instead of sending one record at a time.
    Records per Minute Processed: same as above.
    Regards,
    Vag Vignesh Shenoy

  • Best practice to monitor 10gR3 OSB performance using JMX API?

    Hi guys,
    I need some advice on the best practice for monitoring 10gR3 OSB performance using the JMX API.
    Just to show I have done my homework, I managed to get the JMX sample code from
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/example.html#wp1109828
    working.
    The following is the list of options I am thinking about:
    * Set-up: I have a cluster of one admin server with 2 managed servers; each managed server runs an instance of OSB.
    * What I am trying to achieve:
    - use the JMX API to collect OSB stats data periodically, as in the sample code above, then save the data as a record to a database table
    Options/ideas:
    1. Simplest approach: Run a modified version of the JMX sample on the Admin Server to save stats data to the database regularly. I can't see problems with this one...
    2. Use WLI to schedule the task of collecting stats data regularly. May be overkill if option 1 above is good enough for production.
    3. Deploy a simple web app on the Admin Server, say a simple servlet that displays a simple page to start/stop and configure the data collection interval for the timer.
    What approach would you experts recommend?
    BTW, the caveats of using JMX in http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/concepts.html#wp1095673
    say:
         Oracle strongly discourages using this API in a concurrent manner with more than one thread or process. This is because a reset performed in
         one thread or process is not visible to another threads or processes. This caveat also applies to resets performed from the Monitoring Dashboard of
         the Oracle Service Bus Console, as such resets are not visible to this API.
    Under what scenario would I be breaking this rule? I am a little worried about the statement that it "discourages using this API in a concurrent manner with more than one thread or process".
    Thanks in advance,
    Sam
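    For what it's worth, a minimal sketch of option 1 using plain JMX remoting and a scheduled task (the service URL, object name and attribute below are placeholders, not the actual OSB monitoring MBeans):

        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class StatsPoller {
            public static void main(String[] args) throws Exception {
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://adminhost:9999/jmxrmi"); // placeholder
                JMXConnector connector = JMXConnectorFactory.connect(url);
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName name = new ObjectName("com.example:Type=ServiceStats"); // placeholder
                ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
                scheduler.scheduleAtFixedRate(() -> {
                    try {
                        Object value = mbs.getAttribute(name, "MessageCount"); // placeholder attribute
                        // a real version would insert a row via JDBC here
                        System.out.println(System.currentTimeMillis() + "," + value);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }, 0, 1, TimeUnit.HOURS);
            }
        }

    Keeping all reads on the single scheduler thread also sidesteps the concurrent-use caveat quoted above.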

    Hi Manoj,
    Thanks for getting back. I am afraid configuring the aggregation interval from the Dashboard doesn't solve the problem, as I need to collect stats data per endpoint URI on an hourly or daily basis, then output to CSV files so line graphs can be drawn for chosen applications.
    Just for those who may be interested: it's not possible to use SQL to query the database tables to extract OSB stats for a specified time period, say 9am - 5pm. I raised a support case already and the response I got back was 'No'.
    That means using the JMX API will be the way to go :)
    Has anyone actually done this kind of OSB stats report and care to give some pointers?
    I am thinking of using 7 days or 1 day as the aggregation interval set in the Dashboard of the OSB admin console, then collecting stats data hourly using JMX (as described in the previous link), using the WebLogic Server JMX Timer Service as described in
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jmxinst/timer.html instead of Java's Timer class.
    Not sure if this is the best practice.
    Thanks,
    Regards,
    Sam

  • Best practice when developing APEX apps and using a SVN repository

    Hi experts,
    I wanted to get your opinion on best practice regarding how to use SVN and APEX combined.
    The idea is basically how to structure and how to save APEX apps the best way in a repository.
    I am currently working with a custom SVN structure, not using the default trunk/tags one: every app has a folder, under every app folder I have page-number folders, and for each page the reports, regions and global objects are kept separate.
    This helps me because it's more readable than saving the whole page export; it's good for small changes and I have a clear overview of every bit and piece.
    What is everybody else using, or is there a best practice to follow here that I don't know about?
    Kind regards,
    Alex

    @tomaugerdotcom
    Something like this might help: https://testflightapp.com/
    Conceivably, you could roll your own internal service if that particular one doesn't suit you. (I don't have any knowledge about how they are doing it, but it shouldn't be hard to figure out, since Apple's constraining rules would only allow a few possibilities.)
    USB app install and debugging isn't supported on iOS. You have to use wireless.
    Another option specifically for multi-touch dev/testing, is to use an Android device.

  • Function Module performance in Crystal Reports - Best practices

    Hi all,
    We are following a function-module-based approach for our Crystal Reports needs. We tried to follow an infoset approach, but found that most of the critical fields required for the reports were retrieved from function modules and BAPIs.
    Our reports contain some project filters/parameter fields based on which the task reports would be created. I was wondering what would be the best approach/best practices to consider while designing the FM so as not to impact Crystal Reports performance?
    We created a sample FM in our test system with just the table descriptions (without the input parameters) which would retrieve all the projects, and found that Crystal Reports crashed while trying to retrieve all the records. I am not sure if this is the right approach, since this is our first project using FMs for Crystal Reports.
    Thank you
    Vinnie

    Yes, we did try following the infoset approach against the tables. However, since our project reports contain long text fields and status texts (retrieved via FMs), we opted for the FM approach. Do you know how texts can be handled from ABAP to Crystal Reports?

  • Performance Tuning Best Practices/Recommendations

    We recently went live on an ECC 6.0 system. We have 3 application servers that are showing a lot of swaps in ST02.
    Our buffers were initially set based on SAP Go-Live Analysis checks, but it is becoming apparent that we will need to enlarge some of our buffers.
    Are there any tips and tricks I should be aware of when tuning the buffers? 
    Does making them too big decrease performance?
    I am just wanting to adjust the system to allow the best performance possible, so any recommendations or best practices would be appreciated.
    Thanks.

    Hi,
    Please increase the values of the parameters in small increments. If you set the parameters too large, memory is wasted. This can result in paging if too much memory is taken from the operating system and allocated to SAP buffers.
    For example, if abap/buffersize is 500000, change this to 600000 or 650000. Then analyze the performance and adjust parameters accordingly.
    Please check out the following link and all embedded links; the documentation provided there is fairly elaborate: http://help.sap.com/saphelp_nw04/helpdata/en/c4/3a6f4e505211d189550000e829fbbd/content.htm
    Moreover, the thread mentioned by Prince Jose is very good as a guideline as well.
    Best regards
