Impact of Query Logging on Performance of Queries in OBIEE

I see from [An Oracle BI Blog post|http://obieeblog.wordpress.com/2009/01/19/obiee-performance-tuning-tip-%e2%80%93-turn-off-query-logging/] that Query Logging has a performance impact in OBIEE.
What is the experience with Query Logging at different levels in a Production environment with, say, 50 or 100 or 500 concurrent users?
I am completely new to OBIEE; I know the database side. So, please bear with me.
Hemant K Chitale

Kumar's blog that you reference says it all really.
I don't know if anyone's going to be able to give you the kind of information you're looking for, because it's a no-brainer not to enable this level of logging :)
Is there a reason you're even considering it?
Imagine running a low-level trace or debug log in the database for every user session... you just wouldn't do it.

Similar Messages

  • Query logging?

    Hi,
    A customer of mine picked up this in the essbase log
    Missing Database Config File [C:\Hyperion\AnalyticServices\APP\<app name>\OHead\OHead.cfg], Query logging disabled
    The essbase app is used for planning.
    I cannot find the file concerned. The other databases do not have a config file.
    Any ideas??
    Thanks,
    Nathan

    You will only have a query logging cfg file if you set one up. It is related to the query logging capability.
    From the DBAG
    Implementing Query Logs
    Query logging provides a way for Essbase administrators to track query patterns of an Essbase database. The query log file tracks all queries performed against the database regardless of whether the query originated from Spreadsheet Add-in or Report Writer. Query logging can track members, generation or level numbers of members belonging to specific generations or levels, and Hybrid Analysis members. Query logging also offers the flexibility to exclude logging of certain dimensions and members belonging to generations or levels. Because the query log file output is an XML document, you can import the log file to any XML-enabled tool to view the log. For details about the query log file structure, refer to querylog.dtd in the ARBORPATH/bin directory.
    To enable query logging, create a query configuration file (distinct from the essbase.cfg file) and add to the file the configuration settings that control how query logging is performed.
    In the ARBORPATH\App\appname\dbname directory of Essbase, create a query log configuration file. The configuration file must be named dbname.cfg, where dbname matches the name of the database. For example, the query log configuration file for Sample Basic is basic.cfg. The output query log file is located, by default, at ARBORPATH\App\appname\dbname00001.qlg.
    For more information about query logging and how to set up the query log configuration file, see the Technical Reference.

  • Query Logging Enabled but cannot find

    As per the DBAG I should be able to find the query log here: HYPERION_HOME/logs/essbase/app/appname/dbname/dbname00001.qlg
    When I follow this path I can only get as far as D:\EPM\EPMSystem11R1\logs\essbase\app, and there is nothing inside the app folder. Am I looking at the wrong folder?
    Thanks for the help.

    Hello,
    From the DBA guide
    To enable query logging, create a query configuration file ... and add to the file the configuration settings that control how query logging is performed.
    In the ARBORPATH/app/appname/dbname directory, create a query log configuration file. The configuration file must be named dbname.cfg, where dbname matches the name of the database. For example, the query log configuration file for Sample.Basic is basic.cfg. The output query log file is located, by default, at ARBORPATH/app/appname/dbname00001.qlg.
    You must stop and start the application, so it can read the config file.
    Contents of a query log config file:
    http://docs.oracle.com/cd/E40248_01/epm.1112/essbase_tech_ref/frameset.htm?launch.html
    Query Log Settings File Syntax
    The query log settings filename must be of the form dbname.cfg, where dbname represents the name of a database. The dbname.cfg file must be located in the ARBORPATH\App\appname\dbname directory of Essbase. The dbname.cfg file consists of the following syntax:
    QUERYLOG [dimension_name]
    QUERYLOG NONE GENERATION generation-range
    QUERYLOG NONE LEVEL level-range
    QUERYLOG GENERATION generation-range
    QUERYLOG LEVEL level-range
    QUERYLOG LOGHAMBRS ON | OFF
    QUERYLOG LOGPATH path-expression
    QUERYLOG LOGFORMAT CLUSTER | TUPLE
    QUERYLOG LOGFILESIZE n
    QUERYLOG TOTALLOGFILESIZE n
    QUERYLOG ON | OFF
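    For example, a minimal dbname.cfg along these lines (a sketch only - the dimension names below are from the Sample Basic outline and the size limits are arbitrary) could be:
    QUERYLOG [Product]
    QUERYLOG [Market]
    QUERYLOG LOGFILESIZE 2
    QUERYLOG TOTALLOGFILESIZE 1024
    QUERYLOG ON
    Saved as basic.cfg in ARBORPATH\App\Sample\Basic, it starts logging queries against Sample.Basic (tracking the Product and Market dimensions) once the application has been restarted.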
    Regards,
    Philip Hulsebosch.
    www.trexco.nl
    p.s. To all users: please close questions that have been answered. Then we do not have to open them to see if we can help, which saves us time to help others.

  • Warehouse partitioning - performance of queries across multiple partitions?

    Hi,
    We are using Oracle 11.2.0.3 and have a large central fact table with several surrogate id columns, which have bitmap indexes on them and foreign keys to the dimension tables, plus several measures
    (PRODUCT_ID,
    CUSTOMER_ID,
    DAY_ID,
    TRANS_TYPE_ID,
    REGION_ID,
    QTY,
    VALUE)
    We have 2 distinct sets of queries users run for the most part: ones accessing all transactions for products regardless of the time those transactions happened (i.e. non-financial queries) - about 70%, and queries determining what happened in a particular week - about 20% of queries.
    The table will eventually have approx 4bn rows.
    We are considering adding an extra DATE column and range partitioning on it to allow us to drop old partitions every year - however, this column wouldn't be joined to any other table.
    Then we are considering sub-partitioning by hash of product_id, which is the surrogate key for the product dimension.
    Thoughts on performance?
    Queries by their nature would hit several sub-partitions.
    Thoughts on query performance of queries which access several sub-partitions/partitions versus queries running against a single table?
    Any other thoughts on partitioning strategy in our situation would be much appreciated.
    Thanks

    >
    Thoughts on query performance of queries which access several sub-partitions/partitions versus queries running against a single table?
    >
    Queries that access multiple partitions can improve performance for two use cases: 1) only a subset of the entire table is needed and 2) if the access is done in parallel.
    Even if 9 of 10 partitions are needed that can still be better than scanning a single table containing all of the data. And when there is a logical partitioning key (transaction date) that matches typical query predicate conditions then you can get guaranteed benefits by limiting a query to only 1 (or a small number) partition when an index on a single table might not get used at all.
    Conversely, if all table data is needed (perhaps there is no good partition key) and the parallel option is not available, then I wouldn't expect any performance benefit from partitioning over a single table.
    You don't mention if you have licensed the parallel option.
    >
    Any other thoughts on partitioning strategy in our situation would be much appreciated.
    >
    You provide some confusing information. On the one hand you say that 70% of your queries are
    >
    ones accessing all transactions for products regardless of the time those transactions happened
    >
    But then you add that you are
    >
    Considering adding an extra DATE column and range partitioning on it to allow us to drop old partitions every year
    >
    How can you drop old partitions every year if 70% of the queries need product data 'regardless of the time those transactions happened'?
    What is the actual 'datetime' requirement? And what is your definition of 'a particular week'? Does a week cross Month and Year boundaries? Does the requirement include MONTHLY, QUARTERLY or ANNUAL reporting?
    Those 'boundary' requirements (and the online/offline need) are critical inputs to the best partitioning strategy. A MONTHLY partitioning strategy means that for some weeks two partitions are needed. A WEEKLY partitioning strategy means that for some months two partitions are needed. Which queries are run more frequently, weekly or monthly?
    Why did you mention sub-partitioning? What benefit do you expect or what problem are you trying to address? And why hash? Hash partitioning guarantees that ALL partitions will be needed for predicate-based queries since Oracle can't prune partitions when it evaluates execution plans.
    The biggest performance benefit of partitioning is when the partition keys used have a high correspondence with the filter predicates used in the queries that you run.
    Conversely, the biggest management benefit of partitioning comes when you can use interval partitioning to automate the creation of new partitions (and subpartitions if used) based solely on the data.
    The other big consideration for partitioning, for both performance and management, is the use of global versus local indexes. With global indexes (e.g. a global primary key) you can't just drop a partition in isolation; the global primary key needs to be maintained by deleting the corresponding index entries.
    On the other hand if your partition key includes the primary key column(s) then you can use a local index for the primary key. Then partition maintenance (drop, exchange) is very efficient.
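    To make the earlier points concrete, here is a rough sketch of the kind of DDL being discussed - interval (monthly) range partitioning on a transaction-date column with hash subpartitions on product_id and a local bitmap index. The table and column names are only illustrative (loosely taken from the column list in the question), and the interval, subpartition count and boundary date are arbitrary:
    CREATE TABLE sales_fact (
      product_id    NUMBER NOT NULL,
      customer_id   NUMBER NOT NULL,
      day_id        NUMBER NOT NULL,
      trans_type_id NUMBER NOT NULL,
      region_id     NUMBER NOT NULL,
      trans_date    DATE   NOT NULL,  -- the extra date column being considered
      qty           NUMBER,
      value         NUMBER
    )
    PARTITION BY RANGE (trans_date)
      INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))            -- interval partitioning: new partitions are created automatically
      SUBPARTITION BY HASH (product_id) SUBPARTITIONS 8
      (PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01'));
    -- Local (partition-aligned) bitmap index: dropping an old partition requires no global index maintenance
    CREATE BITMAP INDEX sales_fact_prod_bix ON sales_fact (product_id) LOCAL;
    With a layout like this the weekly/monthly queries can prune down to a few partitions, while the hash subpartitions mainly help partition-wise joins rather than pruning, which is exactly the caveat raised above.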

  • Performance of queries against large AD CS databases - how to optimize?

    I am asking experts with experience with AD CS databases with hundreds of thousands or millions of certificates to confirm or correct my "theories".
    I am aware of these two articles that state performance is not an issue for millions of certificates:
    Windows CA Performance Numbers and
    Evaluating CA Capacity, Performance, and Scalability
    However, here performance is mainly evaluated in terms of database size and request / certificate throughput. I am more interested in the performance of queries, as I have seen that it might take minutes to build up views for databases with hundreds of thousands of certificates
    - no matter whether you use certutil -view, certsrv.msc, or access via CCertView.
    Could this just be due to an "unfortunate" combination of non-indexed fields? Any advice on which queries to avoid?
    Or is the solution as simple as throwing more memory, CPU, or both at the problem?
    In case it hinges on an unfortunate choice of fields and you absolutely have to run such a query, my guess is that you have to use a custom policy(*) module (FIM or third-party) to dump certificates to a SQL database and do your queries there.
    Am I right or did I miss something? Any input is highly appreciated!
    Elke
    PS / edit: That should have been 'Exit module' - I don't know why I wrote Policy Module. Thanks to Vadims for pointing it out.

    > I meant 'exit module'
    Exit module is the correct one. However, it is notified by the CA only when a new request is issued/processed. This means that you can use an Exit Module to copy certificate information to SQL only for new requests; for existing requests you are still stuck with a database dump.
    > but I should probably check how I dealt with the row handles
    I don't know how COM handles work in VBS, but in PowerShell (and other CLR languages) COM handles may not be released properly by the garbage collector; therefore, when a COM object is no longer needed, you should set its reference count to zero. In the CLR this is done by calling the Marshal.ReleaseComObject method, which marks the COM object as safe for the garbage collector. For example, the typical row/column iterator scheme is:
    # $ICertView is assumed to be an already configured ICertView COM object
    $Row = $ICertView.OpenView()
    # do row iteration
    while ($Row.Next() -ne -1) {
        # acquire IEnumCERTVIEWCOLUMN COM object for the current row
        $Column = $Row.EnumCertViewColumn()
        # do column iteration for the current row
        while ($Column.Next() -ne -1) {
            # collect column information and other stuff
            # do other stuff if necessary
        }
        # release the IEnumCERTVIEWCOLUMN object; this is the last statement in the row loop
        [Runtime.InteropServices.Marshal]::ReleaseComObject($Column)
    }
    # release the IEnumCERTVIEWROW COM object as well
    [Runtime.InteropServices.Marshal]::ReleaseComObject($Row)
    My weblog: en-us.sysadmins.lv
    PowerShell PKI Module: pspki.codeplex.com
    PowerShell Cmdlet Help Editor pscmdlethelpeditor.codeplex.com
    Check out new: SSL Certificate Verifier
    Check out new:
    PowerShell FCIV tool.

  • The Connection String for the Query Log table is automatically encrypted.

    When I try to use the Usage Based Optimization to apply Aggregation Design to my measure group, it shows me the following
    error message.
    The connection string cannot be found. Open Microsoft SQL Server Management Studio and, in the Analysis Server Properties
    page, check the value of the Log\QueryLog\QueryLogConnectionString
    property.
    I encountered this error like two weeks ago. At that time I just reset the connection string and everything seemed to be fine. A week ago, I successfully applied Usage Based Optimization for one of my cubes. However, when I tried to apply UBO for my other cubes today, I encountered the same issue again! I believe no one has changed the property of the connection string.
    Also, if I query the Query Log table, I can see the latest queries made by the users. I'm sure the queries are still being logged to this table.
    This is really strange.  Anyone else has encountered the same issue?  Thanks.

    Hello Thomas,
    I encountered this issue too and am struggling to solve it. If you have resolved this issue (and I guess you must have, because this post is two years old), could you kindly post how you resolved it?
    Thanks in advance
    Best Regards,
    Neeraja

  • Query log file missing

    I am trying to create a query log file for an ASO cube. I have created the database.cfg file in the database directory and restarted the application. Then I run a few queries and stop the application. The query log file (database.qlg) is not created. My database.cfg file looks like this:
    QUERYLOG [LOB]
    QUERYLOG LOGFILESIZE 2
    QUERYLOG TOTALLOGFILESIZE 1024
    QUERYLOG ON
    Thanks


  • SSAS 2012 Tabular. Analysis Services Query Log not working

    Hello.
    Is it possible to configure the Analysis Services Query Log on an SSAS 2012 Tabular (11.0.3321) server? Or does this feature work only in Multidimensional server mode?
    It is not a problem for me to configure the Query Log on a Multidimensional server. But when I try to do this on a Tabular server the result is always the same: the table S_OlapQueryLog is created but no data appears in it.
    This problem exists on our production server and is fully reproduced in the test environment. It was reproduced with SQL 2012 RTM, CU4, SP1 and SP1 CU1.
    I tried to configure the Analysis Services Query Log using a SQL account, and later using Windows authentication. The accounts were db owners; later I used accounts with sysadmin rights. The results were the same.
    In msmdsrv.log I can find only related events like this:
    (12/7/2012 9:22:59 PM) Message: The query log was started. (Source:
    \\?\P:\Olap\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x41210003)
    (12/7/2012 9:25:03 PM) Message: The query log was stopped. (Source:
    \\?\P:\Olap\Log\msmdsrv.log, Type: 1, Category: 289, Event ID: 0x41210004)
    I'm completely stuck with this functionality. So if someone has been luckier than I was, please write to me.
    Regards
    Audrius

    I agree with Gerhard's comments, but there are a couple of additional points
    - I am not 100 percent sure if queries that are answered by the SE cache are recorded in the query log
    Queries answered by the SE cache are definitely not recorded in the QueryLog. So depending on the types of queries and the design of your cube you could possibly miss a large proportion of the actual end user queries.
    The QueryLog records the QuerySubcube events and there are zero to many of these generated for a given end user query (zero if the query can be answered from cache). The optimizer may also choose to pre-fetch a wider range of data or to break a single range
    into a few smaller requests so it is not a true indication of the actual query that the end user generated.
    Doing a trace is the only way to catch the actual queries submitted by end users.
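    As a side note, a quick way to see what is (or is not) landing in the query log table is to query it directly. A sketch, assuming the S_OlapQueryLog table name from the post above and the standard query log columns:
    SELECT TOP (20) MSOLAP_Database, MSOLAP_User, Dataset, StartTime, Duration
    FROM dbo.S_OlapQueryLog
    ORDER BY StartTime DESC;
    If that stays empty while a Profiler trace shows Query Subcube events, the problem is on the logging side rather than with the queries themselves.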
    http://darren.gosbell.com - please mark correct answers

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
    I'm tuning my database with "Spotlight". I have already received this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold. "
    The servers are not in RAID5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer
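    As a starting point, you can measure the average redo write time yourself from the 'log file parallel write' wait event (a sketch - it assumes SELECT access to the V$ views):
    SELECT event,
           total_waits,
           ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_wait_ms
    FROM   v$system_event
    WHERE  event = 'log file parallel write';
    If avg_wait_ms stays well above a few milliseconds, the redo devices (or the path to them) are the place to look.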

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with Flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), although they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work with one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
    Do you honestly think this is practical and usable advice Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    # Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise where Oracle is supporting mission critical databases where a huge return can be made on accelerating Oracle.
    # Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
    Comment: This statement is true. Per hard disk drive versus per individual solid state disk system you can typically get higher density of storage with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind, just as with any storage media you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
    # Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if normal HDD suffers mechanical failure the data is often recoverable using expert help.
    Comment: If you lose a hard drive for your redo log, the last thing you are likely to do is to have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    # Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).
    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    # Slower than conventional disks on sequential I/O
    Comment: Most Flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory that also impact flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share with you some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    # Limited write cycles - Typical Flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance Flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    >
    Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
    .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out so that you can do it at no cost from us.

  • OBIEE 11g : query log not found

    Hi,
    I am not able to see the query log in 11g Answers; Manage Sessions throws the error "query log not found".
    I am using OBIEE 11g. The 11g Admin client is installed on my local machine and I upload the rpd through Enterprise Manager. But I am not able to open the rpd in online mode, which is why I cannot change the query log level to 2 (as in OBIEE 10g) to see the query log in Answers. Usually, after making changes in the 11g rpd, I upload it to the server via the Enterprise Manager console.
    Can anyone please tell me the correct way to see the query log, how I can open the rpd in online mode, and how I can set the query log level in OBIEE 11g?
    Please help.
    Thanks
    Titas

    Hi,
    It's a known bug, and it can be worked around with the methods below.
    Method 1:
    If you enabled the log level per user, it may be overridden by the setting below, so please confirm both places:
    In the Administration Tool, go to Tools --> Options --> Repository.
    The system log level will be 0 by default; increase it to 2 or 3 and save it.
    Method 2:
    Enable the log level per report:
    Try putting the syntax below in the Prefix section of the Advanced tab.
    SET VARIABLE LOGLEVEL=2,DISABLE_CACHE_HIT=1;
    It should generate the log with the database SQL as well.
    Method 3:
    Create a session variable (LOGLEVEL) with an init block.
    In your init block data source, put a query like the one below:
    select 3 from IW_POSITION
    Note: just point it at any existing physical table from your RPD.
    Then try to save it and test it.
    Refer to the screenshots at:
    http://bidevata.wordpress.com/2012/03/03/no-log-found-error-in-obiee-11g/
    Thanks
    Deva

  • SQL query log File..

    Hi All,
    I am using a standalone BIP environment... Is there a way we can see the actual query log that BIP generates? Which location on the server is it in, or how can it be activated?
    I want to see the actual SQL query that is generated, along with the variables in it.
    FYI: I want the query log, not the server error log.
    -dev

    Hi Eric,
    I would like you to re-check the content level settings here, as they are the primary cause of this kind of behavior. You might notice that the same information is written down in the logical plan of the query too.
    Also, as per your description
    "In the SOURCES for this logical table, I've set the logical level of the content for E2 appropriately (detail level, same as E1)"
    I would like to check on this point again: if you had mapped E2 to E1 in the same logical source with an inner join, you would set the content level at the E1 levels themselves, but not E2 (now that E2 has become a part of the E1 hierarchy too). This might be the reason the BI Server is choosing to eliminate (null) the values from E2 too (even though you could see them in the SQL client).
    Hope this helps.
    Thank you,
    Dhar

  • How to pass parameter to the Query String of the Named Queries'SQL

    Firstly, I want to say sorry: I'm a beginner and my English is limited.
    Now I want to know
    how to pass a parameter to the query string of a Named Query's SQL in the Map editor.
    Thanks.

    benzi,
    Not sure if this is on target for your question, but see #5 in the link below for some web screencasts that show how to pass an input text form field value to the bind variable of a view object. If you're looking for something different, maybe provide some more details such as what you are trying to accomplish and what technology stack you are using - for example, ADF BC, JSF, etc.
    http://radio.weblogs.com/0118231/stories/2005/06/24/jdeveloperAdfScreencasts.html
    Also see section 5.9 and chapter 18 in the developer's guide.
    thanks

  • Location of query log files in OBIEE 11g (version 11.1.1.5)

    Hi,
    I wish to know the location of the query log files in OBIEE 11g (version 11.1.1.5).

    Hi,
    Log Files in OBIEE 11g
    Log in to the URL http://server.domain:7001/em and navigate to:
    Farm_bifoundation_domain -> Business Intelligence -> coreapplication -> Diagnostics -> Log Messages
    You will find the available files:
    Presentation Services Log
    Server Log
    Scheduler Log
    JavaHost Log
    Cluster Controller Log
    Action Services Log
    Security Services Log
    Administrator Services Log
    However, you can also review them directly on the hard disk.
    The log files for OBIEE components are under <OBIEE_HOME>/instances/instance1/diagnostics/logs.
    Specific log files and their locations are listed in the following table:
    Log                                                     Location
    Installation log                                        <OBIEE_HOME>/logs
    nqquery log                                             <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    nqserver log                                            <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    servername_NQSAdminTool log                             <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    servername_NQSUDMLExec log                              <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    servername_obieerpdmigrateutil log (Migration log)      <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    sawlog0 log (Presentation)                              <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1
    jh log (Java Host)                                      <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIJavaHostComponent/coreapplication_obijh
    webcatupgrade log (Web Catalog Upgrade)                 <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1
    nqscheduler log (Agents)                                <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBISchedulerComponent/coreapplication_obisch1
    nqcluster log                                           <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIClusterControllerComponent/coreapplication_obiccs1
    ODBC log                                                <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIODBCComponent/coreapplication_obips1
    opmn log                                                <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    debug log                                               <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    logquery log                                            <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    service log                                             <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    opmn out                                                <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    Upgrade Assistant log                                   <OBIEE_HOME>/Oracle_BI1/upgrade/logs
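    For example, to follow BI Server query activity live you can simply tail the nqquery log at the path given above (adjust the instance and component names to your own installation):
    tail -f <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1/nqquery.log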
    Regards
    MuRam

  • How to set up Query Logging in OBIEE 11g 11.1.1.6

    Hi all,
    I couldn't find the option for setting up query logging for testing and debugging purposes. Could anyone please help me with this?
    Regards,
    Arun

    Hi Arun,
    For simplicity I'll direct you to a link http://obiee11gqna.blogspot.com.au/2011/04/obiee-11g-query-log-log-level.html
    Richard

  • Performing sql queries in java without using java libraries

    I wonder whether it's possible to perform SQL queries, from CREATE TABLE through update queries, without using the Java SQL library.
    Has anyone written such code?

    You could use JNI to talk to a native driver like the Oracle OCI driver. Doing this is either exciting or asking for trouble, depending on your attitude to lots of low-level bugs.
