Aggregate Persistence

Hi all,
I am trying to create three aggregate tables based on my dimensions. After running the Aggregate Persistence Wizard, I run the generated command to create the tables.
The command is interrupted with the error:
"[nQSError: 84008] [Aggregate Persistence] Error while processing aggregates (refer previous in log).
Statement preparation failed"
NQServer.log :
"SA_Doc_Nam00001D1D: [nQSError: 84004] [Aggregate Persistence] Database create/populate failed.
*****ABORTING AggregateManager*****: [nQSError: 84008] [Aggregate Persistence] Error while processing aggregates (refer previous errors in log)"
When I log in to the Administration Tool, I can see the table only in the Physical layer, but it is empty or not accessible. Nothing has changed in the Business Model.
I have tried granting every permission to the DB user specified in the connection pool.
Thanks all
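On the permissions side: the user in the connection pool used for aggregate persistence must be able to create and populate tables in the target schema; missing privileges are one common cause of the "Database create/populate failed" error above. A minimal sketch of the grants involved, assuming an Oracle target database, a hypothetical user AGG_USER and the USERS tablespace (adjust to your own environment):
GRANT CREATE SESSION, CREATE TABLE TO agg_user;
ALTER USER agg_user QUOTA UNLIMITED ON users;   -- space to populate the aggregate tables
The exact privileges needed depend on your database platform and schema layout.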

Cosimo,
If you've built your business model correctly with the complex joins and the dimensions relating to your fact, then the wizard will pick them up normally and will allow you to generate a script like this:
prepare aggregates
"ag_Fact_Budget"
for "Core"."Fact - Budget"("Budget Amount")
at levels ("Core"."Conformed Date"."Fiscal Year")
using connection pool "Siebel Data Warehouse"."Data Warehouse Connection Pool"
in "Siebel Data Warehouse"."Catalog";
Check that your BM is consistent and sound (stars in physical and BM layer built right).
Documentation-wise, refer to the Oracle Business Intelligence Server Administration Guide Version 10.1.3.2, page 189 f. or the Server Architect course, module 9 "Using Aggregates".
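Once the wizard has generated such a script and you have saved it to a file, it is executed against the BI Server with nqcmd. A sketch, assuming the BI Server ODBC DSN is AnalyticsWeb and the script was saved as C:\OracleBI\server\Repository\agg_budget.sql (both placeholders):
nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s C:\OracleBI\server\Repository\agg_budget.sql
On success nqcmd prints "Statement execute succeeded" and "Processed: 1 queries", and the aggregate tables plus the corresponding logical table sources appear in the repository.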

Similar Messages

  • Error in Using Aggregate Persistence

    Hi All,
    I created an aggregate script called NewScript.sql on the C drive using the Aggregate Persistence Wizard,
    and I am trying to run the file from the command prompt using the following command:
    C:\OracleBI\server\Bin>nqcmd.exe -d AnalyticsWeb -u Administrator -p Administrator -s c:NewScript.sql
    and I get the following error:
    Open Input file failed
    Please help me create the aggregate tables.
    Thanks in advance..
    Regards
    Mehaboob
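    The "Open Input file failed" message usually just means nqcmd cannot find the script file. The path in the command above is missing the backslash after the drive letter; assuming the file really is in the root of the C drive, the command would be:
    C:\OracleBI\server\Bin>nqcmd.exe -d AnalyticsWeb -u Administrator -p Administrator -s c:\NewScript.sql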

    Hi Dpka,
    Sorry for the late reply.
    I executed the script in SQL Server 2008 and got the following error:
    "Msg 343, Level 15, State 1, Line 1
    Unknown object type 'aggregates' used in a CREATE, DROP, or ALTER statement."
    This is my query
    create aggregates
    *"ag_GL_Fact"*
    for "Finance"."GL Fact"("Revenue","Avg Of Revenue")
    at levels ("Finance"."GL Accounts"."Acc Type", "Finance"."Posting Time"."Quarter")
    using connection pool "arj"."Arj Connection Pool"
    in "arj"."Arj"."dbo";
    Regards
    Mehaboob
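    For context, "create aggregates" is OBIEE logical SQL, not Transact-SQL, so running the script directly in SQL Server 2008 will always fail with "Unknown object type 'aggregates'". It has to be issued to the BI Server, which then generates the real DDL and DML against SQL Server. A sketch, reusing the DSN and credentials from the earlier post and assuming the script is saved as C:\NewScript.sql:
    C:\OracleBI\server\Bin>nqcmd.exe -d AnalyticsWeb -u Administrator -p Administrator -s c:\NewScript.sql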

  • Bug in Aggregate Persistence Wizard with multiple hierarchies

    Hi,
    Let's say you have a dimension with 2 hierarchies. For example:
    product -> subcategory -> category -> all products
    product -> subtype -> type -> all products
    Then you run the "Aggregate Persistence Wizard" and generate the code for 2 aggregations, one for each hierarchy.
    Then you execute the generated code without errors.
    But no aggregate fact table is created, neither inside the RPD nor in the database.
    You only find the aggregated dimension tables (both in the RPD and in the database), which by the way generate consistency check errors due to the missing fact table.
    Workaround:
    The code generated by the Persistence Wizard contains a single "create aggregates" statement.
    Multiple aggregations are separated by commas inside the same statement.
    Instead, create multiple "create aggregates" commands, one for each aggregation, each terminated by its own semicolon (see the sketch after this post).
    OBIEE 11.1.1.5
    Hope it helps,
    Corrado
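    To illustrate the workaround, here is a sketch with made-up names (the "Sales" business model, the ag_Sales_* aggregates and the "DW"."DW CP" connection pool are all hypothetical): instead of one "create aggregates" statement listing both aggregates separated by a comma, issue two statements, each with its own terminating semicolon:
    create aggregates
    "ag_Sales_ByCategory"
    for "Sales"."Sales Fact"("Revenue")
    at levels ("Sales"."Product"."Category")
    using connection pool "DW"."DW CP"
    in "DW"."DW"."dbo";
    create aggregates
    "ag_Sales_ByType"
    for "Sales"."Sales Fact"("Revenue")
    at levels ("Sales"."Product"."Type")
    using connection pool "DW"."DW CP"
    in "DW"."DW"."dbo";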


  • Aggregate Persistence Wizard fails to generate script

    I was following the instructions for the aggregate persistence wizard, but at the last step, the screen just shows:
    The following script has been generated based on your input and will be saved at
    Create Aggregate Script:
    D:\script.sql
    And inside the box is just... blank, with 3 line breaks.
    The script file just shows "create aggregates ". Opening it in SQL Developer also shows "create aggregates ". Is there some funny encoding going on here?
    I have no clue what is wrong with my setup. Is it the BI installation? Is it the repository? (But I've tried it with a variety of repositories, including the official sample ones.) Or is it my database connection? Googling doesn't give me anything relevant to my situation.
    I'm running BI 11.1.1.5. I'd really appreciate it if someone can help me with this!

    Hi Lum,
    I have seen the Aggregate Persistence Wizard fail to create the aggregate tables in the database and so on, but nothing quite like this. However, you can see what is going on with this wizard with the steps below.
    1. Please check whether you have ticked the "Generate DDL" file option. This option is used to create a second script that defines the aggregate tables on the database and repository, but does not populate them. This is useful for database administrators who want to make granular changes to the database tables generated by the system.
    2. If you are pointing your wizard to a database which has more than one connection pool, please make sure that the option "Use first connection pool" in the .rpd -> Tools -> Options is on. I have seen that otherwise things sometimes do not go the right way.
    3. Please check the .rpd version and make sure it is the latest, matching 11.1.1.5.
    Hope this helps.
    Thank you,
    Dhar

  • Aggregate Persistence Wizard

    Hi,
    I'm new to OBIEE. I was following the Rittman Mead article at http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
    I believe I followed all the steps correctly. The Aggregate Persistence Wizard generated an SQL script; however, when I run the script using nqcmd, it says 0 queries processed correctly.
    Could anyone here point out to me if there is something obvious I've missed somewhere?
    Cheers
    -Chris

    When you run the script generated by the Wizard, you should see something like:
    Statement execute succeeded
    and
    Processed: N queries, where N is the number of fact aggregates in your script.
    Check out my blog entry on the Aggregate Persistence Wizard
    http://www.obieeabc.com/2011/04/how-to-use-aggregate-persistence-wizard.html
    http://www.obieeabc.com/2011/04/how-to-use-aggregate-persistence-wizard_13.html
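    For reference, a successful run typically looks roughly like this (DSN, user and script path are placeholders):
    nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s D:\script.sql
    Statement execute succeeded
    Processed: 1 queries
    If you only ever see "Processed: 0 queries", it usually means nqcmd never executed a complete, semicolon-terminated statement, so it is worth checking that the generated script actually contains the full "create aggregates ... ;" text.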

  • Errors when Creating Aggregate Tables in OBIEE 11.1.1.6 with SQL Server

    Hi All,
    I was trying to create an aggregate table in OBIEE 11.1.1.6 against SQL Server. The SQL was generated successfully as shown below, but the following error occurred when I used nqcmd to execute it:
    1. SQL for creating Aggregate Table:
    create aggregates
    "ag_Measure"
    for "ASOBI_DTT_Demo"."Measure"("ValidVIPCnt")
    at levels ("ASOBI_DTT_Demo"."日期"."月", "ASOBI_DTT_Demo"."門市品牌"."門市品牌", "ASOBI_DTT_Demo"."門市類別"."門市類別", "ASOBI_DTT_Demo"."內創門市"."內創門市", "ASOBI_DTT_Demo"."門市/倉庫"."門市/倉庫", "ASOBI_DTT_Demo"."門市群組"."門市群組", "ASOBI_DTT_Demo"."門市行政區課"."行政區", "ASOBI_DTT_Demo"."門市地區"."城市")
    using connection pool "ASOBI_DTT_Demo"."ASOBI_System10"
    in "ASOBI_DTT_Demo"."ASOBI"."dbo";
    2. Error Message:
    "ag_Measure"
    for "ASOBI_DTT_Demo"."Measure"("ValidVIPCnt")
    at levels ("ASOBI_DTT_Demo"."日期"."月", "ASOBI_DTT_Demo"."門市品牌"."門市品牌"
    , "ASOBI_DTT_Demo"."門市類別"."門市類別", "ASOBI_DTT_Demo"."內創門市"."內創門市"
    , "ASOBI_DTT_Demo"."門市/倉庫"."門市/倉庫", "ASOBI_DTT_Demo"."門市群組"."門市群
    組", "ASOBI_DTT_Demo"."門市行政區課"."行政區", "ASOBI_DTT_Demo"."門市地區"."城市
    using connection pool "ASOBI_DTT_Demo"."ASOBI_System10"
    in "ASOBI_DTT_Demo"."ASOBI"."dbo"
    [343][State: 37000] [Microsoft][SQL Server Native Client 10.0][SQL Server]CREATE
    、DROP or ALTER 陳述式中使用未知的物件類型 'aggregates'。
    Statement execute failed
    Which means "Unknown object type 'aggregates' used in a CREATE, DROP, or ALTER statement" in English.
    Can anyone give me a suggestion for this error?? Many thanks!!!

    Hi Martin,
    I guess I was not clear enough. Let me try again.
    How does Aggregate Persistence work in OBIEE?
    Once you are done choosing options in the Aggregate Persistence Wizard, it generates an intelligent query.
    What query is it?
    If you look at the query, it is not like any ANSI-standard SQL (I would say DDL) query. As you might have noticed, there are no SQL Server data types, lengths, keys, constraints, etc. This query can only be understood by the BI Server.
    How do I issue this query?
    Since the logical query can only be understood by the BI Server, it has to be issued to the BI Server engine using some tool, viz. nqcmd in this case.
    What does issuing this query using NQCMD do?
    The execution steps are as follows, the moment the query is issued via nqcmd:
    The Aggregate Persistence Wizard generates the query ---> it is issued to nqcmd ---> nqcmd passes the logical query to the BI Server ---> the BI Server parses the query and builds the corresponding physical DDL statements ---> these are issued to the database ---> if successful, the .rpd is automatically updated with the aggregate sources, etc.
    How do I pass the query to BI Server using NQCMD?
    The format of issuing this logical query to BI Server using NQCMD is
    nqcmd -d <Data Source Name> -u <Analytics UserId> -p <Password> -s <command> > output.log
    where
    <Data Source Name> : Is the DSN name which OBIPS uses to talk to Oracle BI Server. Yes, it's the very same DSN that can be found in InstanceConfig.xml
    <Analytics UserID> : Any user in obiee with admin privileges.
    <Password> : Password of the obiee UserId
    <Command> : Logical SQL Command which you already have handy.
    Hope I was good this time..
    Dhar
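    For example, with placeholder values (substitute your own DSN, user, password and script location):
    nqcmd -d AnalyticsWeb -u weblogic -p Password123 -s C:\ag_Measure.sql > C:\ag_Measure.log
    The BI Server then turns the logical "create aggregates" statement into the actual CREATE TABLE and INSERT statements for SQL Server and, if they succeed, checks the new aggregate sources into the RPD.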

  • Error when creating aggregate table

    Hello,
    I am creating an aggregate table using the Aggregate Persistence Wizard. When trying to run the batch file, I am receiving the error: "Could not connect to the Oracle BI Server instance".
    However, the Oracle BI Server is running and I am able to run queries in Answers with no connection issues. (Please see below.)
    Please help.
    Thanks,
    Felicity
    D:\OracleBI\server\Repository>create_agg.bat
    D:\OracleBI\server\Repository>nqcmd -d AnalyticsWeb -u Administrator -p Administ
    rator -s D:\OracleBI\server\Repository\CREATE_AGG.sql
    Oracle BI Server
    Copyright (c) 1997-2009 Oracle Corporation, All rights reserved
    create aggregates
    "ag_SalesFacts"
    for "SupplierSales"."SalesFacts"("Net Weight Shipped","Units Ordered","Units Sh
    ipped","Dollars")
    at levels ("SupplierSales"."ProductsDim"."Type", "SupplierSales"."CustomersDim"
    ."SalesRep", "SupplierSales"."PeriodsDim"."Month")
    using connection pool "ORCL"."SUPPLIER CP"
    in "ORCL".."SUPPLIER2"
    create aggregates
    "ag_SalesFacts"
    for "SupplierSales"."SalesFacts"("Net Weight Shipped","Units Ordered","Units Sh
    ipped","Dollars")
    at levels ("SupplierSales"."ProductsDim"."Type", "SupplierSales"."CustomersDim"
    ."SalesRep", "SupplierSales"."PeriodsDim"."Month")
    using connection pool "ORCL"."SUPPLIER CP"
    in "ORCL".."SUPPLIER2"
    [10058][State: S1000] [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general er
    ror has occurred.
    [nQSError: 37001] Could not connect to the Oracle BI Server instance.
    Statement preparation failed
    Processed: 1 queries
    Encountered 1 errors

    Will this help you solve the issue? http://forums.oracle.com/forums/thread.jspa?messageID=3661598
    Also check the comments in this blog: http://obiee101.blogspot.com/2008/11/obiee-aggregate-persistence-wizard.html
    It deals with user permissions for the database.
    Hope this answers your question.
    Cheers,
    kk

  • Aggregate tables in Administration tool

    Hello!
    I have a problem when I want to create aggregate tables.
    I created the query with the Aggregate Persistence Wizard, but when I run it in Job Manager it keeps running and never ends.
    Can you help me please?!
    Regards, Karin

    11.5

  • Aggregate table

    Hi !
    I have a problem when I try to use the aggregate wizard, but only with some dimension tables.
    This is the error message given by Job Manager :
    Processed: 1 queries
    Encountered 1 errors
    [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred.
    [nQSError: 84008] [Aggregate Persistence] Error while processing aggregates (refer previous errors in log).
    Statement preparation failed
    I have created all my dimensions with OWB so I don't understand why it works with some tables and not with others.
    Thanks for your help

    Hi, here is my error:
    -------------------- Aggregate Manager (Error): [nQSError: 32003] The object "DATAWARE".."INFOCENTRE"."SA_Action00002041"."K" of type 'TABLE KEY': is missing a list of type 'COLUMN'.
    : [nQSError: 84003] [Aggregate Persistence] Checkin failed.
    EDIT:
    I think I have found the cause. This error occurs when I select the most detailed level of a dimension hierarchy. I have DimA, DimB and Fact tables. DimA has 2 hierarchy levels (DimA1 and DimA2) and DimB has 3 (DimB1, DimB2, DimB3). When I want to create aggregate tables using DimA2 or DimB3, I get the error above. So how can I solve this?
    EDIT 2:
    I finally solved my problem by creating a new parent level for my most detailed hierarchy.

  • Aggregate tables in 10G

    Hi Experts,
    In OBI 10g, how do we use aggregate tables? How does the server know when it should use an aggregate table, and when to use the ordinary table when fetching data?
    What is the purpose of the Aggregate Persistence Wizard in the RPD?
    Thanks in advance,

    Aggregate Table (Aggregate Persistence Wizard)
    Aggregate Table: aggregate tables store precalculated measures that have been aggregated over a set of dimensional attributes.
    This is a very useful technique for speeding up query response time in decision support systems. It eliminates the need for run-time calculations and delivers faster results to users; the calculations are done ahead of time and the results are stored in the tables.
    The key point is that the aggregate table should have fewer rows than the non-aggregate table, and therefore processing should be quicker.
    Aggregate Persistence Wizard
    Go to: OBIEE Administration Tool > Tools > Utilities > Aggregate Persistence Wizard
    http://obiee101.blogspot.com/2008/11/obiee-aggregate-persistence-wizard.html
    http://obieetutorialguide.blogspot.com/2012/03/creating-aggregate-tables-in-obiee.html
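    For reference, the wizard itself does not build anything; it writes out a logical SQL script of the kind shown elsewhere on this page, which you then run against the BI Server with nqcmd. A sketch reusing names from one of the posts above, purely as an illustration:
    create aggregates
    "ag_SalesFacts"
    for "SupplierSales"."SalesFacts"("Dollars")
    at levels ("SupplierSales"."PeriodsDim"."Month")
    using connection pool "ORCL"."SUPPLIER CP"
    in "ORCL".."SUPPLIER2";
    Once the aggregate exists in the RPD as an additional logical table source mapped to the Month level, the BI Server automatically chooses it instead of the detail table whenever a query only needs data at or above that level, which is how it "knows" when to use the aggregate.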

  • Rebuilding Aggregate Tables

    Hi,
    In the OTN tutorial for the Aggregate Persistence Wizard, it instructs you to rebuild aggregate tables by putting a "delete aggregates;" command at the beginning of the script. I am just wondering whether this is standard practice for production environments as well. I have just begun work on a project where the person here before me set up the aggregate table script but did not put the "delete aggregates;" command at the beginning.
    Any ideas?
    Thanks,
    Kevin

    Yes,
    You should always clean up first! If there have been copy-paste actions in the repository, you run the risk that the aggregate tables end up with new "underwater" IDs.
    http://obiee101.blogspot.com/2008/11/obiee-aggregate-persistence-wizard.html
    regards
    John
    http://obiee101.blogspot.com
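    A sketch of what such a self-cleaning script can look like, using the "create aggregates" form and the ag_Fact_Budget names from the first reply on this page purely as an illustration; the leading "delete aggregates;" drops the aggregates previously created by aggregate persistence before they are recreated and repopulated:
    delete aggregates;
    create aggregates
    "ag_Fact_Budget"
    for "Core"."Fact - Budget"("Budget Amount")
    at levels ("Core"."Conformed Date"."Fiscal Year")
    using connection pool "Siebel Data Warehouse"."Data Warehouse Connection Pool"
    in "Siebel Data Warehouse"."Catalog";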

  • Reg Aggregate tables

    Hi Experts,
    I want small clarification regarding aggregate tables if i want to implement this which method i have i have to follow weather importing aggregate tables from DB to the physical layer or by using aggregate persistence wizard in RD itself.
    In which situations we have to go for importing and which situations we have to go for aggregate persistence wizard.
    Regards,

    You import the aggregate tables if you have already built them in the database, and you use the wizard in order to create aggregate tables in the RPD that do not yet exist in the database. The utility automates the creation and initial population of aggregates, persists them in a back-end database and configures the BI Server metadata layer so that they're used when appropriate. What's particularly interesting about this feature is that you can create aggregates for, say, an Oracle database and store them in a SQL Server, DB2 or Teradata database.

  • Aggregate table showing wrong data

    Hello Gurus:
    I am working on an issue where a report is showing the wrong value from an aggregated fact table.
    If I leave it out of the query, the results are fine. But by default the BI Server is pointing to the aggregate table (which it should).
    So my questions are:
    1) Are aggregate tables refreshed automatically with DAC?
    2) I know the data is wrong in the aggregate table. How do I verify it?
    3) How do I make a particular report hit the regular table instead of the aggregate table?
    Please let me know.
    Thanks.
    ~Vinay

    Alright. So I made some progress on this issue, HOWEVER it is still not solved.
    1) Aggregate tables are refreshed daily with DAC. There is a script for that on the server, created using the Aggregate Persistence Wizard.
    2) ONLY one column is showing wrong data. I have verified this using Toad. However, I don't know why it is showing wrong data. Theoretically it should be fine.
    3) This question is still the same:
    How do I make a particular report hit the regular table instead of the aggregate table?
    Please help me out.
    Thanks.
    ~Vinay

  • Best practice for making a report of 10,000 to 20,000 rows (OBIEE 10.3.4.1)

    My scenario is like this:
    Hi, I have 2 fact tables, Fact1 and Fact2, and dimension tables D1, D2, D3, D4 plus D1.1 and D1.2. The relations in the data model are like this:
    NOTE: D1.1 and D1.2 are derived from D1, so D1 might be a snowflake.
    [(D1 1:M -> Fact1, D1 1:M -> Fact2), (D2 1:M -> Fact1, D2 1:M -> Fact2), (D3 1:M -> Fact1, D3 1:M -> Fact2), (D4 1:M -> Fact1, D4 1:M -> Fact2)]
    Now from D1 there is a child path like this: [D1 1:M -> D1.1, D1.1 1:M -> D1.2, D1.2 1:M -> D4]
    Please help me model this for making a report of 10,000 rows, and also let me know for which tables I need to enable caching.
    PS: There shouldn't be any performance issues, so please help me with the modelling.
    Thanks in advance to the experts who have been helping me for a while.

    Shouldn't be much of a problem with just this many rows...
    Model it something like this: Re: URGENT MODELING SNOW FLAKE SCHEMA
    There are various ways of handling performance issues, if any, in OBIEE.
    Go for a caching strategy for the complete warehouse. Make sure to purge it after every data load. If you have aggregate calculations at a higher level, then you can also go for aggregate tables in OBIEE for better performance.
    http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
    Hope this is clear... Go ahead with the actual implementation and let us know in case you encounter any major issues.
    Cheers
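    On the cache purge point: one common way to clear the BI Server cache after each data load is to call the SAPurgeAllCache() procedure through nqcmd. A sketch with placeholder DSN, credentials and file name: save the single line
    Call SAPurgeAllCache();
    into a file such as purge_cache.sql and run it from the post-load script:
    nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s purge_cache.sql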

  • Syntax error in NQSConfig.INI file

    # NQSConfig.INI
    # Copyright (c) 1997-2006 Oracle Corporation, All rights reserved
    # INI file parser rules are:
    # If values are in literals, digits or _, they can be
    # given as such. If values contain characters other than
    # literals, digits or _, values must be given in quotes.
    # Repository Section
    # Repositories are defined as logical repository name - file name
    # pairs. ODBC drivers use logical repository name defined in this
    # section.
    # All repositories must reside in OracleBI\server\Repository
    # directory, where OracleBI is the directory in which the Oracle BI
    # Server software is installed.
    [ REPOSITORY ]
    Star     =     OracleBIAnalyticsApps.rpd, DEFAULT
    # Query Result Cache Section
    [ CACHE ]
    ENABLE     =     NO;
    // A comma separated list of <directory maxSize> pair(s)
    // e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
    DATA_STORAGE_PATHS     =     "C:\OracleBIData\cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    MAX_SUBEXPR_SEARCH_DEPTH = 7;
    // Cluster-aware cache
    // GLOBAL_CACHE_STORAGE_PATH = "<directory name>" SIZE;
    // MAX_GLOBAL_CACHE_ENTRIES = 1000;
    // CACHE_POLL_SECONDS = 300;
    // CLUSTER_AWARE_CACHE_LOGGING = NO;
    # General Section
    # Contains general server default parameters, including localization
    # and internationalization, temporary space and memory allocation,
    # and other default parameters used to determine how data is returned
    # from the server to a client.
    [ GENERAL ]
    // Localization/Internationalization parameters.
    LOCALE     =     "English-usa";
    SORT_ORDER_LOCALE     =     "English-usa";
    SORT_TYPE = "binary";
    // Case sensitivity should be set to match the remote
    // target database.
    CASE_SENSITIVE_CHARACTER_COMPARISON = OFF ;
    // SQLServer65 sorts nulls first, whereas Oracle sorts
    // nulls last. This ini file property should conform to
    // that of the remote target database, if there is a
    // single remote database. Otherwise, choose the order
    // that matches the predominant database (i.e. on the
    // basis of data volume, frequency of access, sort
    // performance, network bandwidth).
    NULL_VALUES_SORT_FIRST = OFF;
    DATE_TIME_DISPLAY_FORMAT = "yyyy/mm/dd hh:mi:ss" ;
    DATE_DISPLAY_FORMAT = "yyyy/mm/dd" ;
    TIME_DISPLAY_FORMAT = "hh:mi:ss" ;
    // Temporary space, memory, and resource allocation
    // parameters.
    // You may use KB, MB for memory size.
    WORK_DIRECTORY_PATHS     =     "C:\OracleBIData\tmp";
    SORT_MEMORY_SIZE = 4 MB ;
    SORT_BUFFER_INCREMENT_SIZE = 256 KB ;
    VIRTUAL_TABLE_PAGE_SIZE = 128 KB ;
    // Analytics Server will return all month and day names as three
    // letter abbreviations (e.g., "Jan", "Feb", "Sat", "Sun").
    // To use complete names, set the following values to YES.
    USE_LONG_MONTH_NAMES = NO;
    USE_LONG_DAY_NAMES = NO;
    UPPERCASE_USERNAME_FOR_INITBLOCK = NO ; // default is no
    // Aggregate Persistence defaults
    // The prefix must be between 1 and 8 characters long
    // and should not have any special characters ('_' is allowed).
    AGGREGATE_PREFIX = "SA_" ;
    # Security Section
    # Legal value for DEFAULT_PRIVILEGES are:
    # NONE READ
    [ SECURITY ]
    DEFAULT_PRIVILEGES = READ;
    PROJECT_INACCESSIBLE_COLUMN_AS_NULL     =     NO;
    MINIMUM_PASSWORD_LENGTH     =     0;
    #IGNORE_LDAP_PWD_EXPIRY_WARNING = NO; // default is no.
    #SSL=NO;
    #SSL_CERTIFICATE_FILE="servercert.pem";
    #SSL_PRIVATE_KEY_FILE="serverkey.pem";
    #SSL_PK_PASSPHRASE_FILE="serverpwd.txt";
    #SSL_PK_PASSPHRASE_PROGRAM="sitepwd.exe";
    #SSL_VERIFY_PEER=NO;
    #SSL_CA_CERTIFICATE_DIR="CACertDIR";
    #SSL_CA_CERTIFICATE_FILE="CACertFile";
    #SSL_TRUSTED_PEER_DNS="";
    #SSL_CERT_VERIFICATION_DEPTH=9;
    #SSL_CIPHER_LIST="";
    # There are 3 types of authentication. The default is NQS
    # You can select only one of them
    #----- 1 -----
    #AUTHENTICATION_TYPE = NQS; // optional and default
    #----- 2 -----
    #AUTHENTICATION_TYPE = DATABASE;
    # [ DATABASE ]
    # DATABASE = "some_data_base";
    #----- 3 -----
    #AUTHENTICATION_TYPE = BYPASS_NQS;
    # Server Section
    [ SERVER ]
    SERVER_NAME = Oracle_BI_Server ;
    READ_ONLY_MODE = NO;     // default is "NO". That is, repositories can be edited online.
    MAX_SESSION_LIMIT = 2000 ;
    MAX_REQUEST_PER_SESSION_LIMIT = 500 ;
    SERVER_THREAD_RANGE = 40-100;
    SERVER_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    DB_GATEWAY_THREAD_RANGE = 40-200;
    DB_GATEWAY_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    MAX_EXPANDED_SUBQUERY_PREDICATES = 8192; // default is 8192
    MAX_QUERY_PLAN_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_INFO_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_QUERY_CACHE_ENTRIES = 1024; // default is 1024
    INIT_BLOCK_CACHE_ENTRIES = 20; // default is 20
    CLIENT_MGMT_THREADS_MAX = 5; // default is 5
    # The port number specified with RPC_SERVICE_OR_PORT will NOT be considered if
    # a port number is specified in SERVER_HOSTNAME_OR_IP_ADDRESSES.
    RPC_SERVICE_OR_PORT = 9703; // default is 9703
    # If port is not specified with a host name or IP in the following option, the port
    # number specified at RPC_SERVICE_OR_PORT will be considered.
    # When port number is specified, it will override the one specified with
    # RPC_SERVICE_OR_PORT.
    SERVER_HOSTNAME_OR_IP_ADDRESSES = "ALLNICS"; # Example: "hostname" or "hostname":port
    # or "IP1","IP2":port or
    # "hostname":port,"IP":port2.
    # Note: When this option is active,
    # CLUSTER_PARTICIPANT should be set to NO.
    ENABLE_DB_HINTS = YES; // default is yes
    PREVENT_DIVIDE_BY_ZERO = YES;
    CLUSTER_PARTICIPANT = NO; # If this is set to "YES", comment out
    # SERVER_HOSTNAME_OR_IP_ADDRESSES. No specific NIC support
    # for the cluster participant yet.
    // Following required if CLUSTER_PARTICIPANT = YES
    #REPOSITORY_PUBLISHING_DIRECTORY = "<dirname>";
    #REQUIRE_PUBLISHING_DIRECTORY = YES; // Don't join cluster if directory not accessible
    DISCONNECTED = NO;
    AUTOMATIC_RESTART = YES;
    # Dynamic Library Section
    # The dynamic libraries specified in this section
    # are categorized by the CLI they support.
    [ DB_DYNAMIC_LIBRARY ]
    ODBC200 = nqsdbgatewayodbc;
    ODBC350 = nqsdbgatewayodbc35;
    OCI7 = nqsdbgatewayoci7;
    OCI8 = nqsdbgatewayoci8;
    OCI8i = nqsdbgatewayoci8i;
    OCI10g = nqsdbgatewayoci10g;
    DB2CLI = nqsdbgatewaydb2cli;
    DB2CLI35 = nqsdbgatewaydb2cli35;
    NQSXML = nqsdbgatewayxml;
    XMLA = nqsdbgatewayxmla;
    ESSBASE = nqsdbgatewayessbasecapi;
    # User Log Section
    # The user log NQQuery.log is kept in the server\log directory. It logs
    # activity about queries when enabled for a user. Entries can be
    # viewed using a text editor or the nQLogViewer executable.
    [ USER_LOG ]
    USER_LOG_FILE_SIZE = 10 MB; // default size
    CODE_PAGE = "UTF8"; // ANSI, UTF8, 1252, etc.
    # Usage Tracking Section
    # Collect usage statistics on each logical query submitted to the
    # server.
    [ USAGE_TRACKING ]
    ENABLE = YES;
    //==============================================================================
    // Parameters used for writing data to a flat file (i.e. DIRECT_INSERT = NO).
    STORAGE_DIRECTORY = "<full directory path>";
    CHECKPOINT_INTERVAL_MINUTES = 5;
    FILE_ROLLOVER_INTERVAL_MINUTES = 30;
    CODE_PAGE = "ANSI"; // ANSI, UTF8, 1252, etc.
    //==============================================================================
    DIRECT_INSERT = YES;
    //==============================================================================
    // Parameters used for inserting data into a table (i.e. DIRECT_INSERT = YES).
    PHYSICAL_TABLE_NAME = "OBI Usage Tracking"."Catalog"."dbo"."S_NQ_ACCT" ; // Or "<Database>"."<Schema>"."<Table>" ;
    CONNECTION_POOL = "OBI Usage Tracking"."Usage Tracking Writer Connection Pool>" ;
    BUFFER_SIZE = 10 MB ;
    BUFFER_TIME_LIMIT_SECONDS = 5 ;
    NUM_INSERT_THREADS = 5 ;
    MAX_INSERTS_PER_TRANSACTION = 1 ;
    //==============================================================================
    # Query Optimization Flags
    [ OPTIMIZATION_FLAGS ]
    STRONG_DATETIME_TYPE_CHECKING = ON ;
    # CubeViews Section
    [ CUBE_VIEWS ]
    DISTINCT_COUNT_SUPPORTED = NO ;
    STATISTICAL_FUNCTIONS_SUPPORTED = NO ;
    USE_SCHEMA_NAME = YES ;
    USE_SCHEMA_NAME_FROM_RPD = YES ;
    DEFAULT_SCHEMA_NAME = "ORACLE";
    CUBE_VIEWS_SCHEMA_NAME = "ORACLE";
    LOG_FAILURES = YES ;
    LOG_SUCCESS = NO ;
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\CubeViews.Log";
    # MDX Member Name Cache Section
    # Cache subsystem for mapping between unique name and caption of
    # members for all SAP/BW cubes in the repository.
    [ MDX_MEMBER_CACHE ]
    // The entry to indicate if the feature is enabled or not, by default it is NO since this only applies to SAP/BW cubes
    ENABLE = NO ;
    // The path to the location where cache will be persisted, only applied to a single location,
    // the number at the end indicates the capacity of the storage. When the feature is enabled,
    // administrator needs to replace the "<full directory path>" with a valid path,
    // e.g. DATA_STORAGE_PATH = "C:\OracleBI\server\Data\Temp\Cache" 500 MB ;
    DATA_STORAGE_PATH     =     "C:\OracleBIData\cache" 500 MB;
    // Maximum disk space allowed for each user;
    MAX_SIZE_PER_USER = 100 MB ;
    // Maximum number of members in a level will be able to be persisted to disk
    MAX_MEMBER_PER_LEVEL = 1000 ;
    // Maximum size for each individual cache entry size
    MAX_CACHE_SIZE = 100 MB ;
    # Oracle Dimension Export Section
    [ ORA_DIM_EXPORT ]
    USE_SCHEMA_NAME_FROM_RPD = YES ; # NO
    DEFAULT_SCHEMA_NAME = "ORACLE";
    ORA_DIM_SCHEMA_NAME = "ORACLE";
    LOGGING = ON ; # OFF, DEBUG
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\OraDimExp.Log";
    Help me out, gurus. Thanks in advance.

    Hi,
    Star = OracleBIAnalyticsApps.rpd, DEFAULT
    This line should end with a semicolon:
    Star = OracleBIAnalyticsApps.rpd, DEFAULT;
    Assign points and close the thread if your question is answered.
    Cheers,
    Aravind
