Aggregate table - Create command

Hi all,
I have to create aggregate tables. For that I used the Aggregate Persistence Wizard, which created two files (.sql & .dll).
When I run the .sql file using nqcmd.exe -u XXXX -p YYYY -d ZZZZ -s D:\GL_AGGR.SQL
it throws an error: Invalid create command.
How do I solve this?
Thanks,
Haree

Hi,
See the syntax:
D:\Oracle\Middleware\Oracle_BI1\bifoundation\server\bin>nqcmd.exe -u <DB userid> -p <DB Password>
-d <DSN NAME> -s <SQL script path>
D:\Oracle\Middleware\Oracle_BI1\bifoundation\server\bin>nqcmd.exe -u userA -p
<Password> -d sampe -s D:\test.sql
Example execution steps:
Microsoft Windows [Version 6.1.7600]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\Deva>d:
D:\>cd D:\Oracle\Middleware\Oracle_BI1\bifoundation\server\bin
D:\Oracle\Middleware\Oracle_BI1\bifoundation\server\bin>nqcmd.exe -u userA -p
Password1 -d sampe -s D:\test.sql
Oracle BI ODBC Client
Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
Processed: 1 queries
D:\Oracle\Middleware\Oracle_BI1\bifoundation\server\bin>
Thanks
Deva

Similar Messages

  • Errors when Creating Aggregate Tables in OBIEE 11.1.1.6 within SQL server

    Hi All,
I was trying to create an aggregate table in OBIEE 11.1.1.6 against SQL Server. The SQL was generated successfully as below, but when I use NQCMD to execute it, the following error shows:
    1. SQL for creating Aggregate Table:
    create aggregates
    "ag_Measure"
    for "ASOBI_DTT_Demo"."Measure"("ValidVIPCnt")
    at levels ("ASOBI_DTT_Demo"."日期"."月", "ASOBI_DTT_Demo"."門市品牌"."門市品牌", "ASOBI_DTT_Demo"."門市類別"."門市類別", "ASOBI_DTT_Demo"."內創門市"."內創門市", "ASOBI_DTT_Demo"."門市/倉庫"."門市/倉庫", "ASOBI_DTT_Demo"."門市群組"."門市群組", "ASOBI_DTT_Demo"."門市行政區課"."行政區", "ASOBI_DTT_Demo"."門市地區"."城市")
    using connection pool "ASOBI_DTT_Demo"."ASOBI_System10"
    in "ASOBI_DTT_Demo"."ASOBI"."dbo";
2. Error Message:
"ag_Measure"
for "ASOBI_DTT_Demo"."Measure"("ValidVIPCnt")
at levels ("ASOBI_DTT_Demo"."日期"."月", "ASOBI_DTT_Demo"."門市品牌"."門市品牌", "ASOBI_DTT_Demo"."門市類別"."門市類別", "ASOBI_DTT_Demo"."內創門市"."內創門市", "ASOBI_DTT_Demo"."門市/倉庫"."門市/倉庫", "ASOBI_DTT_Demo"."門市群組"."門市群組", "ASOBI_DTT_Demo"."門市行政區課"."行政區", "ASOBI_DTT_Demo"."門市地區"."城市")
using connection pool "ASOBI_DTT_Demo"."ASOBI_System10"
in "ASOBI_DTT_Demo"."ASOBI"."dbo"
[343][State: 37000] [Microsoft][SQL Server Native Client 10.0][SQL Server]CREATE、DROP or ALTER 陳述式中使用未知的物件類型 'aggregates'。
Statement execute failed
Which means "Unknown object type 'aggregates' used in a CREATE, DROP or ALTER statement" in English.
    Can anyone give me a suggestion for this error?? Many thanks!!!

    Hi Martin,
I guess I was not clear enough. Let me try again.
How does Aggregate Persistence work in OBIEE?
Once you are done choosing options in the Aggregate Persistence Wizard, it generates an intelligent query.
What query is it?
If you look at the query, it is not like any ANSI-standard SQL (I would say DDL) query. As you might have noticed, there are no SQL Server datatypes, lengths, keys, constraints, etc. This query can only be understood by the BI Server.
How do I issue this query?
Since the logical query can only be understood by the BI Server, it has to be issued to the BI Server engine using a tool such as NQCMD in this case.
What does issuing this query using NQCMD do?
The execution steps are as follows, the moment the query is issued via NQCMD:
Aggregate Persistence Wizard generates the query ---> issued to NQCMD ---> NQCMD passes the logical query to the BI Server ---> BI Server parses the query and builds the corresponding physical DDL statements ---> issued to the database ---> if successful, the .RPD is automatically updated with the aggregated sources etc.
    How do I pass the query to BI Server using NQCMD?
    The format of issuing this logical query to BI Server using NQCMD is
    nqcmd -d <Data Source Name> -u <Analytics UserId> -p <Password> -s <command> > output.log
    where
    <Data Source Name> : Is the DSN name which OBIPS uses to talk to Oracle BI Server. Yes, it's the very same DSN that can be found in InstanceConfig.xml
    <Analytics UserID> : Any user in obiee with admin privileges.
    <Password> : Password of the obiee UserId
    <Command> : Logical SQL Command which you already have handy.
    Hope I was good this time..
    Dhar
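Putting the format above into a concrete invocation; this is only a sketch, and the DSN (AnalyticsWeb), user (weblogic), password, and script path are illustrative placeholders, not values from this thread:

```bat
REM Issue the wizard-generated logical SQL to the BI Server and capture the log.
REM Substitute your own DSN, admin user, password and script path.
nqcmd -d AnalyticsWeb -u weblogic -p Password1 -s D:\create_agg.sql > D:\output.log
```

The redirect to output.log matches the format Dhar gives above and is handy for reviewing the BI Server's response after an unattended run.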

  • How to refine data in Aggregate tables  in Oracle BI

    Hello!
How can we refresh the data in aggregate tables (created by a "create aggregates ..." statement) after the data in the corresponding database tables has been updated?
We cannot simply use the "delete aggregates" statement and then "create aggregates" again, because "delete aggregates" eliminates all aggregates in a model, and there are several aggregates in our model that need to be refreshed at different times and periods.
    Thanks
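One avenue worth checking: the Aggregate Persistence documentation also describes a table-qualified form of delete, which would allow dropping a single aggregate and rebuilding just that one. This is a sketch only; the table name is a placeholder, and you should verify the exact syntax against your OBIEE release:

```sql
-- Drops every Aggregate Persistence aggregate in the model:
delete aggregates;

-- Documented table-qualified alternative (verify on your version):
-- list the fully qualified physical aggregate table(s) to drop, so the
-- other aggregates survive. "ORCL".."AG_SALES" is a hypothetical name.
delete aggregates "ORCL".."AG_SALES";
```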

903830 wrote:
Hi folks,
suppose we have table emp01 with 10 records, and we create another table emp02 as
create table emp02 as select * from emp01;
now both tables have identical data.
How do we find out that the "data" in the two tables is identical?
Why wouldn't they be identical? You've just created one table as a copy of the other. They are identical in data terms.
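If you do want to verify it rather than take it on faith, a symmetric-difference query is the usual check; assuming the two tables have the same column structure, it returns no rows exactly when they hold the same data:

```sql
-- Rows present in one table but missing from the other, in both directions.
-- An empty result means emp01 and emp02 contain identical data.
(SELECT * FROM emp01 MINUS SELECT * FROM emp02)
UNION ALL
(SELECT * FROM emp02 MINUS SELECT * FROM emp01);
```

Note that MINUS removes duplicates, so if exact duplicate-row counts matter you would additionally compare GROUP BY counts per row.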

  • Error when creating aggregate table

    Hello,
I am creating an aggregate table using the Aggregate Persistence Wizard. When I try to run the batch file, I receive an error: "Could not connect to the Oracle BI Server instance".
But the Oracle BI Server is running, and I am able to run queries in Answers with no connection issues. (Pls see below)
    Please help.
    Thanks,
    Felicity
    D:\OracleBI\server\Repository>create_agg.bat
    D:\OracleBI\server\Repository>nqcmd -d AnalyticsWeb -u Administrator -p Administ
    rator -s D:\OracleBI\server\Repository\CREATE_AGG.sql
    Oracle BI Server
    Copyright (c) 1997-2009 Oracle Corporation, All rights reserved
create aggregates
"ag_SalesFacts"
for "SupplierSales"."SalesFacts"("Net Weight Shipped","Units Ordered","Units Shipped","Dollars")
at levels ("SupplierSales"."ProductsDim"."Type", "SupplierSales"."CustomersDim"."SalesRep", "SupplierSales"."PeriodsDim"."Month")
using connection pool "ORCL"."SUPPLIER CP"
in "ORCL".."SUPPLIER2"
create aggregates
"ag_SalesFacts"
for "SupplierSales"."SalesFacts"("Net Weight Shipped","Units Ordered","Units Shipped","Dollars")
at levels ("SupplierSales"."ProductsDim"."Type", "SupplierSales"."CustomersDim"."SalesRep", "SupplierSales"."PeriodsDim"."Month")
using connection pool "ORCL"."SUPPLIER CP"
in "ORCL".."SUPPLIER2"
[10058][State: S1000] [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general error has occurred.
[nQSError: 37001] Could not connect to the Oracle BI Server instance.
Statement preparation failed
Processed: 1 queries
Encountered 1 errors

Will this help you solve the issue: http://forums.oracle.com/forums/thread.jspa?messageID=3661598
Check the comments in this blog: http://obiee101.blogspot.com/2008/11/obiee-aggregate-persistence-wizard.html
It deals with user permissions for the database.
Hope this answers your question.
    Cheers,
    kk

  • Creating aggregate tables

    Hi gurus,
I have a problem creating aggregate tables. I am using the Oracle Business Intelligence 11g R1 Cookbook for help, but I think the author forgot some important steps and I can't create them.
I have a fact table and a dimension (fact_table, group_product_dim). I summarized them, and now I have an aggregate fact table and an aggregate dimension (fact_table_agg, group_product_dim_agg):
    summary:
    fact_table
    fact_table_agg
    group_product_dim
    group_product_dim_agg
After that, I created a physical diagram for the base and aggregate combinations in the physical layer.
Now I need to move these tables into the BMM. Which combination do I need to bring into the BMM layer? Both, the basic combination, or the aggregate combination?
I moved the basic combination (fact_table, group_product_dim) and tried to add two logical table sources in the dimension group_product_dim. I create a new logical table source by right-clicking on the group_product_dim logical
table and selecting the New Object | Logical Table Source option. This brings up the new Logical Table Source window.
I select the physical table by clicking the + sign and choose the aggregate dimension table (group_product_dim_agg). Now I do the column mapping on the Column Mapping tab of the same logical table source. Only a few columns are mapped and all the others are unmapped, because the aggregate dimension table doesn't store all the columns of the basic dimension group_product_dim. The last modification I make is on the Content tab of the logical table source.
I need to define the levels at which this logical table source can satisfy queries. This step is very important: making any kind of mistake will cause the BI Server to use the wrong logical table source for the queries.
Here I have a problem, because I can't put a logical level into the "Aggregation content, group by" menu. I can't select anything other than the default value, Column.
    Tnx,
    best regards

    1.) http://www.amazon.com/Oracle-Business-Intelligence-Cookbook-ebook/dp/B00DHCKTXQ/ref=wl_it_dp_o_pC_nS_nC?ie=UTF8&colid=322X8WO18RFEG&coliid=I13QJ38KK8PYTB
2.) I created a NEW business model in the BMM and then dragged fact_table and group_product_dim into it. Do I also need to drag the aggregate objects into the new business model?
     I don't; I drag only fact_table and group_product_dim, and create a new Logical Table Source on group_product_dim, where I try to make the new source.
Can you tell me what I need to do behind the scenes? Do I need to map columns from the dimension to the aggregate dimension?

  • Single date dimension while creating Aggregate tables

    hi Guys ,
I have a single date dimension (D1-D) with date_id as the key and day-level granularity. I have a fact table (F1-D) which holds daily transactions. I created three aggregate tables: F2-M (aggregated to monthly), F3-Q (aggregated to quarterly) and F4-Y (aggregated to yearly). As I said, I have a single date dimension table with date_id as the key, plus the columns month, quarter and year.
My question is: is this single dimension table enough to create the joins and maintain the BMM layer? I have joined date_id to all the facts in the physical layer. In the BMM layer I have one fact table and 4 logical table sources. I have created a dimension hierarchy on the date dimension, created the logical levels year, quarter, month and day, and set their respective level keys. After doing this, I also set the logical levels on the 4 logical table sources in the fact table.
    Here i am getting an error saying :
    WARNINGS:
    BUSINESS MODEL Financial Model:
    [39059] Logical dimension table D04_DIM_DATE has a source D04_DIM_DATE at level D04_DIM_DATE Detail that joins to a higher level fact source F02_FACT_GL_DLY_TRAN_BAL.F03_FACT_GL_PERIOD_TRAN_BAL
Can anyone tell me why I am getting this error?

1) as you suggest, creating mini dimension tables for each level
3) using views with select distinct x,y,z to model 1) but without the need for physical tables
I have thought of these scenarios, but following steps 1 and 3 just creates multiple date dimension tables with their respective grains.
I don't want to create aliases or views on either the data warehouse or the physical layer (Admin tool). I need to come up with a solution using just a single date dimension table, as specified by my colleagues.
2) pushing all the calendar attributes into the aggregate table (e.g. your year aggregate has all the year fields, your month aggregate has all the year fields and all the month fields, etc.). Then in the BMM use the aggregate table as a logical table source for the date logical table.
What do you mean by "pushing all the calendar attributes into the aggregate table"? If possible, can you elaborate a bit so I can try to get this done that way?
One more thing. What they were saying is: join the single date dimension to all 4 fact tables in the physical layer. Then drag all the tables into the BMM layer and create a dimension hierarchy for the date dimension with levels Year --> Quarter --> Month --> Day. For the fact side: a single fact table with multiple logical table sources. Then set the levels on the respective logical table sources in the Content tab, e.g. monthly aggregate table --> Month level, quarterly aggregate table --> Quarter level.
    When i do this I am getting the Warning:
    WARNINGS:
    BUSINESS MODEL Financial Model:
    [39059] Logical dimension table D04_DIM_DATE has a source D04_DIM_DATE at level D04_DIM_DATE Detail that joins to a higher level fact source F02_FACT_GL_DLY_TRAN_BAL.F03_FACT_GL_PERIOD_TRAN_BAL

  • Create table space command with no specification of data file path.

I am using the following command for creating a tablespace in Oracle 11g:
CREATE TABLESPACE testTbSpace DATAFILE 'dataFileName.dbf' SIZE 50M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE 32767M NOLOGGING;
But it is creating the datafile dataFileName.dbf at the following path:
$ORACLE_HOME/dbs
I don't want to create the datafile at this path, and I also don't want to specify the datafile path in the 'create tablespace' command.
Is there any parameter I can set so that the above command creates dataFileName.dbf at a path of my choosing?
    Edited by: user8680179 on May 15, 2012 1:54 AM

    user8680179 wrote:
    i issued following commands from 'SYS' user;
    1. show parameter db_create_file_dest;
    NAME TYPE VALUE
    db_create_file_dest string
    2.alter system set db_create_file_dest='dataFilePath';
    System altered.
    3.show parameter db_create_file_dest;
    NAME TYPE VALUE
    db_create_file_dest string dataFilePath
    4.CREATE TABLESPACE testTbSpace2 DATAFILE 'test1.dbf' SIZE 50M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE 32767M NOLOGGING;
    Tablespace created.
But still my test1.dbf file is being created at the old path ($ORACLE_HOME/dbs)
Is 'dataFilePath' a real location? I don't think so! Give a proper path like "d:\oracle\" and retry.
    Aman....
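Beyond fixing the path value, there is a second catch in step 4 above: when the DATAFILE clause names a file ('test1.dbf'), Oracle ignores db_create_file_dest for that file. Omitting the file name lets Oracle-Managed Files place it under the parameter's directory. A sketch; the directory is an example and must already exist and be writable by the oracle user:

```sql
-- Point OMF at a real, writable directory (example value):
ALTER SYSTEM SET db_create_file_dest = '/u01/app/oracle/oradata';

-- Omit the file name so the datafile is created under db_create_file_dest:
CREATE TABLESPACE testTbSpace2
  DATAFILE SIZE 50M AUTOEXTEND ON NEXT 1M MAXSIZE 32767M
  NOLOGGING;
```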

When we create aggregates, are tables created for the aggregates?

    Hi all,
When we create aggregates, are tables generated for them? If so, what are the tables created, and what are the naming conventions of those tables (/BIC/???)?
    regds
    hari

Yes - when you create an aggregate, it creates E and F tables. The aggregate tables use a BW-assigned number.
So InfoCube 0IC_C03 has an F fact table /BI0/F0IC_C03 and an E fact table /BI0/E0IC_C03, but the aggregate fact tables are like /BIC/F100032 and /BIC/E100032. If you create more than one aggregate on this cube, each aggregate is assigned whatever the next available number is, so you could have aggregates 100032, 100036, 100041.
You also have dimension tables for aggregates, which look like /BIC/D1000321. If a dimension for an aggregate is identical to the base cube's dimension, BW is smart enough to recognize that, and it will use the base cube's dimension table instead of creating an identical one for the aggregate.

  • Finding out the 'create' command that i gave to make a table?

Is there any way to find out the 'create' command that I gave to make a table?
I have already run the create and it made the table, but is there any way of knowing what the syntax was after running it?

It's not easy to recover the actual (entire) command that created the table. You might be able to use
DESC table_name to see the structure. However, if you need to know whether any fields have constraints or default values,
what the keys are, etc., you would have to write a script that builds the CREATE TABLE statement.
    The system tables to look at are:
    USER_TABLES
    USER_TAB_COLUMNS
    USER_CONSTRAINTS
    USER_CONS_COLUMNS
    USER_INDEXES
    USER_IND_COLUMNS
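On reasonably recent Oracle releases there is also a direct route: DBMS_METADATA can reconstruct the full DDL, constraints and storage clauses included. EMP01 below is an example table name, not one from this thread:

```sql
-- Returns the complete CREATE TABLE statement as a CLOB
-- (in SQL*Plus, run SET LONG 100000 first so the output is not truncated):
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP01') FROM dual;
```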

  • Error in usage of aggregate tables

Hi friends,
I have created the script for the aggregate table and it's good,
but when I want to use it, it shows an error.
    My command in command prompt:
    C:\MW_HOME2\Oracle_BI1\bifoundation\server\bin>nqcmd.exe -u Administrator -p Admin123 -d coreapplication_OH1341941094 -s c:\test.sql
    and Error
    Argument error near: ûd
Command: nqcmd.exe - a command line client which can issue SQL statements against either the Oracle BI Server or a variety of ODBC-compliant backend databases.
    need help-
    @zia

    Hi Zia,
I suspect the BI Server is not able to understand the command right at the "-d" parameter. From your command, I notice that it rolls over to a new line just at the -d parameter.
I think there is no space between the -d and your <DSN NAME> in the command. Just try giving a space between them and see if it works.
    Hope this helps.
    Thank you,
    Dhar
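Another cause worth checking, given the `û` in "Argument error near: ûd": that character often shows up when the hyphen before a switch was pasted as a non-ASCII dash (e.g. copied from a web page or Word document), which nqcmd then fails to parse. Retyping every switch by hand with plain ASCII hyphens rules this out; the credentials below are the ones from the original post:

```bat
REM Retype each "-" by hand rather than pasting it:
nqcmd.exe -u Administrator -p Admin123 -d coreapplication_OH1341941094 -s c:\test.sql
```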

  • Has anyone managed to configure aggregate tables correctly?

    Hi all,
    Although OBIEE supports the use of aggregate tables in fact tables as an additional source (for performance improvements), has anyone actually correctly configured them for metrics that use AVGs?
We are seeing that if you create an aggregate table which already calculates averages, then when those values are used in further aggregations at higher levels they are incorrect, because they are based on the pre-calculated averages of the aggregate table and not on the average at the lowest grain of the fact. This is not a problem for SUM, MIN or MAX metrics, but it is a problem for AVG metrics.
This is partly logical, but it is a serious limitation on the use of aggregate tables. These tables are correct for all the dimensions included in the aggregated table, but not when you remove one of those dimensions in an OBIEE request.
    The only workaround we see is to redefine the metric as a metric with a logical definition:
SUM(column that contains the value we want an average of) / SUM(count of the rows in question)
    At the lowest detail this has no impact, but on the aggregated levels above, this would assure a correctly calculated average.
    Any advice? Opinions? Prior experiences? anything?
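A sketch of the workaround described above, in repository terms. All names are illustrative and this is logical-column pseudocode, not executable SQL:

```text
-- Two base metrics, both plain SUMs, so they stay additive at every level:
"Sales Amount Sum" = SUM("Fact"."Sales Amount")   -- aggregation rule: SUM
"Sales Row Count"  = SUM("Fact"."Row Count")      -- "Row Count" holds 1 per base row

-- Derived metric built on the other two, with no aggregation rule of its own;
-- the division happens after both SUMs, so the average is correct at any level
-- and for any subset of dimensions:
"Average Sales" = "Sales Amount Sum" / "Sales Row Count"
```

In the aggregate table, the two base columns are stored as pre-summed values (total amount and row count), which re-aggregate correctly, unlike a stored average.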

    Thanks Wildnight.
Yes, it seems logical that with SUM everything would work correctly; however, I think that is only one type of the common metrics in use in any data warehouse. If anyone has any other ideas, let me know.
    There is also the possibility to play around with the aggregate configuration based on dimensiones (maybe introducing some clever formula in the 'Others' dimension option), or maybe defining AVG in the actual metric definition in the column mappings, and setting the aggregate rule to None.

  • Aggregate tables have many partitions per request

    We are having some performance issues dealing with aggregate tables and
    Db partitions. We are on BW3.5 Sp15 and use Oracle DB 9.2.06. After
    some analysis, we can see that for many of our aggregates, there are
sometimes as many as a hundred partitions in the aggregate's fact table.
    If we look at infocube itself, there are only a few requests (for
    example, 10). However, we do often delete and reload requests
    frequently. We understood that there should only be one partition per
    request in the aggregate (infocube is NOT set up for partitioning by
    other than request).
We suspect the high number of partitions is causing some performance
issues. But we don't understand why they are being created.
    I have even tried deleting the aggregate (all aggregate F tables and
    partitions were dropped) and reloading, and we still see many many more
    partitions than requests. (we also notice that many of the partitions
    have a very low record count - many less than 10 records in partition).
    We'd like to understand what is causing this. Could line item
    dimensions or high cardinality play a role?
    On a related topic-
We also have seen an awful lot of empty partitions in both the infocube
fact table and the aggregate fact table. I understand this is probably
    caused by the frequent deletion and reload of requests, but I am
    surprised that the system does not do a better job of cleaning up these
    empty partitions automatically. (We are aware of program
    SAP_DROP_EMPTY_FPARTITIONS).
    I am including some files which show these issues via screen shots and
    partition displays to help illustrate the issue.
    Any help would be appreciated.
    Brad Daniels
    302-275-1980
    215-592-2219

    Ideally the aggregates should get compressed by themselves - there could be some change runs that have affected the compression.
    Check the following :
    1. See if compressing the cube and rolling up the aggregates will merge the partitions.
    2. What is the delta mode for the aggregates ( are you loading deltas for aggregates or full loads ) ?
    3. Aggregates are partitioned according to the infocube and since you are partitioning according to the requests - the same is being done on the aggregates.
    Select another partitioning characteristic if possible. - because it is ideally recommended that request should not be used for partitioning.
    Arun
    Assign points if it helps..

  • How to configure an existing aggregate table?

Generally I have created aggregated fact tables using the wizard, but now I have to use an existing aggregate fact table. Could someone help me configure it?

If you want to use an existing aggregate fact table, then import the table and create the joins in the physical layer. Add this aggregate table to the LTS of your fact in the BMM and set the level under the Content tab accordingly.
    Thanks
    Sandeep

  • Distinct count makes BI server to choose wrong aggregate table

    Hi experts,
    I have 4 dimension tables time, store, product and client and one fact table sales.
    The sources from table sales are from 3 aggregate tables:
    agg_sales_1: aggregate sales for one product of one client in one store per day
    agg_sales_2: aggregate sales for one product in one store per day (all clients)
    agg_sales_3: aggregate sales for one store per day (all products , all clients)
You can see that agg_sales_1 has a lot of rows, agg_sales_2 has few rows and agg_sales_3 has very few rows (one row per store and day)...
What I need is: for all stores, to see the average sales per month (I don't care about products or clients - all of them)
so I create:
one fact logical column which has sum(sales), and at the time level I set it to month: total_sales_per_month
one fact logical column which has count(distinct(date)), and at the time level I set it to month - which gives me how many days with sales I have in one month: '#_of_days_with_sales_in_month'
and I want to have average_sales_per_month = total_sales_per_month / '#_of_days_with_sales_in_month'.
So far so good:
if in Presentation in my report I put day and total_sales_per_month, then the server chooses agg_sales_3 (which is the best solution)
if in Presentation in my report I put day and total_sales_per_month and '#_of_days_with_sales_in_month', or just average_sales_per_month, then the server chooses agg_sales_1 (which is the worst solution).
    The question is why?
    another clue:
if I change the aggregate function from count(distinct()) to count() (which is no good for me), then the server again chooses the good table, agg_sales_3.
So I'm thinking the count(distinct()) function is causing this bad behavior...
    Any suggestions pls...
    And Happy Hollydays
    Thanks
    Nicolae Ancuta

One of the dimension tables has joins to other fact tables, and the query is routed through unwanted dimension and fact tables. This is happening because of aggregate navigation in the fact sources, with the Content tab set to the detailed level. I'm trying to use aggregate functions...

  • Aggregate tables in Administration tool

    Hello!
I have a problem when I want to create aggregate tables.
I create the query with Aggregate Persistence, but when I run it in Job Manager, it keeps running and never ends.
    Can you help me please?!
    Regards, Karin

    11.5
    Edited by: 914091 on Mar 26, 2012 5:30 AM
