Errors in Aggregator

We're using the PAR tool for our statistics, but the aggregator is now failing after running fine for a few days. I see the following in the error log:
query = INSERT INTO WCR_AGGINFO(TIMESTAMPHOUR,AGGREGATIONLEVEL,LASTAGGLEVEL,NUMAGGREGATES) VALUES (0,'h',' ',0)
Warning: Update of AggInfo table failed!
caused by:
com.sap.sql.DuplicateKeyException: [SAP_NWMss][SQLServer JDBC Driver][SQLServer] Violation of PRIMARY KEY constraint 'PK__WCR_AGGINFO__4E1E9780'. Cannot insert duplicate key in object 'WCR_AGGINFO'.
Any help would be appreciated. Thanks!

Hi,
This happens because of a duplicate entry in the database table. You should manually delete the offending entry from the WCR_AGGINFO table.
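The failure and the manual fix can be sketched with sqlite3 standing in for SQL Server. The column names follow the error text above, but the choice of (TIMESTAMPHOUR, AGGREGATIONLEVEL) as the primary key is an assumption, not the documented schema:

```python
import sqlite3

# Sketch only: sqlite3 stands in for SQL Server, and the composite
# primary key below is an assumption, not the documented schema.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE WCR_AGGINFO(
    TIMESTAMPHOUR INTEGER, AGGREGATIONLEVEL TEXT,
    LASTAGGLEVEL TEXT, NUMAGGREGATES INTEGER,
    PRIMARY KEY (TIMESTAMPHOUR, AGGREGATIONLEVEL))""")
row = (0, 'h', ' ', 0)
con.execute("INSERT INTO WCR_AGGINFO VALUES (?,?,?,?)", row)
try:
    # The aggregator retries the same INSERT and hits the constraint:
    con.execute("INSERT INTO WCR_AGGINFO VALUES (?,?,?,?)", row)
except sqlite3.IntegrityError as e:
    print("duplicate key:", e)

# Manually deleting the stale entry lets the next INSERT succeed:
con.execute("DELETE FROM WCR_AGGINFO WHERE TIMESTAMPHOUR=? AND AGGREGATIONLEVEL=?",
            (0, 'h'))
con.execute("INSERT INTO WCR_AGGINFO VALUES (?,?,?,?)", row)
```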
regards,
kaushal

Similar Messages

  • Internal error: Invalid Aggregation Info No entry for InfoObject: 0FI_DSBIT

    Hi,
    When I create a transformation rule between 0FI_AR_4 and the ODS 0FIAR_C04, I map only the fields I need. Leaving the rest unmapped leads to errors like:
    Internal error: Invalid Aggregation Info No entry for InfoObject: 0FI_DSBITEM  - contact SAP
    Once I map the field, the error no longer appears.
    Is it mandatory to map all the fields?

    Harish,
    Not a problem. Check which fields you don't want to map, and set those to "No Transformation" in the rule details.
    Note: Whatever is possible in 3.x for UR (update rules) or TR (transfer rules) is also possible in BI 7.0 transformations (except the return table, etc.).
    All the best.
    Regards,
    Nagesh Ganisetti.

  • Numeric overflow error in aggregation level formula

    The formula for Revenue/Quantity is giving me the following error:
    Error: [314]: numeric overflow: search table error:  [6944] AttributeEngine: overflow in numeric calculation;Error executing physical plan: exception 6944: AttributeEngine/Parallel/Aggregation.cpp:573 AttributeEngine: overflow in numeric calculation; $function$=read; $message$=unable to read measures RAOL01:_SYS_CE__popid_24_531C272BF80A349FE10000007F000002_558972en TEST_Revenue fixed16.12 ,in executor::Executor in cube: RAOL01:_SYS_CE_$REQUEST$_popid_24_531C272BF80A349FE10000007F000002_558973: calcEngine search on olapIndex failed.
    I am aware that this error occurs if Quantity is zero, but I have already handled that case and Quantity is never zero.
    Both measures are Decimal; any suggestions on how to handle this?

    My first guess is that you have reached the upper limit of the data type on the concerned column, and the AttributeEngine can no longer handle the numeric overflow, hence the error message.
    Try converting to a larger data type such as double; you can also work around it with data type conversion functions like TO_DOUBLE before the aggregation function.
    If you are using something like sum("COL1"), try sum(TO_DOUBLE("COL1")).
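    A rough model of this fixed-precision overflow, using Python's decimal module with a deliberately tight exponent limit (the context limits are illustrative, not HANA's actual fixed16.12 bounds):

```python
from decimal import Decimal, Context, Overflow

# A constrained context overflows once a product exceeds Emax,
# loosely mimicking a fixed-width column (limits are illustrative).
fixed = Context(prec=16, Emax=3)
try:
    fixed.multiply(Decimal("9999"), Decimal("9999"))
except Overflow:
    print("overflow in numeric calculation")

# Widening first, as TO_DOUBLE does before the aggregation, avoids it:
print(float("9999") * float("9999"))
```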

  • Error When Executing DTP

    Internal error: Invalid Aggregation Info No entry for InfoObject: 0CURTYPE  - contact SAP
    Message no. RSTRAN000.
    I am getting this error when loading data to a DSO using a Data Transfer Process. The transformation is also in active mode. I don't know why I am getting this error. Can anybody help me with this one?
    Regards,
    Mahesh.

    Hi Anil,
    With respect to note 1020813, we have already installed BI Patch 13 (SAPKW70013). Even after that I am getting this error. Can you please guide me if you know of any solution for this one?
    Regards,
    Mahesh.

  • Custom Folder SQL Error

    Hi,
    I am constantly getting the following error trying to create a custom folder in Disc Admin 9.02.00.00
    "The custom SQL entered contains enhanced aggregation and is therefore invalid"
    The first time I tried to create the folder the SQL did have aggregation ie:
    select dim3.group_id, dim3.description, sum(fact2.revenue)
    from dim3, fact2
    where dim3.group_id = fact2.group_id
    group by dim3.group_id, dim3.description
    Q1. Can you aggregate like this in a custom folder?
    But I still get the error without aggregation ie:
    select dim3.group_id, dim3.description, fact2.revenue
    from dim3, fact2
    where dim3.group_id = fact2.group_id
    and even with:
    select dim3.group_id from dim3
    I have restarted admin a couple of times but that has not helped.
    Q2. Why am I still getting the above error message with the simpler SQL statements?
    Thanks Phil

    PS: I also tried creating the folder in a new business area with the SQL:
    select dim3.group_id from dim3
    Phil

  • MORE THAN 1 OF THE DETAIL FOLDERS USES NON-AGGREGATED ITEMS

    What does this join configuration message mean? I am a user not an administrator but I notice that sometimes I get this message when I am trying to create reports in Discoverer.
    Currently I have a report that uses a control table and a detail table. I need to link to another table (not a detail table). The join did not exist, but I was able to take a transaction and join to the table with SQL. I asked for this join to be created in Discoverer so I could link to the other table in the report, and it was created, but when I go to pull in the field I need I get this message:
    Join configuration cannot be resolved. Reason: more than 1 of the detail folders uses non-aggregated items.
    Can this be resolved and if so how? We seem to get this message fairly often.

    I am always leery of disabling fan traps, as it can cause errors with aggregation (and by errors, I mean including values multiple times). Make sure you check the documentation (the Disco user guide) for an explanation of fan traps and the types of problems that can occur, and then check the results of your queries (or have the data owners do it) to make sure the results are valid.
    I use the fan trap warning as an indication that there is something amiss in the BA, and then try to find the problem there. Sometimes the solution is changing a join (you can check the box that states there is a 1 to 1 relationship and that will also eliminate the fan traps), other times it is creating a view or custom/complex folder to specify how the data is to be joined. Rarely is the only solution to disable fan traps (and while I am on my soapbox, I'll add that I use it as a last resort).
    Sometimes removing fan traps is the correct solution, but you need to test your reports to make certain the results are valid.
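    The double counting behind a fan trap is easy to reproduce. Here is a small sqlite3 sketch with made-up master/detail tables (names are illustrative, not from the original post):

```python
import sqlite3

# Fan trap sketch: joining one master row to multiple detail rows
# multiplies the master's value, so SUM() double-counts revenue.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders(order_id INTEGER, revenue INTEGER);
    CREATE TABLE shipments(order_id INTEGER, qty INTEGER);
    INSERT INTO orders VALUES (1, 100);
    INSERT INTO shipments VALUES (1, 5), (1, 7);  -- two shipments for order 1
""")

# The fanned-out join repeats the single 100-revenue row once per shipment:
bad = con.execute("""
    SELECT SUM(o.revenue) FROM orders o
    JOIN shipments s ON s.order_id = o.order_id
""").fetchone()[0]
print(bad)   # 200, not the true 100

# Aggregating each detail folder separately before joining avoids it:
good = con.execute("""
    SELECT SUM(o.revenue) FROM orders o
    JOIN (SELECT order_id, SUM(qty) AS qty
          FROM shipments GROUP BY order_id) s
      ON s.order_id = o.order_id
""").fetchone()[0]
print(good)  # 100
```

    This is the same reason a custom/complex folder or view that pre-aggregates the detail can resolve the warning without disabling fan-trap detection.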

  • Before aggregation in NW2004s

    Hi All,
    We moved to NW2004s but we are still using BEx 3.5.
    I have created a simple calculated key figure (say Amt) at the ODS level (Amt = qty * price).
    The Amt numbers are coming out incorrectly. I used "Before aggregation" and it throws the error "Before aggregation is not allowed for Amt".
    Can someone please explain how I can achieve this?
    Thanks in advance.
    Regards,
    Raj

    refer:
    Before Aggregation in NW2004s(BI7)
    /message/3943188#3943188 [original link is broken]

  • Invalid Aggregation Info No entry for InfoObject: 0CRM_NO_IO- contact SAP

    Hello gurus,
    When I try to load data into a DSO, I'm getting errors like:
    Internal error: Invalid Aggregation Info No entry for InfoObject: 0CRM_NO_IO  - contact SAP
    I'm not using this InfoObject. Moreover, I'm getting this error for every InfoObject that I didn't map in the transformations. Any ideas?
    Thanks
    RKR

    Restart the transaction.

  • EYE 007 Aggregated Value for Analysis Authorisations

    Hi there,
    I'm attempting to unit test a new report in our development environment via RSECADMIN. Having created the role and assigned it to the test user, I get an error that aggregated values for particular characteristics are empty. However, I've already added these to an analysis authorisation and used it for another report, where it finds the characteristics.
    I'm stumped as to why this report doesn't find the same values. I've generated the role and run a user master compare, but this still fails. Any help is appreciated.
    Thanks.

    1. Take the InfoProvider on which you have created your query and find which characteristics are Authorization Relevant for that MultiProvider/InfoProvider.
    2. Make sure all these characteristics are added to the analysis authorizations assigned to the user: detailed field values for the one your report is about, an aggregated value for the other one, and all the relevant 0TCA* content as well.
    The report should then work. However, in your case it seems you are assigning the characteristics using separate analysis authorizations; in that case make sure the concerned InfoProvider is mentioned in each analysis authorization under 0TCAIPROV so the analysis authorizations combine.

  • OBIEE Query not hitting the other fact table

    Hi All,
    I am trying to create a report based on two fact columns and one dimension. The dimension is joined to both fact tables. When I create a report using one column from the dimension and one column from each fact, I get two scenarios...
    For example let say..
    D1 is dimension and F1 and F2 are two fact tables.
    First I used a column with an aggregation rule from one fact, and a column from the other fact which also has an aggregation rule.
    That is report like...
    D1.c1,Agg(F1.c2),Agg(F2.c3)
    When I run the report I get data from the dimension and only the first fact table. When I check the query, it contains only one fact table and does not hit the other one.
    In the second scenario I used one column from the dimension, one column from the first fact which has an aggregation rule, and one column from the second fact which does not have an aggregation rule.
    like...
    D1.c1,Agg(F1.c2),F2.c3
    When I run the report I get an error:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: F1 -C2 . Please fix the metadata consistency warnings. (HY000).
    But there are no warnings in the RPD.
    I am surprised that it is not taking both fact columns even though the dimension is a conformed dimension joined to both fact tables.
    As I have just started to learn OBIEE, I find it a bit difficult to see how OBIEE selects the tables and forms the physical query.
    Waiting for your help.
    Regards
    Suhail

    Aadi-Wasi,
    Rule of thumb: the OBIEE BMM layer must contain a simple star schema.
    Does your BMM layer satisfy that condition? I suspect not.
    My guess is that your BMM layer contains 3 logical tables, i.e. a dimension and 2 logical facts, which is not a simple star.
    To make it a simple star, collapse the 2 logical fact tables into 1 logical fact table. Since, as you mentioned, the dimension is joined to both facts, collapsing them will produce the result for your first query.
    Regarding your second error:
    All aggregations must be contained within fact tables, with few exceptions.
    Let us know if this resolved your issue.
    mark posts promptly...
    J
    -bifacts
    http://www.obinotes.com

  • Can we add an info object and its nav. attr in an aggregate together?

    Hello
    I am getting an error when I try to add 0MATERIAL and 0MATERIAL_0PROD_HIER to an aggregate at the same time. In the cube, both 0MATERIAL and its attribute 0MATERIAL_0PROD_HIER sit in the same dimension, so when I create an aggregate and drag these characteristics in, they land in the same dimension there as well.
    1st scenario:
    1. I  add 0MATERIAL in the aggregate.
    2. When I try to add 0MATERIAL_0PROD_HIER to the aggregate, I get the error message 'Aggregation levels cannot be defined for 0MATERIAL_0PROD_HIER'.
    3. And it does not let me add this nav attr at all.
    2nd scenario:
    1. I first add 0MATERIAL_0PROD_HIER in the aggregate.
    2. I try to add 0MATERIAL now, and I get the same error as above, this time for 0MATERIAL.
    3. But it lets me add 0MATERIAL anyway.
    4. Now, when I go to its context menu (right click) and select aggregation as 'All Char Values' for 0MATERIAL, the error goes away.
    5. But after I activate and fill the aggregate and look at it again, I only see 0MATERIAL; the nav attr which I added first has disappeared.
    Why do the above two things happen? And if I have an InfoObject and its nav attr in the rows of a query, but I am not able to add both of them to the aggregate, how can I improve the performance of the query?
    Please advise.

    Hi Srinivas
    In your scenario it is sufficient to add just 0MATERIAL when creating the aggregate; you need not add the navigational attribute to the aggregate definition. The procedure you followed the first time is right, i.e. drag and drop 0MATERIAL first.
    The system does not allow you to add the nav attribute because it is not required. I say this because a navigational attribute added to your InfoProvider is like (in SQL terms) joining your InfoProvider with your material master table. The navigational attribute's value is not physically stored in your cube; it is brought in at execution time, both in ListCube and in your query output.
    Similarly, when you execute your query with 0MATERIAL in your aggregate, even if you add a filter or variable restriction on 0MATERIAL_0PROD_HIER, your query WILL use the aggregate since 0MATERIAL is referenced.
    You can even check this in RSRT. Give your query name, and generate report first. Then click on Execute + Debug, Choose Display SQL/BIA Query and enter your selection and choose enter. You will see the SQL of your query and in the SQL select statement you will see that your aggregate table is chosen instead of your infocube. This aggregate table only will be read during your query execution in Bex as well.
    I hope this helps.
    Thanks.

  • Slow query using view on cube

    I have created a cube using Analytic workspace manager (oracle 10G R2) which is to be used (via a view) in OBIEE.
    One of the dimensions (event_dim) is extremely sparse (it has been marked as sparse and is at the appropriate level in the cube's dimensional hierarchy).
    In general, when I query the cube (via the view) at high levels of aggregation, the performance is good, but when I query the cube at the lowest level of granularity for the event_dim dimension, the performance is extremely poor (more than a minute to return).
    The problem seems to be that the view is returning data for all possible rows in the cube even if most of the measures are NA (i.e null since there is no data present).
    For example if I run a query against the cube with no filter on the measures I get around 20,000 rows returned - obviously this takes a while. If I then put a 'my_measure > 0' clause on the query I get 2 rows back (which is correct). However this still takes more than a minute to return - I assume that this is because the query is having to process the 20,000 rows to find the two that actually have data.
    Is there any way to control this - I never need to see the NA data so would like to be able to disable this in either the cube or the view - and hence improve performance.
    Note: I cannot use the compression option since I need to be able to override the default aggregation plan for certain dimension/measure combinations and it appears that compression and overriding the plan are incompatible (AWM gives the error "Default Aggregation Plan for Cube is required when creating cube with the Compression option").
    Thanks,
    Chris

    I have seen this in some examples/mails; I haven't tried it out myself. :)
    Try using an OLAP_CONDITION filter with the appropriate entry-point option (1) on the OLAP_TABLE based query to restrict the query output to values with meas > 0. This condition can be added as part of a specific query or as part of the OLAP_TABLE view definition (applicable to all queries). Hopefully this way there is no need to customize the limitmap variable to suit the cube implementation's internal details like compression, partitioning, presummarization, global composite, etc.
    NOTE1: The olap_condition entry point 1 pushes the measure-based dimension filter within the cube before fetching results, which should help speed up retrieval. This works well when the restriction applies across 1 dimension (Time or Product alone); only 1 olap_condition is sufficient.
    SELECT ...
    FROM <olap_table_based_view>
    where ...
    and olap_condition(olap_calc, ' limit time KEEP sales_sales > 0', 1)=1
    --and olap_condition(olap_calc, ' limit time KEEP any(sales_sales, product) > 0', 1)=1
    NOTE2:
    For cases where both time and product (and more dimensions) need to be restricted then we can use 2 olap_conditions to restrict data to set of time and products where some data exists but you could still end up with a specific row (cross combination of product and time) with zero value. You may want to bolster the pre-fetch filtering by olap_condition via a regular sql filter referencing the external measure column (and sales_view_col >0) which is applied on to the results after it is fetched from the cube.
    E.g:
    SELECT ...
    FROM <olap_table_based_view>
    where ...
    and olap_condition(olap_calc, ' limit product KEEP any(sales_sales, time) > 0', 1)=1
    and olap_condition(olap_calc, ' limit time KEEP any(sales_sales, product) > 0', 1)=1
    and sales_view_col >0
    HTH
    Shankar

  • How to have amounts and amounts splitted at the same time

    Hi everybody, I need to show data as in the example (4 fields):
    Product Category           Number of tickets by category               Product               Number of tickets by product
    Category 1                              100                            product A                             50
                                                                           product B                             50
    Category 2                               10                            product C                              8
                                                                           product D                              2

    As you can see, I need the total of tickets for each product category, and then the split by product.
    In order to try to have this result, I take the following fields from the business area:
    Product Category
    count(tickets_id)
    Product
    but I'm not able to get the count of tickets by category and the count of tickets by product at the same time inside the report, because as soon as I split by product I lose the total of tickets by category.
    How can I have these two amounts together: the first being the total of tickets by category, and the second split by product?
    I tried to do a calculation count(count(tickets_id))....but it's not possible...I have the error "Nested Aggregated functions are not allowed"..
    Any help will be appreciated
    Thanks in advance
    Alex

    Tamir, thank you so much !!!
    It works perfectly! But just one last question: how can I group these records?
    Let me explain better. The situation after using the analytic functions (as you mentioned) is the following:
       Product_Category      Count                  Product             Count      
         Category 1          100                   Product A              40
         Category 1          100                   Product A              40
         Category 1          100                   Product A              40
         Category 1          100                   Product B              60
         Category 2          10                    Product C              8
         Category 2          10                    Product D              2
      Category 2          10                    Product D              2

    If I use the SORT (order by), I have empty records (blank) as below:
       Product_Category      Count                  Product             Count      
         Category 1          100                   Product A              40
         blank               blank                 blank                  blank
         blank               blank                 blank                  blank
         Category 1          100                   Product B              60
         Category 2          10                    Product C              8
         Category 2          10                    Product D              2
      blank               blank                 blank                  blank

    ...but I would like to have the data grouped as:
       Product_Category      Count                  Product             Count      
         Category 1          100                   Product A              40
                                                   Product B              60
         Category 2          10                    Product C              8
      Category 2          10                    Product D              2

    So, my question is: how can I group the data using Discoverer?
    Thanks again
    Alex
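    For reference, the analytic-function idea can be sketched in plain SQL via sqlite3: a window COUNT partitioned by category alongside a per-product GROUP BY yields both totals in one row set, already grouped. Table and data are illustrative (Discoverer generates its own SQL, and sqlite3 needs SQLite 3.25+ for window functions):

```python
import sqlite3

# Sketch: per-product counts via GROUP BY, per-category totals via a
# window function over the grouped rows. Names/data are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tickets(category TEXT, product TEXT, ticket_id INTEGER);
    INSERT INTO tickets VALUES
      ('Category 1','product A',1),('Category 1','product A',2),
      ('Category 1','product B',3),
      ('Category 2','product C',4),('Category 2','product D',5);
""")
rows = con.execute("""
    SELECT category,
           SUM(COUNT(*)) OVER (PARTITION BY category) AS tickets_by_category,
           product,
           COUNT(*) AS tickets_by_product
    FROM tickets
    GROUP BY category, product
    ORDER BY category, product
""").fetchall()
for r in rows:
    print(r)
```

    Because the GROUP BY already collapses duplicates, each (category, product) pair appears once, which is the grouping asked for above.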

  • Job Termination in source system-Dataload failure

    Hello Forum,
    We are on BW 3.1 and our source is SAP R/3 Enterprise.
    We have been facing a recurring problem for a dataload, that is JOB TERMINATION IN SOURCE SYSTEM.
    Details of the load:
    Target:Master Data
    Update Mode: Full load
    Processing: PSA and InfoObject table in series
    Selections: None
    Records Check: Terminate if errors occur; Aggregation permitted
    Character Check: Switched off
    This has been delaying the loads for a long long time.
    Could you please suggest us the ways to fix this problem?
    Thanks in advance,
    Raj

  • Error : Reading from Aggregation Level not permitted

    Hello Gurus,
          Could somebody please give some help or advice regarding this?
    I have a MultiProvider on a regular cube and an aggregation level. For some reason the MultiProvider gives me the following error message when I try to display data using ListCube:
    Reading from Aggregation Level is not permitted
    Message no. RSPLS801
    Also, the query on the MultiProvider does not display data for any of the key figures in the aggregation level, but when I create a query on the aggregation level itself it is fine.
    Any suggestions?
    Thanks.
    Swaroop.
    Edited by: Swaroop Chandra on Dec 10, 2009 7:29 PM

    Hi,
    Transaction LISTCUBE does not support all InfoProviders; aggregation levels, for example, are not supported. LISTCUBE is a 'low level' tool that reads data from the BW persistence layer, e.g. InfoCubes. Since aggregation levels always read transaction data via the so-called planning buffer, and the planning buffer is technically a special OLAP query, LISTCUBE does not support aggregation levels.
    Regards,
    Gregor
