Granular level of a table

Hi friends, as part of a process I have been assigned an issue. Below is the requirement given by my client:
We need to configure the Budget Table in the RPD Side.
The granularity of this table is at: Customer Site Use Id -- Sales Rep Id -- Ent Period Id
Include the Budget amount, Budget YTD, Budget QTD in the Sales Order Lines/Invoice Lines
The Aggregation content should be at the Period Level.
Please note the difference: the Sales Order Lines and Invoice Lines are at the Day level, whereas the Budget is at the Period level.
We will have to set the aggregation content accordingly.
I have a budget table in the database and I need to import it into the RPD, define the joins, copy the BUDGET_Amount column, create YTD and QTD measures for it, and drop them into the Sales Order Lines and Invoice Lines fact tables.
This is what I understand.
Can someone please explain what the granular level is, and also help with the above issue?
Thank you

Hi,
I dragged the budget_amount measure into the Sales Order Lines/Invoice Lines logical fact tables and set the budget LTS content level at Quarter. When I check on the front end, I am unable to see any data in the budget_amount column.
Kindly help me fix this.
"Aggregation content should be set at period level"
What does "period level" mean here? What I understand is Year or Quarter. Is that correct?
Please help me with this.
Thank You
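
To illustrate the grain question with a sketch (all table and column names below are hypothetical, not taken from the actual schema): the budget table holds one row per Customer Site Use Id / Sales Rep Id / Ent Period Id combination, while the Sales Order Lines and Invoice Lines facts hold one row per line per day. "Period level" therefore refers to the level of the time hierarchy that Ent Period Id identifies (typically a fiscal period/month), not necessarily Year or Quarter. Before the budget amount can sit next to the order amounts, the daily facts have to be rolled up to that period, which is roughly what this query does:
SELECT f.customer_site_use_id,
       f.sales_rep_id,
       d.ent_period_id,
       SUM(f.line_amount)   AS order_amount_for_period,  -- daily rows rolled up to the period
       MAX(b.budget_amount) AS budget_amount             -- budget is already one row per period
FROM sales_order_line_f f
JOIN day_d d ON d.day_id = f.day_id                      -- each day belongs to one period
LEFT JOIN budget_f b
       ON  b.customer_site_use_id = f.customer_site_use_id
       AND b.sales_rep_id         = f.sales_rep_id
       AND b.ent_period_id        = d.ent_period_id
GROUP BY f.customer_site_use_id, f.sales_rep_id, d.ent_period_id
Setting the aggregation content of the budget LTS to the Period level of the time dimension (and to the appropriate levels of the customer and sales-rep dimensions) tells the BI server exactly this: the budget source is valid only at or above that grain, so it can be combined with the day-level facts without double counting. If the budget column shows no data on the front end, the usual suspects are the content levels not being set on the budget LTS and the physical joins to the conformed dimensions being missing or wrong.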

Similar Messages

  • Multiple granular levels for fact table

    My fact table has to incorporate data at both the Transaction level and the Accumulative level. My basic design for the Transaction level is as follows:
    CUSTOMER_KEY, LOAN_KEY, TIME_KEY, LOAN_AMT, TOTAL_DUE, LOAN_STATUS, TRANSACTION
    9000,1000,1,200,200,Open, Advance
    9000,1000,1,200,0,Close,Payment
    If I aggregate the values, then the query will take time to execute. How can I provide cumulative information from this fact table? Shall I go for one more fact table for the Accumulative information?
    Please suggest.
    Thanks,
    Hesh.

    Hi,
    Is this a question about OLAP cube generation using your fact table design? If not, then this is the wrong forum. If yes, then it comes straight down to your fact and dimension design, and there is no need for another fact table with aggregation, because the OLAP cube aggregates this and should ideally be tremendously fast.
    Thanks,
    DxP
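    If the OLAP cube route is not available, one alternative to a second accumulative fact table is to compute the running figure straight off the transaction grain with a window function. A minimal sketch using the columns from the design above (the fact table name is hypothetical):
    SELECT customer_key,
           loan_key,
           time_key,
           loan_amt,
           total_due,
           loan_status,
           SUM(total_due) OVER (PARTITION BY customer_key, loan_key
                                ORDER BY time_key
                                ROWS UNBOUNDED PRECEDING) AS cumulative_due  -- running total per loan
    FROM loan_transaction_fact
    Whether this is fast enough depends on data volume; if it is not, a periodic snapshot fact at the desired grain is the usual alternative to computing the cumulative values on the fly.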

  • How to group data at granularity level hours and every 10 minutes?

    I have a sales table imported from SQL Server. The date columns are captured at a granularity such as 30-12-2013 16:50:16.
    What is the best way to create a master date table and to create a relationship between the sales table and the master date table?
    I am new to Power Pivot, so any complete, detailed steps and links will be appreciated.
    Note: just to clarify the requirements, the business wants to see how much a consultant sells every 10 or 30 minutes during normal business hours.
    Hope this helps
    jl

    Split your field:
    1) A date portion related to your date dimension at the day grain
    2) A time portion related to your time dimension at a 10 minute grain.
    In TSQL:
    SELECT
    [Date] = CAST( <datetimefield> AS DATE) -- date with no time
    ,[Time] = CAST( '18991230 '
    + RIGHT('0' + DATENAME(HOUR, <datetimefield>), 2) + ':'
    + RIGHT( '0' + CAST( (DATEPART(MINUTE, <datetimefield>) / 10) * 10 AS NVARCHAR(2)), 2) + ':'
    + '00' AS DATETIME)
    FROM <your source table>
    Write this in your query to populate the fact table. This will give you a field holding just the date, and one holding the time at a 10 minute granularity.
    A date table is trivial to produce in SQL or in Excel.
    Here's a link for doing a very basic one in SQL Server.
    A time table is trivial as well:
    WITH TimeCTE ([Time]) AS
    (SELECT CAST('18991230 00:00:00' AS DATETIME)
    UNION ALL
    SELECT DATEADD(MINUTE, 10, [Time])
    FROM TimeCTE
    WHERE [Time] < CAST('18991230 23:50:00' AS DATETIME)
    )
    SELECT * FROM TimeCTE OPTION(MAXRECURSION 0)
    This gives you the beginnings of a dimension with time intervals every 10 minutes.
    You can extend this table with TSQL functions or DAX calculated columns, whichever you find more convenient.
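    For example, the final SELECT of the CTE above could be replaced with something like the following to add an hour column and a label column in T-SQL (the column names here are only suggestions):
    SELECT [Time],
           DATEPART(HOUR, [Time]) AS [Hour],
           DATEPART(HOUR, [Time]) * 60 + DATEPART(MINUTE, [Time]) AS MinuteOfDay,
           RIGHT('0' + CAST(DATEPART(HOUR, [Time]) AS VARCHAR(2)), 2) + ':'
           + RIGHT('0' + CAST(DATEPART(MINUTE, [Time]) AS VARCHAR(2)), 2) AS TimeLabel
    FROM TimeCTE
    OPTION(MAXRECURSION 0)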
    Then, you can join your <fact table>[Date] to <date dimension>[Date], and your <fact table>[Time] to <time dimension>[Time], and do all of your filtering on those tables.
    Note: I have used a full datetime field for the time dimension above. This is because Power Pivot/Tabular only knows datetime as a data type. If you want to add a time to a date, the time portion must be recorded on 1899-12-30 to achieve the desired result.
    When importing a TIME data type into the Tabular model, the field is assigned the date of processing, which is absurdly annoying.

  • Help on setting logical levels in fact tables and on dimension tables

    Hi all
    Can anybody provide any blogs or any kind of material on what exactly levelling is?
    After creating the dimensional hierarchies we need to set the logical levels for the LTS of the fact tables, right? So what is the difference between setting logical levels on fact tables and setting levels on dimension tables?
    Any kind of help is appreciated
    Thanks
    Xavier.
    Edited by: Xavier on Aug 4, 2011 10:50 AM

    I have read these blogs, but my question is this:
    Setting the logical levels in the LTS of fact tables I understood.
    But we can also set the logical levels for dimensions, right? I didn't understand why we set the logical levels for dimensions. Is there any reason why we go with levelling on dimensions?
    Thanks
    Xavier
    Edited by: Xavier on Aug 4, 2011 2:03 PM
    Edited by: Xavier on Aug 4, 2011 2:32 PM

  • XI message status at Adapter engine level using a table (SAP table)

    Hello Experts,
    We want to write a custom report using ABAP, so please tell us why the statuses 'Holding' and 'To be delivered' are present in message monitoring in the RWB but not in the status (MSGSTATE) field of the SXMSPMAST table.
    My need is to write a report to get the messages based on these statuses at the table level.
    Please let me know the table name and field name for this, and the table name for the description of the status of XI messages at the Adapter level.
    Thanks
    Gopesh

    Hi Gopesh,
    the Adapter Engine Messaging System messages are on the Java schema,
    i.e., see the following -
    [XI/PI tables|https://www.sdn.sap.com/irj/scn/wiki?path=/display/xi/xi+tables]
    Regards
      Kenny

  • Display drilling path at granular level of report title in BAM 11g

    Is it possible to show the drilling path in the report title location at the granular level of a drill-down? Consider that I have a report of zone-wise distribution of total price in a bar chart. Once I click on, say, the East zone, it shows the office-wise distribution; once I click on, say, the Canada-office bar, it shows the details of the Canada office. At the Canada office detail level I want to display the full drilling path in the report title location at run time, so that the report title at the granular level reads east zone --> canada office --> and so on. If you have any solution for this, please help me. Thanks in advance.
    Edited by: user8925573 on 9 Feb, 2010 9:14 AM

    I am sorry this is not available right now.
    Regards,
    Vishal
    BAM Development

  • How to Decrease fragmentation level of a table.

    Hello Everyone,
    There is a table present in my production database.
    Its version is Oracle 10g (10.2.0.1.0).
    i) Its size is 26 GB with fragmentation.
    ii) It has more than 48 crore rows.
    iii) Its size would be 20 GB without fragmentation (so 6 GB of unused space is wasted).
    Following are my doubts regarding the above points:
    i) How to decrease the fragmentation level of the table?
    ii) If there is a way to decrease the fragmentation level, how much time will that activity take?

    Well, mine is Oracle 10g, hence the production database is locally managed. The table has more than 48 million rows.
    Definition of Fragmentation :-
    When rows are not stored contiguously, or if rows are split onto more than one block, performance decreases because these rows require additional block accesses.
    When a lot of DML operations are applied on a table, the table will become fragmented because DML does not release free space from the table below the HWM.
    Definition of High Water Mark(HWM):-
    HWM is an indicator of USED BLOCKS in the database. Blocks below the high water mark (used blocks) have at least once contained data. This data might have been deleted. Since Oracle knows that blocks beyond the high water mark don't have data, it only reads blocks up to the high water mark when doing a full table scan.
    Please go through the below example if you have any doubts.
    EXAMPLE:-
    How to find table fragmentation?
    SQL> select count(*) from big1;
    1000000 rows selected.
    SQL> delete from big1 where rownum <= 300000;
    300000 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> update big1 set object_id = 0 where rownum <=350000;
    342226 rows updated.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');
    PL/SQL procedure successfully completed.
    Now lets determine the size of table
    TABLE SIZE(with fragmentation)
    SQL> select table_name,round((blocks*8),2)||'kb' "size"
    2 from user_tables
    3 where table_name = 'BIG1';
    TABLE_NAME size
    BIG1 72952kb
    ACTUAL DATA in table:
    SQL> select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
    2 from user_tables
    3 where table_name = 'BIG1';
    TABLE_NAME size
    BIG1 30604.2kb
    Note = 72952 - 30604 = 42348 Kb is wasted space in table
    Edited by: 855956 on May 3, 2011 12:11 PM
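    If the goal is to release the space below the HWM on 10g, and assuming the table sits in a locally managed tablespace with ASSM (segment space management AUTO) and you can afford row movement, an online shrink is the usual approach; a sketch using the BIG1 example from above:
    ALTER TABLE big1 ENABLE ROW MOVEMENT;
    ALTER TABLE big1 SHRINK SPACE CASCADE;   -- compacts the segment, resets the HWM and shrinks dependent indexes
    ALTER TABLE big1 DISABLE ROW MOVEMENT;
    The alternatives are ALTER TABLE ... MOVE (which requires rebuilding the indexes afterwards) or recreating the table with CTAS. How long any of these takes depends mostly on the segment size and I/O throughput, so it is best measured on a copy of the table first.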

  • Aggregation table - Different agg levels for base table and agg table

    Is it possible to have different aggregation levels for the base table and the aggregate table? Say, SUM on a column in the AGG table and COUNT for the same column in the fact table.
    Example
    Region, Day, Product, Sales Person and Customer are dimensions and Call is a fact measure.
    FACT_TABLE has columns Region, Day, Product, Sales person,Customer, Call
    AGG_TABLE has columns Region, Month,Product, call
    We already have a Logical Table definition for the fact table say FACT_CALL
    We have a Logical column called No of customers.
    For the FACT_TABLE data source, the formula for the column is "Customer" and the aggregation rule is COUNT DISTINCT.
    But in the agg table we already have a calculated column called TOT_CUSTOMERS, which is calculated and aggregated in the ETL.
    If we map this to the logical column, we have to set the formula to TOT_CUSTOMERS and define the aggregation type as SUM, as it is at the Region, Month and Product level. But OBI does not allow us to do so.
    Is there a work around for this? Can you please let us know.
    Regards
    Arun D

    The way the BI server picks the table that will satisfy a query is through column mappings and content levels. You have set the column mapping to TOT_CUSTOMERS, which is right. When it comes to aggregation, since it is already precalculated through ETL, you want to set the aggregation to SUM, which I would say is not correct; you can set the aggregation to COUNT DISTINCT, the same as on the detailed fact. But set the content level to Month in the date dimension, and to the appropriate levels in Region etc. Now the BI server will know how to aggregate the rows when it chooses the agg table.

  • Record column level changes of table for audit.

    Hi Experts,
    I need a suggestion on recording column-level changes in a table for internal audit purposes. Our company policy does not allow us to enable CDC or CT at the database level, so I am looking for the other best ways of doing the same.
    I know we can use triggers and the OUTPUT clause to record the changes on tables, but is there any other lightweight solution for recording changes on transaction tables, which are very large tables?
    Seeking your guidance on the solution side.
    Shivraj Patil.

    The OUTPUT clause should be the best choice for your case, as it is much lighter than a trigger.
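    A minimal sketch of the OUTPUT approach (the table and column names below are made up for illustration): the UPDATE writes the before and after values of the audited column into an audit table as part of the same statement.
    -- Hypothetical audit table
    CREATE TABLE dbo.CustomerAudit
    (
        AuditId      INT IDENTITY(1,1) PRIMARY KEY,
        CustomerId   INT NOT NULL,
        OldCreditLim DECIMAL(18,2),
        NewCreditLim DECIMAL(18,2),
        ChangedBy    SYSNAME   NOT NULL DEFAULT SUSER_SNAME(),
        ChangedAt    DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
    -- OUTPUT captures the old (deleted.*) and new (inserted.*) values of the changed rows
    UPDATE dbo.Customer
    SET CreditLimit = CreditLimit * 1.10
    OUTPUT inserted.CustomerId,
           deleted.CreditLimit,    -- value before the change
           inserted.CreditLimit    -- value after the change
    INTO dbo.CustomerAudit (CustomerId, OldCreditLim, NewCreditLim)
    WHERE Region = 'EMEA';
    The trade-off compared with triggers or CDC is that the OUTPUT clause must be added to every statement that modifies the table, so changes made by statements that bypass it are not captured.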

  • Define granularity of the fact table

    Hi BW experts,
    Can you explain how to define the granularity of the fact table when doing data warehousing?
    Thanks,
    Bill

    Data modeling issue: for an example of defining the granularity of the fact table, see:
    http://help.sap.com/bp_biv335/BI_EN/documentation/Multi-dimensional_modeling_EN.doc

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. What I often struggle with is the Logical Levels (on the Content tab), where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates, to get them to work/appear with different dimensions. (Using the menu "More" > "Get Levels" does not always give the best solution, far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance, I have about 10-12 different dimensions; should all of them always be connected to each fact table, either at the Detail or the Total level? I can see the use of the logical levels when using aggregate fact tables (at quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    For instance, I have about 10-12 different dimensions; should all of them always be connected to each fact table, either at the Detail or the Total level?
    It is not necessary to connect all dimensions; it depends on the report that you are creating. But as a best practice we should keep them all at the Detail level when join conditions are defined in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.ProductName level then you should use the Detail level, otherwise the Total level (at the Product or Employee level).
    Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the administration tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Configuring grades and levels in the T510 table for Payroll

    Hi Experts,
    We have a scenario in an implementation where we have grades with Level 1 ... Level 8.
    A minimum and maximum amount exists for each level in each particular grade (like L1-A: Min-Max, L1-B: Min-Max, ...).
    How do we handle this kind of situation in the payroll configuration in the T510 table?
    Regards,
    Irfan

    Hi Experts,
    I am doing this for the international payroll driver. Our OM and PA are integrated.
    I will repeat my question:
    Level 1 - A  :   250 - 300
    Level 1 - B  :   301 -  400
    Level 1 - C  :   401 - 450
    Level 1 - D  :   451-  500
    I want to make this appear in 1005 infotype.
    But instead I am suggesting to the client:
    Level 1 - A  :   300
    Level 1 - B  :    400
    Level 1 - C  :    450
    Level 1 - D  :    500
    In the 1005 infotype.
    Please advise.
    Regards,
    IFF
    Edited by: IFF on Nov 12, 2010 11:47 AM

  • Multiplication using header level variable and table field

    I have a header-level field (conversion rate) that I want to use in a calculation with a field in a table (transaction currency). I can get my variable using
    <?xdoxslt:set_variable($_XDOCTX, 'X', ACCTD_EXCHG_RATE )?> in the header level field
    and <?xdoxslt:get_variable($_XDOCTX, 'X')?> in the field.
    What is the syntax to perform a calculation on the variable and the transaction currency field?
    Thanks in advance,
    Nancy Hoye

    Please help me, I am still having the problem.
    I need to increase the field width on the popup and the column height.

  • MM42 change material, split valuation at batch level, M301, locking table

    Dear All,
    I'm working on ECC 6.0 retail and I have activated split valuation at batch level.  Now in MBEW for this specific material I have almost 14.400 entries.
    If I try to change some material data (MM42) I receive the error message M3021 "A system error has occurred while locking" and then "Lock table overflow".
    I used SM12 to see the table (while MM42 is still running) and it seems that MBEW is the problem.
    What should I do? Does the system have to modify every entry in MBEW for any material modification? Is there any possibility to skip this?
    Thank you.

    Hi,
    Symptom
    Key word: Enqueue
    FM: A system error has occurred in the block handler
    Message in the syslog: lock table overflowed
    Other terms
    M3021 MM02 F5 288 F5288 FBRA
    Reason and Prerequisites
    The lock table has overflowed.
    Cause 1: Dimensions of the lock table are too small
    Cause 2: The update lags far behind or has shut down completely, so that the lock entries of the update requests that are not yet updated cause the lock table to overflow.
    Cause 3: Poor design of the application programs. A lock is issued for each object in an application program, for example a collective run with many objects.
    Solution
    Determine the cause:
    SM12 -> Goto -> Diagnosis (old)
    SM12 -> Extras -> Diagnosis (new)
    checks the effectiveness of the lock management
    SM12 -> Goto -> Diagnosis in update (old)
    SM12 -> Extras -> Diagnosis in update (new)
    checks the effectiveness of the lock management in conjunction with updates
    SM12 -> OkCode TEST -> Error handling -> Statistics (old, only in the enqueue server)
    SM12 -> Extras -> Statistics (new)
    shows the statistics of the lock management, including the previous maximum fill levels (peak usage) of the partial tables in the lock table
    If the owner table overflows, cause 2 generally applies.
    In the alert monitor (RZ20), an overrunning of the (customizable) high-water marks is detected and displayed as an alert reason.
    The size of the lock table can be set with the profile parameter "enque/table_size =", which specifies the size of the lock table in kilobytes. The setting must be made in the profile of the enqueue server ( ..._DVEBM.. ). The change only takes effect after a restart of the enqueue server.
    The default size is 500 KB in the Rel 3.1x implementation of the enqueue table. The resulting sizes for the individual tables are:
    Owner table: approx 560.
    Name table: approx 560.
    Entry table: approx 2240.
    As of Rel 4.xx the new implementation of the lock table takes effect.
    It can also be activated as described in note 75144 for the 3.1I kernel. The default size is 2000 KB. The resulting sizes for the individual tables are:
    Owner table: approx 5400
    Name table: approx 5400
    Entry table: approx 5400
    Example: with the "enque/table_size = 32000" profile parameter, the size of the enqueue table is set to 32000 KB. The tables can then have approx 40,000 entries.
    Note that the above sizes and numbers depend on various factors such as the kernel release, patch number, platform, address length (32/64-bit), and character width (Ascii/Unicode). Use the statistics display in SM12 to check the actual capacity of the lock table.
    If cause 2 applies, an enlargement of the lock table only delays the overflow of the lock table, but it cannot generally be avoided.
    In this case you need to eliminate the update shutdown or accelerate the throughput of the update program using more update processes. Using CCMS (operation modes, see training BC120) the category of work processes can be switched at runtime, for example an interactive work process can be converted temporarily into an update process, to temporarily increase the throughput of the update.
    For cause 3, you should consider a tuning of the task function. Instead of issuing a large number of individual locks, it may be better to use generic locks (wildcard) to block a complete subarea. This will also allow you to considerably improve the performance.

  • Access level in lookup table

    I'm using Dreamweaver CS4. It seems that access levels can
    only be applied (at least through Server Behaviors) to a field
    within the same table that hosts the users and their associated
    passwords. I have adopted a database which contains a table which
    contains the users and their passwords but the access levels are
    stored in a lookup table. Aside from hand coding this or changing
    the table structure to include access levels, users and passwords
    in the same table, can anyone provide some insight as to how to
    handle this?

    OK, I have successfully achieved the JOIN... I think. I clicked
    "Test" on the Recordset dialog and I can see records from the table
    but not from the JOINed table.
    Here is the SQL statement I used:
    SELECT authuser_id, firstname, lastname, uname, passwd
    FROM authuser INNER JOIN user_access ON
    authuser.authuser_id=user_access.userID
    So once this is done, I'm not sure how to proceed. I added
    the "Log In User" server behavior but the JOIN field is not
    displayed under the "Restrict access based on:".
    I obviously have a lot to learn about how Dreamweaver helps
    streamline this process. Any help (detailed as possible) would be
    much appreciated.
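    One thing that stands out in the recordset above is that no column from user_access is selected, so there is nothing from the lookup table for the behavior to offer under "Restrict access based on". A sketch of the adjusted query, assuming the access-level column in user_access is called access_level (substitute the real column name):
    SELECT authuser.authuser_id, authuser.firstname, authuser.lastname,
           authuser.uname, authuser.passwd, user_access.access_level
    FROM authuser
    INNER JOIN user_access ON authuser.authuser_id = user_access.userID
    If the Log In User behavior still only lists columns from the base table even with the joined column in the recordset, the fallback is to set the access-level session variable by hand after a successful login.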
