Regarding aggregation

I have two source tables mapped into my target. What I would like to do is count the number of rows in table 1 and table 2 and add both counts, to see whether the total matches the count in the target.
Could someone help me write the query?

Try this:
select t1_cnt + t2_cnt total_cnt
  from (select count(*) t1_cnt from <table1>) t1,
       (select count(*) t2_cnt from <table2>) t2

Similar Messages

  • Keyfigures and aggregation

    Hi Gurus,
    I have some doubts regarding "Aggregation".
    I know that every key figure has 3 tabs:
    1. Data Type / Unit
    2. Exception
    3. Additional properties
    My doubt is with regard to the "Exception" tab. Could someone please explain the following clearly?
    1. I assume aggregation does not have any effect on the fact table and affects only the result row of a report. Am I right?
    2. I know that we have standard aggregation (min, max and sum) and exception aggregation based on a characteristic.
    Why does exception aggregation have average, counter, first value, last value, etc., whereas standard aggregation has only min, max and sum? What is the significance of exception aggregation? Why do we relate it to a characteristic?
    3. What are non-cumulative key figures? Non-cumulative with non-cumulative change, and non-cumulative with inflow and outflow?
    A clear explanation of the above with a simple scenario would be greatly appreciated.
    Thanks and regards,
    Lakshminarasimhan.N

    Hi Lakshminarasimhan.N:
      The documents below might help.
    "Exception Aggregation in Business Explorer" article by Himanshu Mahajan.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f0b8ed5b-1025-2d10-b193-839cfdf7362a?quicklink=index&overridelayout=true
    "Non-cumulatives / Stock Handling"
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/93ed1695-0501-0010-b7a9-d4cc4ef26d31?quicklink=index&overridelayout=true
    Regards,
    Francisco Milán.

  • Multi-SCORM Packager

    Hi
    You may have read a few of the discussions I have raised in the last month or so regarding the Aggregator and its border issues:
    http://forums.adobe.com/message/4528160#4528160
    http://forums.adobe.com/message/4528226#4528226
    I have recently found out about Multi-SCORM Packager and I am wondering if this will fix the issue I am having with the borders.
    Has anyone used Multi-SCORM before, or can anyone tell me if it is any good? Unfortunately it looks like we would have to pay for it if it works, as it doesn't come with Captivate 5, which we have.
    Cheers
    Jonny

    The only way you would get a free trial of the SCORM Packager tool would be to install the trial version of the full E-learning Suite, which also includes a trial version of Captivate (currently Cp6).
    The Multi-SCORM Packager just bundles two or more SCORM-compliant Captivate modules into a multi-SCO SCORM zip package.  Each component SCO will play from within the LMS pretty much just as it would look if launched separately in a browser, with the exception that the LMS will probably play the content from within a player application because it needs to listen for learner interaction data coming from the SCOs.
    If you do get hold of the Multi-SCORM packager and want to see what the resulting content looks like in an LMS, try SCORM Cloud.  It's free and online.

  • Regarding exception aggregation based on 2 InfoObjects

    Hi All,
    I have a report in which I have to use exception aggregation based on two InfoObjects.
    How can we do it?
    I have tried using nested aggregation, but it's not working.
    Example:
    I have three InfoObjects:
    0Material
    0Sold_To
    ZSalesRep (Sales representative of Sold to)
    In Report:
    In the Rows section I will have only the ZSalesRep InfoObject.
    In the Columns section (i.e. the key figures) I will have one key figure which should be calculated based on 0Sold_To and 0Material.
    Any inputs will be helpful.
    Thank you.
    Vamsi.

    Hi,
    I think exception aggregation should work in your scenario: first get the total with reference to material, then put that key figure in reference to the sold-to party.
    Check the article below for more information:
    [http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d08b56a8-daf5-2e10-2397-904d6aeb55c2?QuickLink=index&overridelayout=true]
    Regards,
    Durgesh.

  • Query regarding mapping (aggregated)

    Hi,
    I need some help in mapping.
    SOURCE is as below:
    <E1EDP01>
    <E1EDP05>
    <KSCHL>ZRTP</KSCHL>
    <KRATE>100</KRATE>
    </E1EDP05>
    <E1EDP05>
    <KSCHL>ZIPP</KSCHL>
    <KRATE>200</KRATE>
    </E1EDP05>
    </E1EDP01>
    TARGET
    <E1EDP01>
    <E1EDP05>
    <KSCHL>ZRTP</KSCHL>
    <KRATE>300</KRATE>
    </E1EDP05>
    </E1EDP01>
    The target KRATE should be the summation of KRATE for ZRTP and ZIPP.
    Please help.
    regards,
    Piyush

    Hi Rahul,
    I have tried doing this, but it returns the summation only for the first row; the rest are blank.
    I have used the below UDF at KRATE level:
    int i, sum = 0;
    for (i = 0; i < KSCHL.length; i++) {
        if (KSCHL[i].equals("ZRTP") || KSCHL[i].equals("ZIPP")) {
            sum = sum + Integer.parseInt(KRATE[i].trim());
        }
    }
    result.addValue("" + sum);
    Let me provide you with the complete Source and target:
    SOURCE:
    <?xml version="1.0" encoding="UTF-8"?>
    <INVOIC02>
         <IDOC BEGIN="1">
              <E1EDP01 SEGMENT="1">
                   <POSEX>000010</POSEX>
                   <MENGE>12.000</MENGE>
                   <MENEE>LTR</MENEE>
                   <E1EDP05 SEGMENT="1">
                        <ALCKZ>+</ALCKZ>
                        <KSCHL>ZRTP</KSCHL>
                        <KOTXT>RTP at Port</KOTXT>
                        <BETRG>        100</BETRG>
                        <KRATE>      100</KRATE>
                        <MEAUN>K15</MEAUN>
                   </E1EDP05>
                   <E1EDP05 SEGMENT="1">
                        <ALCKZ>+</ALCKZ>
                        <KSCHL>ZIPP</KSCHL>
                        <KOTXT>RTP at Port</KOTXT>
                        <BETRG>        300</BETRG>
                        <KRATE>      300</KRATE>
                        <MEAUN>K15</MEAUN>
                   </E1EDP05>
                   <E1EDP05 SEGMENT="1">
                        <ALCKZ>+</ALCKZ>
                        <KSCHL>ZIPPTP</KSCHL>
                        <KOTXT>RTP at Port</KOTXT>
                        <BETRG>        300</BETRG>
                        <KRATE>500</KRATE>
                        <MEAUN>K15</MEAUN>
                   </E1EDP05>
              </E1EDP01>
         </IDOC>
    </INVOIC02>
    TARGET:
    <?xml version="1.0" encoding="UTF-8"?>
    <INVOIC02>
         <IDOC BEGIN="1">
              <E1EDP01 SEGMENT="1">
                   <POSEX>000010</POSEX>
                   <MENGE>12.000</MENGE>
                   <MENEE>LTR</MENEE>
                   <E1EDP05 SEGMENT="1">
                        <ALCKZ>+</ALCKZ>
                        <KSCHL>SUMRTP</KSCHL>
                        <KOTXT>RTP at Port</KOTXT>
                        <BETRG>        100</BETRG>
                        <KRATE>      800</KRATE>
                        <MEAUN>K15</MEAUN>
                   </E1EDP05>
                   <E1EDP05 SEGMENT="1">
                        <ALCKZ>+</ALCKZ>
                        <KSCHL>ZIPPTP</KSCHL>
                        <KOTXT>RTP at Port</KOTXT>
                        <BETRG>        300</BETRG>
                        <KRATE>500</KRATE>
                        <MEAUN>K15</MEAUN>
                   </E1EDP05>
              </E1EDP01>
         </IDOC>
    </INVOIC02>
    Please help.
    regards,
    Piyush
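    The conditional-sum logic of the UDF in this thread can be checked outside the mapping runtime. Below is a minimal standalone Java sketch; the array arguments stand in for the KSCHL/KRATE queues that the mapping passes in (an assumption for illustration, not the actual PI UDF signature):

```java
public class KrateSum {
    // Sums KRATE values whose KSCHL is ZRTP or ZIPP, mirroring the UDF logic.
    static int sumKrate(String[] kschl, String[] krate) {
        int sum = 0;
        for (int i = 0; i < kschl.length; i++) {
            if (kschl[i].equals("ZRTP") || kschl[i].equals("ZIPP")) {
                // trim() handles the padded IDoc values such as "      100"
                sum += Integer.parseInt(krate[i].trim());
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        String[] kschl = {"ZRTP", "ZIPP", "ZIPPTP"};
        String[] krate = {"      100", "      300", "500"};
        System.out.println(sumKrate(kschl, krate)); // prints 400 (ZRTP 100 + ZIPP 300)
    }
}
```

    Note that for the sample IDoc above this yields 100 + 300 = 400, so if the expected target value differs, the condition list or the source data needs another look.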

  • Regarding aggregation in a query

    hi,
    Does anybody have an idea how I can show only the overall result of the query output in Excel?
    Is there any way we can publish only the overall result in the portal, instead of all the contents of the query output in Excel?

    Hi Mukesh,
    Have you tried putting all your characteristics as free characteristics (remove all characteristics from the rows area and put them in the free characteristics area)?
    Hope this helps,
    Regards,
    Nikhil

  • Regarding exception Aggregation

    Hi friends, can anyone please tell me what exception aggregation is? In which scenarios do we use it? Thanks, Ashok

    Hi,
    Please read the article at the link below:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f0b8ed5b-1025-2d10-b193-839cfdf7362a
    Hope it helps you.
    thanks
    nilesh pathak

  • How can I set up a link aggregation correctly?

    I have an Enterprise T5220 server running Solaris 10 that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3), and until recently I was getting up to and sometimes over 1.5 Gb/s as desired. However, something has happened recently such that I can now barely get over 1 Gb/s. As far as I know, no patches were applied to the server, no changes were made to the switch it's connected to (Nortel Passport 8600 Series), and the total amount of backup data sent to the server has stayed fairly constant. I have tried setting up the aggregation multiple times and in multiple ways to no avail (LACP enabled/disabled, different policies, etc.). I've also tried using different ports on the server and switch to rule out any faulty port problems. Our networking guys assure me that the aggregation is set up correctly on the switch side, but I can get more details if needed.
    In order to attempt to better troubleshoot the problem, I run one of several network speed tools (nttcp, nepim, & iperf) as the "server" on the T5220, and I set up a spare X2100 as a "client". Both the server and client are connected to the same switch. The first set of tests with all three tools yields roughly 600 Mb/s. This seems a bit low to me, I seem to remember getting 700+ Mb/s prior to this "issue". When I run a second set of tests from two separate "client" X2100 servers, coming in on two different Gig ports on the T5220, each port also does ~600 Mb/s. I have also tried using crossover cables and I only get maybe a 50-75 Mb/s increase. After Googling Solaris network optimizations, I found that if I double tcp_max_buf to 2097152, and set tcp_xmit_hiwat & tcp_recv_hiwat to 524288, it bumps up the speed of a single Gig port to ~920 Mb/s. That's more like it!
    Unfortunately, however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between the two ports.
    Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!
    Regards,
    sundy
    Output of several commands on the T5220:
    uname -a:
    SunOS oitbus1 5.10 Generic_137111-07 sun4v sparc SUNW,SPARC-Enterprise-T5220
    ifconfig -a (IP and broadcast hidden for security):
    lo0: flags=2001000849 mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    aggr1: flags=1000843 mtu 1500 index 2
    inet x.x.x.x netmask ffffff00 broadcast x.x.x.x
    ether 0:14:4f:ec:bc:1e
    dladm show-dev:
    e1000g0 link: unknown speed: 0 Mbps duplex: half
    e1000g1 link: unknown speed: 0 Mbps duplex: half
    e1000g2 link: up speed: 1000 Mbps duplex: full
    e1000g3 link: up speed: 1000 Mbps duplex: full
    dladm show-link:
    e1000g0 type: non-vlan mtu: 1500 device: e1000g0
    e1000g1 type: non-vlan mtu: 1500 device: e1000g1
    e1000g2 type: non-vlan mtu: 1500 device: e1000g2
    e1000g3 type: non-vlan mtu: 1500 device: e1000g3
    aggr1 type: non-vlan mtu: 1500 aggregation: key 1
    dladm show-aggr:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto)
    device    address            speed      duplex  link  state
    e1000g2   0:14:4f:ec:bc:1e   1000 Mbps  full    up    attached
    e1000g3                      1000 Mbps  full    up    attached
    dladm show-aggr -L:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) LACP mode: active LACP timer: short
    device activity timeout aggregatable sync coll dist defaulted expired
    e1000g2 active short yes yes yes yes no no
    e1000g3 active short yes yes yes yes no no
    dladm show-aggr -s:
    key: 1 ipackets rbytes opackets obytes %ipkts %opkts
    Total 464982722061215050501612388529872161440848661
    e1000g2 30677028844072327428231142100939796617960694 66.0 59.5
    e1000g3 15821243372049177622000967520476 64822888149 34.0 40.5

    sundy.liu wrote:
    Unfortunately however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between
    the two ports.
    Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!
    If you're only running a single stream, that's all you'll see. Teaming/aggregating doesn't make one stream go faster.
    If you ran two streams simultaneously, then you should see a difference between a single 1G interface and an aggregate of two 1G interfaces.
    Darren
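    Darren's point can be illustrated with a toy model of an L4 hash policy: each flow's address/port 4-tuple is hashed, and the hash selects exactly one member link, so a single TCP stream never exceeds one link's bandwidth no matter how many links are aggregated. The hash function here is made up for illustration and is not Solaris's actual algorithm:

```java
public class AggrHash {
    // Toy L4 policy: a flow's 4-tuple hash deterministically selects one member link.
    static int pickLink(String srcIp, int srcPort, String dstIp, int dstPort, int numLinks) {
        int h = (srcIp + ":" + srcPort + ">" + dstIp + ":" + dstPort).hashCode();
        return Math.floorMod(h, numLinks); // floorMod keeps the index non-negative
    }

    public static void main(String[] args) {
        // One stream always maps to the same link, however many packets it sends.
        System.out.println("stream 1 -> link " + pickLink("10.0.0.1", 40000, "10.0.0.2", 5001, 2));
        // A second, different stream may land on the other link and use both at once.
        System.out.println("stream 2 -> link " + pickLink("10.0.0.3", 40001, "10.0.0.2", 5001, 2));
    }
}
```

    This is why the two-client test saturated both ports while a single nttcp/iperf stream topped out around one port's speed.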

  • Resetting Aggregated Cleared document

    Hi All,
    Does anyone have an idea how I can reset an aggregated posting document which has been cleared?
    I have tried doing this with the standard Tcode iueedpplotaalc4 by providing the Aggregated Payment Document, but it only allows me to reverse the payment.
    Thanks
    Satyajeet

    Hi,
    you may use program REDEREG_ETHI_REV.
    Best regards
    Harald

  • Data in the Cube not getting aggregated

    Hi Friends
    We have Cube 1 and Cube 2.
    The data flow is represented below:
    R/3 DataSource>Cube1>Cube2
    In Cube 1, data is stored according to the calendar day.
    Cube 2 has the calendar week.
    In the transformation from Cube 1 to Cube 2, Calday of Cube 1 is mapped to Calweek of Cube 2.
    In Cube 2, when I upload data from Cube 1, key figure values are not getting summed.
    EXAMPLE: Data in Cube 1
    MatNo   CustNo   Qty   Calday
    10001   xyz      100   01.01.2010
    10001   xyz      100   02.01.2010
    10001   xyz      100   03.01.2010
    10001   xyz      100   04.01.2010
    10001   xyz      100   05.01.2010
    10001   xyz      100   06.01.2010
    10001   xyz      100   07.01.2010
    Data in Cube 2:
    MatNo   CustNo   Qty   Calweek
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    But the expected output should be:
    MatNo   CustNo   Qty   Calweek
    10001   xyz      700   01.2010
    How do I achieve this?
    I checked the transformations; all key figures are maintained with aggregation Summation.
    regards
    Preetam
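    If the transformation aggregation is Summation, the expected roll-up is a plain group-by: records sharing material, customer and week collapse into one row with the quantities added. A minimal Java sketch of that behavior, assuming a simple in-memory record model (not actual BW transformation code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WeekRollup {
    // Collapses records onto (matNo, custNo, calweek) and sums qty.
    // Each row is {matNo, custNo, qty, calweek}.
    static Map<String, Integer> rollup(String[][] rows) {
        Map<String, Integer> out = new LinkedHashMap<>();
        for (String[] r : rows) {
            String key = r[0] + "|" + r[1] + "|" + r[3];
            out.merge(key, Integer.parseInt(r[2]), Integer::sum);
        }
        return out;
    }

    public static void main(String[] args) {
        // Seven daily records of qty 100 all map to week 01.2010.
        String[][] cube1 = new String[7][];
        for (int d = 0; d < 7; d++) {
            cube1[d] = new String[]{"10001", "xyz", "100", "01.2010"};
        }
        System.out.println(rollup(cube1)); // prints {10001|xyz|01.2010=700}
    }
}
```

    If the cube instead shows seven separate 100-rows per week, some request or package dimension is still part of the key, which matches the consistency-check findings in the reply below being worth investigating.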

    I just performed a consistency check for the cube and am getting the following warnings:
    Time characteristic 0CALWEEK value 200915 does not fit with time char 0CALMONTH val 0
    Consistency of time dimension of InfoCube &1
    Description
    This test checks whether or not the time characteristics of the InfoCube used in the time dimension are consistent. The consistency of time characteristics is extremely important for non-cumulative Cubes and partitioned InfoCubes.
    Values that do not fit together in the time dimension of an InfoCube result in incorrect results for non-cumulative cubes and InfoCubes that are partitioned according to time characteristics.
    For InfoCubes that have been partitioned according to time characteristics, conditions for the partitioning characteristic are derived from restrictions for the time characteristic.
    Errors
    When an error arises, the InfoCube is marked as a cube with an inconsistent time dimension. This has the following consequences:
    The derivation of conditions for partitioning criteria is deactivated on account of the non-fitting time characteristics. This usually has a negative effect on performance.
    When the InfoCube contains non-cumulatives, the system generates a warning for each query indicating that the displayed data may be incorrect.
    Repair Options
    Caution
    No action is required if the InfoCube does not contain non-cumulatives or is not partitioned.
    If the InfoCube is partitioned, action is only required if the read performance has gotten worse.
    You cannot automatically repair the entries of the time dimension table. However, you are able to delete entries that are no longer in use from the time dimension table.
    The system displays whether the incorrect dimension entries are still being used in the fact table.
    If these entries are no longer being used, you can carry out an automatic repair. In this case, all time dimension entries not being used in the fact table are removed.
    After the repair, the system checks whether or not the dimension is correct. If the time dimension is correct again, the InfoCube is marked as an InfoCube with a correct time dimension once again.
    If the entries are still being used, use transaction Listcube to check which data packages are affected.  You may be able to delete the data packages and then use the repair to remove the time dimension entries no longer being used. You can then reload the deleted data packages. Otherwise the InfoCube has to be built again.

  • Difference between  aggregation and calculation tab in BEx Query Designer

    HI,
    I am using BEx Query Designer for my report. For the key figures in the column area I selected one numeric key figure, and in the Properties pane I found an Aggregation tab and a Calculation tab.
    I need to sum up the total values for that particular column. Using the Calculation tab I was able to sum all the values for the column, so what is the use of the Aggregation tab?
    I am not able to use the Aggregation tab; it is shown as a hidden (greyed-out) field...
    Can anyone tell me the exact difference between these tabs and when we need to use which tab?
    With Regards,
    Thanesh Kumar.

    Hi Thanesh Kumar,
    I moved this thread from forum Data Warehousing to Business Explorer since it is a query related question (as SDN moderator).
    I could explain to you the difference between these two tabs.
    The "Calculation" tab changes the display of the result; it does not change the calculation logic.
    That means if the key figure is used further in a formula, the original number (without the "Calculation" tab setting) is still used for the formula calculation.
    The "Aggregation" tab changes the real calculation logic: the system takes the setting as the aggregation rule for records.
    The most common aggregation rule is of course summation. If you set it to e.g. Average, the system computes the average instead of the sum when aggregating records, and that average value is then used in further formulas and other calculations.
    The "Aggregation" tab can only be used for a CKF (calculated key figure) or a formula; it cannot be used for a basic key figure. That should be the reason why you see it greyed out.
    Regards,
    Patricia
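    Patricia's distinction can be made concrete with a small sketch: with Summation, the value that later formulas see is the total; with Average, it is the mean, so any further formula gets a different input. This is a simplified model of the behavior, not BEx internals:

```java
public class AggregationRule {
    // Applies the aggregation rule to a set of records before further formulas see the value.
    static double aggregate(double[] records, String rule) {
        double sum = 0;
        for (double v : records) sum += v;
        return rule.equals("AVERAGE") ? sum / records.length : sum; // default: summation
    }

    public static void main(String[] args) {
        double[] records = {10, 20, 30};
        double total = aggregate(records, "SUMMATION"); // 60.0
        double avg   = aggregate(records, "AVERAGE");   // 20.0
        // A further formula (here: doubling the key figure) sees different inputs:
        System.out.println(2 * total); // 120.0
        System.out.println(2 * avg);   // 40.0
    }
}
```

    The Calculation tab, by contrast, would only change what is displayed; the 60.0 would still feed the formula.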

  • Member Formula: IF ... ELSE do outline aggregation

    Hi experts,
    How to write a formula for a parent entity member like this:
    IF (@ISMBR("Account member"))
    do something
    ELSE
    do default outline aggregation from its descendants
    ENDIF
    I just want the "do something" branch to execute for certain account members. Without an ELSE statement, the formula overrides the default outline aggregation. The problem is that I cannot find any function that manually performs the default aggregation.
    Please ask if my question is not clear.
    Many thanks!

    Huy Van
    I tried to replicate it in Sample Basic. I loaded sample data and below is the result:
    Cola, Actual:
              East    East    East    East      New York  New York  New York  New York
              Sales   Margin  Profit  Measures  Sales     Margin    Profit    Measures
    Jan       1812    1213    837     837       678       407       262       262
    I have a script where I have fixed on East (a parent member of Market):
    FIX(East, Actual, "100-10")
    Jan(
    IF(@ISMBR(Sales))
    100;
    ENDIF)
    ENDFIX
    Below are the results after running the script:
    Cola, Actual:
              East    East    East    East      New York  New York  New York  New York
              Sales   Margin  Profit  Measures  Sales     Margin    Profit    Measures
    Jan       100     -499    -875    -875      678       407       262       262
    I don't see anything else change (only Sales of East is changing).
    Now that you are writing to a parent member, the aggregation from its descendants will overwrite what your script just populated.
    Regards
    Celvin
    http://www.orahyplabs.com
    Please mark the responses as helpful/correct if applicable

  • Issue regarding Planning layout is not getting rendered and is dumping at : CL_RSDRC_TREX_QUERY_LAYER ~ _GET_PARTPROVS_WITH_TREX_PART in SAP TPM IP

    Gurus,
    I am facing an issue regarding SAP TPM IP ( HANA)
    I have 3 Infoproviders
    Planning infocube, Planning DSO1, Planning DSO2 and i created multiprovider and added these 3 infoproviders into it. I have created Aggregation level on multiprovider. Created Bex Input ready query on Aggregation level.
    Issue is  Planning layout is not getting rendered and is dumping at : CL_RSDRC_TREX_QUERY_LAYER ~ _GET_PARTPROVS_WITH_TREX_PART.
    I tried debugging it and found it is trying to read i_r_pro->n_ts_part. It is populated with only 3 values (i.e. the 2 DSOs and 1 cube), whereas <l_partprov>-partprov refers to the aggregation level, hence the read statement is not successful and it dumps.
    The CL_RSD_INFOPROV_CACHE->GET() method tries to populate N_TS_PART. N_TS_PART uses the P_R_INFOPROV table, which seems to be already populated, so I debugged all the methods below it to find out how P_R_INFOPROV is filled, but couldn't find any clue.
    Can anyone help? It would be really appreciated.
    Thanks
    Ashok

    Hello Gregor,
    On launch of the planning layout it throws an error message:
    Planning is not possible RSCRM_IMP_CORE008.
    When I debugged, I got to a point wherein the particular real-time planning DSO is not getting retrieved under the MultiProvider in the class below.
    Class CL_RSCRM_IMP_ACTIONS_SERVICE, method GET_INFOPROV, is not returning the real-time InfoProvider name (i.e. the planning DSO) underlying the MultiProvider.
    I've also tried to run the report mentioned by you for the Multiprovider but issue still exists.
    Let me know, if you have any pointers on this topic.
    Thanks,
    Jomy

  • Aggregating Slowly Changing Dimension

    Hi All:
    I have a problem with a whole lot of changes in the dimension values (SCD); I need to create a view or stored procedure.
    Two tables within the Oracle db are joined:
    Tbl1: Store Summary, consisting of Store ID, SUM(Sales Qty)
    Tbl2 (View): Store View, which consists of Store ID, Name, Store_Latest_ID
    Join relationship: Store_Summary.Store_ID = Store_View.Store_ID
    If I pull up the report, it gives me this info:
    Ex:
    Store ID: Name, Sales_Qty , Store_Latest_ID
    121, Kansas, $1200, 1101
    1101, Dallas, $1400, 1200
    1200, Irvine, $ 1800, Null
    141, Gering, $500, 1462
    1462, Scott, $1500, Null
    1346,Calif,$1500,0
    There is no effective date within the store view, but can be added if requested.
    Constraints in the Output:
    1) If the Store Latest ID = 0, the store ID hasn't been shifted (Ex: Store ID = 1346)
    2) If the Store Latest ID = 'XXXX', that ID replaces the old Store ID, and subsequent records are added to the db under the new Store ID (Ex: 121 to 1101, 1101 to 1200, 141 to 1462)
    3) Output needed: everything rolled up to the new Store ID, irrespective of the number of records. Within the view or stored procedure, whenever there is a Store Latest ID it should be assigned in place of the Store ID (i.e. the latest Store ID record at the end of each chain of changing Store ID values); if the Latest Store ID is 0, the record is unchanged.
    I need the output to look like
    Store ID: Name, Sales_Qty , Store_Latest_ID
    1200,Irvine,$4400,Null
    1462,Scott,$2000,Null
    1346,Calif,$1500,Null or 0
    The Query I wrote for the view creation:
    Select ss.Store_ID, ss.Sales_Qty, 0 as Store_Latest_ID
    From Store_Summary ss, Store_Details sd
    Where ss.Store_ID=sd.Store_ID and sd.Store_Latest_ID is null
    union
    Select sd.Store_Latest_ID, ss.Sales_Qty, null
    From Store_Summary ss, Store_Details sd
    Where ss.Store_ID=sd.Store_Latest_ID and sd.Store_Latest_ID is not null
    Placing a join from the created view to Store Summary ended up giving the aggregated values without rolling them up. Also, the Store IDs which have no latest ID end up with a value of 0 alongside the aggregated sales quantity; if a store ID has changed more than twice, the sales quantity is not aggregated up to the latest ID, and the store name of the latest store ID is not returned.
    I need help to create a view or stored procedure
    Please let me know if you have any questions, Thanks.
    Any suggestions would be greatly appreciated.
    Thanks
    Vamsi
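    One way to express the roll-up being asked for is to follow each store's Store_Latest_ID chain to its terminal store and accumulate the sales there. A Java sketch of that logic, under the assumption (taken from the constraints above) that 0 or a missing successor marks the end of a chain; the in-memory model is hypothetical, standing in for the Store_Summary/Store_View join:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class StoreRollup {
    // Follows the Store_Latest_ID chain (0 or absent = end) to the terminal store.
    static int terminal(int storeId, Map<Integer, Integer> successor) {
        Integer next = successor.get(storeId);
        while (next != null && next != 0) {
            storeId = next;
            next = successor.get(storeId);
        }
        return storeId;
    }

    // Rolls every store's sales up to the latest store in its chain.
    static Map<Integer, Integer> rollup(Map<Integer, Integer> sales,
                                        Map<Integer, Integer> successor) {
        Map<Integer, Integer> out = new LinkedHashMap<>();
        for (Map.Entry<Integer, Integer> e : sales.entrySet()) {
            out.merge(terminal(e.getKey(), successor), e.getValue(), Integer::sum);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> sales = new LinkedHashMap<>();
        sales.put(121, 1200); sales.put(1101, 1400); sales.put(1200, 1800);
        sales.put(141, 500);  sales.put(1462, 1500); sales.put(1346, 1500);
        Map<Integer, Integer> succ = new HashMap<>();
        succ.put(121, 1101); succ.put(1101, 1200); succ.put(141, 1462); succ.put(1346, 0);
        System.out.println(rollup(sales, succ)); // {1200=4400, 1462=2000, 1346=1500}
    }
}
```

    This reproduces the desired output in the post (1200 = 4400, 1462 = 2000, 1346 = 1500) and handles chains of any length, which is where the two-branch UNION view falls short. In SQL, the equivalent chain-following would typically use a hierarchical/recursive query.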

    Hi
    Please see the following example:
    ID - Name - Dependants
    100 - Tom - 5
    101 - Rick - 2
    102 - Sunil - 2
    In the contents above, assume the ID represents an employee ID and the dependants include parents, spouse and kids.
    Over time the number of dependants may increase, but no one is sure exactly when it will; for example, a single employee may get married, increasing the dependants.
    So the attributes of the employee change slowly over time.
    These kinds of dimensions are called slowly changing dimensions.
    Regards
    N Ganesh

  • Regarding Additive Field In DSO

    Hi Guys,
    I have one field set to summation in a DSO, and I made it additive while creating the transformation. Can anyone tell me how it will add up the values when I do a delta update?
    Thanks

    Hi,
    Transformation
    Aggregation type:
    Use
    You use the aggregation type to control how a key figure or data field is updated to the InfoProvider.
    Features
    For InfoCubes:
    Depending on the aggregation type you specified in key figure maintenance for this key figure, you have the options Summation, or Maximum or Minimum. If you choose one of these options, new values are updated to the InfoCube.
    The aggregation type (summation, minimum & maximum) specifies how key figures are updated if the primary keys are the same. For new values, either the total, the minimum, or the maximum for these values is formed.
    For InfoObjects:
    Only the Overwrite option is available. With this option, new values are updated to the InfoObject.
    For DataStore Objects:
    Depending on the type of data and the DataSource, you have the options Summation, Minimum, Maximum or Overwrite. When you choose one of these options, new values are updated to the DataStore object.
    For numerical data fields, the system uses characteristic 0RECORDMODE to propose an update type. If only the after-image is delivered, the system proposes Overwrite. However, it may be useful to change this: for example, the counter data field "# Changes" is filled with a constant 1, but still has to be updated using addition, even though only an after-image is delivered.
    The characteristic 0RECORDMODE is used to pass DataSource indicators (from SAP systems) to the update.
    If you are not loading delta requests to the DataStore object, or are only loading from file DataSources, you do not need the characteristic 0RECORDMODE.
    Summation:
    Summation is possible if the DataSource is enabled for an additive delta. Summation is not supported for data types CHAR, DAT, TIMS, CUKY or UNIT.
    Overwrite:
    Overwrite is possible if the DataSource is delta enabled.
    When the system updates data, it does so in the chronological order of the data packages and requests. It is your responsibility to ensure the logical order of the update. This means, for example, that orders must be requested before deliveries, otherwise incorrect results may be produced when you overwrite the data. When you update, requests have to be serialized.
    Regards
    Santosh
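    The update behavior described above can be sketched as follows: when two records share the same key, the aggregation type decides whether the new value replaces the stored one or is combined with it. This is a simplified in-memory model, not actual DSO activation code:

```java
import java.util.HashMap;
import java.util.Map;

public class DsoUpdate {
    // Applies a delta record to the store according to the aggregation type.
    static void update(Map<String, Integer> store, String key, int value, String aggType) {
        switch (aggType) {
            case "SUMMATION": store.merge(key, value, Integer::sum); break; // additive delta
            case "OVERWRITE": store.put(key, value); break;                 // after-image wins
            case "MAXIMUM":   store.merge(key, value, Math::max); break;
            case "MINIMUM":   store.merge(key, value, Math::min); break;
            default: throw new IllegalArgumentException(aggType);
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> sum = new HashMap<>(), ovr = new HashMap<>();
        update(sum, "doc1", 100, "SUMMATION");
        update(sum, "doc1", 50, "SUMMATION");  // additive delta: 100 + 50
        update(ovr, "doc1", 100, "OVERWRITE");
        update(ovr, "doc1", 50, "OVERWRITE");  // after-image: last value replaces
        System.out.println(sum.get("doc1")); // prints 150
        System.out.println(ovr.get("doc1")); // prints 50
    }
}
```

    This also shows why the update order matters for Overwrite, as noted above: with Summation the result is order-independent, but with Overwrite the last request processed determines the stored value.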
