Hot News: Possible incorrect results in SAP BW system

Everyone,
We recently identified an issue in SAP ASE which potentially causes incorrect results in an SAP BW system running on SAP ASE.
The issue affects any application running on SAP ASE that uses optimisation goal 'allrows_dss' or a user-created optimisation goal that enables 'advanced_aggregation'.
SAP BW specifies optimisation goal 'allrows_dss' for certain DSS queries and is therefore affected by the issue.
SAP ERP systems running on ASE are typically not affected, as they are usually configured with optimisation goal 'allrows_mix'.
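If you are unsure which goal a system uses, you can check the server-wide setting and switch goals per session for comparison. A minimal sketch (verify the syntax against your ASE release's documentation):
-- Show the server-wide optimisation goal (sketch; needs appropriate privileges)
sp_configure "optimization goal"
go
-- Switch only the current session, e.g. to compare results under allrows_mix
set plan optgoal allrows_mix
go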
Details and corrections are available in SAP note
2026328 - SYB: Incorrect results with SUM aggregation on decimal fields
We strongly suggest implementing the corrections in SAP BW systems as soon as possible.
With kind regards
Tilman Model-Bosch

Hi,
Yes, I am using the MDX driver. 
Are there any prerequisites, such as importing certain ABAP transports into the SAP server? I haven't imported any so far. Please advise.
Thanks,
Amogh

Similar Messages

  • Incorrect result between maintain master data and BEx query, how can I fix it?

    Hi all,
    I am getting messages from users that there are incorrect results between SAP R/3 and a report on BW. I checked the monitor and saw a job for 0CUSTOMER_ATTRIBUTE that finished correctly, but the data had only been loaded into the PSA. I immediately started a full update from the PSA into the data targets, and it finished correctly. Afterwards, when I check the content of 0CUSTOMER (right click, maintain master data), I get the correct attribute values matching the data in SAP R/3, but when I execute a BEx query on this master data it does not return the same attribute data.
    Can somebody help, please?
    Bilal

    hi,
    After loading master data attributes you have to run an "Attribute Change Run"; execute it for master data 0CUSTOMER.
    It is available in RSA1 -> Tools (top menu) -> Apply Hierarchy/Attribute Changes.
    hope it helps,
    regards,
    Parth.

  • SAP ABAP system as LDAP source/server?

    Hello,
    Is it possible to configure an SAP ABAP system as an LDAP server so that I can read out the user information via LDAP?
    We have an SSL gateway that needs to pre-authenticate external users, and we don't want to manage those users in two different systems.

    Marc,
    Are you thinking about Central User Administration (CUA)? Then it is possible via LDAP.
    Hope this helps.
    Manoj

  • New possibility in session manager: user menu - SAP menu

    Hi guys,
    there is a new option to customize which menu users see first in the Session Manager: a customizing switch in table SSM_CUST. Users can still select either menu, but the initial display in every new mode is regulated by the switch.
    Available with  [SAP Note 1658872|https://service.sap.com/sap/support/notes/1658872]
    b.rgds, Bernhard

    Hello Kamal,
    Did you do this?
    1. Create a client in SCC4
    2. Log off from the system.
    3. Logon to the new client with user SAP*, password PASS
    4. Go to SCCL (local client copy; SCC3 shows the copy logs):
    Source client: 000
    Source client user masters: 001
    Target: your new client (the one you are logged on to)
    Profile: SAP_ALL
    Delete the profile parameter 'login/no_automatic_user_sapstar' when you have created the first user in the new client. Restart the system to make the changes effective.
    See: SAP note 806819.
    Best regards,
    Dolores

  • KKA2 incorrect results

    Hi,
    Based on SAP note 38070, we have configured new line IDs in our landscape. We have found that if a new WBS is created and simulated
    in KKA2, the results are correct, whereas for already existing WBS elements we get incorrect results in the KKA2 simulation. The new line IDs point to both old and new WBS elements. Is this note only relevant for new WBS elements?
    Please help, as this is a showstopper for our business.
    Kind Regards,
    Kalyan

    Here's an example of what I'm talking about.
    This first query compares a simple geometry with a second one, which is defined as almost the full geographic extent (-179, 179, -89, 89). Nearly every possible geometry will interact with this in some way. However, the result I get is 'Disjoint'.
    SELECT  SDO_GEOM.relate(MDSYS.SDO_GEOMETRY(
                2003,
                8307,
                null,
                MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
                MDSYS.SDO_ORDINATE_ARRAY(1,   80,   1,  -80,  160,  -80,  160,  80,  1,  80)),
                'DETERMINE',
                MDSYS.SDO_GEOMETRY(
                2003,
                8307,
                null,
                MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
                MDSYS.SDO_ORDINATE_ARRAY(-179,   89,   -179,  -89,  179, -89,  179,  89,  -179,  89)), '0.005')
          from DUAL;
    If I make the second geometry smaller so that it starts at 0, i.e. (0, 179, -89, 89), then I correctly get the result 'Inside'.
    SELECT  SDO_GEOM.relate(MDSYS.SDO_GEOMETRY(
                2003,
                8307,
                null,
                MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
                MDSYS.SDO_ORDINATE_ARRAY(1,   80,   1,  -80,  160,  -80,  160,  80,  1,  80)),
                'DETERMINE',
                MDSYS.SDO_GEOMETRY(
                2003,
                8307,
                null,
                MDSYS.SDO_ELEM_INFO_ARRAY(1,1003, 1),
                MDSYS.SDO_ORDINATE_ARRAY(0,   89,  0,  -89,  179, -89,  179,  89,  0,  89)), '0.005')
                from DUAL;
    It would be ideal if someone could confirm or deny this behaviour on a fully patched 10g or even 11g database.
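    One thing worth checking: for geodetic SRIDs such as 8307, Oracle Spatial places limits on how large a single polygon may be, so a quadrilateral spanning longitude -179 to 179 may simply not be a valid geometry. A validation call along these lines (a sketch, using the same tolerance as above) should show whether the second geometry is usable at all:
    SELECT SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(MDSYS.SDO_GEOMETRY(
                2003,
                8307,
                null,
                MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 1),
                MDSYS.SDO_ORDINATE_ARRAY(-179, 89, -179, -89, 179, -89, 179, 89, -179, 89)),
                0.005) AS validation_result
          from DUAL;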

  • Incorrect result set when using the isnull() function in IQ 16

    Hi team,
    We have IQ 16 on HP UX.
    When we use the isnull() function in a where clause, we get an incorrect result set if we do not include the column in the select list.
    The first select returns one row, but the second one returns an empty result set.
    select ID, dat_start, dat_end, dat_stop
    from table_test
    where ID=1105935925
    and isnull(dat_stop,dat_start) <> dat_end;

    select ID
    from table_test
    where ID=1105935925
    and isnull(dat_stop,dat_start) <> dat_end;
    It depends on the number of rows or the volume of data in the table. It is possible to use the option Revert_To_V15_Optimizer to get the correct result.
    Do you have any other ideas on how to solve it?
    Thanks Milos.

    We have tested two versions:
    Sybase IQ/16.0.0.653/131122/P/sp03/ITANIUM/HP-UXi 11.31/64bit/2013-11-22 01:49:18
    SAP IQ/16.0.0.807/140507/P/sp08/ITANIUM/HP-UXi 11.31/64bit/2014-05-07 21:11:45
    Both versions produce the same wrong result.
    We have not opened a support case for this issue because it is data-dependent and not easy to reproduce in a small example.
    Do you think we should open a support case for it?
    Miloš
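    For reference, the session-level form of the workaround mentioned above would look like this (a sketch; the option name is as given above, so check it against your IQ 16 SP level):
    -- Run the query once under the v15 optimizer to confirm the new plan is at fault
    SET TEMPORARY OPTION Revert_To_V15_Optimizer = 'On';
    select ID, dat_start, dat_end, dat_stop
    from table_test
    where ID=1105935925
    and isnull(dat_stop,dat_start) <> dat_end;
    SET TEMPORARY OPTION Revert_To_V15_Optimizer = 'Off';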

  • Incorrect texts in SAP Query Designer

    Hello,
    I have problems with incorrect text in SAP Query Designer. I work with SAP codepage 1404 and I see all queries correctly, but when I want to edit a query containing special characters of the Czech language there is a problem: Query Designer can't handle Czech diacritics.
    Can you give some advice.
    Thanks
    Petr

    Hello!
    Thanks for your quick reply.
    I'm from France, sorry for my bad English, but I hope you understand me. I'm a new consultant in BW, and as you know women are not always very comfortable with IT, so I'm going to ask you two other questions about hierarchies and saving queries in roles, if you don't mind.
    1) I want to change the position of some hierarchy nodes in the production system. Do I change them directly in RSH1 by adding a node to another level, or must I change them in the source file or the ECC system? If I change them just in RSH1, I will lose everything I have changed when the InfoPackage is executed (in the daily process chain), right?
    2) I have a query already published in a role, but I have to move it to another one. How can I do that in Query Designer, please?
    PS: I can't send you a direct message; is it possible to follow me?
    Thanks in advance, have a nice day!

  • Oracle Discoverer report pulls incorrect result when scheduled.

    Recently the database was migrated from 9.2.0.6 to 10.1.2 RAC, so the Discoverer EUL now resides on the new database.
    After the migration, a report which pulls correct results when run interactively pulls incorrect results when scheduled in Discoverer.
    This report uses SYSDATE and aggregate functions. I ran the same report simultaneously (directly in Discoverer Desktop/Plus and scheduled in Discoverer), but the data retrieved in the two cases does not match.
    Here is the query; any help is appreciated.
    SELECT /*+ FIRST_ROWS */ A.SITE_ID as E175108,B."SYSTEM DESCRIPTION" as System_Prefix,
    B."SYSTEM PREFIX" as System_Description,
    COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) < 0 THEN 1 ELSE TO_NUMBER(NULL) END) as Less_than_0_Days,
    COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) > 121 THEN 1 ELSE TO_NUMBER(NULL) END) as "0_to_14_Days",
    COUNT(DECODE(TRUNC(( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) )/31),3,( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ),to_number(NULL))) as "14_to_30_Days",
    COUNT(DECODE(TRUNC(( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) )/31),2,( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ),to_number(NULL))) as "31_to_60_Days",
    COUNT(DECODE(TRUNC(( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) )/31),1,( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ),to_number(NULL))) as "61_to_90_Days",
    COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) BETWEEN 15 AND 30 THEN 1 ELSE TO_NUMBER(NULL) END) as "91_to_120_Days",
    COUNT(CASE WHEN ( TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE) ) BETWEEN 0 AND 14 THEN 1 ELSE TO_NUMBER(NULL) END) as "120_Days_Plus",
    COUNT(TRUNC(SYSDATE)-DISCO10G.DATE_FORMAT_TEST(A.STATUS_DATE)) as Total
    FROM PSTAGE.ALL_EQUIPMENT A,
    ( SELECT A.SITE "SYSTEM PREFIX", A.DESCRIPTION "SYSTEM DESCRIPTION", A.SITE_ID, B.SITE_DESCRIPTION, A.G2B_ID
    FROM SITE_LIST A, ALL_CF_SITE_CONTROL B
    WHERE A.SITE_ID = B.SITE_ID
    ORDER BY 1, 3
    ) B
    WHERE ( (B.SITE_ID = A.SITE_ID))
    AND (A.EQUIPMENT_STATUS_CODE IN ('T','7'))
    GROUP BY A.SITE_ID,B."SYSTEM DESCRIPTION",B."SYSTEM PREFIX"
    ORDER BY B."SYSTEM DESCRIPTION" ASC ;
    Thanks!

    Hi sunil,
    Rod is referencing the NLS parameters. You asked which NLS parameters he is referring to: in this scenario they would be the date and language settings for that session. Do check out:
    SELECT * FROM NLS_SESSION_PARAMETERS;
    You also asked how you can check whether there are any differences in the NLS parameters when the report is scheduled versus run interactively: I think you should run a trace file, although I am not sure about it. It would be SYS_CONTEXT.
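    For example, you could add a small control query to the workbook and compare its output between a scheduled and an interactive run (a sketch; these are standard USERENV attributes):
    SELECT SYS_CONTEXT('USERENV', 'NLS_DATE_FORMAT') AS nls_date_format,
           SYS_CONTEXT('USERENV', 'NLS_DATE_LANGUAGE') AS nls_date_language,
           SYS_CONTEXT('USERENV', 'NLS_TERRITORY') AS nls_territory
    FROM DUAL;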
    Hope it helps you.
    Kranthi.

  • How can I create a new sales order template in SAP CRM 7.0

    Hello,
    how can I create a new sales order template in SAP CRM 7.0 (Web UI)? I want to use this sales order template in the scenario 'Mass Generation of Sales Orders via Marketing Projects'.
    Thanks for your support in advance.
    Best regards,
    anvan

    Hi,
    Did you set up this scenario? I want something similar, but I want an ERP order to be created. Do you know if that is possible? Do you have some tips?
    regards Camilla

  • Consistent hot backup possible

    Is a consistent hot backup possible?
    I would like to perform hot backups while the database is basically in a read-only state. I am currently using Oracle-recommended backups via OEM, for example:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup recovery area;
    }
    Would executing the SQL command "alter database begin backup;" before running the above RMAN script accomplish this task? Then, of course, when completed, execute "alter database end backup;".
    My basic concern is whether this type of RMAN hot backup is usable in a disaster situation, i.e. recreated on another server from a tape backup.
    I am open to any other ideas.
    Thanks for your help in advance.
    Ed - Wasilla, Alaska
    Edited by: evankrevelen on Sep 11, 2008 10:18 PM

    Thanks everyone who replied to this thread.
    Just to clarify my complete backup strategy: there are two RMAN scripts, run on a daily and a weekly basis. The daily one does pick up the archive logs. I showed the weekly one when first opening this thread. Here is the daily:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup archivelog all not backed up;
    backup backupset all not backed up since time 'SYSDATE-1';
    }
    My question now is what RMAN does in the increments. It appears to be updating the original level 0 copies of datafiles with changed blocks only. Is the new copy of the datafile now a level 0 type file?
    Here is a transcript from one of the daily backups.
    Starting recover at 11-SEP-08
    channel oem_disk_backup: starting incremental datafile backupset restore
    channel oem_disk_backup: specifying datafile copies to recover
    recovering datafile copy fno=00001 name=+DEVRVYG1/landesk/datafile/system.2576.616107783
    recovering datafile copy fno=00002 name=+DEVRVYG1/landesk/datafile/undotbs1.2574.616107865
    recovering datafile copy fno=00003 name=+DEVRVYG1/landesk/datafile/sysaux.2575.616107829
    recovering datafile copy fno=00004 name=+DEVRVYG1/landesk/datafile/users.2572.616107871
    recovering datafile copy fno=00005 name=+DEVRVYG1/landesk/datafile/landesk.2914.616107643
    channel oem_disk_backup: reading from backup piece +DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189
    channel oem_disk_backup: restored backup piece 1
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189 tag=TAG20080910T220150
    channel oem_disk_backup: restore complete, elapsed time: 00:05:16
    Finished recover at 11-SEP-08
    Starting backup at 11-SEP-08
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    input datafile fno=00005 name=+DEVG1/landesk/datafile/landesk.374.614072207
    input datafile fno=00003 name=+DEVG1/landesk/datafile/sysaux.384.614002027
    input datafile fno=00001 name=+DEVG1/landesk/datafile/system.383.614002025
    input datafile fno=00002 name=+DEVG1/landesk/datafile/undotbs1.385.614002027
    input datafile fno=00004 name=+DEVG1/landesk/datafile/users.386.614002027
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/nnndn1_tag20080911t220708_0.12999.665186835 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:02:26
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    including current control file in backupset
    including current SPFILE in backupset
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/ncsnn1_tag20080911t220708_0.2301.665186983 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:00:21
    Finished backup at 11-SEP-08
    It appears to be updating the previous copy with updated blocks thus rolling forward the datafile copy to a new level 0 copy.
    Then to restore from the backup RMAN would first use this new copy of the datafile and then apply any archivelogs to them to bring the database to the point in time the incremental backup was taken.
    Are these assumptions true?
    Thanks for your help,
    ED
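    For what it's worth, a restore from such rolled-forward image copies would look roughly like this, in the same style as the scripts above (a sketch, untested):
    run {
    # switch to the updated image copies instead of restoring from tape,
    # then apply archived logs up to the desired point in time
    switch database to copy;
    recover database;
    alter database open;
    }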

  • Rp_provide_from_last returns incorrect result

    Hi
    When issuing rp_provide_from_last for IT2001, we get incorrect results.
    rp-provide-from-last p2001 space '19000101' '99991231'
    This macro does not return the latest record.  Instead it returns the record with the highest subtype #.  (It actually returns the last record shown in a SE16N listing of PA2001).
    Has anyone seen this problem?
    We are on SAP 4.7, SP 85.
    Best regards
    Kirsten

    Please try this:
    Usage:
    Only in PNP database reports under GET PERNR, because the personnel number for which data is being read comes from field PERNR-PERNR, while the field being used is PNP-SW-AUTH-SKIPPED-RECORD.
    (RP_READ_ALL_TIME_ITY beg end)
       DATA: BEGDA LIKE P2001-BEGDA, ENDDA LIKE P2001-ENDDA.
       INFOTYPES:  0000, 0001, 0002, ...
                         2001 MODE N, 2002 MODE N, ...
         GET PERNR.
       BEGDA = '19900101'. ENDDA = '19900131'.
       RP_READ_ALL_TIME_ITY BEGDA ENDDA.
       IF PNP-SW-AUTH-SKIPPED-RECORD NE '0'.
          WRITE: / 'Authorization for time data missing'.
          WRITE: / 'for personnel number', PERNR-PERNR. REJECT.
       ENDIF.
    Remarks
    This RMAC module can be used when, for example, the time infotypes were originally defined in MODE N. This was done because the time data (from LOW-DATE to HIGH-DATE) might not all have fitted into the buffer. Now, however, they are read with shorter intervals (for example, in RPCALCx0 with payroll periods).
    -Due to the large amount of data in HR, the infotypes 2000 – 2999 should not be read when GET PERNR occurs. Therefore, these infotypes are declared with the enhancement MODE N.
    -As a result, the infotype tables under GET PERNR are not filled. The time infotype tables are filled subsequently using the macro RP_READ_ALL_TIME_ITY, but only for the time interval specified by PN-BEGDA and PN-ENDDA.
    http://help.sap.com/saphelp_45b/helpdata/en/60/d8bb88576311d189270000e8322f96/content.htm
    Best Regards

  • Incorrect results for calculation based on diff dimensions - 11.1.1.5

    Hello All,
    OBIEE gives incorrect results when I try to perform a calculation (e.g. an addition) based on 2 measures. For example:
    (Note: "->" signifies 1:M)
    RPD (physical model & BMM): dim_fe -> dim_gl -> Fact_Ledger <- Dim_param
    Fact_Ledger (agg measures) -> YTD_01, YTD_02, ..., YTD_12 (here 01, 02, ..., 12 represent the month, i.e. if "Feb" is selected in the prompt then we need to use YTD_02, and so on for the other months)
    Answers: Created a report with following columns
    Column Name : Formula
    =================
    Line Item : 'Net Profit'
    Prev Yr Act: (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013}-1 and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=100)/1000) /
    (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013}-1 and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=100)/1000)
    Curr Yr Act: (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=100)/1000) /
    (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=100)/1000)
    Curr Yr Plan: case when '@{pmonth}{Jan}'='Jan' then
    (filter("Fact Ledger"."YTD_01" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=200)/1000)/
    (filter("Fact Ledger"."YTD_01" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=200)/1000)
    when '@{pmonth}{Jan}'='Feb' then
    (filter("Fact Ledger"."YTD_02" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=200)/1000)/
    (filter("Fact Ledger"."YTD_02" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=200)/1000)
    when '@{pmonth}{Jan}'='Dec' then
    (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_Param"."PL_Line" in ( 'Item 1','Item 2','Item 3') and "Fact_Ledger"."Code"=200)/1000)/
    (filter("Fact Ledger"."YTD_12" using "Fact_Ledger"."YEAR"=@{pYear}{2013} and "Dim_FE"."Item" in ( 'L1','L2','L3') and "Fact_Ledger"."Code"=200)/1000)
    end
    The results are incorrect. Any help appreciated.
    The generated query looks like:
    (select...
    case when year=.. and pl_lin=... and code=100 then ytd_01,
    case when year=.. and pl_lin=... and code=100 then ytd_03,
    case when year=.. and pl_lin=... and code=100 then ytd_04,....,
    case when year=.. and pl_lin=... and code=200 then ytd_01,
    case when year=.. and pl_lin=... and code=200 then ytd_03,
    case when year=.. and pl_lin=... and code=200 then ytd_04,....,
    from...
    where ... year in (2013-1,2013) and pl_line('Item1,'Item2','Item3') or fe.item('l1','l2','l3') and code in (100,200)... ) D1
    (select
    case when 'Apr'='Jan' then d1.c1 when 'Apr'='Feb' then d1.c2......
    from D1
    Regards..
    Shruti

    See if this explains it better for my crosstab with page items of Vendor Number 1234.
    Vendor 1234
    Dc Nbr                      1        2        4      AAAA
    Sum Invoice Amt       1387.04   300.82   327.29   2015.15
    Sum Cost                44.86    57.43    25.54    127.83
    Sum Advanced Cost      102.44     0        0       102.44
    Sum Consolidation Cost  30.37     0        0        30.37
    Sum Allowance Amt       27.74     6.02     6.54     40.30
    Net Freight Cost       149.93    51.41    19       220.34
    Freight Percent         10.81    17.09     5.81    ****
    As stated before, Freight Percent is a calculation I created in Discoverer that looks like this:
    ( NVL(Sum Cost,0)+NVL(Sum Advanced Cost,0)+NVL(Sum Consolidation Cost,0)-NVL(Sum Allowance Amt,0) )/NVL(Sum Invoice Amt,0)*100
    Column AAAA was created in Discoverer using Sum of field and show to the right.
    What I need is for the **** to be the correct calculation for the totals in column AAAA. If I do a total for Freight Percent using the Cell Sum, I get 33.70; what I want is 10.93, which is (127.83 + 102.44 + 30.37 - 40.30) / 2015.15 * 100.
    If I use an Average Total row for Freight Percent, I get 11.24, which is 33.70 / 3 (the 3 being the number of DC numbers).
    I did start with using the detail level data to create this crosstab. Then I made a new version and used the SUM data. I seem to get the same results but am still having issues with the one **** value.
    Hopefully this explains it better.
    Thanks for the ideas so far.
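    As the numbers above show, the total cell needs a ratio of sums rather than a sum (or average) of the per-column percentages. In plain SQL against the underlying data, the grand-total calculation would look something like this (a sketch with hypothetical table and column names):
    SELECT ( SUM(NVL(cost, 0))
           + SUM(NVL(advanced_cost, 0))
           + SUM(NVL(consolidation_cost, 0))
           - SUM(NVL(allowance_amt, 0)) )
           / SUM(NVL(invoice_amt, 0)) * 100 AS freight_percent_total
    FROM freight_facts  -- hypothetical table
    WHERE vendor_number = 1234;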

  • Importing Payroll Results in SAP.

    Hello All,
    We are implementing the SAP HCM suite and have a question about importing payroll results back into SAP. We are using ADP as our payroll outsourcing partner for gross/net, and we have a business requirement to bring the payroll results back into SAP. We contacted ADP, and the response we got is that none of their SAP clients has ever done the same. We contacted SAP, and the response was that this is a consulting issue.
    We are currently exploring the possibility of using the ULK9 schema to import payroll results for every period from ADP. The question I have is:
    Have you been involved in this kind of model with ADP or any other payroll outsourcing company? If yes, how did you handle bringing back payroll results for off-cycles, reversals and voids?
    Thanks
    Saket

    Hi, this is like running two payrolls: one with your payroll partner and one in your own system. This is not a very good way of doing it, as you will never be able to match the results: you would always have to mirror the configuration, BAdIs, BAPIs, user exits, ABAP code, etc. Moreover, please check whether the result import can work for all the countries involved.
    Also, please think about issues like patch level updates: what is the agreement between you and your payroll partner?
    Other risks also need to be clearly defined. In a nutshell, this is not a very good idea, and there is hardly anyone doing it, as it is like duplicating your payroll run every time.
    If you are definitely looking for an alternative, maybe you can use dummy (custom) wage types to store your results.
    If you are looking just to retain the employee history, your payroll partner should have it, so again it is just duplication of data.
    Hope this info helps.
    Regards
    Judith

  • Where to upload test case results in SAP Solution Manager?

    Hi Experts,
    I am new to the test management module in SAP Solution Manager. I need to know where to upload test case results in SAP Solution Manager.
    Please provide screenshots on how to upload test results.
    Regards,
    Sanjana

    Hi Sanjana,
    let me provide you with some more information.
    First of all, you create an SAP Solution Manager project in transaction SOLAR_PROJECT_ADMIN. I think this should be no problem. After creating the project, make sure that the tab "Test Cases" is switched on in this project; it is the tab below "Project Standards".
    Go to transaction SOLAR02 and choose your created project. Find the tab "Test Cases" and upload a sample file. Please note that the configuration structure of this project can be configured in SOLAR01 (Business Blueprint). There are many settings you can choose in a project according to your requirements. Try to find them out! :-)
    In transaction Test Plan Management (STWB_2) you can choose all relevant test cases you uploaded in your project and generate test packages, which are mapped to your chosen testers.
    In transaction STWB_WORK you will see the tester's workbench.
    Best regards
    J.Eichner

  • LessFilter and ReflectionExtractor APIs giving incorrect results

    I am using Oracle Coherence version 3.7. We are storing DTO objects in the cache with a "modificationTime" property/instance variable of type java.util.Date. In order to fetch data from the cache, passing a java.util.Date variable as input for comparison, the LessFilter and ReflectionExtractor APIs are used. cache.entrySet(filter) returns incorrect results.
    Note: we are using the "com.tangosol.io.pof.PofWriter.writeDateTime(int arg0, Date arg1)" API to store data in the cache and "com.tangosol.io.pof.PofReader.readDate(int arg0)" to read data from the cache. Is there no readDateTime API available?
    We tested the same scenario with an updated DTO class: it now has an additional property of type long (to store milliseconds). When the long value is passed as input for comparison to the LessFilter and ReflectionExtractor APIs, correct results are retrieved.
    Ideally, a java.util.Date or the corresponding milliseconds passed as input should filter and return the same, logically correct results.
    Code:
    1) Test by Date: returns incorrect results
    public void testbyDate(final Date startDate) throws IOException {
    final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
    LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
    final Filter lessFilter = new LessFilter(extractor, startDate);
    final Set results = CACHE.entrySet(lessFilter);
    LOGGER.debug("Fetched Records:" + results.size());
    assert results.isEmpty();
    }
    2) Test by milliseconds: returns correct results
    public void testbyTime(final Long time) throws IOException {
    final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
    LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
    final Filter lessFilter = new LessFilter(extractor, time);
    final Set results = CACHE.entrySet(lessFilter);
    LOGGER.debug("Fetched Records:" + results.size());
    assert results.isEmpty();
    }

    Hi Harvy,
    Thanks for your reply. You validated it against a single object in cache using ExternalizableHelper.toBinary/ExternalizableHelper.fromBinary. But we are querying against a collection of objects in cache.
    Please have a look at below code.
    *1)* We are using TestDTO.java extending AbstractCacheDTO.java as value object for our cache.
    import java.io.IOException;
    import java.util.Date;
    import com.tangosol.io.AbstractEvolvable;
    import com.tangosol.io.pof.EvolvablePortableObject;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    /**
     * The Class AbstractCacheDTO.
     * @param <E> the element type
     * @author apanwa
     */
    public abstract class AbstractCacheDTO<E> extends AbstractEvolvable implements EvolvablePortableObject {
        /** The Constant IDENTIFIER. */
        private static final int IDENTIFIER = 0;
        /** The Constant CREATION_TIME. */
        private static final int CREATION_TIME = 1;
        /** The Constant MODIFICATION_TIME. */
        private static final int MODIFICATION_TIME = 2;
        /** The version number of the cache DTO implementation. */
        private static final int VERSION = 11662;
        /** The id. */
        private E id;
        /** The creation time. */
        private Date creationTime = new Date();
        /** The modification time. */
        private Date modificationTime;
        /**
         * Gets the id.
         * @return the id
         */
        public E getId() {
            return id;
        }
        /**
         * Sets the id.
         * @param id the new id
         */
        public void setId(final E id) {
            this.id = id;
        }
        /**
         * Gets the creation time.
         * @return the creation time
         */
        public Date getCreationTime() {
            return creationTime;
        }
        /**
         * Gets the modification time.
         * @return the modification time
         */
        public Date getModificationTime() {
            return modificationTime;
        }
        /**
         * Sets the modification time.
         * @param modificationTime the new modification time
         */
        public void setModificationTime(final Date modificationTime) {
            this.modificationTime = modificationTime;
        }
        /**
         * Read external.
         * @param reader the reader
         * @throws IOException Signals that an I/O exception has occurred.
         * @see com.tangosol.io.pof.PortableObject#readExternal(com.tangosol.io.pof.PofReader)
         */
        @Override
        public void readExternal(final PofReader reader) throws IOException {
            id = (E) reader.readObject(IDENTIFIER);
            creationTime = reader.readDate(CREATION_TIME);
            modificationTime = reader.readDate(MODIFICATION_TIME);
        }
        /**
         * Write external.
         * @param writer the writer
         * @throws IOException Signals that an I/O exception has occurred.
         * @see com.tangosol.io.pof.PortableObject#writeExternal(com.tangosol.io.pof.PofWriter)
         */
        @Override
        public void writeExternal(final PofWriter writer) throws IOException {
            writer.writeObject(IDENTIFIER, id);
            writer.writeDateTime(CREATION_TIME, creationTime);
            writer.writeDateTime(MODIFICATION_TIME, modificationTime);
        }
        @Override
        public int getImplVersion() {
            return VERSION;
        }
    }
    import java.io.IOException;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    /**
     * @author nkhatw
     */
    public class TestDTO extends AbstractCacheDTO<TestIdentifier> {
        private Long timeinMillis;
        private static final int TIME_MILLIS_ID = 3;
        @Override
        public void readExternal(final PofReader reader) throws IOException {
            super.readExternal(reader);
            timeinMillis = Long.valueOf(reader.readLong(TIME_MILLIS_ID));
        }
        @Override
        public void writeExternal(final PofWriter writer) throws IOException {
            super.writeExternal(writer);
            writer.writeLong(TIME_MILLIS_ID, timeinMillis.longValue());
        }
        /**
         * @return the timeinMillis
         */
        public Long getTimeinMillis() {
            return timeinMillis;
        }
        /**
         * @param timeinMillis the timeinMillis to set
         */
        public void setTimeinMillis(final Long timeinMillis) {
            this.timeinMillis = timeinMillis;
        }
    }
    *2)* TestIdentifier.java as the key in the cache for storing TestDTO objects.
    import java.io.IOException;
    import org.apache.commons.lang.StringUtils;
    import com.tangosol.io.AbstractEvolvable;
    import com.tangosol.io.pof.EvolvablePortableObject;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    /**
     * @author nkhatw
     */
    public class TestIdentifier extends AbstractEvolvable implements EvolvablePortableObject {
        private String recordId;
        /** The Constant recordId. */
        private static final int RECORD_ID = 0;
        /** The version number of the cache DTO implementation. */
        private static final int VERSION = 11660;
        @Override
        public void readExternal(final PofReader pofreader) throws IOException {
            recordId = pofreader.readString(RECORD_ID);
        }
        @Override
        public void writeExternal(final PofWriter pofwriter) throws IOException {
            pofwriter.writeString(RECORD_ID, recordId);
        }
        @Override
        public int getImplVersion() {
            return VERSION;
        }
        @Override
        public boolean equals(final Object object) {
            if (object instanceof TestIdentifier) {
                final TestIdentifier id = (TestIdentifier) object;
                return StringUtils.equals(recordId, id.getRecordId());
            } else {
                return false;
            }
        }
        /**
         * @see java.lang.Object#hashCode()
         */
        @Override
        public int hashCode() {
            return recordId.hashCode();
        }
        /**
         * @return the recordId
         */
        public String getRecordId() {
            return recordId;
        }
        /**
         * @param recordId the recordId to set
         */
        public void setRecordId(final String recordId) {
            this.recordId = recordId;
        }
    }
    *3) Use Case*
    We are fetching TestDTO records from the cache based on a LessFilter. However, the results returned from the cache differ depending on whether the query is made over the property "getModificationTime" of type java.util.Date or over the property "getTimeinMillis" of type Long (the milliseconds corresponding to the date). TestService.java is used for this.
    import java.io.IOException;
    import java.util.Collection;
    import java.util.Date;
    import java.util.Map;
    import java.util.Set;
    import org.apache.log4j.Logger;
    import com.ladbrokes.dtos.cache.TestDTO;
    import com.ladbrokes.dtos.cache.TestIdentifier;
    import com.cache.services.CacheService;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.ValueExtractor;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.LessFilter;
    /**
     * @author nkhatw
     */
    public class TestService implements CacheService<TestIdentifier, TestDTO, Object> {
        private static final String TEST_CACHE = "testcache";
        private static final NamedCache CACHE = CacheFactory.getCache(TEST_CACHE);
        private static final Logger LOGGER = Logger.getLogger(TestService.class);
        /**
         * Push DTO objects with a) modTime of java.util.Date type b) timeInMillis of Long type.
         * @throws IOException
         */
        public void init() throws IOException {
            for (int i = 0; i < 30; i++) {
                final TestDTO dto = new TestDTO();
                final Date modTime = new Date();
                dto.setModificationTime(modTime);
                final Long timeInMillis = Long.valueOf(System.currentTimeMillis());
                dto.setTimeinMillis(timeInMillis);
                final TestIdentifier testId = new TestIdentifier();
                testId.setRecordId(String.valueOf(i));
                dto.setId(testId);
                final CacheService testService = new TestService();
                testService.createOrUpdate(dto, null);
                LOGGER.debug("Pushed record in cache with key: " + i + " modTime: " + modTime + " Time in millis: "
                    + timeInMillis);
            }
        }
        /**
         * 1) Fetch data from the cache based on a LessFilter with args:
         * a) ValueExtractor: extracting the time property
         * b) java.util.Date value to be compared with
         * 2) Verify the extracted entry set.
         * @throws IOException
         */
        public void testbyDate(final Date startDate) throws IOException {
            final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
            LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
            final Filter lessFilter = new LessFilter(extractor, startDate);
            final Set results = CACHE.entrySet(lessFilter);
            LOGGER.debug("Fetched Records:" + results.size());
            assert results.isEmpty();
        }
        /**
         * 1) Fetch data from the cache based on a LessFilter with args:
         * a) ValueExtractor: extracting the "time in millis" property
         * b) java.lang.Long value to be compared with
         * 2) Verify the extracted entry set.
         */
        public void testbyTime(final Long time) throws IOException {
            final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
            LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
            final Filter lessFilter = new LessFilter(extractor, time);
            final Set results = CACHE.entrySet(lessFilter);
            LOGGER.debug("Fetched Records:" + results.size());
            assert results.isEmpty();
        }
        @Override
        public void createOrUpdate(final TestDTO testDTO, final Object arg1) throws IOException {
            CACHE.put(testDTO.getId(), testDTO);
        }
        @Override
        public void createOrUpdate(final Collection<TestDTO> arg0, final Object arg1) throws IOException {
            // YTODO Auto-generated method stub
        }
        @Override
        public <G> G read(final TestIdentifier arg0) throws IOException {
            // YTODO Auto-generated method stub
            return null;
        }
        @Override
        public Collection<?> read(final Map<TestIdentifier, Object> arg0) throws IOException {
            // YTODO Auto-generated method stub
            return null;
        }
        @Override
        public void remove(final TestDTO arg0) throws IOException {
            // YTODO Auto-generated method stub
        }
    }
    Use Case execution Results:
    "testbyTime" method returns correct results.
    However, "testbyDate" method gives random and incorrect results.
