DATA SYNC BETWEEN SU01 AND BP

Dear Experts,
Is there any report in SRM to update the email address that was changed in SU01 to the BP of the user? It is no longer possible to update it directly, due to the assignment of an authorization group in SU01, and it also involves a mass update of several users.
Thanks and Regards,
Sathya Kumar.

Hi
HRALXSYNC - synchronizes organizational units and users.
If that does not cover it, you can test the function module BBP_MP_CHANGE_CONTACT_PERSON.
regards
Muthu
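If the function module does what you need, one way to drive it for a mass update from outside the GUI is a small JCo program. A minimal JCo 3 sketch, assuming a configured destination (here called SRM_DEST) and that the FM is remote-enabled; since I don't have its exact interface at hand, the sketch only looks the FM up and lists the import parameters a mass-update wrapper would have to fill:

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoListMetaData;

public class InspectContactFm {
    public static void main(String[] args) throws JCoException {
        // "SRM_DEST" is a placeholder for your configured JCo destination
        JCoDestination dest = JCoDestinationManager.getDestination("SRM_DEST");
        JCoFunction fn = dest.getRepository().getFunction("BBP_MP_CHANGE_CONTACT_PERSON");
        if (fn == null || fn.getImportParameterList() == null) {
            throw new RuntimeException("FM not found, not remote-enabled, or has no import parameters");
        }
        // Print the import parameters so you know what a mass-update wrapper must fill
        JCoListMetaData imports = fn.getImportParameterList().getListMetaData();
        for (int i = 0; i < imports.getFieldCount(); i++) {
            System.out.println(imports.getName(i) + " : " + imports.getTypeAsString(i));
        }
    }
}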

Similar Messages

  • Data Replication Between Sqlserver and Oracle11g using materialized view.

    I have SQL Server 2005 as my source and Oracle 11g as my target. I need to populate the target daily with change data from the source.
    For that we have created a dblink between SQL Server and Oracle and replicated the table as a materialized view in Oracle.
    The problem we are getting is that the fast refresh option is not available; each day it picks up the full data from the source.
    Is there any way to use fast refresh in this scenario?
    Thanks in advance.
    Regards,
    Balaram.

    Please do not post duplicates - Data Replication Between Sqlserver and Oracle11g using materialized view.
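
    For context: fast refresh requires a materialized view log on the master table, and such a log cannot be created on a non-Oracle master reached through a heterogeneous gateway/dblink, which is why the option is unavailable here. A common workaround is to hand-roll the incremental pull on a last-modified column. A minimal JDBC sketch under that assumption; src_table, last_modified, payload and tgt_table are invented names, and it assumes auto-commit is disabled on the Oracle connection:

    import java.sql.*;

    public class IncrementalPull {
        /** Copies rows changed since the given watermark from SQL Server into Oracle. */
        static Timestamp pull(Connection sqlServer, Connection oracle, Timestamp since) throws SQLException {
            Timestamp newWatermark = since;
            String select = "SELECT id, payload, last_modified FROM dbo.src_table WHERE last_modified > ?";
            String merge = "MERGE INTO tgt_table t USING (SELECT ? AS id, ? AS payload FROM dual) s "
                         + "ON (t.id = s.id) "
                         + "WHEN MATCHED THEN UPDATE SET t.payload = s.payload "
                         + "WHEN NOT MATCHED THEN INSERT (id, payload) VALUES (s.id, s.payload)";
            try (PreparedStatement sel = sqlServer.prepareStatement(select);
                 PreparedStatement ups = oracle.prepareStatement(merge)) {
                sel.setTimestamp(1, since);
                try (ResultSet rs = sel.executeQuery()) {
                    while (rs.next()) {
                        ups.setLong(1, rs.getLong("id"));
                        ups.setString(2, rs.getString("payload"));
                        ups.executeUpdate();
                        Timestamp mod = rs.getTimestamp("last_modified");
                        if (mod.after(newWatermark)) newWatermark = mod; // advance the watermark
                    }
                }
            }
            oracle.commit();
            return newWatermark; // persist this for the next daily run
        }
    }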

  • Data streaming between server and client does not complete

    Using an ad hoc app, data streaming between server and client does not complete as it is supposed to.
    The process runs well on Solaris 5.8; however, under 5.9 we have found the character stream buffer length limitation is around 900 to 950 characters (by default we are using 3072 characters).
    Example:
    - We are transferring an HTML file, which will be displayed in the app client. With buffer=3072 the HTML is only displayed/transferred as xxxxxxxx characters, but with buffer=900 the HTML is displayed properly; in this case the only problem we have is that the file transfer eventually takes longer than usual.
    - There is another case, where we have to transfer information (data) as a stream to the client. A long data stream will not appear at all in the client.
    Any ideas why the change between 5.8 and 5.9 would cause problems?
    The current app driver we are using is compiled under Solaris 5.6. If possible we would like to use a later version, compiled under Solaris 5.9; do you think this will probably solve our problem?
    Thanks
    Paul

    Does this have anything to do with Java RMI? Or with Java at all, come to think of it?
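
    Independent of the Solaris version question: a frequent cause of exactly this symptom (transfers that work with small buffers but truncate with large ones) is code that assumes a single read() fills the whole buffer, when read() may legally return fewer bytes. If Java is involved anywhere, a copy loop along these lines is robust regardless of buffer size; this is a general sketch, not a diagnosis of your app:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public final class StreamCopy {
        /** Copies everything from in to out; correct for any buffer size. */
        public static void copy(InputStream in, OutputStream out, int bufferSize) throws IOException {
            byte[] buf = new byte[bufferSize];
            int n;
            // read() returns the number of bytes actually read, which may be
            // anything from 1 to buf.length; loop until end of stream (-1)
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n); // write exactly the bytes that were read
            }
            out.flush();
        }
    }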

  • Secure the file/data transfer between XI and any third-party system

    Hi All,
    I would like to secure the file/data transfer between XI and any third-party system with SSH at the OS level, using "Run OS Command before processing" and "OS Command after processing". Right now my XI server is installed on iSeries OS.
    With iSeries we can't call Unix commands, so I expect we need to go for AS400 (CL) programming. If we create the AS400 program, how can I call it in XI?
    If anyone has an idea, please let me know whether it will work or not.
    Thanks in advance.
    Venkat

    Hi,
    Thanks for your reply.
    I have read some blogs like /people/krishna.moorthyp/blog/2007/07/31/sftp-vs-ftps-in-sap-pi about calling a Unix shell script in XI.
    But as far as I know, on iSeries OS we cannot write such a shell script, so we need to go for an AS400 program. If we go with AS400, how do we call that program, and will it work? I am not sure, and I need some help there, please.
    Thanks,
    Venkat
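
    Regarding calling an AS400 (CL) program from outside: if a Java step is acceptable in your landscape, the IBM Toolbox for Java (JT400) can call a CL program remotely, and you could wrap such a call behind the XI OS-command step. A minimal sketch under that assumption; the host name, credentials and the MYLIB/SECCOPY program are placeholders, not anything from your system:

    import com.ibm.as400.access.AS400;
    import com.ibm.as400.access.AS400Message;
    import com.ibm.as400.access.CommandCall;

    public class CallClProgram {
        public static void main(String[] args) throws Exception {
            // Placeholders: system name, user and password
            AS400 system = new AS400("myiseries", "MYUSER", "MYPASSWORD");
            CommandCall cmd = new CommandCall(system);
            // Call the CL program that wraps the secure transfer step
            boolean ok = cmd.run("CALL PGM(MYLIB/SECCOPY)");
            for (AS400Message m : cmd.getMessageList()) {
                System.out.println(m.getID() + ": " + m.getText());
            }
            system.disconnectAllServices();
            if (!ok) {
                throw new RuntimeException("CL program call failed");
            }
        }
    }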

  • Data mismatch between 10g and 11g.

    Hi
    We recently upgraded OBIEE to 11.1.1.6.0 from 10.1.3.4.0. While testing, we found a data mismatch between 10g and 11g in the case of a few reports which include a front-end calculated column with division in it, for example ("- Paycheck"."Earnings" / COUNT(DISTINCT "- Pay"."Check Date")) / 25.
    The data is matching for the below scenarios.
    1) When the column is removed from both 10g and 11g.
    2) When the aggregation rule is set to either "Sum or Count" in both 10g and 11g.
    It would be very helpful and greatly appreciated if any workarounds or pointers to solve this issue are provided.
    Thanks

    jfedynic wrote:
    The 10g and 11.1.0.7 Databases are currently set to AL32UTF8.
    In each database there is a VARCHAR2 field used to store not AL32UTF8 text but encrypted data.
    Using the 10g client to connect to either the 10g or the 11g database works fine.
    Using the 11.1.0.7 client against either the 10g or the 11g database produces the error: ORA-29275: partial multibyte character
    What has changed?
    Was it considered a bug in 10g because it allowed this behavior, and is 11g now operating correctly?
    29275, 00000, "partial multibyte character"
    // *Cause:  The requested read operation could not complete because a partial
    //          multibyte character was found at the end of the input.
    // *Action: Ensure that the complete multibyte character is sent from the
    //          remote server and retry the operation. Or read the partial
    //          multibyte character as RAW.
    It appears to me a bug got fixed.
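
    To illustrate the "read it as RAW" action from the error text: if the column really holds encrypted bytes in a VARCHAR2, you can bypass character-set conversion by converting to RAW on the server and fetching bytes on the client. A JDBC sketch; enc_col and t are hypothetical names, and the longer-term fix would be to store such data in a RAW or BLOB column:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ReadAsRaw {
        static byte[] fetch(Connection conn, long id) throws Exception {
            // UTL_RAW.CAST_TO_RAW returns the stored bytes without any
            // character-set conversion, so no partial-multibyte check applies
            String sql = "SELECT UTL_RAW.CAST_TO_RAW(enc_col) FROM t WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getBytes(1) : null;
                }
            }
        }
    }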

  • Data sync between oracle and sql server

    Greetings Everyone,
    Your expert views are highly appreciable regarding the following.
    We at work are evaluating different solutions to achieve data synchronization between Oracle and SQL Server databases. The data sync I mention here is for live applications. We are running Oracle EBS 11i with custom applications and intend to implement custom software based on .NET and SQL Server. Now the whole research is to see updates and data changes whenever they happen between these systems.
    I googled and found Oracle GoldenGate, Microsoft SSIS, WisdomForce from Informatica...
    If you can pour in more knowledge then it's great.
    Thank You.

    Most of the work involved has to be done through scripts, and there is no effective GUI to implement OGG. However, using the commands is not very tough, and they are very intuitive.
    These are the steps, from a high level:
    1. Get the appropriate GG software for your source and target OS.
    2. Install GG on the source and target systems.
    3. Create Manager and Extract processes on the source system.
    4. Create Manager and Replicat processes on the target system.
    5. Start these processes.
    First try to achieve uni-directional replication; then bi-directional is easy. I have implemented bi-directional active-active replication using Oracle DBs as source and target. Refer to the Oracle installation and admin guides for more details.
    Here is a good article that may be handy in your case.
    http://www.oracle.com/technetwork/articles/datawarehouse/oracle-sqlserver-goldengate-460262.html
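    Back to the original question about seeing updates as they happen: as a lighter-weight alternative to a full replication product, SQL Server's built-in change tracking can drive a hand-rolled poll in the SQL Server to Oracle direction. A minimal JDBC sketch, assuming change tracking has been enabled on the database and on a table dbo.orders with primary key id (all names are placeholders):

    import java.sql.*;

    public class ChangeTrackingPoll {
        /** Prints the changes since lastSyncVersion and returns the new version to store. */
        static long pollChanges(Connection sqlServer, long lastSyncVersion) throws SQLException {
            long currentVersion;
            try (Statement st = sqlServer.createStatement();
                 ResultSet rs = st.executeQuery("SELECT CHANGE_TRACKING_CURRENT_VERSION()")) {
                rs.next();
                currentVersion = rs.getLong(1);
            }
            String changes = "SELECT ct.id, ct.SYS_CHANGE_OPERATION "
                           + "FROM CHANGETABLE(CHANGES dbo.orders, ?) AS ct";
            try (PreparedStatement ps = sqlServer.prepareStatement(changes)) {
                ps.setLong(1, lastSyncVersion);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // I = insert, U = update, D = delete: apply each to the Oracle side
                        System.out.println(rs.getString("SYS_CHANGE_OPERATION") + " id=" + rs.getLong("id"));
                    }
                }
            }
            return currentVersion; // persist and pass in as lastSyncVersion next time
        }
    }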

  • Data Reconciliation between BI and R/3 systems

    Hi Gurus,
    I want to know how to reconcile the data between BI and R/3 systems.
    Are there any easy methods to reconcile the data? I have also gone through the HOW TO GUIDE document; that document doesn't help me out.
    Regards
    Sreenivas.Y

    1) Either we can reconcile using standard R3 tables, e.g. VBAK and VBAP for sales.
    2) Or you can go for automatic reconciliation, where you create a reconciliation DataSource and extract it to BW using a virtual cube (the virtual cube extracts real-time data from R3).
    Include the original cube and this virtual cube in a MultiProvider.
    Make a report taking key figures from both InfoProviders and add a column for the difference. If the difference is 0, the BW data matches R3.

  • Report for date variances between delivery and goods receipt date

    Dear guru,
    I am looking for a standard report for date variances between the delivery date in the purchase order and the goods receipt date.
    Can you help me?
    Thanks

    Hi,
    Use report ME80FN and check the delivery schedule option on the output screen.

  • SNP planned order availability date difference between APO and ECC

    Hi,
    I have observed that the SNP planned order availability date does not match between APO and ECC. Details are as follows.
    I ran the SNP Optimizer with a bucket offset of 0.5. After publishing the optimizer-created planned orders to ECC, only the start date matches.
    Example:
    I am using PDS as a source of supply.
    Fixed production activity in the SNP PDS is 1 day.
    GR processing time: 3 days.
    After running the optimizer, a planned order is created with the dates explained below.
    Start date/time: 09.05.2011 00:00:00
    End date/time: 12.05.2011 23:59:59
    Availability date: 16.05.2011 00:00:00
    Because the bucket offset is defined as 0.5, the optimizer planned order availability is the start of the next Monday.
    After publishing this planned order to ECC, the dates on the planned order are as follows.
    Start date: 09.05.2011
    End date: 09.05.2011
    Availability date: 12.05.2011
    I have not maintained any scheduling margin key in ECC. Also, if I don't define the GR processing time, the planned dates between APO and ECC always match. Can anyone explain the impact of the GR time on the availability date?
    Regards,
    Venkat

    Hi Venkadesh,
    What's "state stamp"? Do you mean different time zone?
    note : 645597  mentioned by Nandha is very helpful:
    In standard, CCR will use duedate - "the available date of the output product".
    Nandha's words "In SAP APO, if the receipt date of the primary product deviates from the
    end date of the last activity of the order, the receipt date
    always identifies this as inconsistent. You cannot rectify
    inconsistencies of this type by using CCR."
    I guess in your PDS or PPM, the output product is not assigned to the End of the last activity. Someone changed it?
    Please CIF the PDS or PPM again.
    If you really want to apply a note, please use note 815509 as you're using planned order,
    and system will use order end  date in CCR instead.
    GR time is always considered. BR/Tiemin
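
    For reference, the example dates are consistent with the following reading (my own interpretation of the numbers, not taken from a note):
    APO: order end 12.05.2011 (Thursday) + 3 days GR time = 15.05.2011 (Sunday); the 0.5 bucket offset then shifts availability to the start of the next bucket, Monday 16.05.2011 00:00:00.
    ECC: with no in-house production time and no scheduling margin key, ECC reschedules to start = end = 09.05.2011 and adds the GR time from there: 09.05.2011 + 3 days = 12.05.2011.
    So the GR time is added on both sides, but from different end dates, which is why only the start date survives publishing.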

  • Functional and data differences between W_GL_BALANCE_F and W_GL_OTHER_F

    Hi:
    Can someone explain what the functional and data-source differences are between W_GL_BALANCE_F and W_GL_OTHER_F? Both seem to group by GROUP_ACCOUNT_NUM.
    Thanks.

    That is not possible; all transactions in GL Other will end up in GL Balance.
    The two tables essentially hold the same data, but at different grain. Two main differences:
    - GL Other holds individual journal transactions; GL Balance has them summarized to the account level.
    - GL Other is truly additive, since it is just individual journal transactions, i.e. you can sum() any number of transactions and won't double count a transaction; GL Balance is a monthly snapshot table, i.e. it provides the account balance for all accounts at every month end, so you can never add two snapshots.
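
    A concrete illustration of the grain difference, as two queries you might issue (column names like TXN_AMT, BALANCE_AMT, POSTED_ON_DT and PERIOD_END_DT are invented for the sketch; check your repository for the real ones):

    public class GlGrainExample {
        // GL Other holds individual journal lines: additive, so SUM over any range is safe
        static final String GL_OTHER_TOTAL =
            "SELECT SUM(f.TXN_AMT) FROM W_GL_OTHER_F f " +
            "WHERE f.GROUP_ACCOUNT_NUM = ? AND f.POSTED_ON_DT BETWEEN ? AND ?";

        // GL Balance is a month-end snapshot: pick exactly ONE period end,
        // never SUM balances across periods or you double count
        static final String GL_BALANCE_AT =
            "SELECT f.BALANCE_AMT FROM W_GL_BALANCE_F f " +
            "WHERE f.GROUP_ACCOUNT_NUM = ? AND f.PERIOD_END_DT = ?";
    }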

  • Data mapping between sybase and xml

    I want to do a data mapping between a Sybase relational result set and XML.
    I am using the function ForXmlTree for this purpose. I have the entire documentation about its syntax and usage, but I need to know what support Java has for it and what files need to be installed.
    This is an example of using the function:
    java jcs.xmlutil.ForXmlTree -i forxmltree-spec [-o output-script-file] \
    [-x output-document-file -S server-name]
    Does anyone know where I can find the jcs.xmlutil package? If so, let me know.
    Thanks in advance
    Sandeep

    PRPS-PSPNR = AFVU-PROJN.
    You can also use this function module:
    BAPI_PROJECT_GETINFO

  • Most efficient data transfer between RT and FPGA

    This post is related to THIS post about DMA overhead.
    I am currently investigating the most efficient way to transfer a set of variables to an FPGA target for our application.  We have been using DMA FIFOs for communication in both directions (to and from the FPGA), but I have recently been questioning whether this is the most efficient approach.
    Our application must communicate several parameters (around 120 different variables in total) to the FPGA.  Approximately 16 of these are critical, meaning that they must be sent every iteration of our RT control loop.  The others are also important but can be sent at a slightly slower rate without jeopardising the integrity of our system.  Until now we have sent these 16 critical parameters plus ONE non-critical parameter over a DMA to the FPGA card.  Each 32-bit value sent incorporates an ID which allows the FPGA to demultiplex to the appropriate global variables on the FPGA.  Thus over time a complete set of parameters is sent at approx. 200 Hz (we run a 20 kHz control loop on the RT system).  The DMA transfers are currently a relatively large factor limiting the execution speed of our RT loop.  Of the 50 us available per time slot running at 20 kHz, approximately 12-20 us are the DMA transfers to and from the FPGA target.  Our FPGA loop is running at 8 MHz.
    According to NI, the most efficient way to transfer data to an FPGA target is via DMA.  While this may in general be true, I have found that for SMALL amounts of data, DMA is not terribly efficient in terms of speed.  Below is a screenshot of a benchmark program I have been using to test the efficiency of different types of transfer to the FPGA.  In the test I create a 32 MB data set (except for the FXP values, which are only present for comparison and have no pertinence to this issue at the moment) which is sent to the FPGA over DMA in differing sized blocks (with the number of DMA writes times the array size being constant).  We thus move from a single really large DMA transfer to a multitude of extremely small transfers and monitor the time taken for each mode and data type.  The FPGA sends a response to the DMA transfers so that we can be sure, when reading the response DMA, that ALL of the data has actually arrived on the FPGA target and is not simply buffered by the system.
    We see that the minimum round time for the DMA write and subsequent DMA read for confirmation is approximately 30 us.  When sending less than 800 Bytes, this time is essentially constant per packet.  Only when we start sending more than 800 Bytes at a time do we see an increase in the time taken per packet.  A packet of 1 Byte and a packet of 800 Bytes take approximately the SAME time to transfer.  Our application sends 64 Bytes of critical information to the FPGA target each time, meaning that we are clearly in the "less efficient" region of DMA transfers.
    If we compare the times taken when communicating over FP controls, we see that irrespective of how many controls we write at a time, the overall throughput is constant, with a timing of 2.7 us for 80 Bytes.  For a small dedicated set of parameters, the use of front panel controls seems to be significantly faster than sending per DMA.  Once we need to send more than 800 Bytes, DMA rapidly becomes more efficient.

    So to continue:
    For small data sets the use of FP controls may be faster than DMAs.  OK.  But we're always told that each and every FP control takes up resources, so how much more expensive is the version with FP controls than the DMA?
    According to the resource usage guide for the card I'm using (HERE) the following is true:
    DMA (1023 Elements, I32, no Arbitration) : 604 Flip-Flops 733 LUT 1 Block RAM
    1x I32 FP Control: 52 Flip-Flops 32 LUTs 0 Block RAM
    So the comparison would seem to yield the following result (for 16 elements):
    DMA : 604 Flip-Flops, 733 LUTs, 1 Block RAM
    FP : 16 x 52 = 832 Flip-Flops, 16 x 32 = 512 LUTs, 0 Block RAM
    We require more flip-flops, fewer LUTs and no Block RAM.  It's a swings-and-roundabouts scenario.  Depending on which resources are actually limited on the target, one version or the other may be preferred.
    However, upon thinking further I realised something else.  When we use the DMA, it is purely a communications channel.  Upon arrival, we unpack the values and store them in global variables in order to make the values available within the FPGA program.  We also multiplex other values into the DMA, so we can't simply arrange the code to be fed directly from the DMA, which would negate the need for the globals at all.  The FP controls, however, ARE already persistent data storage, and assuming we pass the values along a wire into subVIs, we don't need additional globals in this scenario.  So the burning question is "How expensive are globals?".  The PDF linked above does not explicitly mention the difference in cost between FP controls and globals, so I'll have to assume they're similar.  This of course massively changes the conclusion arrived at earlier.
    The comparison now becomes:
    DMA + Globals : 604 + 16 x 52 = 1436 Flip-Flops, 733 + 16 x 32 = 1245 LUTs, 1 Block RAM
    FP : 832 Flip-Flops, 512 LUTs, 0 Block RAM
    This seems very surprising to me.  I'm suspicious of my own conclusion here.  Can someone with more knowledge of the resource differences between globals and FP controls weigh in?  If this is really the case, we need to re-think our approach to communications between RT and FPGA, most likely towards a hybrid approach.
    Shane.

  • End date difference between Outlook and mobile

    When I use Outlook to create an all-day event from 20 June to 22 June, then use Nokia PC Suite to synchronise with my mobile, the mobile shows a memo in the calendar from 20 June to 21 June :-( This "one day less in the Nokia memo" stays when I edit the dates either on the mobile or in Outlook and synchronise again.
    When I create a memo on the mobile, then synchronise it with Outlook, the problem does not appear. Also, changing dates in Outlook of an event originally created on the mobile works perfectly.
    The problem also does not appear if "all day event" is not checked in Outlook.
    I use
    Windows XP build 2600 Service Pack 1
    Microsoft Outlook 2002 (10.5109.4219) SP-2
    Nokia PC Suite Version 6.80.22
    Nokia 9103 V 04.61 08-02-06 RM-161
    An IrDA connection

    When I have "31st December 2006" in the database, the
    webapp displays "30th December 2006" and so on.
    I don't understand this one-day difference?????
    Does anyone has some clue of why I get this?Because there is a lot of code between 'database' and 'webapp displays'.
    That of course also assumes that you really do have the date that you think in the database. (Not knowing db2 I can only note that most other databases do not store dates. They store timestampes, which is different.)

  • Date difference between Java and db2

    Hello,
    I am quite puzzled. I have the following environment:
    -iseries DB2
    -Jdk 1.5
    -Hibernate 3 (jpa)
    -Tomcat 5.5
    When I have "31st December 2006" in the database, the webapp displays "30th December 2006" and so on.
    I don't understand this one-day difference?????
    Does anyone has some clue of why I get this?
    Thanks in advance,
    Julien.
    PS: The fields are typed java.util.Date

    When I have "31st December 2006" in the database, the
    webapp displays "30th December 2006" and so on.
    I don't understand this one-day difference?????
    Does anyone has some clue of why I get this?Because there is a lot of code between 'database' and 'webapp displays'.
    That of course also assumes that you really do have the date that you think in the database. (Not knowing db2 I can only note that most other databases do not store dates. They store timestampes, which is different.)
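
    The usual mechanics behind the one-day shift: a DB2 DATE has no time zone, but java.util.Date is an absolute instant, so the driver materializes 2006-12-31 00:00 in one zone and the web layer formats it in another; if the display zone is west of the materialization zone, you see 2006-12-30. One way to pin this down in plain JDBC (Hibernate sits on top of the same calls) is to force a single calendar on both sides; MY_TABLE and MY_DATE are placeholder names:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.text.SimpleDateFormat;
    import java.util.Calendar;
    import java.util.TimeZone;

    public class DatePinning {
        static void show(Connection conn) throws Exception {
            // Materialize the DATE column in one explicit zone...
            Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
            try (PreparedStatement ps = conn.prepareStatement("SELECT MY_DATE FROM MY_TABLE");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    java.sql.Date d = rs.getDate(1, utc);
                    // ...and format it in the SAME zone, so the day cannot shift
                    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
                    fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
                    System.out.println(fmt.format(d));
                }
            }
        }
    }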

  • Data Sync between ORACLE and SQLServer

    Hi,
    I would like to hear the possible options for "bi-directional data sync" between Oracle 10g (Enterprise Edition Release 10.2.0.4.0) and SQL Server 7.0 (7.00 - 7.00.961 Standard Edition on Windows NT 5.0).
    Please let me know the available tools or any other add-ons.
    thanks and regards,
    Suman.S

    Are you looking for transactional replication between Oracle and SQL Server? Take a look at WisdomForce DatabaseSync: http://www.wisdomforce.com/products-DatabaseSync.html
    It can perform real-time change data capture of transactions from the redo log files and apply them to Oracle or SQL Server.
