Doubt about proxy implementation

Hi experts, I have a small doubt about proxy implementation.
1. If we are implementing client proxies, the flow is SAP R/3 (proxy) -> XI -> file system. In this case, where do we execute the SPROXY transaction: in SAP R/3 or on the XI server? And where do we write the report program that triggers the interface: in SAP R/3 or on the XI server?
2. If we are implementing server proxies, the flow is file -> XI -> SAP R/3 (proxy). In this case, where do we execute the SPROXY transaction: in SAP R/3 or on the XI server?
Please clarify.
Regards
giri

Sreeram,
The Integration Server and the client on which you generate the proxies should not be the same. If they are different, then yes, you can use another client on your XI box itself to generate the proxies and trigger the call to XI.
If you look at this blog by Ravi (incidentally, he is my boss as well), this is exactly what we have done as well:
/people/ravikumar.allampallam/blog/2005/03/14/abap-proxies-in-xiclient-proxy
When you say XI, you mean the client on which the Integration Server is running. XI is basically an R/3 instance with more functionality and its own Integration Engine.
Regards
Bhavesh
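
To make the flow concrete: in the client-proxy scenario above, SPROXY is run and the trigger report is written in the sending ABAP system (the R/3 business system, or - as Bhavesh notes - another client on the XI box acting as the application system), never in the Integration Server client itself. Below is a minimal sketch of such a trigger report; the proxy class ZCO_MI_ORDER_OUT, its EXECUTE_ASYNCHRONOUS method, the structure ZMT_ORDER and its field are hypothetical placeholders for whatever SPROXY generated from your outbound message interface.

REPORT z_trigger_client_proxy.

* Sketch only: call a generated outbound (client) proxy from the sending R/3 side.
* Class, structure and field names below are placeholders for the objects
* that SPROXY generates from the outbound message interface.
DATA: lo_proxy  TYPE REF TO zco_mi_order_out,
      ls_output TYPE zmt_order,
      lo_fault  TYPE REF TO cx_ai_system_fault,
      lv_text(200) TYPE c.

START-OF-SELECTION.
  TRY.
      CREATE OBJECT lo_proxy.

*     Fill the payload structure generated from the message type
      ls_output-order_number = '4711'.

*     The asynchronous call hands the message to the local Integration Engine
      CALL METHOD lo_proxy->execute_asynchronous
        EXPORTING
          output = ls_output.

*     COMMIT WORK actually releases the queued message towards XI
      COMMIT WORK.

    CATCH cx_ai_system_fault INTO lo_fault.
      lv_text = lo_fault->get_text( ).
      WRITE: / 'Proxy call failed:', lv_text.
  ENDTRY.

After the COMMIT WORK the message can be followed in SXMB_MONI on the sender and on the XI side; the receiver file adapter then writes it out to the file system.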

Similar Messages

  • Doubt about Proxies

    Hi All,
    While implementing a Proxy-to-JDBC scenario:
    Q1) Where do we need to create the proxy: in XI or in R/3?
    Q2) What should the sender service specified in the receiver determination be: the business system or a business service?
    Q3) Also, does the report that initiates the proxy need to be written in R/3 or in XI?
    Regards
    Bopanna

    Hi Bopanna,
    Q1) The proxy needs to be created in R/3.
    Q2) The sender should be a business system, as an R/3 system is the sender in your case.
    Q3) The report needs to be written in R/3.
    See these links:
    http://help.sap.com/saphelp_nw04s/helpdata/en/ba/f21a403233dd5fe10000000a155106/frameset.htm
    Settings required for ABAP Proxies,
    ABAP Proxies
    Regards,
    Divija.

  • Doubt about Bulk Collect with LIMIT

    Hi
    I have a doubt about BULK COLLECT with LIMIT: when is the COMMIT done?
    I found this example at PSOUG:
    http://psoug.org/reference/array_processing.html
    CREATE TABLE servers2 AS
    SELECT *
    FROM servers
    WHERE 1=2;
    DECLARE
      CURSOR s_cur IS
        SELECT *
          FROM servers;
      TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
      s_array fetch_array;
    BEGIN
      OPEN s_cur;
      LOOP
        FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;
        FORALL i IN 1..s_array.COUNT
        INSERT INTO servers2 VALUES s_array(i);
        EXIT WHEN s_cur%NOTFOUND;
      END LOOP;
      CLOSE s_cur;
      COMMIT;
    END;
    If my table servers has 3,000,000 records, when is the COMMIT done? Only after all records have been inserted?
    Could that overload the redo logs?
    Using 9.2.0.8.

    muttleychess wrote:
    If my table Servers have 3 000 000 records, when is done commit?
    The commit point has nothing to do with how many rows you process. It is purely business driven. Your code implements some business transaction, right? So if you commit before the whole transaction (from a business standpoint) is complete, other sessions will already see changes that are (from a business standpoint) incomplete. Also, what if the rest of the transaction (from a business standpoint) fails?
    SY.

  • Doubts about BP number in SRM and SUS

    Hello everyone,
    I have some doubts about the BP number, especially for Vendors.
    I am working on the implementation of SRM 5.0 with SUS in an extended classic scenario. We will use one server for SRM and another for SUS, and self-registration for vendors (in SUS). My questions are:
    - Can I have the same BP number in SRM and SUS, or will it be different?
    - When a vendor accesses the site to self-register in SUS, the information is sent to SRM as a prospect (via XI) and there the prospect is changed to a vendor. After that, is it necessary to send something from SRM back to SUS again (to change the prospect to a vendor)?
    - When is it necessary to replicate vendors from SRM to SUS?
    Thanks
    Iván

    Dear Ivan,
    Here is the answer to all your questions. Follow these steps for the ROS configuration.
    Please note:
    1. There is no need to have separate clients for ROS and SUS. Create two clients: one for EBP and one for (SUS+ROS).
    2. XI is not needed to transfer newly registered vendors from ROS to EBP.
    Steps to configure scenario:
    1. Make entries in SPRO --> "Define backend system" on both clients.
        You have to specify the logical systems of both clients (ROS as well as EBP).
    2. Create RFC destinations on both clients so they can communicate with each other.
    3. In the ROS client, create a service user for the supplier registration service with the roles:
        SAP_EC_BBP_CREATEUSER
        SAP_EC_BBP_CREATEVENDOR
        Grant the "S_A.SCON" profile to the user.
    4. Maintain the service user in the "Logon Data" tab of the service ros_self_reg in the ROS client.
    5. Create the purchasing and vendor organizational structures in the EBP client and maintain the necessary
        attributes. Create the vendor org structure in the ROS client.
    6. Create your ROS registration questionnaires and assign them to product categories - in the ROS client.
    7. To transfer suppliers from the registration system to the EBP/bidding system, supplier pre-screening has to be
        defined as the supplier directory in the SRM server - EBP client.
        Maintain your pre-screen catalog in IMG --> Supplier Relationship Management -> SRM Server ->
        Master Data -> Define External Web Services (Catalogs, Vendor Lists etc.).
    8. Maintain this catalog ID in the purchasing org structure under the attribute "CAT" - in the EBP client.
    9. Modify the purchaser role in the EBP client:
        Open the node for "ROS_PRESCREEN" and maintain the parameter "sap-client" with the ROS client number.
    10. Maintain organizational data under Make Settings for the Business Partners:
        Supplier Relationship Management -> Supplier Self-Services -> Master Data -> Make Settings for the Business Partners. This information is actually stored in table BBP_MARKETP_INFO.
    11. Using the Manage Business Partner node with the purchaser's login (BBPMAININT), newly registered vendors are pulled from the pre-screen catalog and the BP is created in the EBP client. If you have the SUS scenario, ensure that you maintain the "portal vendor" role here.
    I hope this clarifies all your doubts.
    Please reward points for helpful answers.
    Regards,
    Prashant

  • Doubt about unicode conversion

    Dear Friends,
    I have a doubt about the Unicode conversion. I have read (one-server strategy) that I have to export (R3load), delete the non-Unicode ECC 6.0 system, install a Unicode database and then import with R3load. My doubt is about the deletion of the non-Unicode ECC 6.0 system. Why? Can I convert to Unicode only with an export (R3load) and a subsequent import (R3load), without any deletion? Obviously I must also exchange the SAP kernel for a Unicode one. Is that correct?
    Cheers

    You cannot convert an SAP system to Unicode by just exchanging the executables from non-Unicode to Unicode ones.
    A non-Unicode SAP Oracle database typically runs with the database codepage WE8DEC or US7ASCII (the latter is out of support),
    so every string stored in the database is stored using these codepages or SAP-internal codepages.
    When converting to Unicode you also have to convert the contents of the database to Unicode. Since the Unicode implementation started at a time when Oracle did not support mixed-codepage databases (one tablespace with codepage WE8DEC, another with UTF8), in-place conversions are not possible. To keep things simple, we still do not support mixed-codepage databases.
    Therefore, you have to export the contents of your database and import them into a newly created database with a different codepage if you want to migrate to Unicode.
    regards
    Peter

  • Doubt about the sizing in the CPH

    Hi gurus!
    I have the following doubt about sizing in the CPH. I am going to implement the CCMS BI Content. I read Note 979581 (Installing and configuring the CCMS BI Content), but in the document called "IT Performance Reporting Using SAP NetWeaver Business Intelligence" there are images, which you can see at this link:
    http://www.servidor-imagenes.com/show-image.php?id=050236bd91654e16c94a559350611dff
    It describes how to calculate the sizing based on the data record size per stored metric, but specifically regarding the example of 20 systems and 17 CPH metrics: are SCM, BI, ECC and SOLMAN included in these 20 systems? Do the 17 CPH metrics refer to ratios? What does "CPH metrics" mean, and how can I find out what my CPH metrics are?
    I hope you can help me.
    Best Regards
    Ramon Sanchez

    This thread has been closed within our enterprise.

  • Doubt about @SequenceGenerator annotation (sequenceName element)

    Hello, everybody!
    I have a doubt about the @SequenceGenerator annotation. What is the sequenceName element? Is it a stored procedure that we have to create in the database, or something else? If we specify it, what do we have to create in the database? If we don't specify the sequenceName element, will the sequence be generated for us?
    Thank you in advance.
    Marcos

    B.Mansour Nizar wrote:
    no i don't want to remove the @GeneratedValue but instead just bypass it just in that case only to gain performance
    Either you bypass JPA altogether for the task and then manually update the sequences afterwards or, if you are using Hibernate, you implement a custom ID generator that can use a provided ID.
    The implementation involves extending IdentityGenerator and overriding the generate method to return super.generate if the entity ID is null and to return the supplied entity's ID otherwise. You then set that generator for that entity.

  • About proxies and IDOC

    Hi all,
    Can anyone explain, or point me to a blog on, how to merge more than two IDocs into one file?
    Also, please share blogs and good links about (ABAP) proxies.
    Cheers,
    Karthick

    Hi,
    check this links
    XI: Debug your inbound ABAP Proxy implementation  ---Debug your inbound ABAP Proxy implementation
    Choose the Right Adapter to integrate with SAP systems
    https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/1387 - Client Proxy [original link is broken]
    https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/1457 - Server Proxy [original link is broken]
    ABAP Proxy Runtime
    Programming with Client and Server Proxies
    Illustration of Multi-Mapping and Message Split using BPM in SAP Exchange Infrastructure
    Various multi-mappings and Optimizing their Implementation in Integration Processes (BPM) in XI.
    IDOCs (Multiple Types) Collection in BPM
    https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/2034 [original link is broken]
    regards
    vasu

  • Doubts about RAC infraestructure with one disk array

    Hello everybody,
    I'm writing to you because we have a doubt about the correct infrastructure to implement RAC.
    Please let me first explain the current design we are using for Oracle DB storage. Currently we are running several standalone instances on several servers, all of them connected to a SAN disk storage array. As we know this is a single point of failure, we keep redundant controlfiles, archived logs and redo logs both in the array and on the internal disks of each server, so in case the array completely fails we “just” need to restore the nightly cold backup, apply the archived and redo logs, and everything is OK. This can be done because we have standalone instances and we can accept this one hour of downtime.
    Now we want to use these servers and this array to implement a RAC solution, and we know this array is our single point of failure. We wonder if it is possible to have a multinode RAC solution (not RAC One Node) with redundant controlfiles/archived logs/redo logs on internal disks. Is it possible to have each node write full RAC controlfiles/archived logs/redo logs to internal disks and apply these files consistently when the ASM filesystem used for RAC is restored (i.e. with a softlink on an internal disk and using just one node)? Or maybe the recommended solution is to have a second array to avoid this single point of failure?
    Thanks a lot!

    cssl wrote:
    Or maybe the recommended solution is to have a second array to avoid this single point of failure?
    Correct. This is the proper solution.
    In this case you can also decide to simply use striping on both arrays, then mirror array1's data onto array2 using ASM redundancy options.
    Also keep in mind that redundancy is also needed for the connectivity. So you need at least 2 switches to connect to both arrays, and dual HBA ports on each server, with 2 fibres running, one to each switch. You will need multipath driver software on the server to deal with the multiple I/O paths to the same storage LUNs.
    Likewise you need to repeat this for your Interconnect. 2 private switches, 2 private NICs on each server that are bonded. Then connect these 2 NICs to the 2 switches, one NIC per switch.
    Also do not forget spares. Spare switches (one each for storage and Interconnect). Spare cables - fibre and whatever is used for the Interconnect.
    Bottom line - not a cheap solution to have full redundancy. What can be done is to combine the storage connection/protocol layer with the Interconnect layer and run both over the same architecture. Oracle's Database Machine and Exadata Storage Servers do this. You can run your storage protocol (e.g. SRP) and your Interconnect protocol (TCP or RDS) over the same 40Gb Infiniband infrastructure.
    Thus only 2 Infiniband switches are needed for redundancy, plus 1 spare. With each server running a dual port HCA and a cable to each of these 2 switches.

  • Doubt about  a null value assigned to a String variable

    Hi,
    I have a doubt about the behavior when assigning a null value to a String variable and then printing it. The code is the following:
    public static void main(String[] args) {
            String total = null;
            System.out.println(total);
            total = total + "one";
            System.out.println(total);
    }
    The doubt comes when I see the output. The output I get is this:
    null
    nullone
    A variable with a null value means it does not contain a reference to an object in memory, so the question is: why is "null" printed when I concatenate the total variable (which has a null value) with the string "one"?
    Is the null value converted to a string?
    Please clarify.
    Regards and thanks!
    Carlos

    "null is a keyword to inform compiler that the reference contain nothing" - No. 'null' is not a keyword, it is a literal. Beyond that the compiler doesn't care. It also has a runtime value.
    "total contains null value means it does not have memory" - No, it means it refers to nothing, as opposed to referring to an object.
    "for representation purpose it contain "null"" - No. println(String) has special behaviour if the argument is null. This is documented and has already been described above. Your handwaving about 'for representation purposes' is meaningless. The compiler and the JVM don't know the purpose of the code.
    "e.g. this keyword shows a hash value instead of memory address" - No it doesn't: it depends entirely on the actual class of the object referred to by 'this', and specifically on what its toString() method does.
    "similarly "total" maps null as a literal." - Completely meaningless. "total" doesn't 'map' anything, it is just a literal. The behaviour you describe is a property of the string concatenation operator, not of string literals.
    "I hope you can understand this." - Nobody could understand it. It is complete nonsense. The correct answer has already been given. Please read the thread before you contribute.

  • Doubt about proxies

    Hi, this is Seshagiri.
    I have a doubt about proxies. When you create an ABAP server proxy, the following objects are generated:
        1. ABAP interface
        2. Structure for the data type
        3. Structure for the message type
    What is the difference between the data type structure and the message type structure?
    And one more thing: on which engine do proxies run?

    Hi,
    To see the difference between a data type (DT) and a message type (MT):
    A message type comprises a data type that describes the structure of a message.
    More than one message interface can use the same message type. For example, an asynchronous outbound message interface and an asynchronous inbound message interface can reference the same message type, because the request message does not need to be mapped.
    - When defining a message mapping you can directly reference message types to map messages from an outbound interface to messages from a receiver interface.
    For technical reasons, a data type is not sufficient to describe the instance of a message. In XML Schema, data types are defined as abstract types that are not yet fixed to an element. You can only describe an instance of a message when you have specified a data type as an element type. Therefore, a message type defines the root element of a message.
    A message type does not define the direction of the message exchange.
    Just go through the following link:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/a8/bfc6373c8fea43bdb3541535bcbd43/frameset.htm
    Regards
    chilla..
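
    To make the server-proxy side concrete as well: the generated ABAP interface from point 1 is implemented in a class on the receiving R/3 system, and the message-type structure from point 3 is what arrives in the INPUT parameter (the data-type structure from point 2 typically appears as a component inside it). A minimal sketch of such an inbound implementation is shown below; the interface ZII_ORDER_IN, the structure ZORDER_MT, the component ORDER_MT and the function module Z_UPDATE_ORDER are hypothetical placeholders for your own generated and application objects.

    METHOD zii_order_in~execute_asynchronous.
      " Sketch of an inbound (server) proxy implementation on the R/3 side.
      " INPUT is typed with the generated message-type structure; its root
      " component (here ORDER_MT) corresponds to the message type.
      DATA ls_order TYPE zorder_mt.

      ls_order = input-order_mt.

      " Hand the payload over to the application logic (placeholder function)
      CALL FUNCTION 'Z_UPDATE_ORDER'
        EXPORTING
          is_order = ls_order.
    ENDMETHOD.

    As for the last question: ABAP proxies run on the ABAP proxy runtime, i.e. the local Integration Engine of the application system, rather than through the Adapter Engine.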

  • Doubt about Select statement.

    Hi folks!!
    I have a few doubts about SELECT statements. It may be a silly question, but the answer would be useful for me.
    What is the difference between the statements below?
    1) SELECT * FROM TABLE.
    2) SELECT SINGLE * FROM TABLE.
    3) SELECT SINGLE FROM TABLE.
    I hope I will get an answer. Thanks in advance.
    Regards
    Richie..

    Hi,
    Try this, and if possible use the SAP help - place the cursor on SELECT and press F1.
                 Types of SELECT statements:
    1.     select * from ztxlfa1 into table it.
                 A simple SELECT statement that fetches all rows of the database table into the internal table it.
       2.   select * from ztxlfa1 into table it where lifnr between 'V2' and 'V5'.
            The same, but with a WHERE condition restricting lifnr to between 'V2' and 'V5'.
      4. select * from ztxlfa1 where land1 = 'DE'. "row goes into the default table work area
      5. select lifnr land1 from ztxlfa1
            into corresponding fields of it   "notice 'table' is omitted
             where land1 = 'DE'.
              append it.
                endselect.
         Here each row goes into the work area, and you then add it to the internal table with the
            APPEND statement.
      6.   Table 13.2 contains a list of the various forms of select as it is used with internal tables and their relative efficiency. They are in descending order of most-to-least efficient.
    Table 13.2  Various Forms of SELECT when Filling an Internal Table
    Statement(s)                                   Writes To
    select into table it                                    Body
    select into corresponding fields of table it   Body
    select into it                                    Header line
    select into corresponding fields of it           Header line
    7. SELECT VBRK~VBELN
           VBRK~VKORG
           VBRK~FKDAT
           VBRK~NETWR
           VBRK~WAERK
           TVKOT~VTEXT
           T001~BUKRS
           T001~BUTXT
        INTO CORRESPONDING FIELDS OF TABLE IT_FINAL
        FROM VBRK
        INNER JOIN TVKOT ON VBRK~VKORG = TVKOT~VKORG
        INNER JOIN T001 ON VBRK~BUKRS = T001~BUKRS
        WHERE VBELN IN DOCNUM AND VBRK~FKSTO = ''
       AND VBRK~FKDAT IN DATE.
    A SELECT statement using inner joins on VBRK, TVKOT and T001, based on the given conditions.
    8. SELECT T001W~NAME1 INTO TABLE IT1_T001W
    FROM T001W INNER JOIN EKPO ON T001W~WERKS = EKPO~WERKS
    WHERE EKPO~EBELN = PURORD.
    Here a single field is selected into the table IT1_T001W, using an inner join on EKPO.
    9. SELECT BUKRS LIFNR EBELN FROM EKKO INTO CORRESPONDING FIELDS OF IT_EKKO WHERE EBELN IN P_O_NO.
         APPEND IT_EKKO.
       ENDSELECT.
       SELECT BUTXT FROM T001 INTO IT_T001 FOR ALL ENTRIES IN IT_EKKO WHERE BUKRS = IT_EKKO-BUKRS.
         APPEND IT_T001.
       ENDSELECT.
    Here I am using FOR ALL ENTRIES with the SELECT statement. Both joins and FOR ALL ENTRIES are used to fetch data on a condition, but FOR ALL ENTRIES is the preferred one here.
    10. SELECT A~VBELN B~VTEXT A~FKDAT C~BUTXT A~NETWR A~WAERK INTO TABLE ITAB
                 FROM  VBRK AS A
                 INNER JOIN TVKOT AS B ON
                 A~VKORG EQ B~VKORG
                 INNER JOIN T001 AS C ON
                 A~BUKRS EQ C~BUKRS
                 WHERE  A~VBELN IN BDOCU AND A~FKSTO EQ ' ' AND B~SPRAS EQ
                 SY-LANGU
                 AND A~FKDAT IN BDATE AND A~VBELN EQ ANY ( SELECT VBELN FROM
                VBRP WHERE VBRP~MATNR EQ ITEMS ).
        Here we are using a subquery (the SELECT in brackets) within the WHERE clause of the join.
    Thanks,
    chandu.
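
    To answer Richie's original three statements directly, a small sketch follows (SFLIGHT and the WHERE conditions are only illustrative): SELECT * reads all matching rows and therefore needs INTO TABLE or a SELECT...ENDSELECT loop, SELECT SINGLE * reads exactly one row into a work area (ideally with the full key supplied in the WHERE clause), and SELECT SINGLE without * or a field list is not valid syntax at all.

    " 1) SELECT * - reads ALL matching rows, here into an internal table
    DATA: lt_flights TYPE TABLE OF sflight,
          ls_flight  TYPE sflight.

    SELECT * FROM sflight
      INTO TABLE lt_flights
      WHERE carrid = 'LH'.

    " 2) SELECT SINGLE * - reads exactly ONE row into a work area
    SELECT SINGLE * FROM sflight
      INTO ls_flight
      WHERE carrid = 'LH'
        AND connid = '0400'
        AND fldate = '20240101'.

    " 3) SELECT SINGLE FROM sflight ... - does not compile:
    "    SINGLE always needs a column list or *.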

  • Explain SAP implementation in the yarn manufacturing industry

    Hi,
    Can anybody explain SAP implementation in the yarn manufacturing industry to me? What type of process is it: repetitive or SD?


  • Doubt about uses of OBIEE

    I have some doubts about the possible uses of OBIEE. Using OBIEE, users sometimes demand reports of an "analytical" type, that is, aggregated analysis through OBIEE's Answers, selecting data from dimension tables and measures from fact tables. That is the ordinary purpose of business intelligence tools!
    Other times, though, users demand to perform through Answers analyses of an "operating" type, that is, simple extractions of some fields belonging to dimension tables, linked to each other through joins (hence without querying fact tables). That happens because some of the tables brought into the data warehouse are not directly linked to any fact table. In this way users want to use Answers to visualize data even for this kind of extraction (or operating report).
    Is this a correct use of the tool, or is it just a "twisted" way of using it that will eventually lead to incorrect extractions? If that is the case, is it possible to use BI Publisher instead, extracting the dataset through the "SQL Query" mode in a visual manner? The problem with the latter solution, in my case, is that the users are not sufficiently skilled from a technical point of view: they would prefer to use Answers for every extraction, belonging both to the first type (aggregations) and to the second one (extractions) that I just described. Can you suggest a methodology to clarify this situation?

    Hi,
    I understand your point... but I think OBIEE doesn't allow having dimensions "on their own"; they must be joined to a fact table somehow. This way, when you run a query in Answers using fields from two dimension tables, a fact table should always be involved. When dimensions are conformed, several fact tables may be usable, and OBIEE uses the "best" one in terms of performance. However, there are some tricks you can use to make sure a particular fact table is used, like the "implicit fact column" in the presentation layer.
    So, back to your point: using OBIEE for "operational" reporting, as you call it, is a valid option in my experience, but you have to make sure that the underlying star schema supports the logic that your end users expect when they use just dimension fields.
    Regards,

  • Doubts about use of REPORTS_SERVERMAP with Forms11g HA

    Hi,
    I'm configuring a Linux 64-bit Forms/Reports 11g HA environment. The point is that I have two nodes, each one with its own Forms and Reports servers; let's say FormsA and ReportsA for the first node and FormsB and ReportsB for the second node.
    I want FormsA to be able to call reports from ReportsB, and FormsB to be able to call reports from ReportsA.
    I've been reading about REPORTS_SERVERMAP:
    http://docs.oracle.com/cd/E12839_01/bi.1111/b32121/pbr_conf003.htm#autoId5
    But I have some doubts about its use:
    1. I will not use a shared cluster file system or any kind of cache solution; I will only have my RDF files on each node. I'm wondering if just by configuring this parameter I will get the effect mentioned above?
    2. The link provided says "Using RUN_REPORT_OBJECT. If the call specifies a Reports Server cluster name instead of a Reports Server name, the REPORTS_SERVERMAP environment variable must be set in the Oracle Forms Services default.env file".
    In fact I am using RUN_REPORT_OBJECT, but
    what is the Reports Server cluster name, and where do I find that name?
    3. Is this configuration well defined:
    REPORTS_SERVERMAP=clusterReports:ReportsA;clusterReports:ReportsB
    4. In Forms applications, when using RUN_REPORT_OBJECT, can I assume that the report server name will be the cluster name specified in REPORTS_SERVERMAP?
    5. Which files should I modify: rwservlet.properties or default.env?
    Hope you can help me :)
    Regards
    Carlos

    Hi,
    1. I will not use a shared cluster file system or any kind of cache solution; I will only have my RDF files on each node. I'm wondering if just by configuring this parameter I will get the effect mentioned above?
    --> In such a case, what could go wrong is this:
    suppose RUN_REPORT_OBJECT submitted the job successfully to ReportsA,
    but the web.show_document command for getjobid failed (because ReportsA went down by this time).
    --> You will not get the output shown (even though the job was successful).
    If a shared cache is enabled, then even if ReportsA is down, another cluster member (say ReportsB)
    will respond to web.show_document.
    Point 2:
    --> Under HA it is highly recommended to use web.show_document (a servlet call) to execute reports. This helps you use all HA features at the HTTP, Web Cache or load balancer level.
    However, if there is migrated code or RUN_REPORT_OBJECT is a must, then the recommendations you see in the referenced document are a must.
    The REPORTS_SERVERMAP setting needs to be configured in the rwservlet.properties file and also in the default.env Forms configuration file, to map the Reports Server cluster name to the Reports Server running on the mid-tier to which the load balancer forwarded the report request.
    For example FormsA, ReportsA, cluster name say rep_cluster
    default.env file
    REPORTS_SERVERMAP=rep_cluster:ReportsA
    Where "rep_cluster" is the Reports Server cluster name and "ReportsA" is the name of the Reports Server running on the same machine as FormsA
    rwservlet.properties file
    <reports_servermap>rep_cluster:ReportsA</reports_servermap>
    At default.env this is not a valid entry
    REPORTS_SERVERMAP=clusterReports:ReportsA;clusterReports:ReportsB
    What is the Reports Server cluster name, and where do I find that name?
    --> This is created via Enterprise Manager on the Reports Server side.
    I would recommend referring to the following documents in the My Oracle Support repository:
         How to Setup Reports HA (High Availability - Clusters) in Reports 11g [ID 853436.1]
         REP-52251 and REP-56033 Errors When Calling Reports From Forms With RUN_REPORT_OBJECT Against a Reports Cluster in 11g [ID 1074804.1]
    Thanks
