SRM supplier user synchronization between SUS and productive client

Dear Sirs, in our SRM environment the contact persons (and the related system users) created by suppliers in the auto-registration procedure should be synchronized between SUS and the productive client (in our implementation, clients 150 and 330).
This synchronization mechanism should also cover the lock/unlock status of the user and the password reset, but in both cases it does not work (a locked user, manually unlocked in client 150 via SU01, is locked again in client 330, and the same happens the other way around from client 330).
Could you suggest a solution or some checkpoints?
Could you provide some links for the configuration of this mechanism? (We have only found the SPRO node "Maintain Systems for Synchronization of User Data".)
I am not sure whether XI is involved or not.
Best regards,
Riccardo Galli

Hi,
Maintain Systems for Synchronization of User Data
In this IMG activity, you specify the RFC destinations to which user data from SUS User Management is replicated.
In the Logical System field, enter the backend destination with which the user data should be synchronized when it is entered in SUS. Based on this information, the system determines the relevant RFC destination automatically.
Specify the function modules for creating, changing, or deleting a user in the external system for each logical system:
Function Module for Creating User: Function module that is called to create a user in the external system
Function Module for Changing User: Function module that is called to change the user in the external system
Function Module for Deleting User: Function module that is called to delete the user in the external system
If this data exists, you can set the indicator Use Purchasing BP ID to use the business partner ID from the procurement system instead of the business partner ID from SUS.
The settings maintained here are the external system and the roles in the external system.
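For illustration only, an entry in this IMG activity could look like the sketch below; the logical system name and the function module names are placeholders, not delivered objects, so use the values that apply to your own landscape and release:
Logical System: ONECLNT330 (the backend/procurement client)
Function Module for Creating User: Z_SUS_USER_CREATE (placeholder)
Function Module for Changing User: Z_SUS_USER_CHANGE (placeholder)
Function Module for Deleting User: Z_SUS_USER_DELETE (placeholder)
Use Purchasing BP ID: set only if the business partner ID from the procurement system should be used instead of the one from SUS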
br
Muthu

Similar Messages

  • User base Synchronization between SAP and MS Active Directory Server

    Dear all!
    I'm using Web AS 6.20 ABAP and MS Active Directory Server based on Win 2003 Server.
    I successfully implemented the synchronization of user data between SAP and the ADS.
    My question: is there a way to customize the users on the Active Directory Server with regard to their SAP authorizations (roles, authorization objects, etc.)?
    Currently I don't have a clue how to do this.
    Regards,
    Christoph

    Have you searched SDN for "Active Directory"? That turns up a number of results. I think your expectation might be backwards, though: it's not about how ADS exposes SAP-specific data, but about how SAP uses ADS to store SAP-specific data. My understanding (from quite some time ago, so I am fuzzy on this) is that SAP can use ADS in much the same way it can use LDAP, as an external user store.
    The Security Newsletter from November 2004 [https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/documents/a1-8-4/sap security newsletter november 2004.pdf] mentions that a webinar on this exact topic was hosted on SDN; unfortunately I was unable to find a direct link.
    Regards,
    Marc g

  • Users mapping between EP and ABAP system

    Hello
    I'd like to ask for some guidance in my quest
    The current situation looks like this:
    I've configured the UME in AS Java to work with LDAP as a read-only data source. Then I configured SPNego to get SSO running; it works, and users from MS AD can log into the portal.
    Now I have an application in WD which authorizes via EP/AD, and it works fine.
    The next step is user mapping between AD and the ABAP backend (which serves some BAPIs for the WD app).
    I've found a bunch of help pages, starting from
    http://help.sap.com/saphelp_nwce711/helpdata/en/0b/d82c4142aef623e10000000a155106/frameset.htm
    but somehow it's quite complicated to achieve this mapping. I've tried to set the RFC destination's logon type to user mapping, but without success.
    Can anyone point me to a clearer example or give the path to configure this scenario? Is there a way to configure this with NWA, or is some XML file editing required?
    Any help will be appreciated.
    BTW: the whole environment is on version 7.11.
    Best regards
    Maciej

    There is no equivalent to SPNEGO on the ABAP side.
    If your goal is to propagate the user, then possible options are:
    -> Wait for SAML 2.0 or invest now in a SAML 1.0 provider.
    -> Use the same Kerberos ticket for the EP as what your ABAP system will accept: route = SNC and 3rd-party libraries.
    -> Issue SAP logon tickets for the ABAP system from the EP, and use these in your WDA.
    Another option is to expose the service with saved logon data in the ICF. If the service is just a wrapper for the BAPI, then you can also consider using trusted RFC between the service and the backend, but this might not be acceptable for your service.
    I have only done experimental stuff with this and some of the above is not released yet. Also consider the consequences, even if it "does work"...
    Cheers,
    Julius

  • Mapping Design  - SOAP body content needs to be different between test and production

    Hello,
    We are integrating with a 3rd-party SOAP receiver who uses the same web service URLs for test and production.
    So, to differentiate, they exposed two web services which do the same thing but have different root and payload node names, along with different account details.
    For example, for production our SOAP XML must follow pattern like:
    <Envelope>
    <Body>
    <appRequest>
    <userID>produser</userID><password>prodpwd</password>
    <appPayload>
    <?xml>
    blah blah this XML is the same between test and production
    </xml>
    </appPayload>
    etc
    But for their testing we must use:
    <Envelope>
    <Body>
    <appRequestTest>
    <userID>testuser</userID><password>testpwd</password>
    <appPayloadTest>
    <?xml>
    blah blah this XML is the same between test and production
    </xml>
    </appPayloadTest>
    etc
    So I'm trying to think of a good way to handle this difference in one set of mappings that we can use on our three PI platforms (Dev / Test / Prod).
    Since these differences are in the SOAP body, does this need to be handled in the mapping, or is there a way to handle it in the adapter configuration, which is naturally different between our environments (we would like to keep the mapping the same)?
    What is a smart way to handle this scenario?
    Many thanks,
    Aaron

    I second Artem when he states that this is a bad design decision on the caller's side.
    However, that is not going to help you in the current situation, right?
    The problem you are facing is that, because of this poor design, the message does not have a common root node whose occurrence you could use to handle both cases. Let me explain further.
    You would be fine if the prod message looked like this:
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <soapenv:Header/>
    <soapenv:Body>
      <appData>
       <appRequest>
       </appRequest>
      </appData>
    </soapenv:Body>
    </soapenv:Envelope>
    and the test message looked like this:
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <soapenv:Header/>
    <soapenv:Body>
      <appData>
       <appRequestTest>
       </appRequestTest>
      </appData>
    </soapenv:Body>
    </soapenv:Envelope>
    --> Then you would have been able to specify the occurrence of <appRequest> and <appRequestTest> as 0..1.
    So I think you have (besides what Artem already pointed out) two other options:
    1. Activate "Do Not Use SOAP Envelope" on the sender SOAP channel and then design the data types like the above.
    2. Use the plain HTTP adapter instead of the SOAP adapter and design the data types like the above.
    Hope I didn't miss something crucial :-)
    Cheers
    Jens

  • Basic difference between 000 and 001 clients

    What is the basic difference between clients 000 and 001?
    Why do we use client 000 as the source client for a client copy?

    Hi Kumar
    Client 000 is the SAP reference client: it is delivered with every installation and is maintained by SAP during upgrades and Support Packages, which is why it is normally used as the source client for client copies.
    Client 001 is essentially a copy of client 000 created at installation time; it is not maintained by SAP afterwards and does not exist in every installation.
    Regards
    Shashank

  • Password synchronization between OID and AD - 10.1.2

    Hi,
    I've some questions about the following issue:
    I've tried to set up password synchronization between OID 10.1.2 and Active Directory, with the intent of exporting LDAP users from OID to AD.
    Well, the bootstrap went fine, but when I tried to activate the export of passwords in the activexp.map configuration file,
    I got this:
    *Writer Thread - 0 - [LDAP: error code 53 - 0000001F: SvcErr: DSID-031A0FC0, problem 5003  (WILL_NOT_PERFORM), data 0*
    for each entry I tried to export.
    I opened an SR on Metalink and received the following answer:
    _"  As shown by the synchronization profile, currently you have a mapping for the password from OID to AD._
      _userpassword: : :person:unicodepwd: :person:_ 
      _According to the documentation, password synchronization requires the directories to be configured for SSL mode:_
        _http://download-uk.oracle.com/docs/cd/B14099_12/idmanage.1012/b14085/odip_actdir003.htm#CHDEFIED_
    _18.3.2.8 Synchronizing Passwords_
      _You can synchronize Oracle Internet Directory passwords with Active Directory._
       _You can also make passwords stored in Microsoft Active Directory available in Oracle Internet Directory._  
       _Password synchronization is possible only when the directories run in SSL mode 2, that is, server-only authentication."_
    Is the SSL setup the only way to achieve this, or is there an alternative?
    Thanks

    Yes. It needs to be in SSL.
    http://download-uk.oracle.com/docs/cd/B14099_12/idmanage.1012/b14085/odip_actdir003.htm#CHDCJHHB
    Some excerpts:
    Active Directory Connector uses SSL to secure the synchronization process. Whether or not you synchronize in the SSL mode depends on your deployment requirements. For example, synchronizing public data does not require SSL, but synchronizing sensitive information such as passwords does. To synchronize password changes between Oracle Internet Directory and Microsoft Active Directory, you must use SSL mode with server-only authentication, that is, SSL Mode 2.
    -shetty2k

  • UCM - How to set up synchronization between DR and Primary site

    Hi all.
    As mentioned in the title, we have a primary UCM site and a clean DR site. I want to ensure that end users are able to work with the DR site for a short time when the primary site is unavailable. To make the DR site able to serve when the primary is down, we can do:
    - set up an auto-export archive on the primary site
    - target the destination archive on the DR site
    - auto-transfer from the primary to the DR site
    - for the data in the database, use GoldenGate to sync the primary and DR sites
    So, with these settings, I can ensure that the DR site is ready to run when the primary is down. But if the primary takes a long time to be recovered, the DR site will have many new contents. How do I transfer them back to the primary site when the primary comes back? In other words, how do I synchronize contents (vault and native files) between the new primary (old DR) and the new DR (old primary) site?
    Thank you for your attention.
    Sorry for my bad English.
    Cuong Pham

    Hi Cuong (and guys),
    I'm afraid the issue is not that simple. In fact, I think that the Archiver could be used for DR only by customers who have little data and few changes. Why do I think so?
    a) (Understanding System Migration and Archiving - 11g Release 1 (11.1.1)) "Archiver: A Java applet for transferring and reorganizing Content Server files and information." This means that you will use a Java applet to export and import your data. With a lot of items (you will need to transfer all the new and updated items!), or large items, it will take time (your DR site will always be a "few minutes late"). Besides, Archiver transfers are based on batches - I don't think you can do continuous archiving - and they will have an impact on performance.
    b) Furthermore, (Exporting Data in Archives - 11g Release 1 (11.1.1)) "You can export revisions that are in the following status: RELEASED, DONE, EXPIRED, and GENWWW. You cannot export revisions that are in an active workflow (REVIEW, EDIT, or PENDING status) or that are DELETED." This means that the Archiver cannot be used for all your items.
    Therefore, together with FMW DR Guide (Recommendations for Fusion Middleware Components) I think other techniques should be considered:
    - Real Application Clusters (RAC), WebLogic clustering, cluster-ware file system: the first, almost error-free, and relatively cheap option is having your DR site as additional nodes in the DB and MW clusters. If any of your nodes goes down, the other(s) will still serve your customers (no extra work needed); plus, you can benefit from the "united power" of multiple nodes. RAC is also available in Oracle DB Standard Edition (for a max. 2-node DB cluster). The only disadvantage of this configuration is that it is not available for geo-clustering (the distance between RAC nodes must be at most a few hundred meters), so it does not cover DR scenarios like "the location goes down" (e.g. due to networking issues).
    - Data Guard and distributed file system: the option mentioned in the guide is actually this one. It is based on Data Guard, a free option of the Oracle Database Enterprise Edition, which can run in both asynchronous (a committed transaction on the primary site is immediately transferred to the DR site) and synchronous (a transaction is not committed on the primary until processed by the DR site - if sites are far, or a lot of data is being sent, this can take quite long) modes. So, if you store your content in the database the Data Guard can resolve a lot. Unfortunately, not everything - the guide also mentions that some artifacts (that change!) are also stored on the file system (again, workflow updates, etc), so you have to use file system sync techniques to send those updates. In theory, you could use file system to send also updates in the database, which is nothing but a file(s) (in this case you will need the Partitioning option to split your database into smaller files), but db guys hate this way since it transfers also inconsistencies, so you could end up with an inconsistent database in the DR site, too.
    This option will require some administrative tasks - you will have to resolve inconsistencies resulting from DG/file system sync, you will need to redirect your users to the DR site, and re-configure the DG to make primary site from your DR one. Note that once your original primary site is up again, you can use DG to transfer (again, immediately) changes done in the meantime.
    As you can see, there is no absolute solution, so you need to evaluate your options, esp. with regards to your needs.
    Jiri

  • Synchronization between B1 and third party software

    Hi Experts,
    Is it possible to synchronize between SAP B1 8.8 and third-party software? Here the third-party software is 'LIBSYS', which is used for library purposes, so we need to synchronize some data between B1 and LIBSYS. If I have to install any software for the synchronization, please suggest the best one.
    With Regards,
    Kambadasan.v

    Hi Kambadasan,
    It is extremely difficult or impossible to do exact synchronization between any two independent systems. I believe that the library system has too many unique processes that are not covered by B1.
    Thanks,
    Gordon

  • Deactivate integration between SUS and ERP-MM

    Dear Expert,
    I have a requirement from the user that all supplier invoices created in SUS must have blocked/parked status in ERP-MM. I notice that there is some ABAP effort here in order to facilitate the requirement. Prior to doing that, the automatic invoice verification in ERP-MM must be disconnected from SUS; I believe somebody has experience with this. I would appreciate it if you could share how to deactivate that setting.
    Best regards,
    Mahnansa

    Hi Mahnansa,
    You can check Note 501524, it may be helpful.
    Best regards
    Alex

  • Synchronizing user data between Laptop and Desktop

    We have a MacBook and an iMac. While my kids and wife went to visit the grandparents, I lent them my MacBook, used SuperDuper to create a copy of my laptop's drive and booted it on the iMac (OK, that is pretty cool!).
    All worked very well, and now that they are returning I need to merge the information on the MacBook itself and the SuperDuper drive I have been using on the iMac.
    There are multiple user accounts on each, and all that was really added on the MacBook would be photos, movies and iTunes music on 3 of the accounts.
    Is there an easy way to sync user accounts between two computers? I am quite familiar with .Mac and syncing what I would call preferences. It cannot handle stuff like syncing iTunes, iPhoto or iMovie, though.
    I appreciate any suggestions.
    Jim

    Well ... here is what I ended up doing. Hope it helps someone.
    Once the laptop came home:
    1. I made a SuperDuper image just in case (to another FireWire drive partition).
    2. Then I logged into the user accounts on each and used Backup to back up Home, iLife, etc. for each user on the laptop who had logged in while they were gone (I asked them).
    3. Then I used SuperDuper to overwrite the hard drive on the laptop using the SuperDuper image on the FireWire drive.
    4. Then I logged into each account on the laptop where I had backed up data and restored it.
    It took a little while, but everyone is happy and has their data.

  • Synchronization between datagrid and chart Item

    Hi
    In my application, I have a data grid and a corresponding bar chart. There is a toggle button through which I can switch between the chart and the data grid.
    Now I want to synchronize both: for example, if I select any 3 rows in the data grid then it should also select the corresponding 3 bars on the chart.
    Can anybody help me with that?
    Thanks
    smonika15

    Hi,
    You need a combo box item renderer, something like the following.
    In the objects that populate the data provider of the data grid, add 2 fields:
    listOfFields & selectedField.
    <mx:HBox horizontalScrollPolicy="off"
             verticalScrollPolicy="off"
             xmlns:mx="http://www.adobe.com/2006/mxml">
        <mx:Script>
            <![CDATA[
                public var itemSelected:Object;
            ]]>
        </mx:Script>
        <mx:ComboBox id="combo"
            dataProvider="{data.listOfFields}"
            selectedItem="{data.selectedField}"
            change="itemSelected=combo.selectedItem;"
            updateComplete="itemSelected=combo.selectedItem;">
        </mx:ComboBox>
    </mx:HBox>
    Now, loop through the list of objects that you get from the
    back end and keep setting the 2 new properties (listOfFields &
    selectedField). To set the value of selectedField, you need to
    loop through listOfFields to match the fieldId.
    Hope that helps,
    Cheree

  • Data Synchronization between Planning and HPCM

    Hello,
    How is it possible to synchronize the data between HPCM and Planning?
    I have 12 dimensions in HPCM and only 8 dimensions in Planning.
    The 12 dimensions in HPCM are the 8 Planning-like dimensions + 2 standard HPCM dimensions + 2 staging dimensions.
    I want to move the data from HPCM to Planning (both on 11.1.1.3).
    Many thanks in advance,
    Whitebaer

    Are you re-initialising your Source System after adding new members to EBS?
    Regards,
    Gavin

  • Synchronization between PCI6722 and PCI6602

    I plan to synchronize a PCI-6602 counter and a PCI-6722 analog output device. The 6602 uses the 6722's ao/SampleClock as its external clock and is arm-start triggered by the 6722's ao/StartTrigger. The master device is the 6722, referred to as Dev1, and the slave device is the 6602, referred to as Dev2. A RTSI line is used to connect the two devices correctly.
    I use the C API to write my program, and my code is as follows:
    //config 6722 analog out task
    1. DAQmxCreateTask("NI6672", &hAOTask);
    2. DAQmxCreateAOVoltageChan(hAOTask, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, "");
    3. DAQmxCfgSampClkTiming(hAOTask, "", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
    4. DAQmxWriteAnalogF64(hAOTask, 1000, 0, 10.0, DAQmx_Val_GroupByChannel, data, NULL, NULL);
    //config 6602 counter task
    5. DAQmxCreateTask("NI6602", &hCounterTask);
    6. DAQmxCreateCICountEdgesChan(hCounterTask, "Dev2/ctr0", "", DAQmx_Val_Rising, 0, DAQmx_Val_CountUp);
    //use /Dev1/ao/SampleClock for external clock
    7. DAQmxCfgSampClkTiming(hCounterTask, "/Dev1/ao/SampleClock", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
    //use /Dev1/ao/StartTrigger
    8. DAQmxSetTrigAttribute(hCounterTask, DAQmx_ArmStartTrig_Type, DAQmx_Val_DigEdge);
    9. DAQmxSetTrigAttribute(hCounterTask, DAQmx_DigEdge_ArmStartTrig_Src, "/Dev1/ao/StartTrigger");
    10. DAQmxSetTrigAttribute(hCounterTask, DAQmx_DigEdge_ArmStartTrig_Edge, DAQmx_Val_Rising);
    //start counter task first
    11. DAQmxStartTask(hCounterTask);
    //start 6722 task
    12. DAQmxStartTask(hAOTask);
    I run it on the MAX virtual device, and Step 11 always returns error -89120. I tried to solve this problem by changing Step 7 to use /Dev2/PFI9 instead of /Dev1/ao/SampleClock:
    7. DAQmxCfgSampClkTiming(hCounterTask, "/Dev2/PFI9", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
    The code runs well, but I don't know which terminal /Dev2/PFI9 is connected to. Does it connect to /Dev1/ao/SampleClock? After that, I used another API, DAQmxConnectTerms, to make sure of that, adding a step before Step 11:
    DAQmxConnectTerms("/Dev1/ao/SampleClock", "/Dev2/PFI9", DAQmx_Val_DoNotInvertPolarity);
    The program also runs well, but I am still not sure that the 6602 is sharing /Dev1/ao/SampleClock. If not, which terminal of Dev1 is /Dev2/PFI9 connected to?
    Is my code right? If not, where is the problem? Is there any example code for me? Thanks!

    Hi Shokey,
    You may want to check into the DAQmxExportSignal function in the NI-DAQmx C Reference Help.  You can take a look at this post for more info on how to get there.  John explains, on this linked forum post, why this function is preferred to DAQmxConnectTerms.  Functionally, they work similarly, but DAQmxExportSignal is a little more convenient.
    That error may mean that you are trying to route something that is not available for routing, or that you may just have mistyped the name of your device (based on what I've found). Searching for the error code alone on ni.com, I found this KnowledgeBase article, which you may find helpful (whenever you are searching for an error code like -89120, type it in without the negative sign, as in "89120", into the search bar on ni.com):
    Using a Valid Terminal Produces Error -89120 at DAQmxStartTask
    To see where your device can and cannot route to, you can always look at the "Device Routes" tab in Measurement & Automation Explorer. See this KB for information on how to do that:
    How Can I Know What Internal Routes are Available on My Device?
    Let me know if using these resources helps, or if I can clarify anything for you.
    Regards,
    Andrew
    National Instruments
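    For reference, here is a minimal C sketch of the export-signal approach. It assumes the same Dev1 (6722) and Dev2 (6602) names as above, a RTSI cable registered in MAX, and that RTSI0/RTSI1 are free; error checking is omitted, and the exact routes that are legal on your hardware should be confirmed on the Device Routes tab mentioned above.
    #include <NIDAQmx.h>
    int main(void)
    {
        TaskHandle hAOTask = 0, hCounterTask = 0;
        float64 data[1000] = {0};
        /* 6722 analog output: 1 kHz continuous output of a 1000-sample buffer */
        DAQmxCreateTask("NI6722", &hAOTask);
        DAQmxCreateAOVoltageChan(hAOTask, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, "");
        DAQmxCfgSampClkTiming(hAOTask, "", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
        DAQmxWriteAnalogF64(hAOTask, 1000, 0, 10.0, DAQmx_Val_GroupByChannel, data, NULL, NULL);
        /* export the AO sample clock and start trigger onto the RTSI bus (task-based routes) */
        DAQmxExportSignal(hAOTask, DAQmx_Val_SampleClock, "/Dev1/RTSI0");
        DAQmxExportSignal(hAOTask, DAQmx_Val_StartTrigger, "/Dev1/RTSI1");
        /* 6602 counter: clocked and arm-started from the RTSI lines driven by Dev1 */
        DAQmxCreateTask("NI6602", &hCounterTask);
        DAQmxCreateCICountEdgesChan(hCounterTask, "Dev2/ctr0", "", DAQmx_Val_Rising, 0, DAQmx_Val_CountUp);
        DAQmxCfgSampClkTiming(hCounterTask, "/Dev2/RTSI0", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
        DAQmxSetTrigAttribute(hCounterTask, DAQmx_ArmStartTrig_Type, DAQmx_Val_DigEdge);
        DAQmxSetTrigAttribute(hCounterTask, DAQmx_DigEdge_ArmStartTrig_Src, "/Dev2/RTSI1");
        DAQmxSetTrigAttribute(hCounterTask, DAQmx_DigEdge_ArmStartTrig_Edge, DAQmx_Val_Rising);
        /* start the slave (counter) first, then the master (AO) */
        DAQmxStartTask(hCounterTask);
        DAQmxStartTask(hAOTask);
        /* ... acquisition runs here ... */
        DAQmxStopTask(hAOTask);
        DAQmxStopTask(hCounterTask);
        DAQmxClearTask(hAOTask);
        DAQmxClearTask(hCounterTask);
        return 0;
    }
    Functionally this does the same routing as DAQmxConnectTerms, but the routes belong to the AO task, so they are reserved and released together with the task instead of staying connected until DAQmxDisconnectTerms is called.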

  • Help, about data synchronization between C++ and PHP API!

    I use Berkeley DB XML as my server's database; the client accesses the database via HTTP through the C++ API. I don't close the XmlManager and XmlContainer after reading and writing the database, for better performance. However, I also provide another way to manipulate the database via the web, using PHP. After I updated the data with PHP, I found that the C++ side couldn't see the update.
    If I close the XmlManager and XmlContainer after every read and write of the database, the problem disappears. But I can't do that, for performance reasons.
    How can I solve this problem? Thanks!

    First of all, thank you for your attention.
    I don't share the same environment between the two processes, but I set the same configuration flags on the environment. The flags are as follows:
    DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL|DB_INIT_TXN|DB_RECOVER
    The C++ code is as follows:
    //C++ code
    #include <string>
    #include <cstdio>
    #include "DbXml.hpp"
    using namespace DbXml;
    u_int32_t envFlags = DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL|DB_INIT_TXN|DB_RECOVER;
    int lRet = 0;
    std::string ctnName = "rls_services.bdbxml";
    std::string docName = "index.xml";
    char acXQuery[256] = {0};
    char acXmlDoc[] = "<test>C++ test</test>";
    //from
    DbEnv *pDbEnv = new DbEnv(0);
    XmlManager *pxmlMgr = NULL;
    pDbEnv->open("/usr/local/xdms", envFlags, 0);
    pxmlMgr = new XmlManager(pDbEnv, DBXML_ADOPT_DBENV);
    lRet = pxmlMgr->existsContainer(ctnName);
    if (0 == lRet)
        pxmlMgr->createContainer(ctnName);
    XmlContainer xmlCtn = pxmlMgr->openContainer(ctnName);
    //to
    // the code between 'from' and 'to' is a separate function
    XmlQueryContext xmlQC = pxmlMgr->createQueryContext();
    XmlUpdateContext xmlUC = pxmlMgr->createUpdateContext();
    sprintf(acXQuery, "doc(\"%s/%s\")", ctnName.c_str(), docName.c_str());
    XmlQueryExpression xmlQE = pxmlMgr->prepare(acXQuery, xmlQC);
    XmlResults xmlResult = xmlQE.execute(xmlQC);
    XmlDocument xmlDoc;
    if (xmlResult.hasNext())
    {
        // document already exists: replace its content
        xmlDoc = xmlCtn.getDocument(docName);
        xmlDoc.setContent(acXmlDoc);
        xmlCtn.updateDocument(xmlDoc, xmlUC);
    }
    else
    {
        // document does not exist yet: create it
        xmlDoc = pxmlMgr->createDocument();
        xmlDoc.setName(docName);
        xmlDoc.setContent(acXmlDoc);
        xmlCtn.putDocument(xmlDoc, xmlUC);
    }
    // I don't close the Container and Manager for performance
    The PHP code is as follows:
    <?php
    $DB_DIR = "/usr/local/xdms";
    $env = new Db4Env();
    $enFlags = DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL|DB_INIT_TXN|DB_RECOVER;
    $env->open("/usr/local/xdms", $enFlags, 0);
    $xmlManager = new XmlManager($env, 0);
    $ctnName = 'rls_services.bdbxml';
    $docName = 'index.xml';
    $docContent = '<test>PHP test</test>';
    if (!$xmlManager->existsContainer($ctnName)) {
        return;
    }
    $xmlCtn = $xmlManager->openContainer($ctnName);
    $xmlQC = $xmlManager->createQueryContext();
    $xmlUC = $xmlManager->createUpdateContext();
    $acXQuery = "doc('".$ctnName.'/'.$docName."')";
    $xmlQE = $xmlManager->prepare($acXQuery, $xmlQC);
    $xmlResult = $xmlQE->execute($xmlQC);
    if ($xmlResult->hasNext()) {
        // document already exists: replace its content
        $xmlDoc = $xmlCtn->getDocument($docName);
        $xmlDoc->setContent($docContent);
        $xmlCtn->updateDocument($xmlDoc, $xmlUC);
    } else {
        // document does not exist yet: create it
        $xmlDoc = $xmlManager->createDocument();
        $xmlDoc->setName($docName);
        $xmlDoc->setContent($docContent);
        $xmlCtn->putDocument($xmlDoc, $xmlUC);
    }
    unset($xmlDoc);
    unset($xmlCtn);
    ?>
    The code between 'from' and 'to' executes only once, when the server starts.
    After the server has started, I write data with PHP. I can read back the data that was
    just written by PHP, but I can't read it from C++, because I don't
    close the XmlContainer and XmlManager.
    After I restart the server, I can read the data from C++. But, being a server, I can't
    open and close the database for each request, for performance reasons.
    What should I do? Thank you!
    Have I expressed my question clearly?

  • Content Transfer between WSUS and SCCM client on port 8530

    Dear All,
    In my environment, over the last few days we have been facing a strange situation where a huge amount of content is being transferred between the WSUS server and SCCM clients on port 8530.
    I checked the logs on the clients and found that the clients are getting content from their associated DPs, but the weekly network utilisation report still shows that gigabytes of data were transferred between the SCCM primary server and the clients on port 8530.
    Kindly suggest what else needs to be checked in this scenario.
    Regards Suresh

    The only other possible traffic from WSUS is actual updates, if and only if you have been approving updates directly in WSUS, which you should not be doing.
    Thus, assuming you are not doing what you should not be doing, the only possibility is the update catalog. 150-200 MB sounds excessive, but that will be based upon what you have selected for your catalog. There are also a handful of reasons why a full catalog resync would be initiated instead of just a delta.
    Jason | http://blog.configmgrftw.com
