Synchronization between PCI6722 and PCI6602

I am trying to synchronize a PCI-6602 counter with analog output on a PCI-6722. The 6602 should use the 6722's ao/SampleClock as its external sample clock and be arm-start triggered by the 6722's ao/StartTrigger. The master device is the 6722, referred to as Dev1, and the slave device is the 6602, referred to as Dev2. A RTSI line is used to connect the two devices, and the connection is correct.
I am using the C API, and my code is as follows:
//config 6722 analog out task
1. DAQmxCreateTask("NI6722", &hAOTask);
2. DAQmxCreateAOVoltageChan(hAOTask, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, "");
3. DAQmxCfgSampClkTiming(hAOTask, "", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
4. DAQmxWriteAnalogF64(hAOTask, 1000, 0, 10.0, DAQmx_Val_GroupByChannel, data, NULL, NULL);
//config 6602 counter task
5. DAQmxCreateTask("NI6602", &hCounterTask);
6. DAQmxCreateCICountEdgesChan(hCounterTask, "Dev2/ctr0", "", DAQmx_Val_Rising, 0, DAQmx_Val_CountUp);
//use /Dev1/ao/SampleClock as the external sample clock
7. DAQmxCfgSampClkTiming(hCounterTask, "/Dev1/ao/SampleClock", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
//use /Dev1/ao/StartTrigger as the arm start trigger
8. DAQmxSetTrigAttribute(hCounterTask, DAQmx_ArmStartTrig_Type, DAQmx_Val_DigEdge);
9. DAQmxSetTrigAttribute(hCounterTask, DAQmx_DigEdge_ArmStartTrig_Src, "/Dev1/ao/StartTrigger");
10. DAQmxSetTrigAttribute(hCounterTask, DAQmx_DigEdge_ArmStartTrig_Edge, DAQmx_Val_Rising);
//start the counter task first
11. DAQmxStartTask(hCounterTask);
//then start the 6722 task
12. DAQmxStartTask(hAOTask);
I run this on simulated devices in MAX, and Step 11 always returns error -89120. I tried to solve the problem by changing Step 7 to use /Dev2/PFI9 instead of /Dev1/ao/SampleClock:
7. DAQmxCfgSampClkTiming(hCounterTask, "/Dev2/PFI9", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
With this change the code runs, but I don't know which terminal /Dev2/PFI9 is connected to. Does it connect to /Dev1/ao/SampleClock? To make sure, I then used another API, DAQmxConnectTerms, and added a step before Step 11:
DAQmxConnectTerms( "/Dev1/ao/SampleClock", "/Dev2/PFI9", DAQmx_Val_DoNotInvertPolarity );
The program also runs. But I am still not sure that the 6602 is sharing /Dev1/ao/SampleClock. If it is not, which terminal of Dev1 is /Dev2/PFI9 connected to?
Is my code right? If not, where's the problem? Is there any example code for me? Thanks!

Hi Shokey,
You may want to look into the DAQmxExportSignal function in the NI-DAQmx C Reference Help.  You can take a look at this post for more info on how to find it.  John explains, in that linked forum post, why this function is preferred over DAQmxConnectTerms: functionally they work similarly, but DAQmxExportSignal is a little more convenient.
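To make the export-based approach concrete, here is a minimal sketch of how your two tasks could look. Please treat it only as a sketch under a few assumptions I cannot verify from here: the two boards share a RTSI cable that is registered in MAX, RTSI0 and RTSI1 are free, and I have used the named arm-start-trigger setters (DAQmxSetArmStartTrigType and friends), which are equivalent to the generic DAQmxSetTrigAttribute calls in your code. Error checking is omitted for brevity.

#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle hAOTask = 0, hCounterTask = 0;
    float64 data[1000];
    int i;

    for (i = 0; i < 1000; i++)
        data[i] = 5.0;   /* placeholder waveform */

    /* 6722 (Dev1): analog output, 1 kHz, continuous */
    DAQmxCreateTask("NI6722", &hAOTask);
    DAQmxCreateAOVoltageChan(hAOTask, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, "");
    DAQmxCfgSampClkTiming(hAOTask, "", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);

    /* Export the AO sample clock and start trigger onto RTSI lines the 6602 can see
       (assumes a registered RTSI cable and free RTSI0/RTSI1 lines) */
    DAQmxExportSignal(hAOTask, DAQmx_Val_SampleClock, "/Dev1/RTSI0");
    DAQmxExportSignal(hAOTask, DAQmx_Val_StartTrigger, "/Dev1/RTSI1");

    DAQmxWriteAnalogF64(hAOTask, 1000, 0, 10.0, DAQmx_Val_GroupByChannel, data, NULL, NULL);

    /* 6602 (Dev2): buffered edge counting, clocked by the RTSI line that now
       carries Dev1's AO sample clock */
    DAQmxCreateTask("NI6602", &hCounterTask);
    DAQmxCreateCICountEdgesChan(hCounterTask, "Dev2/ctr0", "", DAQmx_Val_Rising, 0, DAQmx_Val_CountUp);
    DAQmxCfgSampClkTiming(hCounterTask, "/Dev2/RTSI0", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);

    /* Arm start trigger from the RTSI line that carries Dev1's AO start trigger */
    DAQmxSetArmStartTrigType(hCounterTask, DAQmx_Val_DigEdge);
    DAQmxSetDigEdgeArmStartTrigSrc(hCounterTask, "/Dev2/RTSI1");
    DAQmxSetDigEdgeArmStartTrigEdge(hCounterTask, DAQmx_Val_Rising);

    /* Start the counter first so it is armed and waiting, then start the AO task */
    DAQmxStartTask(hCounterTask);
    DAQmxStartTask(hAOTask);

    printf("Running; press Enter to stop.\n");
    getchar();

    DAQmxStopTask(hCounterTask);
    DAQmxClearTask(hCounterTask);
    DAQmxStopTask(hAOTask);
    DAQmxClearTask(hAOTask);
    return 0;
}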
That error may mean that you are trying to route something that is not available for routing, or that you may just have the name of your device mistyped (based on what I've found). Searching the error code alone on ni.com, I found this KnowledgeBase article, which you may find helpful. (Whenever you search for an error code like -89120, type it in without the negative sign, as in "89120", into the search bar on ni.com.)
Using a Valid Terminal Produces Error -89120 at DAQmxStartTask
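One more small tip: DAQmxGetExtendedErrorInfo returns the full error description at run time, and for routing errors that text usually names the source and destination terminals involved. Here is a minimal sketch of a helper you could wrap around your calls (the helper name is just for illustration):

#include <NIDAQmx.h>
#include <stdio.h>

/* Print the extended error text whenever a DAQmx call fails */
static void checkDAQmxError(int32 status)
{
    if (DAQmxFailed(status)) {
        char errBuf[2048] = {0};
        DAQmxGetExtendedErrorInfo(errBuf, sizeof(errBuf));
        printf("DAQmx error %d: %s\n", (int)status, errBuf);
    }
}

/* usage, e.g.:  checkDAQmxError(DAQmxStartTask(hCounterTask)); */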
To see where your device can and cannot route to, you can always look at the "Device Routes" tab in Measurement and Automation Explorer. See this KB for information on how to do that:
How Can I Know What Internal Routes are Available on My Device?
Let me know if using these resources helps, or if I can clarify anything for you.
Regards,
Andrew
National Instruments

Similar Messages

  • Password synchronization between OID and AD - 10.1.2

    Hi,
    I've some questions about the following issue:
    I've tried to set up password synchronization between OID 10.1.2 and Active Directory, with the intent of exporting LDAP users from OID to AD.
    Well, the bootstrap went fine, but when I tried to activate the export of passwords in the activexp.map configuration file,
    I got this:
    Writer Thread - 0 - [LDAP: error code 53 - 0000001F: SvcErr: DSID-031A0FC0, problem 5003 (WILL_NOT_PERFORM), data 0
    for each entry I tried to export...
    I opened an SR on Metalink and received the following answer:
    "As shown by the synchronization profile, currently you have a mapping for the password from OID to AD.
      userpassword: : :person:unicodepwd: :person:
      According to the documentation, password synchronization requires the directories to be configured for SSL mode:
        http://download-uk.oracle.com/docs/cd/B14099_12/idmanage.1012/b14085/odip_actdir003.htm#CHDEFIED
    18.3.2.8 Synchronizing Passwords
      You can synchronize Oracle Internet Directory passwords with Active Directory.
      You can also make passwords stored in Microsoft Active Directory available in Oracle Internet Directory.
      Password synchronization is possible only when the directories run in SSL mode 2, that is, server-only authentication."
    Is the SSL setup the only way to achieve this, or is there an alternative?
    Thanks

    Yes. It needs to be in SSL.
    http://download-uk.oracle.com/docs/cd/B14099_12/idmanage.1012/b14085/odip_actdir003.htm#CHDCJHHB
    Some excerpts:
    Active Directory Connector uses SSL to secure the synchronization process. Whether or not you synchronize in the SSL mode depends on your deployment requirements. For example, synchronizing public data does not require SSL, but synchronizing sensitive information such as passwords does. To synchronize password changes between Oracle Internet Directory and Microsoft Active Directory, you must use SSL mode with server-only authentication, that is, SSL Mode 2.
    -shetty2k

  • User base Synchronization between SAP and MS Active Directory Server

    Dear all!
    I'm using Web AS 6.20 ABAP and MS Active Directory Server based on Windows 2003 Server.
    I successfully implemented the synchronization of user data between SAP and the ADS.
    My question: is there a way to customize the users on the Active Directory Server with regard to their SAP authorizations (roles, authorization objects, etc.)?
    Currently I don't have a clue how to do this.
    Regards,
    Christoph

    Have you searched on SDN for "Active Directory"? That turns up a number of results. I think your expectation might be backwards though, it's not how ADS exposes SAP specific data but how SAP uses ADS to store SAP specific data. My understanding (from quite some time ago so I am fuzzy on this) is that SAP can use ADS in much the same way it can use LDAP as an external user store.
    The Security Newsletter from November 04 [https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/documents/a1-8-4/sap security newsletter november 2004.pdf] mentions that a webinar is hosted on SDN about this exact topic, unfortunately I was unable to find a direct link.
    Regards,
    Marc g

  • SRM supplier user synchronization between SUS and productive client

    Dear Sirs, in our SRM environment the contact persons (and related system users) created by suppliers through the auto-registration procedure should be synchronized between SUS and the productive client
    (in our implementation, clients 150 and 330).
    This sync mechanism should also apply to the lock/unlock status of the user and to password resets,
    but in both cases it does not work (a locked user, manually unlocked in client 150 via SU01, is still locked in client 330, and likewise starting from client 330).
    Could you suggest a solution or some checkpoints?
    Could you provide some links for configuring this mechanism? (We have only found the SPRO node "Maintain Systems for Synchronization of User Data".)
    I am not sure whether XI is involved or not.
    Best regards,
    Riccardo Galli

    Hi
    Maintain Systems for Synchronization of User Data
    In this IMG activity, you specify the RFC destinations to which user data from SUS User Management is replicated.
    In the Logical System field, you should enter the backend destination with which the user data should be synchronized when they are entered in SUS. On the basis of this information, the system determines the relevant RFC destination automatically.
    Specify the function modules for creating, changing, or deleting a user in the external system for each logical system:
    Function Module for Creating User: Function module that is called to create a user in the external system
    Function Module for Changing User: Function module that is called to change the user in the external system
    Function Module for Deleting User: Function module that is called to delete the user in the external system
    If this data exists, you can set the indicator Use Purchasing BP ID to use the business partner ID from the procurement system instead of the business partner ID from SUS.
    What are the settings maintained here? The external system and the roles in the external system.
    br
    Muthu

  • Synchronization between B1 and third party software

    Hi Experts,
    Is it possible to synchronize between SAP B1 8.8 and third-party software? Here the third-party software is 'LIBSYS', which is used for library purposes, so we need to synchronize some data between B1 and Libsys. If I have to install any software for the synchronization, please suggest the best option.
    With Regards,
    Kambadasan.v

    Hi Kambadasan,
    It is extremely difficult or impossible to do exact synchronization between any two independent systems. I believe that a library system has too many unique processes that are not covered by B1.
    Thanks,
    Gordon

  • Synchronization between datagrid and chart Item

    Hi
    In my application, I have a datagrid and a corresponding bar
    chart. There is a toggle button through which I can switch between
    the chart and the data grid.
    Now I want to synchronize the two.
    For example, if I select any 3 rows in the data grid, then the
    corresponding 3 bars should also be selected on the chart.
    Can anybody help me with that?
    Thanks
    smonika15

    Hi,
    You need a combo box renderer, something like the following.
    In the object that you use to populate the data provider of the
    data grid, add 2 fields:
    listOfFields & selectedField.
    <mx:HBox horizontalScrollPolicy="off"
        verticalScrollPolicy="off"
        xmlns:mx="http://www.adobe.com/2006/mxml">
        <mx:Script>
            <![CDATA[
                public var itemSelected:Object;
            ]]>
        </mx:Script>
        <mx:ComboBox id="combo"
            dataProvider="{data.listOfFields}"
            selectedItem="{data.selectedField}"
            change="itemSelected=combo.selectedItem;"
            updateComplete="itemSelected=combo.selectedItem;">
        </mx:ComboBox>
    </mx:HBox>
    Now, loop through the list of objects that you get from the
    back-end and set the 2 new properties (listOfFields &
    selectedField). To set the value of selectedField, you need to
    loop through listOfFields to match the fieldId.
    Hope that helps,
    Cheree

  • Data Synchronization between Planning and HPCM

    Hello,
    How is it possible to synchronize data between HPCM and Planning?
    I have 12 dimensions in HPCM and only 8 dimensions in Planning.
    Of the 12 HPCM dimensions, 8 are like the Planning dimensions, plus 2 standard HPCM dimensions and 2 staging dimensions.
    I want to move the data from HPCM to Planning (both 11.1.1.3).
    Many thanks in advance,
    Whitebaer

    Are you re-initialising your Source System after adding new members to EBS?
    Regards,
    Gavin

  • UCM - How to setup synchronize between DR and Primary site

    Hi all.
    As mentioned in the title, we have a primary UCM site and a clean DR site. I want to ensure that end users are able to work with the DR site for a short time when the primary site is unavailable. To make the DR site ready to serve when the primary is down, we can:
    - set up an auto-export archive on the primary site
    - target the destination archive on the DR site
    - auto-transfer from the primary to the DR site
    - for data in the database, use GoldenGate to sync the primary and DR sites
    With these settings, I can ensure that the DR site is ready to run when the primary is down. But if the primary takes a long time to recover, the DR site will have many new contents. How do we transfer them back to the primary site when it comes back? In other words, how do we synchronize contents (vault and native files) between the new primary (old DR) and the new DR (old primary) site?
    Thanks for your attention.
    Sorry for my bad English.
    Cuong Pham

    Hi Cuong (and guys),
    I'm afraid the issue is not that simple. In fact, I think that the Archiver could be used for DR only by customers who have little data and few changes. Why do I think so?
    a) (Understanding System Migration and Archiving - 11g Release 1 (11.1.1)) "Archiver: A Java applet for transferring and reorganizing Content Server files and information." This means that you will use a Java applet to Export and Import your data. With a lot of items (you will need to transfer all the new and updated items!), or large items it will take time (your DR site will always be "few minutes late"). Besides, the Archiver transfers are based on batches - I don't think you can do continuous archiving - and will have impacts on the performance.
    b) Furthermore, (Exporting Data in Archives - 11g Release 1 (11.1.1)) "You can export revisions that are in the following status: RELEASED, DONE, EXPIRED, and GENWWW. You cannot export revisions that are in an active workflow (REVIEW, EDIT, or PENDING status) or that are DELETED." This means that the Archiver cannot be used for all your items.
    Therefore, together with FMW DR Guide (Recommendations for Fusion Middleware Components) I think other techniques should be considered:
    - Real Application Clusters (RAC), Weblogic Clustering, cluster-ware file system: the first, almost error-free, and relatively cheap option is having your DR site as other nodes in DB and MW clusters. If any of your node goes down, the other(s) will still serve your customers (no extra work needed), plus, you can benefit from "united power" of multiple nodes. RAC is available also in Oracle DB Standard Edition (for max. 2-nodes db cluster). The only disadvantage of this configuration is that it is not available for geo-clustering (distance between RAC nodes must be max. some hundreds meters), so it does not cover DR scenarios like "location goes down" (e.g. due to networking issues)
    - Data Guard and distributed file system: the option mentioned in the guide is actually this one. It is based on Data Guard, a free option of the Oracle Database Enterprise Edition, which can run in both asynchronous (a committed transaction on the primary site is immediately transferred to the DR site) and synchronous (a transaction is not committed on the primary until processed by the DR site - if sites are far, or a lot of data is being sent, this can take quite long) modes. So, if you store your content in the database the Data Guard can resolve a lot. Unfortunately, not everything - the guide also mentions that some artifacts (that change!) are also stored on the file system (again, workflow updates, etc), so you have to use file system sync techniques to send those updates. In theory, you could use file system to send also updates in the database, which is nothing but a file(s) (in this case you will need the Partitioning option to split your database into smaller files), but db guys hate this way since it transfers also inconsistencies, so you could end up with an inconsistent database in the DR site, too.
    This option will require some administrative tasks - you will have to resolve inconsistencies resulting from DG/file system sync, you will need to redirect your users to the DR site, and re-configure the DG to make primary site from your DR one. Note that once your original primary site is up again, you can use DG to transfer (again, immediately) changes done in the meantime.
    As you can see, there is no absolute solution, so you need to evaluate your options, esp. with regards to your needs.
    Jiri

  • Help, about data synchronization between C++ and PHP API!

    I use Berkeley DB XML as my server's database; the client accesses the database over HTTP through a C++ API. I don't close the XmlManager and XmlContainer after reading and writing the database, for better performance. However, I provide another way to manipulate the database via the web using PHP. After I update the data through PHP, I find that the C++ side cannot see the update.
    If I close the XmlManager and XmlContainer after every read and write of the database, the problem disappears. But I can't do that, for performance reasons.
    How can I solve this problem? Thanks!

    First of all, thank you for your attention.
    I don't share the same environment object between the two processes, but I set the same configuration flags on each environment. The flags are as follows:
    DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL|DB_INIT_TXN|DB_RECOVER;
    The C++ code as follows:
    //C++ code
    UINT32 envFlags = DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL|DB_INIT_TXN|DB_RECOVER;
    INT32 lRet = 0;
    string ctnName = "rls_services.bdbxml";
    string docName = "index.xml";
    CHAR acXQuery[256] = {0};
    CHAR acXmlDoc[] = "<test>C++ test</test>";
    //from
    DbEnv *pDbEnv = new DbEnv(0);
    XmlManager *pxmlMgr = NULL;
    pDbEnv->open("/usr/local/xdms", envFlags, 0);
    pxmlMgr = new XmlManager(pDbEnv, DBXML_ADOPT_DBENV);
    lRet = pxmlMgr->existsContainer(ctnName);
    if (0 == lRet)
        pxmlMgr->createContainer(ctnName);   // create the container only if it does not exist yet
    XmlContainer xmlCtn = pxmlMgr->openContainer(ctnName);
    //to
    // the code between 'from' and 'to' is a separate function
    XmlQueryContext xmlQC = pxmlMgr->createQueryContext();
    XmlUpdateContext xmlUC = pxmlMgr->createUpdateContext();
    // std::string must be passed to printf-style formatting via c_str()
    sprintf(acXQuery, "doc(\"%s/%s\")", ctnName.c_str(), docName.c_str());
    XmlQueryExpression xmlQE = pxmlMgr->prepare(acXQuery, xmlQC);
    XmlResults xmlResult = xmlQE.execute(xmlQC);
    XmlDocument xmlDoc;
    if (xmlResult.hasNext())
    {
        // document already exists: replace its content
        xmlDoc = xmlCtn.getDocument(docName);
        xmlDoc.setContent(acXmlDoc);
        xmlCtn.updateDocument(xmlDoc, xmlUC);
    }
    else
    {
        // document does not exist yet: create and insert it
        xmlDoc = pxmlMgr->createDocument();
        xmlDoc.setName(docName);
        xmlDoc.setContent(acXmlDoc);
        xmlCtn.putDocument(xmlDoc, xmlUC);
    }
    // I don't close the Container and Manager for performance
    The PHP code as follow:
    <?php
    $DB_DIR = "/usr/local/xdms";
    $env = new Db4Env();
    $enFlags = DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL|DB_INIT_TXN|DB_RECOVER;
    $env->open($DB_DIR, $enFlags, 0);
    $xmlManager = new XmlManager($env, 0);
    $ctnName = 'rls_services.bdbxml';
    $docName = 'index.xml';
    $docContent = '<test>PHP test</test>';
    if (!$xmlManager->existsContainer($ctnName))
        return;
    $xmlCtn = $xmlManager->openContainer($ctnName);
    $xmlQC = $xmlManager->createQueryContext();
    $xmlUC = $xmlManager->createUpdateContext();
    $acXQuery = "doc('".$ctnName.'/'.$docName."')";
    $xmlQE = $xmlManager->prepare($acXQuery, $xmlQC);
    $xmlResult = $xmlQE->execute($xmlQC);
    if ($xmlResult->hasNext()) {
        // document already exists: replace its content
        $xmlDoc = $xmlCtn->getDocument($docName);
        $xmlDoc->setContent($docContent);
        $xmlCtn->updateDocument($xmlDoc, $xmlUC);
    } else {
        // document does not exist yet: create and insert it
        $xmlDoc = $xmlManager->createDocument();
        $xmlDoc->setName($docName);
        $xmlDoc->setContent($docContent);
        $xmlCtn->putDocument($xmlDoc, $xmlUC);
    }
    unset($xmlDoc);
    unset($xmlCtn);
    ?>
    The code between 'from' and 'to' executes only once, when the server starts.
    After the server has started, I write data via PHP. I can read back the data
    just written by PHP, but I cannot read it from C++, because I don't
    close the XmlContainer and XmlManager.
    After I restart the server, I can read the data from C++. But being a server, I can't
    open and close the database for each request, for performance reasons.
    What should I do? Thank you!
    I hope I have expressed my question clearly.

  • Method of Synchronization between XE and Enterprise (Offline)

    Hi,
    We have two DB servers hosted in two locations, running 10g XE and 10g Enterprise.
    These sites will be connected over dial-up lines only when synchronization is needed (from XE to Enterprise).
    Can anyone tell me the best method for synchronizing data between these two servers over a dial-up line?
    Thanks!
    Nilaksha.

    This is old info but the Workaround might help if such is the case.
    http://btsc.webapps.blackberry.com/btsc/viewdocument.do?externalId=KB32234&sliceId=2&docType=kc&noCo...

  • SEM-BCS Synchronization between BCS and BW.

    Hi,
    Every time I want to delete master data from BCS (T-code UCWB), it shows me the error message "There are inconsistencies". But when I check the inconsistencies, there is nothing; everything is consistent.
    So to delete the master data, I must delete it on the BW side (T-code RSA1) and then run program UGMDSYNC to synchronize the data between BW and BCS.
    The question is:
    Is there any way to delete master data directly from BCS without getting such inconsistency errors?
    Thanks in advance.

    In order to maintain the sanctity and integrity of master data you have to run the programs; they are kept for checking purposes.
    Standard synchronization programs are available to force the transfer of master data from BCS to BW and, in a more limited fashion, to transfer changes from BW to BCS. Report UGMDSYNC is used in dialog for characteristics, and UGMDSY00 is the batch equivalent.
    UGMDSY10 and UGMDSY20 are corresponding programs for replicating hierarchies.
    Hope it Helps
    Chetan
    @CP..

  • 12c synchronization between cdb and pdb

    Hello,
    I'm reading the following in my 12c New Features student guide:
    If common users and roles have been previously created, a new or closed PDB must be synchronized to retrieve the new common users and roles from the root. The synchronization is performed automatically if the PDB is opened in read-write mode. If you open the PDB in read-only mode, an error is returned.
    SQL> select version from v$instance;
      12.1.0.1.0
    SQL> sho con_name
      CDB$ROOT
    SQL> select con_id,name, open_mode from v$pdbs;
      2 PDB$SEED            READ ONLY
      3 PDB2_1              READ ONLY
    SQL> create user c##dude identified by dude;
      User created.
    SQL> select * from PDB_PLUG_IN_VIOLATIONS;
      no rows selected.
    SQL> drop user c##dude cascade;
      User dropped.
    If I understand the student guide correctly, a common user in CDB$ROOT can be created, modified, or dropped while the PDB is in mounted mode, but while the PDB is open read-only this should return an error.
    Any ideas why I cannot reproduce this behavior?
    Thanks!

    OK, I understand, for instance when I create a new pluggable database, it will not be in sync with the CDB$ROOT and therefore I cannot open it read only. I have tested it, and it indeed creates an error. What confuses me is the following:
    In my example, I have a PDB in read-only mode, but I can still create a common user in root. But according to the statement from the guide, the user or role cannot be created.
    Perhaps I misunderstood and what it means is that the common user won't be created or modified in the read-only PDB? But isn't that the point of having a database read-only?
    Well, I guess what it means is that it is possible to create the user or role in the root container, but it simply won't be synchronized with the PDB until it is opened read-write or mounted. So I just misread the context, which was from the perspective of the PDB and not CDB.
    So anyway, it makes sense now. Thanks a lot for your reply!
    P.S. I cannot move the post to Multitenant forum (why is it hidden in Database + options?).
    But if a moderator can move the whole post.. please do.

  • How to maintain the synchronization between routing and bom in Production V

    We are maintaining the production version and assigning the BOM and routing to it.
    But what if the user does not assign the mentioned BOM to the respective routing and selects an alternative BOM instead?
    Then how can we keep a check on the production version?
    Is there any way to keep the production version and the related BOM and routing in sync?
    Amit Shah
    Edited by: SAP2511 on Feb 2, 2011 11:56 AM

    Dear,
    Go to T-code C223, enter your plant, and press Enter; you will get the details of all production versions, with a green light shown for correct PVs.
    Select all PVs and run the consistency check.
    Regards,
    R.Brahmankar

  • How to synchronize between OID and the Custom Database  Tables ?

    Hi All,
    Our ADF application uses Oracle Single Sign-On (OAS 10.1.4). Meanwhile, we also maintain
    user logins in database tables that store application menu accessibility data.
    That is:
    first the user logs in via Oracle SSO; once logged in, the application queries the above-mentioned
    database tables to determine which menus he/she has access to.
    We have developed a security module to enter user logins into the database, so I need to synchronize
    that data into OID so that each such user can use Oracle SSO.
    What is the mechanism to do that?
    Thank you very much,
    xtanto

    Hi,
    OID provides a Java and a PL/SQL API. I agree with Chris that, from what you describe, the PL/SQL API seems to be the best approach to take, as it allows you to use database triggers for the synchronization.
    Frank

  • Synchronization between AD and Sun Java Directory Server

    I would like to build the environment described below; kindly let me know whether it is possible or not.
    My enterprise directory is Active Directory, and I have a Policy Server which directs SSO users to authenticate against that server. I would like to synchronize user data from Active Directory to Sun Java Directory Server (existing version 5.2 Service Pack 4), including the passwords, and I would like to know which hashing algorithm those passwords are stored with in the Sun Directory Server. That is because I also want to synchronize the same attributes from Sun Java Directory Server to Oracle Internet Directory; is it possible to have my SSO users authenticate against OID as well?
    Kindly let me know whether this approach is feasible or not?
    Any suggestion to this approach is greatly appreciated...
    Thanks in advance...
    Regards,
    Kishore Repakula.

    "i would like to know with which hashing algorithm these passwords are stored in the sun directory server."
    Like most other directory servers, SunDS offers a few choices here.
    The most secure is SSHA, which you'd probably want to use unless you have apps with dependencies on other hashes (e.g., CRYPT for backward compatibility with the UNIX password field).
    "I would like to synchronize the user data from Active Directory to Sun Java Directory Server (existing version is 5.2 Service Pack 4) including the passwords..."
    Sun has an "Identity Synchronization for Windows" product which might work for you.
    http://www.sun.com/software/products/directory_srvr_ee/identity_synch/
    Unfortunately, the big trick with AD passwords is that they are stored in a proprietary one-way hash, so you can't just sync them directly over to another directory. Likewise, you can't import password hashes from other sources into AD and expect them to work.
