Performance hit implementing last authentication time (pwdKeepLastAuthTime)

I have a DSEE 6.1 installation running on Solaris 10.
There are about 100 users in the directory, with another 3000 or so to be added soon as the service moves into full production.
I noticed the warning that the pwdKeepLastAuthTime feature is not enabled by default because it adds an update for each successful bind operation.
I wanted to enable it anyway, so I set:
dsconf set-server-prop pwd-keep-last-auth-time-enabled:on
Since then the database under <instance>/db has grown roughly a hundredfold, from ~10 MB to ~1 GB, and the server's memory footprint is now around 800 MB.
Is this expected behaviour?

"/opt/ds/db" > ls -l
total 137126
-rw------- 1 nobody nobody 24576 Oct 31 09:21 __db.001
-rw------- 1 nobody nobody 10264576 Nov 13 00:53 __db.002
-rw------- 1 nobody nobody 41951232 Nov 13 00:53 __db.003
-rw------- 1 nobody nobody 1572864 Nov 13 00:53 __db.004
-rw------- 1 nobody nobody 11313152 Nov 13 00:53 __db.005
-rw------- 1 nobody nobody 65536 Nov 13 00:53 __db.006
-rw------- 1 nobody nobody 38 May 31 16:16 DBVERSION
-rw------- 1 nobody nobody 10485760 Nov 13 09:02 log.0000001204
drwx------ 2 nobody nobody 1536 Sep 6 12:07 zeus
"/opt/ds/db" > du -sk *
24 __db.001
10032 __db.002
41000 __db.003
1544 __db.004
11056 __db.005
64 __db.006
1 DBVERSION
4904 log.0000001204
1339315 zeus
"/opt/ds/db/zeus" > ls -l
total 2678626
-rw------- 1 nobody nobody 1369350144 Nov 13 09:03 cl5dc_zeus_dc_ghsewn_dc_com463ff1cb000000010000.db3
-rw------- 1 nobody nobody 38 Jun 1 11:55 DBVERSION
-rw------- 1 nobody nobody 16384 Jun 21 12:47 zeus_aci.db3
-rw------- 1 nobody nobody 16384 Oct 31 09:37 zeus_ancestorid.db3
-rw------- 1 nobody nobody 81920 Oct 31 09:37 zeus_cn.db3
-rw------- 1 nobody nobody 32768 Oct 31 09:37 zeus_entrydn.db3
-rw------- 1 nobody nobody 16384 Oct 31 09:37 zeus_gidnumber.db3
-rw------- 1 nobody nobody 909312 Nov 13 09:03 zeus_id2entry.db3
-rw------- 1 nobody nobody 16384 Aug 10 14:52 zeus_nisnetgrouptriple.db3
-rw------- 1 nobody nobody 16384 Nov 7 09:38 zeus_nscpEntryDN.db3
-rw------- 1 nobody nobody 16384 Jun 1 11:57 zeus_nsds5ReplConflict.db3
-rw------- 1 nobody nobody 16384 Oct 31 09:37 zeus_nsRoleDN.db3
-rw------- 1 nobody nobody 40960 Nov 7 09:38 zeus_nsuniqueid.db3
-rw------- 1 nobody nobody 16384 Jun 13 11:26 zeus_numsubordinates.db3
-rw------- 1 nobody nobody 24576 Nov 7 09:38 zeus_objectclass.db3
-rw------- 1 nobody nobody 16384 Oct 31 09:37 zeus_parentid.db3
-rw------- 1 nobody nobody 16384 Nov 11 16:18 zeus_pwdaccountlockedtime.db3
-rw------- 1 nobody nobody 16384 Nov 12 15:00 zeus_pwdfailuretime.db3
-rw------- 1 nobody nobody 16384 Nov 9 11:46 zeus_pwdgraceusetime.db3
-rw------- 1 nobody nobody 16384 Jun 20 16:07 zeus_sn.db3
-rw------- 1 nobody nobody 16384 Jun 20 09:50 zeus_sudoUser.db3
-rw------- 1 nobody nobody 16384 Oct 31 09:37 zeus_uid.db3
-rw------- 1 nobody nobody 16384 Oct 31 09:37 zeus_uidNumber.db3
-rw------- 1 nobody nobody 16384 Oct 24 13:03 zeus_vlv#zeusghsewncomgetgrent.db3
-rw------- 1 nobody nobody 16384 Aug 10 14:52 zeus_vlv#zeusghsewncomgetngrpent.db3
-rw------- 1 nobody nobody 16384 Nov 13 08:57 zeus_vlv#zeusghsewncomgetpwent.db3
-rw------- 1 nobody nobody 16384 Nov 13 08:57 zeus_vlv#zeusghsewncomgetspent.db3
"/opt/ds/db/zeus" > du -sk *
1337920 cl5dc_zeus_dc_ghsewn_dc_com463ff1cb000000010000.db3
1 DBVERSION
16 zeus_aci.db3
16 zeus_ancestorid.db3
80 zeus_cn.db3
32 zeus_entrydn.db3
16 zeus_gidnumber.db3
896 zeus_id2entry.db3
16 zeus_nisnetgrouptriple.db3
16 zeus_nscpEntryDN.db3
16 zeus_nsds5ReplConflict.db3
16 zeus_nsRoleDN.db3
40 zeus_nsuniqueid.db3
16 zeus_numsubordinates.db3
24 zeus_objectclass.db3
16 zeus_parentid.db3
16 zeus_pwdaccountlockedtime.db3
16 zeus_pwdfailuretime.db3
16 zeus_pwdgraceusetime.db3
16 zeus_sn.db3
16 zeus_sudoUser.db3
16 zeus_uid.db3
16 zeus_uidNumber.db3
16 zeus_vlv#zeusghsewncomgetgrent.db3
16 zeus_vlv#zeusghsewncomgetngrpent.db3
16 zeus_vlv#zeusghsewncomgetpwent.db3
16 zeus_vlv#zeusghsewncomgetspent.db3
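A quick way to see where the space went is to rank the `du -sk` output by size. The sketch below (plain Python; the helper name is ours, not a DSEE tool, and the sample lines are pasted from the listing above) makes the point: virtually all of the growth is in the replication changelog file (the cl5…db3 file, ~1.3 GB), not in the entry database (zeus_id2entry.db3 is under 1 MB). Every successful bind now generates a modify, and each modify is also recorded in the changelog; letting the changelog be trimmed more aggressively (see the replication changelog purge settings in the DSEE administration guide) is usually a better fix than disabling the feature.

```python
# Sketch: rank `du -sk` output by size to find the space hog.
# rank_du_output is our own helper, not a DSEE tool; the sample
# lines are copied from the listing above.

def rank_du_output(du_text, top=3):
    """Parse `du -sk` lines ("<kb> <name>") and return the largest entries."""
    entries = []
    for line in du_text.strip().splitlines():
        kb, name = line.split(None, 1)
        entries.append((int(kb), name.strip()))
    return sorted(entries, reverse=True)[:top]

sample = """\
1337920 cl5dc_zeus_dc_ghsewn_dc_com463ff1cb000000010000.db3
896 zeus_id2entry.db3
80 zeus_cn.db3
16 zeus_uid.db3
"""

for kb, name in rank_du_output(sample):
    print(f"{kb / 1024:8.1f} MB  {name}")  # the changelog dwarfs everything else
```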

Similar Messages

  • User Last Login Time

    I'm trying to use the DS6 built-in functionality for tracking a user's last login time. I created a new password policy and enabled the pwdKeepLastAuthTime attribute, then tried signing in through Access Manager.
    According to the documentation, a pwdLastAuthTime attribute should be added to the user entry, but it is not there.
    Any ideas how I can get this to work?

    Last login time is a feature provided with the new Directory Server password policy implementation introduced in DS 6 and is not part of the compatibility mode. Check the Directory Server password policy compatibility mode:
    $ dsconf get-server-prop ... | grep 'pwd-compat'
    pwd-compat-mode : DS5-compatible-mode
    The Directory Server password policy compatibility mode must be advanced past DS5-compatible-mode:
    $ ldapmodify ...
    dn: cn=Password Policy,cn=config
    changetype: modify
    replace: pwdkeeplastauthtime
    pwdkeeplastauthtime: TRUE
    modifying entry cn=Password Policy,cn=config
    ldap_modify: DSA is unwilling to perform
    ldap_modify: additional info: (Password Policy: modify policy entry) "pwdKeepLastAuthTime: TRUE" is not supported in server mode DS5-compatible-mode ("cn=config" pwdCompat: 0).
    $ dsconf pwd-compat ... to-DS6-migration-mode
    $ dsconf get-server-prop ... | grep 'pwd-compat'
    pwd-compat-mode : DS6-migration-mode
    Now it should work. If not, please try binding directly to the directory server as the user (e.g., do an ldapsearch as that user) and check the entry.
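    As a reader's note, the gating described above can be summarised in a few lines. This is an illustrative sketch only (the function is ours, not a DSEE API); the mode names are DSEE's:

```python
# Illustrative sketch: models the gating described above, in which
# pwdKeepLastAuthTime is rejected unless the password-policy
# compatibility mode has been advanced beyond DS5-compatible-mode.

MODES = ["DS5-compatible-mode", "DS6-migration-mode", "DS6-mode"]

def can_enable_keep_last_auth_time(pwd_compat_mode):
    """pwdKeepLastAuthTime needs at least DS6-migration-mode."""
    return MODES.index(pwd_compat_mode) >= MODES.index("DS6-migration-mode")

print(can_enable_keep_last_auth_time("DS5-compatible-mode"))  # False
print(can_enable_keep_last_auth_time("DS6-migration-mode"))   # True
```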

  • Oblix audit logs to track last login time in Sun DS

    Hi,
    I would like to use Oblix audit logs to track last login time in Sun DS.
    Is there a straightforward way to do that, other than parsing the logs and using a custom script to update Sun DS?
    Please advise.
    Thanks.

    Hi,
    In OAM you can define your own plugins to run during authentication (you include them in the relevant authentication schemes), so you could write one that updates the profile of the logged-in user. You would be pretty much on your own, though: all OAM gives you is the DN of the logged-in user, so you would need to include libraries that connect to LDAP (or perhaps have the plugin make a web service call) to perform the attribute updates. Authn plugins are documented in the Developer Guide: http://docs.oracle.com/cd/E15217_01/doc.1014/e12491/authnapi.htm#BABDABCG (that link is actually for 10.1.4.3).
    Regards,
    Colin

  • Performance Hit After Oracle Database Upgrade to 10.2.0.4

    We have a couple dozen workbooks that took this performance hit after the database upgrade and migration to a new server. Worksheets that used to execute in about ten seconds now run for hours or simply never finish. We took the new server out of the equation by rolling the database back to 10.2.0.3, where a test EUL resides, and the problem was resolved. Has anyone seen this issue? Does anyone have any suggestions? An early reply would be greatly appreciated.
    Thanks,
    Jerre

    Rod,
    Thanks for the quick reply. We are looking at the different plans and modifying the optimizer settings, switching back and forth, as we speak, and we are now starting on the hints. Currently our server optimizer_mode parameter is ALL_ROWS; we plan to change it to CHOOSE and see what happens. The impacted workbooks are in our oldest business areas, Finance and HR. The former setup was borrowed from another school for a quick, low-cost start-up; the latter was thrown together by novices. Our true data marts, developed by knowledgeable personnel with star schemas, are not affected. We do plan to redo the older business areas, but time, personnel and money slow things down. It is the workbooks on these older business areas that are most affected by the migrations and upgrade. We eventually get things to settle down, but past fixes do not always behave the same way on newer servers and upgrades.
    Thanks,
    Jerre

How can I run two independent LabVIEW applications on the same computer without taking a performance hit?

    I have two identical but independent test stations, both feeding data back to a data acquisition computer running LabVIEW 6.1. Everything is duplicated at the computer as well, with two E-series multifunction I/O cards (one for each test station) and two instances of the same LabVIEW program acquiring and analysing the data. The DAQ computer has an 850 MHz Celeron processor and 512 MB of memory, and runs Windows NT.
    I have noticed that when I run both applications simultaneously, I take a substantial performance hit in processing speed compared with running just one. Why does this happen, and how can I prevent it? (In this particular case it may be possible to combine both tests into one program since they are identical, but independent, simultaneous control of two different LabVIEW programs is a concept I need to prove out.)
    Thanks in advance for any tips, hints and spoon feedings (!)....

    Depending on your application, you may or may not be able to improve things.
    Firstly, each task requires CPU time, so a certain performance difference is guaranteed. Making sure you have a "wait until ms" in every while loop helps in all but the most CPU-intensive programs.
    Secondly, if you are
    1) streaming data to disk
    2) acquiring lots of data over the PCI bus
    3) sending lots of data over the network
    you can have bottlenecks elsewhere than in your program (limited disk, PCI or network bandwidth).
    Also avoid displaying data which doesn't need to be displayed. An array indicator which only shows one element still needs a lot of processing time if the array itself is large; best is to set the indicator invisible in that case.
    I think it would be best if you could give some more information about the amount of data being acquired, processed and sent; then maybe it will be more obvious where you can optimise things. If you are running W2000, try activating the Task Manager while the program(s) are running to see where the bottleneck is.
    Shane
    Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
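    Shane's "wait until ms" advice translates to any language: a polling loop that sleeps between iterations hands the CPU back to the scheduler so a second program can run, instead of spinning at 100%. A minimal sketch (Python, names ours):

```python
import time

def poll_with_wait(n_iterations, wait_ms):
    """Polling loop that sleeps between iterations, mimicking LabVIEW's
    'wait until ms' inside a while loop: the sleep yields the CPU so
    another program can run instead of this loop spinning at 100%."""
    start = time.monotonic()
    for _ in range(n_iterations):
        pass  # ...acquire / process a sample here...
        time.sleep(wait_ms / 1000.0)
    return time.monotonic() - start

elapsed = poll_with_wait(10, 20)
print(f"10 iterations took {elapsed * 1000:.0f} ms")
```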

  • How can I implement a real time datawarehouse

    Hi, I'm a little lost here, but I want to know how I can implement a real-time data warehouse in SQL Server. I don't know whether it is just a matter of making the extraction process as fast as possible.
    Thank you

    Hi Mega15, 
    I agree with everything Seif and Louw said, but I'd like to add that if you are using SQL Server 2012 or 2014 and you want to use DirectQuery or ROLAP mode (depending on whether you are using SSAS in tabular or multidimensional mode), you may be interested in using columnstore indexes on your base tables.
    Analytical and aggregated queries take great advantage of these indexes; in most scenarios they will perform much better than traditional B-tree indexes.
    Regards. 
    Pau.

  • I have an issue syncing my iPhone 4 with itunes. This has only happened the last few times i have tried but don't know how to resolve it. Please help.

    Hi, the last few times I have tried to sync my iPhone 4 with iTunes it won't work. I plug my phone into my computer to sync any new pictures I have taken or add new music I have bought, and hit Sync. It appears to start syncing, then an error message appears saying the following:
    The iPhone 'my iPhone' cannot be synced. You do not have enough access privileges for this operation.
    I'm not sure what this means or why it keeps happening. I have the latest iTunes and I have not done anything different. It appears to have wiped some of the music and a number of the photo albums off my iPhone. Please can someone help or advise how I can resolve this issue? Thanks.

    When you set up iOS 7, the second screen specifically asks that you create a four-digit code.
    That's what you now need to re-enter on your phone.
    Once entered, you can go to Settings > General > Passcode Lock,
    re-enter the same code and choose Turn Passcode Off.
    iOS 7 has been designed to enforce minimum passcode security by default or, in the case of the new 5s, fingerprint-level security.

Updated to the Oct 2014 version of Photoshop CC; since that time I cannot save ANY work. I can work on files, but every time I hit "Save As" it says "Photoshop has quit working and needs to close."

    Updated to the Oct 2014 version of Photoshop CC; since that time I cannot save ANY work. I can work on files, but every time I hit "Save As" it says "Photoshop has quit working and needs to close." Need solution now.

    BOILERPLATE TEXT:
    Note that this is boilerplate text.
    If you give complete and detailed information about your setup and the issue at hand,
    such as your platform (Mac or Win),
    exact versions of your OS, of Photoshop (not just "CS6", but something like CS6v.13.0.6) and of Bridge,
    your settings in Photoshop > Preference > Performance
    the type of file you were working on,
    machine specs, such as total installed RAM, scratch file HDs, total available HD space, video card specs, including total VRAM installed,
    what troubleshooting steps you have taken so far,
    what error message(s) you receive,
    if having issues opening raw files also the exact camera make and model that generated them,
    if you're having printing issues, indicate the exact make and model of your printer, paper size, image dimensions in pixels (so many pixels wide by so many pixels high); if going through a RIP, specify that too,
    etc.,
    someone may be able to help you (not necessarily this poster).
    a screen shot of your settings or of the image could be very helpful too.
    Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
    http://forums.adobe.com/thread/419981?tstart=0
    Thanks!

  • Performance hit using "where" clause in the query

    Hi All,
    I am facing a huge performance hit in the Java code when using a "where" clause in queries. Following are the details:
    1. SELECT * FROM Employee
    2. SELECT * FROM Employee WHERE employeeid IN (26,200,330,571,618,945)
    There is no difference in query execution time between the two queries.
    Business-logic time is huge in the second case compared with the first (ratio 1:20).
    More rows are returned in the first case than in the second (ratio 1:4).
    The business logic is the same in both cases: I iterate through the ResultSet, get the objects and set them in a data structure.
    Does anybody know the reason for the unexpected time difference in the business logic in the second case?

    Since you're mentioning clustering your index, I'll assume you are using Oracle. Knowing what database you are using makes it a lot easier to suggest things.
    Since you are using Oracle, you can get the database to tell you what execution plan it is using for each of the 2 SQL statements, and figure out why they have similar times (if they do).
    First, you need to be able to run SQL*Plus; that comes as part of a standard database installation and as part of the Oracle client installation - getting it set up and running is outside the scope of this forum.
    Second, you may need your DBA to enable autotracing, if it's not already:
    http://asktom.oracle.com/~tkyte/article1/autotrace.html
    http://www.samoratech.com/tips/swenableautotrace.htm
    Once it's all set up, you can log in to your database using SQL*Plus, issue "SET AUTOTRACE ON", issue queries and get execution-plan information back.
    For example:
    SQL> set autotrace on
    SQL> select count(*) from it.ticket where ticket_number between 10 and 20;
      COUNT(*)
            11
    Execution Plan
    Plan hash value: 2983758974
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            |     1 |     4 |     1   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE   |            |     1 |     4 |            |          |
    |*  2 |   INDEX RANGE SCAN| TICKET_N10 |    12 |    48 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("TICKET_NUMBER">=10 AND "TICKET_NUMBER"<=20)
    Statistics
              0  recursive calls
              0  db block gets
              1  consistent gets
              0  physical reads
              0  redo size
            515  bytes sent via SQL*Net to client
            469  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    This tells me that the query used an INDEX RANGE SCAN on index TICKET_N10; the query can't do much better than that logically. In fact, the statistic "1 consistent gets" tells me that Oracle had to examine only one data block to get the answer, which also can't be improved on, and "0 physical reads" tells me that the one data block used was already cached in Oracle's memory.
    The above is from Oracle 10g; autotrace is available back to at least 8i, but information has been added to the output with each release.
    If you have questions about SQL*Plus, check the forums at asktom.oracle.com or http://forums.oracle.com/forums/category.jspa?categoryID=18 since SQL*Plus is not a JDBC thing...
    Oh, and SQL*Plus can also give you easier access to timing information, with "set timing on".
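    The same "look at the execution plan" exercise works on any database. A minimal stand-in using Python's bundled SQLite (the table, index and data are invented to mirror the Oracle session above) looks like:

```python
import sqlite3

# EXPLAIN QUERY PLAN is SQLite's rough analogue of Oracle's autotrace.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticket (ticket_number INTEGER)")
conn.execute("CREATE INDEX ticket_n10 ON ticket (ticket_number)")
conn.executemany("INSERT INTO ticket VALUES (?)", [(i,) for i in range(100)])

query = "SELECT count(*) FROM ticket WHERE ticket_number BETWEEN 10 AND 20"
count, = conn.execute(query).fetchone()
print(count)  # 11, matching the Oracle session above

# The plan's detail column shows whether the index was used.
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
for row in plan:
    print(row[-1])
```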

  • Solution manager last action time

    Dear Expert,
    I know crm_dno_monitor has a column "Changed On" that shows the last action that completed the ticket.
    My concern: in crm_dno_monitor, is there any possibility of showing the last reply time, i.e. when the consultant last sent a solution to the customer while the ticket is not yet closed?
    After the consultant sends a solution, it might be a long time before the customer closes the ticket, which lengthens our apparent resolution time.
    Please advise on any solution for this; could it be done as an enhancement to crm_dno_monitor?
    Thanks
    regards,
    ng chong chuan

    hi,
    1. First copy the profile and then add (or copy) the action.
    For support messages we use SLFN0001, but I guess for AI_SDK_STANDARD it is about the same.
    The developer has developed his own (method) implementation of the BAdI EXEC_METHODCALL_PPF,
    filter PPFDFLTVAL, with a new filter value Z_SET_LAST_SOLUTION_TIME (something like that in your case).
    If you have 3 fields, just add some coding like:
        lv_value-guid             = lv_guid_ref.
        lv_value-ZZCUSTOMER_H0101 = ls_customer_h-ZZCUSTOMER_H0101.
        lv_value-ZZCUSTOMER_H0102 = 'Xxxxxxx'.
        lv_value-ZZCUSTOMER_H0103 = post time. (ask a programmer to write the right code here)
    2. just add the new field(s) as new attribute
    3. I don't understand your question. Does your programmer need more info? I am not a programmer and can't help you further, but if he knows programming he should know how to implement a new method.
    Note: all of this works only if you schedule the action under the right condition via customizing, or directly by calling transaction SPPFCADM: select CRMD_ORDER, then select your action profile.
    br Xavier

Last Logon Time in iPlanet Directory Server 4.1

    Hi,
    It would be a great help if any one of you could let me know the attribute in iPlanet Directory Server 4.1 that gives the last logon time of a particular account.
    The Directory Server is on Solaris.
    Thanks

    Hari,
    You can try to find it from the logfiles.
    I actually designed a plugin for this type of thing, but it's not yet implemented. It would simply write a timestamp to a user's entry after every successful bind, among other things which I won't go into detail about now...
    Are you in Finland?
    podzap

Which is the bigger performance hit: a trip to the DB, or traversing an XML DOM in memory?

    Which is more of a performance hit:
         1. Make a trip to the DB, get data, come back
              a. Includes making a trip to the DB, altering something, and coming back
         2. Use a DOM parser to get data from XML (the XML is in memory)
              a. Includes altering the DOM Document that is in memory

    The trip to the DB depends on network latency, sometimes (but not often) network speed, and of course the complexity of doing the update on the DB (how many indices change, what triggers fire, what integrity constraints must be validated, etc.).
    The XML parsing is generally done once, and from then on you manipulate the tree directly; that latter part is efficient. Of course the cost depends on your technique and the hit taken to parse the XML (which parser? validating?).
    The first part will probably consume less CPU on the host running the Java program (assuming DB is remote) but take more time total.
    The latter may (or may not) consume more local CPU but should be faster.
    Your mileage may vary. Test it.
    Chuck.
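    Chuck's "test it" is easy to follow for the in-memory half. A sketch with Python's standard-library DOM parser (the XML payload is invented) separates the one-time parse cost from the traversal cost:

```python
import time
from xml.dom.minidom import parseString

# Option 2 from the question: parse the XML once, then work on the
# in-memory DOM. The payload below is invented for illustration.
xml = "<orders>" + "".join(f"<order id='{i}'/>" for i in range(1000)) + "</orders>"

doc = parseString(xml)  # one-time parse cost

start = time.monotonic()
ids = [o.getAttribute("id") for o in doc.getElementsByTagName("order")]
traverse_secs = time.monotonic() - start

print(len(ids), f"nodes traversed in {traverse_secs * 1000:.2f} ms")
```

    Time the DB round trip in the same harness and compare; as the answer says, your mileage may vary.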

  • Last Logon Time

    Hi,
     In Exchange 2003 I looked at the mailboxes on a mailbox store. There is a label "Last Logon Time". What is this last logon time, and where does it come from? I have users already on another, web-based email system who are no longer using Outlook on their PCs.
    Thanks!

    The Last Logon Time shown in Active Directory and the one shown in Exchange are different: Active Directory shows the last successful authentication, while Exchange shows the last time the mailbox was accessed. Note, however, that Exchange uses many background processes that access the mailbox for maintenance.
    - Sarvesh Goel - Enterprise Messaging Administrator

Query: Implement Payroll without Time Management

    Hi,
    Is it possible to implement Payroll without Time Management? If yes, while performing the action (PA40), what do we have to mention in the work schedule?
    Edited by: prabu ganesh on Apr 30, 2011 1:50 PM

    Yes, it is possible to run PY without TM, but the work schedule must be filled, and you should choose Time Management status 7 (without payroll integration).
    Regards,
    Handoko

  • Last Access and Last Update Times for objects in Cache

    Hi,
         We are looking to implement a Tangosol cache for one of our critical J2EE systems, and I am pretty new to it. One of the requirements is to know the following timings:
         - the time when an object was put in a cache X (last update of the object)
         - the time when an object was last accessed in cache X (last access time of the object)
         Is it possible to get these values through the Tangosol J2EE API?
         Thanks,
         ravi

    Hi Ravi,
         You can use InvocationService to access this information - an example is attached.
         Regards,
         Dimitri
         Attachment: DumpCache.java (to use this attachment, rename 529.bin to DumpCache.java after the download is complete)
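    The idea behind the attached example can be sketched in a few lines: a cache wrapper that records a last-update time on put and a last-access time on get. This is plain Python for illustration, not the Tangosol/Coherence API:

```python
import time

class TimestampedCache:
    """Sketch of the concept: a cache wrapper that records last-update
    and last-access times per key. Plain Python, not the Tangosol API."""
    def __init__(self):
        self._data, self.last_update, self.last_access = {}, {}, {}

    def put(self, key, value):
        self._data[key] = value
        self.last_update[key] = time.time()  # last update of the object

    def get(self, key):
        self.last_access[key] = time.time()  # last access time of the object
        return self._data[key]

cache = TimestampedCache()
cache.put("x", 42)
print(cache.get("x"), "x" in cache.last_update, "x" in cache.last_access)
```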
