Performance of queries against large AD CS databases - how to optimize?

I am asking experts with experience with AD CS databases holding 100.000s or millions of certificates to confirm or correct my "theories".
I am aware of these two articles that state performance is not an issue for millions of certificates:
Windows CA Performance Numbers and
Evaluating CA Capacity, Performance, and Scalability
However, performance there is mainly evaluated in terms of database size and request/certificate throughput. I am more interested in the performance of queries, as I have seen that it can take minutes to build up views for databases with 100.000s of certificates
- no matter whether you use certutil -view, certsrv.msc, or access via CCertView.
Could this just be due to an "unfortunate" combination of non-indexed fields? Any advice on which queries to avoid?
Or is the solution just as simple as to throw more memory or CPU or both at the problem?
In case it hinges on an unfortunate choice of fields and you absolutely have to run this query, my guess is that you have to use a custom policy(*) module (FIM or third-party) to dump certificates to a SQL database and do your queries there.
Am I right or did I miss something? Any input is highly appreciated!
Elke
PS / edit: That should have been 'exit module' - I don't know why I wrote policy module. Thanks to Vadims for pointing it out.
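For the "dump to a SQL database" route, the idea can be sketched like this (Python purely for illustration; the CSV export - e.g. via `certutil -view -out "RequestID,SerialNumber,NotAfter" csv` - and the column names are assumptions, so adjust them to whatever you actually export):

```python
import csv
import sqlite3

def load_certs(csv_path, db_path="certs.db"):
    """Load a CSV dump of the CA database into SQLite and index the query columns.

    The column names (RequestID, SerialNumber, NotAfter) are illustrative;
    match them to the columns present in your export.
    """
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS certs (
                       request_id INTEGER PRIMARY KEY,
                       serial     TEXT,
                       not_after  TEXT)""")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            con.execute("INSERT OR REPLACE INTO certs VALUES (?, ?, ?)",
                        (row["RequestID"], row["SerialNumber"], row["NotAfter"]))
    # the index is what makes ad-hoc queries cheap compared to building a CA view
    con.execute("CREATE INDEX IF NOT EXISTS idx_not_after ON certs(not_after)")
    con.commit()
    return con
```

Once loaded, a query such as `SELECT serial FROM certs WHERE not_after < '2015-01-01'` hits the index instead of forcing the CA to build a full view.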

> I meant 'exit module'
Exit module is the correct one. However, it is notified by the CA only when a new request is issued/processed. This means that you can use an exit module to copy certificate information to SQL only for new requests; for existing requests you are still stuck
with a database dump.
> but I should probably check how I dealt with the row handles
I don't know how COM handles work in VBScript, but in PowerShell (and other CLR languages) COM handles may not be handled properly by the garbage collector; therefore, when a COM object is no longer necessary, you should release its reference count. In the CLR this is done
by calling the Marshal.ReleaseComObject method, which marks the COM object as safe for the garbage collector. For example, the typical row/column iterator scheme is:
# $ICertView is an already-configured CCertView/ICertView object
$Row = $ICertView.OpenView()
# iterate over rows
while ($Row.Next() -ne -1) {
    # acquire the IEnumCERTVIEWCOLUMN COM object for the current row
    $Column = $Row.EnumCertViewColumn()
    # iterate over the columns of the current row
    while ($Column.Next() -ne -1) {
        # collect column information and do other work here
    }
    # release the IEnumCERTVIEWCOLUMN object; last statement inside the row loop
    [Runtime.InteropServices.Marshal]::ReleaseComObject($Column)
}
# release the IEnumCERTVIEWROW COM object as well
[Runtime.InteropServices.Marshal]::ReleaseComObject($Row)
My weblog: en-us.sysadmins.lv
PowerShell PKI Module: pspki.codeplex.com
PowerShell Cmdlet Help Editor pscmdlethelpeditor.codeplex.com
Check out new: SSL Certificate Verifier and PowerShell FCIV tool.

Similar Messages

  • SQL queries against a database view from an external system?

    Hi,
    I have a question about Database views in se11.
    Is it possible to create a database view and then run SQL queries against this view from an
    external system, i.e. not an SAP system?
    Please, I need your help.
    Best Regards
    Annika

    Hi Annika,
    it is possible, yes... but it depends on the database systems of the SAP source DB and the external DB
    (easier if they are the same, i.e. both ORACLE) - check with your BASIS team (they have to create something like a "database link" in the external DB system that you can use to access the tables in the SAP source).
    In the external DB you can then create a view on these "remote" tables.
    We used this to pull data from an SAP DB to another DB system (both ORACLE based).
    But this is NOT supported by SAP, so be careful. Below is the restriction for ORACLE (similar ones exist for other DB systems);
    see SAP note 581312 "Oracle database: licensing restrictions"
    As of point 3, it follows that direct access to the Oracle database is
    only allowed for tools from the areas of system administration and
    monitoring. If other software is used, the following actions, among
    other things, are therefore forbidden at database level:
    * Querying/changing/creating data in the database
    * Using ODBC or other SAP external access methods
    This means that additional application software is only allowed if this
    accesses the database through SAP interfaces (for example, RFC, SAP J2EE
    or BAPI).
    I would say if you KNOW the tables involved (using valid WHERE conditions and joins )
    and don't start queries from hell (ad-hoc type) which can bring down your SAP system performance
    you can try it.
    But be warned...
    good luck...
    bye
    yk

  • Warehouse partitioning - performance of queries across multiple partitions?

    Hi,
    We are using Oracle 11.2.0.3 and have a large central fact table with several surrogate ids which have bitmap indexes on them and have fks looking at dimension tables + several measures
    (PRODUCT_ID,
    CUSTOMER_ID,
    DAY_ID,
    TRANS_TYPE_ID,
    REGION_ID,
    QTY,
    VALUE)
    We have 2 distinct sets of queries users run for the most part: ones accessing all transactions for products regardless of the time those transactions happened (i.e. non-financial queries) - about 70% of queries;
    and queries determining what happened in a particular week - about 20% of queries.
    The table will have approx 4bn rows in it eventually.
    Considering adding an extra DATE column and range partitioning on it to allow us to drop old partitions every year - however this column wouldn't be joined to any other table.
    Then considering sub-partitioning by hash of product_id, which is the surrogate key for the product dimension.
    Thoughts on performance?
    Queries by their nature would hit several sub-partitions.
    Thoughts on query performance for queries which access several sub-partitions/partitions versus queries running against a single table?
    Any other thoughts on partitioning strategy in our situation much appreciated.
    Thanks

    >
    Thoughts on query performance for queries which access several sub-partitions/partitions versus queries running against a single table?
    >
    Queries that access multiple partitions can improve performance for two use cases: 1) only a subset of the entire table is needed and 2) if the access is done in parallel.
    Even if 9 of 10 partitions are needed that can still be better than scanning a single table containing all of the data. And when there is a logical partitioning key (transaction date) that matches typical query predicate conditions then you can get guaranteed benefits by limiting a query to only 1 (or a small number) partition when an index on a single table might not get used at all.
    Conversely, if all table data is needed (perhaps there is no good partition key) and parallel option is not available then I wouldn't expect any performance benefit for either single table or partitioning.
    You don't mention if you have licensed the parallel option.
    >
    Any other thoughts on partitioning strategy in our situation much appreciated.
    >
    You provide some confusing information. On the one hand you say that 70% of your queries are
    >
    ones accessing all transactions for products regardless of the time those transactions happened
    >
    But then you add that you are
    >
    Considering adding an extra DATE column and range partitioning on it to allow us to drop old partitions every year
    >
    How can you drop old partitions every year if 70% of the queries need product data 'regardless of the time those transactions happened'?
    What is the actual 'datetime' requirement? And what is your definition of 'a particular week'? Does a week cross month and year boundaries? Does the requirement include MONTHLY, QUARTERLY or ANNUAL reporting?
    Those 'boundary' requirements (and the online/offline need) are critical inputs to the best partitioning strategy. A MONTHLY partitioning strategy means that for some weeks two partitions are needed. A weekly partitioning strategy means that for some months two partitions are needed. Which queries are run more frequently: weekly or monthly?
    Why did you mention sub-partitioning? What benefit do you expect, or what problem are you trying to address? And why hash? Hash partitioning means that ALL partitions will be needed for range-type predicates, since Oracle can only prune hash partitions for equality and IN predicates.
    The biggest performance benefit of partitioning is when the partition keys used have a high correspondence with the filter predicates used in the queries that you run.
    Contrarily the biggest management benefit of partitioning is when you can use interval partitioning to automate the creation of new partitions (and subpartitions if used) based solely on the data.
    The other big consideration for partitioning, for both performance and management, is the use of global versus local indexes. With global indexes (e.g. a global primary key) you can't just drop a partition in isolation; the global primary key needs to be maintained by deleting the corresponding index entries.
    On the other hand if your partition key includes the primary key column(s) then you can use a local index for the primary key. Then partition maintenance (drop, exchange) is very efficient.
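    The pruning argument above can be seen in miniature with a toy model (pure illustration in Python, not Oracle internals): partitions keyed by, say, week, where only a predicate on the partition key limits how many rows get scanned:

```python
from collections import defaultdict

class PartitionedTable:
    """Toy range-partitioned table: one list of rows per partition key.

    Pure illustration of partition pruning, not Oracle internals.
    """
    def __init__(self):
        self.partitions = defaultdict(list)
        self.scanned = 0  # rows touched by the most recent query

    def insert(self, week, row):
        self.partitions[week].append(row)

    def query(self, weeks=None):
        # a predicate on the partition key lets us scan only those partitions;
        # without one, every partition is scanned (a full table scan)
        self.scanned = 0
        keys = weeks if weeks is not None else list(self.partitions)
        result = []
        for k in keys:
            rows = self.partitions.get(k, [])
            self.scanned += len(rows)
            result.extend(rows)
        return result
```

    Note how a query with no predicate on the partition key scans every partition - which is exactly the conflict with the "70% of queries ignore the date" workload described above.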

  • BerkeleyDB + Tomcat + large number of databases.

    Hi all,
    for my bioinformatics project, I'd like to transform a large number of SQL databases (see http://hgdownload.cse.ucsc.edu/goldenPath/hg18/database/ ) to a set of read only BerkeleyDB JE databases.
    In my web application, the Environment would be loaded in Tomcat and one can imagine a servlet/JSP querying/browsing each database.
    Then I wonder what are the best practices ?
    Should I open each JE Database for each http request and close it at the end of the request ?
    Or should I just let each Database open once it has been opened ? Wouldn't it be a problem if all the database and secondary databases are all open ? Can I share one Database for some multiple threads ?
    Something else ?
    Many thanks for your help
    Thanks in advance
    Pierre

    Hi Pierre,
    Normally you should keep the Environment and all Databases open for the duration of the process, since opening and closing a database (and certainly an environment) per request is expensive and unnecessary. However, each open database takes some memory, so if you have an extremely large number of databases (thousands or more), you should consider opening and closing the databases at each request, or for better performance keeping a cache of open databases. Whether this is necessary depends on how much memory you have and how many databases.
    You'll find the answer to your multi-threading question in the getting started guide.
    Please read the docs and also search the forum.
    --mark
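    Mark's suggestion of "keeping a cache of open databases" can be sketched as a simple LRU cache (illustrative Python; `opener` and `close()` stand in for the JE `Environment.openDatabase` and `Database.close` calls):

```python
from collections import OrderedDict

class DbHandleCache:
    """Keep at most max_open databases open; evict the least-recently-used one.

    opener: callable mapping a database name to a handle exposing close().
    Stand-in for a cache of Berkeley DB JE Database handles.
    """
    def __init__(self, opener, max_open=100):
        self.opener = opener
        self.max_open = max_open
        self.cache = OrderedDict()

    def get(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)  # mark as recently used
            return self.cache[name]
        if len(self.cache) >= self.max_open:
            _, oldest = self.cache.popitem(last=False)
            oldest.close()  # release memory held by the idle handle
        handle = self.cache[name] = self.opener(name)
        return handle

    def close_all(self):
        while self.cache:
            _, handle = self.cache.popitem()
            handle.close()
```

    The max_open bound is the knob Mark describes: large enough that hot databases stay open, small enough that thousands of idle handles don't exhaust memory.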

  • Queries against ODS vs. Cube

    I'm looking for best practices relating to when you should or should not develop queries against ODS.    I am familiar with the standard considerations like level of granularity and so forth but would like to be able to analyze each situation more thoroughly.   Does anyone know if there is a document that discusses this topic?
    Thanks, Carol

    Hi,
    There is no right or wrong answer to this question; there are pros and cons.
    The major con is in the performance area (in my opinion): ODS objects should not be used for slice-and-dice analytics but for more list-type reports (more operational in nature). Please check the BW performance materials on the BW home page for more information (http://service.sap.com/BW -> Performance).
    Cheers,
    Mike.

  • How to efficiently support "Like"-masked range queries against a CLR UserDefinedType

    Hi all,
    Is there any way to efficiently support "Like"-masked range queries against a CLR UserDefinedType?   I've never written a UDT before and am just beginning with one that has a Varbinary type as the underlying physical type for
    persistence but has a string representation to the consumer.   Looking at the articles and samples I've found, the only required method/interface for a UDT that seems to be made for queries is the requirement to provide an implementation
    of IComparable.   
    But from what I've read so far, doing a "Like" masked/range search - or any search for other than an exact "key" expression match would amount to doing a table scan while the database engine passes in each candidate to the UDT's IComparable
    implementation.
    I need to support masked searches, including some that might wild-card the first few characters instead of always providing non-wild-carded characters starting in character position one of the mask. My UDT would
    have internal structures that could vastly speed up the searches if I could get the whole Like() mask at once vs. having my IComparable methods called once per existing row in the underlying table. (We may have one such table
    with a column of this UDT containing over 800M rows. You can imagine the response times if every search other than a single exact key match effectively meant a table scan.)
    I had heard that you could implement a "Like()" method in a UDT that might allow this type of processing.   But I can't find any discussions or documentation on this issue; so I'm not sure if that's really an option for solving this issue.
    Thanks!
    -Bob

    Not sure that I understand exactly, but obviously you can implement a type method that returns 0 or 1. However, searches on this method would require scanning the table, so even if your internal matching is fast, the query will still be slow.
    It sounds like you are looking for some custom indexing, and there is nothing built-in that you can hook into, but you would have to do all the work yourself - if it is possible at all.
    Erland Sommarskog, SQL Server MVP, [email protected]
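    The "do all the work yourself" custom indexing Erland mentions can be sketched outside the engine (illustrative Python, not a SQL Server hook): an n-gram index that narrows LIKE '%...%'-style candidates before exact verification, instead of scanning all 800M rows:

```python
from collections import defaultdict

def trigrams(s):
    """All 3-character substrings of s."""
    return {s[i:i+3] for i in range(len(s) - 2)}

class TrigramIndex:
    """Map each trigram to the set of row ids whose value contains it."""
    def __init__(self):
        self.index = defaultdict(set)
        self.rows = {}

    def add(self, row_id, value):
        self.rows[row_id] = value
        for g in trigrams(value):
            self.index[g].add(row_id)

    def search_contains(self, fragment):
        """Row ids whose value contains fragment (LIKE '%fragment%'-style)."""
        grams = trigrams(fragment)
        if not grams:
            return set(self.rows)  # fragment too short to narrow; needs a scan
        candidates = set.intersection(*(self.index[g] for g in grams))
        # verify candidates: shared trigrams alone can give false positives
        return {r for r in candidates if fragment in self.rows[r]}
```

    The point is the shape of the solution, not the data structure: only the (usually few) rows sharing every trigram of the mask are handed to the exact-match step, so a leading wildcard no longer forces a full scan.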

  • Run queries against system tables for oracle BPEL processes

    I want to run queries against system tables for Oracle BPEL processes. It is becoming very difficult for me to use EM as it is very slow,
    and it is not always sufficient for our needs.
    We are doing load testing and we want to find out info like how many requests came in, how many faulted, and what time is taken by each request...
    So do any of you have the query that I can use and tables that I need to go against?

    Use the BPEL hydration database table "cube_instance".
    There should be plenty of examples in the forum regarding this table.

  • Oracle Forms 6i client/server against an Oracle 10gR2 database

    Hello
    We have the following situation:
    Oracle Forms 6i/Reports 6i client/server application against an Oracle 8i cost-based database. We want to migrate this application:
    step 1:
    Migrate the database to 10gr2, but do not touch the client application
    Go live
    step 2:
    Migrate the development environment to 6i webforms.
    Production environment stays client server.
    With this construction we can still create new patches/functionality.
    step 3:
    Migrate to Forms10gR2 (and reports)
    I know Forms 6i is not supported anymore.
    My question is on step 1.
    When I read NOTE: 338513.1 entitled "Is Forms/Reports 6i Certified To Work Against Oracle Server 10g Rel 1, Rel 2 or 11g?" carefully
    it says that Forms 6i is not certified against 10gR2.
    On OTN I read several posts that this works ok (assuming you do not use the wrong character set).
    I also read on OTN that patch 18 (which is only supported for EBS customers) is certified against 10gR2.
    The questions:
    - Is Oracle Forms patch 18 certified against an Oracle 10gR2 database? (or only for EBS)
    - Is there anybody out there that can confirm that Oracle Forms 6i C/S works against Oracle 10gR2
    Regards Erik

    Thank you.
    Now I found it.
    But how do I read that page.
    It says:
    Client Certifications
    OS      Product      Server Status
    XP      6.0.8.27.0 Patch 18      N/A      Desupported
    Application Tier Certifications
    OS      Product      Server      Status      
    XP      6.0.8.27.0 Patch 18      9.2      Desupported
    XP      6.0.8.27.0 Patch 18      10g      Desupported
    I read this as follows:
    Patch 18 was certified against a 10g database in a webforms environment.
    No client/server mentioned, and Server "10g", so no 10gR2!
    Am I right?
    Regards Erik

  • Is it possible to perform network data encryption between Oracle 11g databases without the Advanced Security Option?

    Is it possible to perform network data encryption between Oracle 11g databases without the Advanced Security Option?
    We are not licensed for the Oracle Advanced Security Option, and I have been tasked to use Oracle network data encryption in order to encrypt network traffic between Oracle instances that reside on remote servers. From what I have read and my prior understanding, this is not possible without ASO. Can someone confirm or disprove my research? Thanks.

    Hi, Srini Chavali-Oracle
    As for http://www.oracle.com/technetwork/database/options/advanced-security/advanced-security-ds-12c-1898873.pdf?ssSourceSiteId… ASO is mentioned as TDE and Redacting Sensitive Data to Display. Network encryption is excluded.
    As for Network Encryption - Oracle FAQ (of course this is not Oracle official) "Since June 2013, Net Encryption is now licensed with Oracle Enterprise Edition and doesn't require Oracle Advanced Security Option." Could you clarify this? Thanks.

  • Why we cannot perform DML operations against complex views directly.

    hi
    can anyone tell me why we cannot perform DML operations against complex views directly?

    Hi,
    It is not easy to perform DML operations on complex views which involve more than one table, as vissu said. The reason is that the database may not know which base-table columns should be updated/inserted/deleted. If it is a simple view containing a single table, it is as simple as performing the actions on the table itself.
    For further details visit this
    http://www.orafaq.com/wiki/View
    cheers
    VT

  • Performing sql queries in java without using java libraries

    I wonder whether it's possible to perform SQL queries, from CREATE TABLE to update statements, without using the Java SQL library.
    Has anyone written such code?

    You could use JNI to talk to a native driver like the Oracle OCI driver. Doing this is either exciting or asking for trouble, depending on your attitude to lots of low-level bugs.

  • How large can a database be?

    Hi,
    just out of curiosity that came in my mind . . .
    10 of the 10 largest databases in the world use Oracle :)
    How large are these databases?

    Exactly. My largest DB is 5TB.
    The actual limitations are o/s and h/w based - if the o/s, for example, only supports 128 (or whatever) file devices (via HBA to a SAN), then you are limited to the max device size that the SAN can support.
    From an Oracle and ASM perspective, it can use whatever file systems and devices you assign to it for use.
    Again, there are o/s file system limits. Cooked file systems have a max size. There is a limit on the number of file handles a single o/s process can allocate. Oracle is subject to these - but Oracle itself does not really impose any limitation in this regard.

  • Cannot run a UNICODE kernel against a non-UTF8 database

    Hi,
    I am trying to install SAP ECC 6.0 SR2. I am using Windows 2003 Server and an Oracle 10g DB.
    Please help me resolve this...
    sapparam: sapargv( argc, argv) has not been called.
    sapparam(1c): No Profile used.
    sapparam: SAPSYSTEMNAME neither in Profile nor in Commandline
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: START OF LOG: 20100717144107
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#13 $ SAP
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: version R7.00/V1.4 [UNICODE]
    Compiled Jul 17 2007 01:28:45
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe -testconnect
    DbSl Trace: ORA-1403 when accessing table SAPUSER
    DbSl Trace: Cannot run a UNICODE kernel against a non-UTF8 database (charset = AL32UTF8)
    (DB) ERROR: db_connect rc = 256
    DbSl Trace: Default conn.: already connected to DEV
    (DB) ERROR: DbSlErrorMsg rc = 29
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: job finished with 1 error(s)
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: END OF LOG: 20100717144108

    Reinstalled the server, problem solved. Thanks to everyone.

  • EPM V11.1.2.1 Perform 1st-time configuration of Shared Services database

    Hello,
    I installed EPM v11.1.2.1 but I could not configure it.
    In the EPM Configurator, the "Perform 1st-time configuration of Shared Services database" option was disabled.
    I tried changing the vpd.properties file's name, but it did not work.
    Do you give me any suggestions?
    Thank you.
    ankist

    Have you had any previous versions on the machine? What OS is it?
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • "Perform 1st-time configuration of Shared Services database" option disable

    Hi,
    In the Hyperion 11.1.2 Shared Services configuration, the "Perform 1st-time configuration of Shared Services database" option is disabled.
    I am doing the configuration for the first time, but the option is disabled.
    Please help here.
    Thanks
    Dharm

    Try starting the configurator from the command prompt with option -forceRegistry
    We were able to do it in v11.1.1.3... I think this can still be done with v11.1.2
    Thanks,
    Vijay
