DSEE 6.3.1 - Slow ldapsearch Queries

We recently upgraded to Sun DSEE 6.3.1 from Sun ONE Directory Server 5.1.
We have some utilities that extract a list of all users in the LDAP repository and check certain aspects of the accounts. We recently found that the following ldapsearch query executed on a suffix containing only 5 entries took over 45 seconds to complete:
ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(objectclass=*)" uid
The following message was displayed in the error log:
[12/Nov/2009:12:34:08 -0600] - WARNING<20805> - Backend Database - conn=45187 op=1 msgId=2 -  search is not indexed base='ou=people,o=test-suffix' filter='(objectClass=*)' scope='sub'
Since objectclass is a system index and cannot be modified, we tried wildcard searches on other known fields, such as:
ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(uid=*)" uid
ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(cn=*)" uid
ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(dn=*)" uid
ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(sn=*)" uid
All of these searches took roughly the same amount of time (~45 seconds). However, if the wildcard searches are refined slightly so that they do not return all the entries in the suffix, they execute almost instantaneously.
ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(uid=A*)" uid
ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(cn=A*)" uid
ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(dn=A*)" uid
ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(sn=A*)" uid
Also, I found some information on the referential integrity plugin; I have indexed the fields it uses and regenerated the indexes. This did not have any effect on the performance.
It seems that any query that will return all entries in the suffix gets the "search is not indexed" error and takes an inordinate amount of time to complete. It doesn't seem to matter which fields (indexed or not indexed) are in the query filter.
Is this the expected behavior, or am I missing something? If so, what is the preferred method for retrieving a list of all entries in a suffix?
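One client-side workaround follows from the timings above: a filter that matches every entry goes unindexed, while a narrower substring filter such as (uid=A*) stays indexed, so the exhaustive search can be split into slices and the results merged. A minimal sketch, assuming uids start with a letter or digit (the bind password option is omitted, as in the commands above):
for c in A B C D E F G H I J K L M N O P Q R S T U V W X Y Z 0 1 2 3 4 5 6 7 8 9; do
  # each slice matches only part of the suffix, so it can use the uid substring index
  ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" -b "ou=People, o=test-suffix" -s sub "(uid=$c*)" uid
done > all-uids.ldif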

Thank you guys. It does make sense that a rescan of the database is needed, although the ability to index all objectclasses and use this index in an exhaustive search would be nice.
One more aspect though: can these searches be parallelized? I have a Niagara (Sun Fire T2000) acting as one of several DSEE 6.3 (not yet 6.3.1) servers in a group balanced by a DPS. While this box can take a lot of queries at once, it seems to execute each one in a single process or LWP. Thus it takes a very long time to complete an exhaustive search (about 4 minutes), although it can complete over a dozen parallel searches in the same 4 minutes :)
I tried to tweak the number of threads with dsconf set-server-prop, but it did not seem to influence anything.
Is it possible to parallelize a single query in DSEE, spreading it over several CPUs? (Maybe not in the DS instance but in the DPS; I have set the balancing option to "Proportional", but it did not seem to help spread the load over CPUs either, although the DPS does seem to contact and use several instances - "data-sources".)
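As far as I can tell, a single LDAP operation executes on a single worker thread, so parallelism has to come from issuing several operations at once; the slicing approach sketched above can be driven in parallel from the client. A sketch (file names are placeholders):
for c in A B C D E F G H I J K L M N O P Q R S T U V W X Y Z 0 1 2 3 4 5 6 7 8 9; do
  ldapsearch -h policy.test.com -p 389 -D "cn=Directory Manager" \
    -b "ou=People, o=test-suffix" -s sub "(uid=$c*)" uid > "uids.$c.ldif" &
done
wait  # each slice is a separate operation, so the server can spread them over threads/CPUs
cat uids.*.ldif > all-uids.ldif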
Thanks,
//Jim

Similar Messages

  • First query very slow, subsequent queries fine

    In our 9i application, the first query we run is extremely slow if the database has not been used for some time. This happens for sure overnight, but also after an hour or so of inactivity during the day.
    After the initial query eventually completes, subsequent queries are fast and no problem. It is only a problem with the first query.
    This does not happen with all data. Just a particular group of data in our database.
    any suggestions?
    Thanks
    John

    Hi John!
    To me, it looks like a data cache effect.
    A database needs to manipulate data and uses a data cache to avoid reading/writing from disk too much.
    So if the request doesn't find the data in the cache, the database has to read it from disk and put it in the data cache (for me, your first request). But if the data is already in the cache, there is no need to read it from disk, so the request time is far better (for me, the following requests).
    So if this is a very important problem, what can you do?
    - Check your query execution plan and try to need few data reads (avoid full table scans, for example...)
    - Raise the size of your db cache (check the cache hit ratio (1))
    - You can place data permanently in the cache (the table CACHE option; see the example just below), but only if these data sets are small (check dba_segments, [dba_tables after statistics]). If the data sets are large, they can eject other data from the cache, so your request time will be good but other requests very bad.
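    For the CACHE option in the last point, the syntax is just (the table name is a placeholder):
    ALTER TABLE my_small_lookup CACHE;   -- full-scan blocks go to the hot end of the buffer cache LRU
    ALTER TABLE my_small_lookup NOCACHE; -- and this undoes it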
    It could be a library cache effect too (same kind of problem: entries are kept for queries already parsed, so the same query can avoid a hard parse) if, for example, you handle queries with 5,000 bind variables.
    You can check the library cache hit ratio too (2).
    To be sure of your problem, I think the best is to trace your request
    1) when executed the first time (cold request)
    2) and when executed the 4th time (hot request)
    Tkprof the two traces and look where the difference is. There are 3 phases: parse, execute and fetch.
    A data cache problem shows up as a high fetch; a library cache problem as a high parse. You will also find, in the execution plan, the step of your query which causes the disk reads.
    You can post here the query results and times for your 1st request and the following requests, or even your trace files, if you want me to check your resolution.
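    A minimal version of that cold/hot recipe (the identifier is arbitrary and the trace file path is a placeholder; level 8 includes wait events):
    ALTER SESSION SET timed_statistics = TRUE;
    ALTER SESSION SET tracefile_identifier = 'COLD';
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- run the query once (cold), then repeat with identifier 'HOT' for the 4th execution
    ALTER SESSION SET events '10046 trace name context off';
    -- then, on the server: tkprof <udump>/ora_<pid>_COLD.trc cold.txt sys=no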
    Regards,
    Jean-Luc
    (1)
    Cache hit ratio.
    Warning 1: it is calculated since your last startup (so if your last startup was a few weeks ago, you need to shut down, wait for a good sample of your batches to execute, and then try the following request).
    Warning 2: there is no universal ">98% is good" and "<90% is bad". It depends on your applications. For example, if the same data is frequently accessed in a transactional database, you should raise it as high as you can.
    But imagine a database for clients who need their data once a day or once a week (a database or schema of client information like this very forum [a good example, because I suspect them to use Oracle databases, you know :)]). You can accept a high response time and lots of disk reads, and so a hit ratio < 90.
    Cache hit ratio :
    select round((1-(pr.value/(bg.value+cg.value)))*100,2) cachehit
    from v$sysstat pr, v$sysstat bg, v$sysstat cg
    where pr.name = 'physical reads'
    and bg.name = 'db block gets'
    and cg.name = 'consistent gets';
    (2)
    Same warning as (1) Warning 1,
    but not (1) Warning 2: the library cache hit ratio is generally higher than the data cache hit ratio (> 98%).
    Library cache hit ratio :
    select round(sum(pinhits)/sum(pins) * 100,2) efficacite from v$librarycache;

  • Slow Running Queries after IMPDP

    Hi Xperts
    I have been doing table data and sequence restores every day, because the user asks for it.
    The user works with the database to test some applications, and at the end of the day
    I restore the table data to return the information to a known point.
    I truncate every table, disable constraints, disable triggers, and then do an IMPDP
    of the table data and sequences; then I enable triggers and constraints, and at the
    end I gather statistics. This procedure is only for one schema.
    The problem is that some days the queries work fine, but other days the queries work slowly.
    The procedure is the same every day.
    This is a Test Data Base: 11g R2, ASM under Oracle Linux 5.8 64bits
    Any advice? Why do the queries work properly some days and slowly on others?
    I execute statistics like this:
    exec dbms_stats.gather_schema_stats('MYSCHEMA');
    exec dbms_stats.gather_database_stats;
    exec dbms_stats.gather_dictionary_stats;
    IMPDP:
    impdp system/oracle1 FULL=NO SCHEMAS=MYSCHEMA CONTENT=DATA_ONLY directory=IMPDP_DIR dumpfile=Full_MYSCHEMA.dmp logfile=IMPDP_Full_MYSCHEMA.log
    impdp system/oracle1 FULL=NO SCHEMAS=MYSCHEMA CONTENT=METADATA_ONLY INCLUDE=SEQUENCE directory=IMPDP_DIR dumpfile=Full_MYSCHEMA.dmp logfile=IMPDP_Full_MYSCHEMAseq.log
    Thank you in advance
    J.A.

    The problem could be statistics. A simple fix would be to lock the statistics at the end of a day when performance is OK, and don't try to gather them again.
    By the way, a much better way to do this (better in terms of work for you and repeatable behaviour for your client) would be to enable database flashback for this sort of thing rather than repeatedly importing. Flashback requires Enterprise Edition licences.
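    A sketch of both suggestions (the restore point name is a placeholder; schema-level gathers should skip locked objects, and FLASHBACK DATABASE must be run with the database mounted, not open):
    -- freeze the statistics after a day when performance was good:
    exec dbms_stats.lock_schema_stats('MYSCHEMA');
    -- with Enterprise Edition, rewind instead of re-importing every day:
    CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;
    -- ...let the user test, then:
    FLASHBACK DATABASE TO RESTORE POINT before_test;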
    John Watson
    Oracle Certified Master DBA
    http://skillbuilders.com

  • BLOB fields slow down queries

    Hello,
    If I run this query:
    select MyID, MyDescription, MyblobField
    From MyTable
    It brings back 500 records, but it takes 30 seconds!
    If I select without any blob field, like:
    select MyID, MyDescription
    From MyTable
    It's fast.
    The blob fields are storing images like jpg or gif files. Approximately, each field stores 250 KB.
    I know that the blob fields are slowing down the query, but is there any way to accelerate it?
    Thank you!

    1. Use a connection pool in your web program, because connecting to the database is a big cost in a 3-tier structure.
    2. Tune your SQL using explain plan.
    3. Adjust your database settings.
    Hope this helps.
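    A common way to accelerate this is to defer the LOB retrieval: list the cheap columns first and fetch each image only when it is actually needed. A sketch against the table from the question (:id is a bind variable for the row being viewed):
    -- fast listing: no LOB data is transferred
    SELECT MyID, MyDescription
    FROM MyTable;
    -- fetch a single image on demand
    SELECT MyblobField
    FROM MyTable
    WHERE MyID = :id;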

  • Using the ReportDisplay for CR 2008 in VS2008 for slow-running queries.

    Hi,
    I am using the ReportDisplay viewer in VS2008 winforms  which sits on a form which is shown using this.ShowDialog()
    Unless the report is really quick (which, in general, Crystal 2008 reports do not seem to be), the form is shown, the mouse pointer is automatically changed back from the wait cursor, and the user thinks it has crashed. A fair number of my reports take 10-30 seconds to display, so I wanted to show some sort of loading indication. I gather that this is only available via the web component at the moment, so I added a status strip with a progress bar on it, which displays fine. However, the problem I have is finding an event of the ReportDisplay to then hide the progress bar when it's done. I've tried the Load, Paint and Layout events, but all seem to fire before the report has finished displaying.
    In previous versions of Crystal I have seen code to kludge this sort of thing for the CrystalReportViewer, e.g. http://social.msdn.microsoft.com/Forums/en-US/vscrystalreports/thread/be123db6-eb32-4636-bde4-fa848464a449 so it seems to have been a problem for a while.
    There doesn't appear to be a navigate event for the ReportDisplay. so is there any other way to do this ?
    It's very frustrating indeed.....

    No way that I know of. Many have tried... as far as I know, all have failed.
    You may want to see the article [Improving Crystal Reports Performance in Visual Studio .NET Applications|https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/8029cc96-6ff3-2b10-47a2-b30ea790ea5b].
    Perhaps it will help with the processing speed. But I have nothing on the progress indicator in a win app.
    Ludek

  • MySQL 5.0.27-max on Xserve G5 dual running very slow

    Hi to all,
    I have a very strange problem on my dual G5 xserve running php and mysql5.
    I upgraded from MySQL 4.01 to MySQL 5.0.27 and now my server is very slow during queries.
    My CPU is always 45% used by MySQL, and I do not have too many connections or users at the same time.
    I found this link: http://www.shawnhogan.com/2005/10/mysql-problems-on-mac-os-x-server.html that explains something about MySQL 5 on Xserve.
    It is very strange: if I restart MySQL it runs normally, but after some queries it becomes slow again...
    Anyone has some idea?
    Thank you all
    Yorh

    I've actually tried all the USB ports and get the same results. I went and bought a Sonnet USB 2.0 high-speed PCI adapter thinking a dedicated controller would help. It was actually slower than the onboard USB 2.0 ports. Again, this same drive hooked up to my Windows XP computer performs about 4 times faster.
    At this point I've purchased an external SATA card from Sonnet, the Tempo X eSATA 4x4 card. As soon as I get an eSATA cable this should prove to be a lot better.
    I'd still like to know if other people get similarly poor performance on USB 2.0 external drives vs FireWire 400 on the same computer. I stopped buying FW enclosures when it seemed Apple was abandoning FireWire on their iPods, and there is no FireWire 800 on the MacBook Pro or Intel iMacs.

  • Oracle Server Performance

    RDBMS Version: 9.2.0
    Operating System and Version: Red Hat Linux release 3
    Database performance issue
    We are using Oracle 9i on the Linux platform. Over the last 4 days database performance has drastically decreased: when the number of active sessions increases, performance is slow, and even queries and reports are taking a long time. There were no major changes made to the database except some rebuilding of indexes.
    Based on our technical support's advice we have increased our number of processes to 350 and session_cached_cursors to 150, but there is no improvement. We have 60+ users and the sessions parameter is 390. RAM is 1 GB. As this is our production server, we are having a very hard time.
    Can you please help me in this regard
    Thanks in advance

    Have you looked at any of the application SQL? Find some SQL that the users are executing through the application, or reports that have poor response time, and use SQL Trace and TKPROF to see where the SQL is spending its time.
    Try the following:
    alter session set timed_statistics=true;
    alter session set max_dump_file_size=unlimited;
    alter session set tracefile_identifier='SLOW_SQL';
    alter session set events '10046 trace name context forever, level 12';
    <insert sql with poor response time>
    disconnect
    Use the TKPROF utility on the file found in USER_DUMP_DEST that contains the string SLOW_SQL.
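    For example (the trace file name is a placeholder; the sort keys order statements by execute and fetch elapsed time):
    tkprof ora_1234_SLOW_SQL.trc slow_sql.txt sys=no sort=exeela,fchela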
    For information on how to interpret the TKPROF output, see the following link.
    http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96533/sqltrace.htm

  • Delay when querying from CUBE_TABLE object, what is it?

    Hi Guys,
    We are using Oracle OLAP 11.2.0.2.0 with an 11g Cube, 7 Dimensions, Compressed and partitioned by Month.
    We have run into a performance issue when implementing OBIEE.
    The main issue we have is a delay while drilling on a hierarchy. Users have been waiting 7-12 seconds per drill on a hierarchy, and the query is only returning a few cells of data. We have managed to isolate this to slow performing queries on CUBE_TABLE.
    For example, the following query returns one cell of data:
    SELECT FINSTMNT_VIEW.BASE, FINSTMNT_VIEW.REPORT_TYPE, FINSTMNT_VIEW.COMPANY, FINSTMNT_VIEW.SCENARIO, FINSTMNT_VIEW.PRODUCT, FINSTMNT_VIEW.ACCOUNT, FINSTMNT_VIEW.SITE, FINSTMNT_VIEW.TIME
    FROM "SCHEMA1".FINSTMNT_VIEW FINSTMNT_VIEW
    WHERE
    FINSTMNT_VIEW.REPORT_TYPE IN ('MTD' )
    AND FINSTMNT_VIEW.COMPANY IN ('E01' )
    AND FINSTMNT_VIEW.SCENARIO IN ('ACTUAL' )
    AND FINSTMNT_VIEW.PRODUCT IN ('PT' )
    AND FINSTMNT_VIEW.ACCOUNT IN ('APBIT' )
    AND FINSTMNT_VIEW.SITE IN ('C010885' )
    AND FINSTMNT_VIEW.TIME IN ('JUN11' ) ;
    1 Row selected in 4.524 Seconds
    Note: FINSTMNT_VIEW is the automatically generated cube view.
    CREATE OR REPLACE FORCE VIEW "SCHEMA1"."FINSTMNT_VIEW" ("BASE","REPORT_TYPE", "COMPANY", "SCENARIO", "PRODUCT", "ACCOUNT", "SITE", "TIME")
    AS
    SELECT "BASE", "REPORT_TYPE", "COMPANY", "SCENARIO", "PRODUCT", "ACCOUNT", "SITE", "TIME"
    FROM TABLE(CUBE_TABLE('"SCHEMA1"."FINSTMNT"') ) ;
    If we increase the amount of data returned by adding to the query, it only increased the query time by .4 seconds
    SELECT FINSTMNT_VIEW.BASE, FINSTMNT_VIEW.REPORT_TYPE, FINSTMNT_VIEW.COMPANY, FINSTMNT_VIEW.SCENARIO, FINSTMNT_VIEW.PRODUCT, FINSTMNT_VIEW.ACCOUNT, FINSTMNT_VIEW.SITE, FINSTMNT_VIEW.TIME
    FROM "SCHEMA1".FINSTMNT_VIEW FINSTMNT_VIEW
    WHERE
    FINSTMNT_VIEW.REPORT_TYPE IN ('MTD' )
    AND FINSTMNT_VIEW.COMPANY IN ('E01' )
    AND FINSTMNT_VIEW.SCENARIO IN ('ACTUAL' )
    AND FINSTMNT_VIEW.PRODUCT IN ('PT' )
    AND FINSTMNT_VIEW.ACCOUNT IN ('APBIT' )
    AND FINSTMNT_VIEW.SITE IN ('C010885', 'C010886', 'C010891', 'C010892', 'C010887', 'C010888', 'C010897', 'C010893', 'C010890', 'C010894', 'C010896', 'C010899' )
    AND FINSTMNT_VIEW.TIME IN ('JUN11' ) ;
    12 rows selected - In 4.977 Seconds
    If we increase the data returned even more:
    SELECT FINSTMNT_VIEW.BASE, FINSTMNT_VIEW.REPORT_TYPE, FINSTMNT_VIEW.COMPANY, FINSTMNT_VIEW.SCENARIO, FINSTMNT_VIEW.PRODUCT, FINSTMNT_VIEW.ACCOUNT, FINSTMNT_VIEW.SITE, FINSTMNT_VIEW.TIME
    FROM "SCHEMA1".FINSTMNT_VIEW FINSTMNT_VIEW
    WHERE
    FINSTMNT_VIEW.REPORT_TYPE IN ('MTD' )
    AND FINSTMNT_VIEW.COMPANY IN ('ET', 'E01', 'E02', 'E03', 'E04' )
    AND FINSTMNT_VIEW.SCENARIO IN ('ACTUAL' )
    AND FINSTMNT_VIEW.PRODUCT IN ('PT', 'P00' )
    AND FINSTMNT_VIEW.ACCOUNT IN ('APBIT' )
    AND FINSTMNT_VIEW.SITE IN ('C010885', 'C010886', 'C010891', 'C010892', 'C010887', 'C010888', 'C010897', 'C010893', 'C010890', 'C010894', 'C010896', 'C010899' )
    AND FINSTMNT_VIEW.TIME IN ('JUN11', 'JUL11', 'AUG11', 'SEP11', 'OCT11', 'NOV11', 'DEC11', 'JAN12') ;
    118 rows selected - In 14.213 Seconds
    If we take the time for each query and divide by the number of rows, we can see that querying more data results in a much more efficient query:
    Time/Rows returned:
    1 Row - 4.524
    12 Rows - 0.4147
    118 Rows - 0.120449153
    It seems like there is an initial delay of approx 4 seconds when querying the CUBE_TABLE object. Using AWM to query the same data using LIMIT and RPR is almost instantaneous...
    Can anyone explain what this delay is, and if there is any way to optimise the query?
    Could it be the AW getting attached before each query?
    Big thanks to anyone that can help!

    Thanks Nasar,
    I have run a number of queries with logging enabled, the things you mentioned all look good:
    Loop Optimization: GDILoopOpt     COMPLETED
    Selection filter: FILTER_LIMITS_FAST     7
    ROWS_FAILED_FILTER     0
    ROWS_RETURNED     1
    Predicates: 7 pruned out of 7 predicates
    The longest action I have seen in the log is the PAGING operation... but I do not see this on all queries.
    Time   Total Time   OPERATION
    2.263  27.864       PAGING  DYN_PAGEPOOL  TRACE  GREW  9926KB to 59577KB
    1.825  25.601       PAGING  DYN_PAGEPOOL  TRACE  GREW  8274KB to 49651KB
    1.498  23.776       PAGING  DYN_PAGEPOOL  TRACE  GREW  6895KB to 41377KB
    1.232  22.278       PAGING  DYN_PAGEPOOL  TRACE  GREW  5747KB to 34482KB
    1.17   21.046       PAGING  DYN_PAGEPOOL  TRACE  GREW  4788KB to 28735KB
    1.03   19.876       PAGING  DYN_PAGEPOOL  TRACE  GREW  3990KB to 23947KB
    2.808  18.846       PAGING  DYN_PAGEPOOL  TRACE  GREW  3325KB to 19957KB
    What is strange is that the cube operation log does not account for all of the query time. For example:
    SELECT "BASE_LVL" FROM TABLE(CUBE_TABLE('"EXAMPLE"."FINSTMNT"'))
    WHERE
    "RPT_TYPE" = 'MTD' AND
    "ENTITY" = 'ET' AND
    "SCENARIO" = 'ACTUAL' AND
    "PRODUCT" = 'PT' AND
    "GL_ACCOUNT" = 'APBIT' AND
    "CENTRE" = 'TOTAL' AND
    "TIME" = 'YR09';
    This query returns in 6.006 seconds using SQL Developer. If I then take the CUBE_OPERATION_LOG for this query and subtract the start time from the end time, I only get 1.67 seconds. This leaves 4.3 seconds unaccounted for... It is the same with my other queries; see the actual time and logged time below:
    Query     Actual     Logged      Variance
    S3     6.006     1.67     4.336
    L1     18.128     13.776     4.352
    S1     4.461     0.203     4.258
    L2     4.696     0.39     4.306
    S2     5.882     1.575     4.307
    Any ideas on what this could be or how I can capture this 4.3 second overhead?
    Your help has been greatly appreciated.

  • OLAP performance issue with Webi

    After lots of research I haven't found good information anywhere. We are an SAP shop: we run SAP ECC and BW with BWA, and for the last couple of years Business Objects as well. Since we are a utility company, the reporting needs of a good majority of our audience are transactional and detail-oriented as opposed to analytical. Our biggest concern is poor Webi performance running against OLAP universes. Currently we are running BO XI 3.1 SP3, Integration Kit 3.1 and BW 7.01 SP7, and still experience very poor performance. Our user community is very frustrated with the timeouts and very slow-running queries. They benchmark against popular websites and the performance they see there. We cannot restrict the data sets any more than we already have, given the detailed/granular nature of the data that the users need. What recommendations do you have for us? We seem to have hit a brick wall and don't want to lose our user base. Please advise.
    Thanks

    We had a Systems Integrator (who also happens to be an SAP partner) help us with the sizing.
    The website benchmarking was more from an end-user perspective and more generic. They are used to getting information very very quickly in the internet driven world and when reports don't perform well they get frustrated.
    I understand that there are multiple factors in play here. However, the need to get granular/transactional data is a very real business need, and using the Webi tool to get that information out of an OLAP universe is a challenge. We are trying to give end users more power and advocate self-service BI, and so haven't popularized the use of Crystal as much. Also, we haven't implemented Advanced Analysis yet. Our belief is that we are using the right tool for the job and have structured our platform to work optimally; but given the needs of the users, the performance is still very slow, hence the frustration and the question.
    Thanks
    Edited by: Manoj Kumar on Feb 3, 2011 11:37 AM

  • More SAP Processes on Front End PC

    Hi all
    It seems to me that as PCs get more and more powerful - sometimes more powerful than the application server itself - the old 3-layered architecture might not be the best use of resources.
    For instance, in classical SAP a lot of time is often spent reading data from databases into internal tables, manipulating the tables via sorting / filtering etc., and then creating a list or display - all of these tasks using central processing functions.
    Maybe we should start looking at using the processing power of the PC itself for a lot of these tasks, especially where no database update is required.
    For example, data could be downloaded very quickly onto the user's PC, where it can then be queried and manipulated. Network load is not usually an issue these days.
    Given the trend of "Persistent Objects", it would seem this would be the way to go - retrieve the object(s) from the SAP server and then do all the queries locally.
    Not everything could be done this way - you still need "classical batch work" - but certainly a huge amount of stuff could be done on the user's front end now.
    When SAP R3 first appeared, PCs were not very powerful, the Net as a business tool was hardly recognized, and network connections were often painfully slow.
    Now it's totally the opposite. It seems almost a crime to sit at an 8GB RAM dual-processor PC simply to look at a sales order (VA03) or create some reports.
    Cheers
    -K

    I think that, strategically, there are other approaches already on the market.
    Not everyone uses fat clients; I know quite a few companies who use very old PCs and connect to a Citrix or Terminal Server farm instead of distributing everything locally.
    The management of a huge number of PCs is becoming more and more of a nightmare, given the fact that more and more dependencies are added to run your SAP applications (speaking of Adobe SVG viewer, Flash, Java Runtime etc.). This nightmare would become even bigger if one thinks of synchronizing the different steps of a transaction on different PCs. In my experience, it's often just laziness of the users that makes systems slow, by querying too much data. If you have a system with more than 10 years of data and users query that data by NOT entering a "from" date, I can only blame that laziness.
    I agree with you: you don't need a fat client to display something in VA03; you don't even need a full PC to do that, a Terminal Server session is sufficient for such users.
    Markus

  • What are the areas that impact a query to run forever/long?

    Hi All,
    When I have to talk about long-running scripts or procedures, I focus on DTA, which suggests indexing, and in fact I think this is the main cause. What other areas do we need to consider for long-running queries, and which troubleshooting tools do you think we need to use?
    Of course blocking has an impact, but I am thinking purely from a query perspective.
    Thanks
    Swapna

    > What are the other areas we need to consider for long-running queries, and which troubleshooting tools do you think we need to use?
    One of the reasons is blocking. Please take a look at these links:
    INF: Understanding and resolving SQL Server blocking problems
    Troubleshoot Slow-Running Queries In SQL Server
    T-SQL Articles
    T-SQL e-book by TechNet Wiki Community
    T-SQL blog
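    As a quick first check for the blocking mentioned above, the standard DMVs can be queried directly; a sketch for SQL Server 2005 and later:
    -- sessions that are currently blocked, and the statement they are running
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text AS sql_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;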

  • Need SQL tuning tips in oracle 10g.

    From time to time I come across slow-running queries used in SQR. I want to know the tuning techniques in Oracle 10g, as I believe it is different now because of the CBO.
    As I am not an Oracle man, I can try to tune the query if there are any guidelines. Also, while optimizing a query, what are the important things that need to be considered:
    Do you want the first rows back quickly (typically during online processing) or is the total time for the query (typically a batch process) more important?
    Are the tables properly indexed to take advantage of the various operations available?
    How large are the tables? Joining smaller tables first is usually more efficient.
    How selective are the indexes? Indexes on fields that have only a few values don't really help.
    How is sorting done? Are sorting and grouping operations necessary?
    Any help is greatly appreciated.

    user5846372 wrote:
    > As I am not an Oracle man, I can try to tune the query if there are any guidelines. Also, while optimizing a query, what are the important things that need to be considered?
    Some things to consider about tuning:
    Re: Explain  "Explain Plan"...
    > Do you want the first rows back quickly (typically during online processing) or is the total time for the query (typically a batch process) more important?
    > Are the tables properly indexed to take advantage of the various operations available?
    These are important considerations.
    > How large are the tables? Joining smaller tables first is usually more efficient.
    The optimizer usually makes this decision.
    > How selective are the indexes? Indexes on fields that have only a few values don't really help.
    But they can still be useful if the data can be read from the index instead of the table.
    > How is sorting done? Are sorting and grouping operations necessary?
    This is a business requirement; if you need to sort, you need to sort.

  • Execution time of SQL query differing a lot between two computers

    hi
    The execution time of a query on my computer, and on more than 30 other computers, is less than one second, but on one of our
    customers' computers the execution time is more than ten minutes. The databases, data and queries are the same. I re-installed SQL Server but the problem remains. My version is MS SQL 2008 R2.
    Does anyone have an idea about this problem?

    Hi mahdi,
    Obviously, we can't get enough information to help you troubleshoot this issue, so please describe your issue in more detail so that the community members can help you in a more efficient manner.
    In addition, here is a good article with a checklist for analyzing slow-running queries. Please see:
    http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
    And SQL Server Profiler and Performance Monitor are good tools to troubleshoot performance issue, please see:
    Correlating SQL Server Profiler with Performance Monitor:
    https://www.simple-talk.com/sql/database-administration/correlating-sql-server-profiler-with-performance-monitor/
    Regards,
    Elvis Long
    TechNet Community Support

  • Query text and execution plan collection from prod DB Oracle 11g

    Hi all,
    I would like to collect the query text, query execution plan and other statistics for all queries (mainly select queries) from a production database.
    I am doing this with OEM by clicking the Top Activity link under the Performance tab, but this gives only the most recent top SQL.
    This approach is helpful only when I need to debug recent queries. If I need to know the slow-running queries and their execution plans at the end of the day, or some time later, it's not helpful for me.
    If anybody has a better idea of how to do this, it would really be helpful.
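    One way to do this after the fact is to query the AWR history views directly (they require the Diagnostics Pack licence); a sketch that lists the last day's top SQL by elapsed time (the 1-day window is a placeholder):
    SELECT s.sql_id,
           ROUND(SUM(s.elapsed_time_delta) / 1e6) AS elapsed_seconds,
           SUM(s.executions_delta)                AS executions
    FROM dba_hist_sqlstat s
    JOIN dba_hist_snapshot n
      ON n.snap_id = s.snap_id
     AND n.dbid = s.dbid
     AND n.instance_number = s.instance_number
    WHERE n.begin_interval_time > SYSDATE - 1
    GROUP BY s.sql_id
    ORDER BY elapsed_seconds DESC;
    -- the stored plan for a given statement:
    -- SELECT * FROM TABLE(dbms_xplan.display_awr('<sql_id>'));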

    We did the following:
    1. Used awrextr.sql to export a dump file from the production database (snapshot ids 331 to 560).
    2. Transferred the file to the test database.
    3. Used awrload.sql to import it into the test database.
    But when we use OEM and go to the Automatic Workload Repository link under the Server tab,
    it does not show the snapshots of the production database (which we imported into the test database),
    only the snapshots that were already there in the test database.
    We did not find any errors in the import/export.
    Do we need to do anything else to display the production database's snapshots in the test database?

  • Excluding data from SQL table in query

    I may have been working too many hours lately, or I'm simply losing my mind, but I'm having a heck of a time with a very strange result from a query
    I have a simple query that retrieves emails from a list
    SELECT emaillist_email
    FROM emaillist
    Now let's say the above gives 50,000 records
    Now take this
    SELECT emaillist_block_email
    FROM emaillist_block
    Say that gives 5,000 records
    Now put them together
    SELECT emaillist_email
    FROM emaillist
    WHERE emaillist_email NOT IN
      (SELECT emaillist_block_email
       FROM emaillist_block)
    Now unless I am losing it, I should get 45,000 records, presuming that the 5,000 are in both tables.
    The issue is, I get zero records.
    Any ideas?
    *** I have solved the problem after digging a little deeper. There was a NULL record in the data, which had been imported from XLS, causing it to throw the query out; once I removed it, the correct records were returned ***
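    That NULL record is exactly why NOT IN returned zero rows: x NOT IN (a, b, NULL) expands to x <> a AND x <> b AND x <> NULL, and a comparison with NULL is never true, so no row can qualify. Filtering the NULLs in the subquery makes the query safe even with dirty data:
    SELECT emaillist_email
    FROM emaillist
    WHERE emaillist_email NOT IN
      (SELECT emaillist_block_email
       FROM emaillist_block
       WHERE emaillist_block_email IS NOT NULL)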

    While you got your answer, using "not in" slows down queries.  Something like this would be faster.
    SELECT emaillist_email
    FROM emaillist eml
    WHERE not exists
    (select *
    from emaillist_block emb
    where emb.emaillist_block_email = eml.emaillist_email)
    Or better yet,
    alter table emaillist add isBlocked bit default 0
