Slow performance for large queries

Hi -
I'm experiencing slow performance when I use a filter with a very large OR clause.
I have a list of users whose uids are known, and I want to retrieve attributes for all of them. If I do this one user at a time, I pay the network overhead on each round trip, and this becomes a bottleneck. However, if I try to get information about all users at once, the query runs ridiculously slowly: about 10 minutes for 5000 users.
The syntax of my filter is: (|(uid=user1)(uid=user2)(uid=user3)(uid=user4).....(uid=user5000))
I'm trying this technique because it mirrors good design practice for Oracle: minimizing round trips to the database.
I'm running LDAP 4.1.1 on Tru64 OS v5.1.
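A common middle ground between these two extremes, sketched here purely for illustration, is to batch the uids into moderate-sized OR filters and issue a few dozen searches instead of one search or 5000. The Python below only builds the filter strings; the chunk size of 100 and the helper names are assumptions, not something from this thread:

```python
def build_or_filter(attr, values):
    """Build an LDAP OR filter such as (|(uid=a)(uid=b))."""
    if len(values) == 1:
        return f"({attr}={values[0]})"
    return "(|" + "".join(f"({attr}={v})" for v in values) + ")"

def chunked_filters(attr, values, chunk_size=100):
    """Split a large value list into several smaller OR filters."""
    return [build_or_filter(attr, values[i:i + chunk_size])
            for i in range(0, len(values), chunk_size)]

# Example: 5000 uids become 50 filters of 100 terms each.
uids = [f"user{n}" for n in range(1, 5001)]
filters = chunked_filters("uid", uids)
```

Each of the resulting filters is small enough that the server can evaluate it through the uid equality index, while the client still pays only ~50 round trips instead of 5000.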

This is a performance/tuning forum for iPlanet Application Server; you'd have better luck asking this question on the Directory forum.
The directory folks don't have a separate forum dedicated to tuning, but they answer performance questions in their main forum all the time.
David

Similar Messages

  • Slow Performance with large library (PC)

    I've been reading many posts about slow performance but didn't see anything addressing this issue:
    I have some 40000 photos in my catalog and despite generating previews for a group of directories, LR still is very slow in scrolling through the pics in these directories.
    When I take 2000 of these pics and import them into a new catalog - again generating previews, the scroll through the pics happens much much faster.
    So is there some upper limit of recommended catalog size for acceptable performance?
    Do I need to split my pics up by year? That seems counterproductive, but it's the only way to see the pics at an acceptable speed.

    I also have serious performance issues, and I don't even have a large database catalog: only around 2,000 pictures, and the db file itself is only 75 MB. I've run the optimization, but it didn't help. What I found is that the CPU usage of LR 1.1 goes up and STAYS up around 85% for 4-5 minutes after program start; during that time, zooming in to an image can take 2-3 minutes! After 4-5 minutes, CPU usage drops to 0%, the background task (whatever LR does during that time!) has finished, and I can work very smoothly. Preview generation cannot be the problem, since it also happens when I work in a folder that already has all previews built, close LR, and restart instantly: LR loads and AGAIN I have to wait 4-5 minutes until CPU usage has dropped before I can continue working with my images smoothly.
    This is very annoying! I will stop using LR and go back to Bridge/ACR/PS, which is MUCH, much faster. BUMMER!

  • Linux AMD64, JDK 1.5_03: slow performance with large heap

    Tomcat app server running on JDK 1.4.2 on 32-bit Linux, configured with mx1750m, ms1750m, runs fast: it returns 2 MB of data through an HttpServlet in under 30 seconds.
    Moving the same app server to 64-bit on JDK 1.5.03, configured with mx13000m, ms10000m, the same request for data takes 5-20 minutes, and I'm not sure why the timing is inconsistent. If the app server is configured with mx1750m, ms1750m, performance is about 60 seconds or less.
    I checked the Java settings through jstat; -d64 is the default. Why would increasing the heap cause such slow performance? Physical memory on the box = 32MB.
    It looks like it's definitely Java-related, since a Perl app making an HTTP request to the server takes under a minute to run. We moved to 64-bit to get around the 1.7GB limitation of 32-bit Linux, but now performance is unacceptable.

    I agree, an AMD64 with only 32 MB of memory would be a very strange beast indeed; heck, my graphics card has 4 times that, and it's not the most up to date.
    Keep in mind that switching to 64-bit does not only mean a bigger memory space but also bigger pointers (below the Java level) and probably more padding in your memory, which leads to bigger memory consumption, which in turn leads to more bus traffic, which slows things down. This might be a cause for your slowdown, but it should not usually result in a slowdown as severe as the one you noticed.
    Maybe it's also simply a question of a not-yet-completely-optimized JDK for amd64.

  • Slow Performance with large OR query

    Hi All;
    I am new to this forum... so please tread lightly on me if I am asking some rather basic questions. This question has been addressed before in this forum more than a year ago (http://swforum.sun.com/jive/thread.jsp?forum=13&thread=9041). I am going to ask it again. We have a situation where we have large filters using the OR operator. The searches look like:
    (&(objectclass=something)(|(attribute=this)(attribute=that)(attribute=something) .... ))
    We are finding that the performance difference between 100 attribute terms and 1 attribute term in a filter is significant. In order to get acceptable performance, we have to issue the following filters in separate searches:
    (&(objectclass=something)(attribute=this))
    (&(objectclass=something)(attribute=that))
    (&(objectclass=something)(attribute=something))
    The first search takes an average of 60 seconds, while the combined separate searches take an average of 4 seconds in total. This is a large performance improvement.
    We feel that this solution is not desirable because:
    1. When the server is under heavy load, this solution will not scale very well.
    2. We feel we should not have to modify our code to deal with a server deficiency.
    3. This solution creates too much network traffic
    My questions:
    1. Is there a query optimizer in the server? If so, shouldn't the query optimizer take care of this?
    2. Why is there such a large performance difference between the two filters above?
    3. Is there a setting somewhere in the server (documented or undocumented) that would handle this issue? (ie average query size)
    4. Is this a known issue?
    5. Besides breaking up the filter into pieces, is there a better way to approach this type of problem?
    Thanks in advance,
    Paul Rowe
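Paul's workaround of issuing separate searches and combining the results can be sketched as follows. This is only an illustration: `search` here stands in for whatever LDAP client call you actually use, and the toy directory dict is invented for the example:

```python
def split_or_search(search, objectclass, attr, values):
    """Run one search per value instead of a single large OR filter,
    then merge the result sets (duplicates collapse in the union)."""
    results = set()
    for v in values:
        # Together these cover the same entries as the combined filter
        # (&(objectclass=...)(|(attr=v1)(attr=v2)...)).
        results |= set(search(f"(&(objectclass={objectclass})({attr}={v}))"))
    return results

# Toy stand-in for a real LDAP search call, mapping filters to DNs.
directory = {
    "(&(objectclass=person)(cn=this))": ["dn1"],
    "(&(objectclass=person)(cn=that))": ["dn2", "dn3"],
}
hits = split_or_search(lambda f: directory.get(f, []),
                       "person", "cn", ["this", "that"])
```

As Paul notes, this trades one round trip for many, so it helps only when the server evaluates small indexed filters much faster than one large OR filter.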


  • Need to improve performance for bex queries

    Dear Experts,
    Here we have BEx queries built on a BW InfoSet; the InfoSet in turn is built on 2 DSOs and 4 InfoObjects.
    We have built secondary indexes on the two DSOs, assuming this would improve performance, but query execution time is still very long.
    Could you advise me on this?
    Thanks in advance,
    Mannu

    Hi,
    Thanks for the response.
    But as I have mentioned, the InfoSet is based on DSOs and InfoObjects, so we cannot use aggregates.
    In RSRT,
    I have tried setting the read mode of the query to 'X', which is valid here because the query needs to fetch a huge amount of data.
    Could you please look into other possible areas in order to improve this?
    Thanks in advance,
    Mannu

  • Slow performance for context index

    Hi, I'm just a newbie here in the forum, and I would like to ask for your expertise with Oracle context indexes. My SQL uses a wildcard search ('%%').
    I used the SQL below with a context index (ctxsys.context) in order to avoid a full table scan for the wildcard search.
    SELECT BODY_ID
           TITLE, trim(upper(title)) as title_sort,
           SUM(JAN) as JAN,
           SUM(FEB) as FEB,
           SUM(MAR) as MAR,
           SUM(APR) as APR,
           SUM(MAY) as MAY,
           SUM(JUN) as JUN,
           SUM(JUL) as JUL,
           SUM(AUG) as AUG,
           SUM(SEP) as SEP,
           SUM(OCT) as OCT,
           SUM(NOV) as NOV,
           SUM(DEC) AS DEC
    FROM APP_REPCBO.CBO_TURNAWAY_REPORT
    WHERE contains(BODY_ID, '%240103%') > 0
      AND PERIOD BETWEEN '1201' AND '1212'
    GROUP BY BODY_ID, trim(upper(title))
    But I was surprised that the performance was very slow; when I ran it through explain plan, the estimated runtime was almost 2 hours.
    plan FOR succeeded.
    PLAN_TABLE_OUTPUT
    Plan hash value: 814472363
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1052K| 97M| | 805K (1)| 02:41:12 |
    | 1 | HASH GROUP BY | | 1052K| 97M| 137M| 805K (1)| 02:41:12 |
    |* 2 | TABLE ACCESS BY INDEX ROWID| CBO_TURNAWAY_REPORT | 1052K| 97M| | 782K (1)| 02:36:32 |
    |* 3 | DOMAIN INDEX | CBO_REPORT_BID_IDX | | | | 663K (0)| 02:12:41 |
    Predicate Information (identified by operation id):
    2 - filter("PERIOD">='1201' AND "PERIOD"<='1212')
    3 - access("CTXSYS"."CONTAINS"("BODY_ID",'%240103%')>0)
    16 rows selected
    oracle version: Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
    Thanks,
    Zack

    Hi Rod,
    Thanks for the reply. Yes, I have already gathered stats on that table and rebuilt the index.
    But it's strange that when I use another body_id, the performance varies so much.
    SQL> EXPLAIN PLAN FOR
    2 SELECT BODY_ID
    3 TITLE, trim(upper(title)) as title_sort,
    4 SUM(JAN) as JAN,
    5 SUM(FEB) as FEB,
    6 SUM(MAR) as MAR,
    7 SUM(APR) as APR,
    8 SUM(MAY) as MAY,
    9 SUM(JUN) as JUN,
    10 SUM(JUL) as JUL,
    11 SUM(AUG) as AUG,
    12 SUM(SEP) as SEP,
    13 SUM(OCT) as OCT,
    14 SUM(NOV) as NOV,
    15 SUM(DEC) as DEC
    16 FROM WEB_REPCBO.CBO_TURNAWAY_REPORT
    17 WHERE contains (BODY_ID,'%119915311%')> 0 and
    18 PERIOD BETWEEN '1201' AND '1212'
    19 GROUP BY BODY_ID, trim(upper(title));
    Explained.
    SQL> SELECT * FROM TABLE (dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 814472363
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 990 | 96030 | 1477 (1)| 00:00:18 |
    | 1 | HASH GROUP BY | | 990 | 96030 | 1477 (1)| 00:00:18 |
    |* 2 | TABLE ACCESS BY INDEX ROWID| CBO_TURNAWAY_REPORT | 990 | 96030 | 1475 (0)| 00:00:18 |
    |* 3 | DOMAIN INDEX | CBO_REPORT_BID_IDX | | | 647 (0)| 00:00:08 |
    Predicate Information (identified by operation id):
    2 - filter("PERIOD">='1201' AND "PERIOD"<='1212')
    3 - access("CTXSYS"."CONTAINS"("BODY_ID",'%119915311%')>0)
    16 rows selected.

  • CAML query performance for large lists

    I have a list with more than 10,000 items. I am retrieving the items with a CAML query and displaying them in a RAD Grid on my page. Due to a filter, around 1,000 records are retrieved. I have enabled paging in my grid with PageSize set to 25, but I have noticed that the load time of my page is very slow because it retrieves all 1,000 records at once.
    Is it possible to retrieve just 25 records for the first page on load? On clicking the Next button or a page number, it should then retrieve the next set of 25 records for that particular page.
    I want to know if there is any way to link CAML query paging with RAD Grid paging.
    Any code example would be greatly helpful.

    Hi,
    For pagination of SPListItems, use the SPQuery.ListItemCollectionPosition property:
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spquery.listitemcollectionposition(v=office.15).aspx
    Also check these useful URLs:
    http://omourad.blogspot.in/2009/07/paging-with-listitemcollectionposition.html
    http://www.anmolrehan-sharepointconsultant.com/2011/10/client-object-model-access-large-lists.html
    Anil
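For what it's worth, ListItemCollectionPosition is essentially cursor-based paging: each request returns one page plus a position token used to fetch the next page. A language-neutral sketch of the idea in Python (the function and the integer cursor are illustrative assumptions, not the SharePoint API):

```python
def fetch_page(items, page_size, position=None):
    """Return one page of results plus the cursor for the next page,
    mimicking SPQuery.RowLimit + ListItemCollectionPosition."""
    start = position or 0
    page = items[start:start + page_size]
    # A None cursor signals that there are no further pages.
    next_position = start + page_size if start + page_size < len(items) else None
    return page, next_position

items = list(range(1000))                       # e.g. 1000 filtered list items
page, cursor = fetch_page(items, 25)            # first page of 25 on load
page2, cursor2 = fetch_page(items, 25, cursor)  # next page only on demand
```

Wiring this to the grid means handing the grid only the current page and requesting the next cursor when the user clicks Next, so the other 975 records are never materialized up front.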

  • Slow performance with large images working at 300 DPI

    I'm working on creating a poster for a film. My workspace is set up for a 24" x 36" movie poster at 300 DPI. I have an Intel i5 2500K processor @ 3.3 GHz and 4 GB of RAM, with PS set to use 2/3 of it. I have a scratch disk set on a large separate drive. The program runs very slowly and takes forever to save, or to render a resize of any image. I'm wondering if there's a way to decrease the "size" of the images (in other words, the data, so the layers aren't ginormous) while still working at 300 DPI?

    InDesign costs something like two thirds the price of Photoshop, so it's expensive for a one-off job. It's sacrilege to mention it in these forums, but if you have a high-end Office version installed, you might have Publisher on your system. It would do the job easily, and is probably more intuitive to learn than InDesign. But if you're serious about it, InDesign is ten times the program, and the printer won't smile knowingly when you deliver the file and they ask how it was created.

  • Apple TV slow performance with large iTunes library

    We have a rather large iTunes library (movies + music), and the Apple TV is performing very poorly. Are there any suggestions for improving wake-up time, menu response time, or performance overall? Would it help to put our music on a separate HDD and turn that off when we want to use the Apple TV? Anything else?

    Sorry to butt in: we have a 2.6 TB iTunes library. Music and photos are kept on the Apple TV; all else is streamed. We use a Lacie 4Big Quadra attached to an iMac G5, streamed via Airport Extreme.
    The only delays we get are for the Lacie coming out of sleep mode (which takes about 45 seconds), which sometimes results in an on-screen message of "File Format not compatible" (which just means "not found"). Retrying the selection starts the streaming. I expect the iTunes library streams the movie/TV programme to the Apple TV, which buffers it for delivery. This could be the delay you're experiencing, maybe 15-20 seconds before it starts to play: is this what you mean?
    As our library has grown, we've noticed no significant performance drop-off.
    Possibly it's a network issue; have you run the network diagnostics on the Apple TV?
    Roger

  • Very Slow performance with large files

    Using iTunes with my Apple TV, I've been in the slow and painful process of digitizing my DVD library, and when converting the LOTR (extended edition) trilogy I ran into a problem post-conversion. The files play fine in QuickTime 7.3.1 and I can add them to the iTunes library, but when I attempt to edit any information within iTunes and save it, iTunes freezes for several minutes before working or crashing (the odds are around 50/50). If I just add the file to the library and try to play it, the movie doesn't show up on the Apple TV either, which is even stranger.
    Output format of the movie: MP4/H.264, native 720x480 resolution, 23.97fps, 2Mbps video stream, 128k audio stream(limit of winavi).
    Output Size: 4.4GB
    Length: 4hours 24minutes
    Software versions: iTunes 7.3.1, QuickTime 7.3.1
    OS: Windows XP Pro SP2(current patch level as of 7/15).

    It's possible that iTunes has a 4 GB limit. I'm just trying to shed a little light on the problem, because iTunes Help doesn't say.
    Cheers

  • Performance for large scale of objects

    I'm working on a project that requires thousands of buttons to be shown on a page. Each button represents an object instance. It takes more than 20 seconds to display the page. Is there any way to improve the performance, or are there any faster approaches? Thanks.

    Thanks for your reply. I should have described my question more clearly.
    I'm using commandButton on a JSP as one of the graphical views of my JSF application. There is only one data bean class on the server for this view, and I create a button for each instance of the bean class. Client-side JavaScript might be an option, but its appearance may vary across browsers, if I understand correctly.
    With thousands of buttons, I have to use a scroll bar on the content area that contains them. One approach I'm considering is to implement the scrolling in a way similar to pagination: load only the data needed for the display area. Whenever the scroll bar is moved, I would load a new set of data just sufficient for the display. Is this a feasible approach?
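The load-only-what-is-visible idea described above can be sketched as follows; the window size, data source, and function name are all illustrative assumptions, not part of the original design:

```python
def visible_window(fetch, total, first_visible, window_size):
    """Load only the rows needed for the current scroll position,
    instead of materializing thousands of button rows at once."""
    # Clamp so the window never runs past the end of the data set.
    start = max(0, min(first_visible, total - window_size))
    return fetch(start, start + window_size)

# Toy data source standing in for the server-side bean instances.
data = [f"button-{i}" for i in range(5000)]
rows = visible_window(lambda a, b: data[a:b], len(data), 120, 40)
```

On each scroll event, the view re-renders only the ~40 buttons in the window, which keeps both the server payload and the rendered page size constant regardless of the total count.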

  • Big Library: Slow Performance?

    I have about 10,000 photos (.jpg ~3MB each) in my Aperture Library. I want to add many more photos but I'm worried about slow performance. My questions:
    1. How big is your library or how big do you think the libraries can get?
    2. Have you noticed slower performance with larger libraries?
    3. Any opinion on breaking up into multiple smaller libraries vs. 1 larger library?

    I am running two libraries:
    one for all of my work-related imagery, 15,000+ images, 50/50 raw and JPEG, and
    the other for all of the stuff I shoot of sporting clubs, the bit I give back to the community, 18,000+ images, predominantly JPEGs.
    Both run smoothly, one on the MacPro and the other on the G4 laptop.
    The issue starts to be the backing up. If you are thinking it will get BIG, try a library for each client. It could be a good selling point as well: "your imagery is isolated from other clients and has its own dedicated backup".
    Tony

  • CS5 or PC: Slow performance?

    Hi there, I have a problem with my PC at work, and because I am not quite sure where exactly the problem lies, I decided to ask you. The PC is an Intel Pentium 4 with 3 GB RAM, XP Professional Service Pack 3, and Photoshop CS5.
    The PC is connected to a server (database). When I open a bunch of photos (20-30, approximately 8-12 MB each), I sometimes experience slow performance: for example, I try to retouch/clone something, and it takes a long time for the PC to redraw/respond. The interesting part is that it doesn't always happen (sometimes it can be even 3 photos and it will still take ages to do anything). I also noticed that after approximately 20 minutes it suddenly starts to work as it should (as if there were an update running, and once it completes, everything is fine). When that started happening today, I restarted CS5, loaded the same photos, and it worked fine, so it's really bizarre to me.
    I assigned 89% RAM usage to CS5 (I pretty much only use Photoshop and Bridge), set the history steps to 7, and set the cache to 6 for big and flat files. I checked the Efficiency indicator and it's at 100% all the time, so it doesn't look like Photoshop is the bottleneck, but like I said, today I just restarted PS, loaded the same photos, and it was fixed. I tried copying the photos to my hard drive (so performance should not be affected by the server), but it's still the same as when I take them straight from the server. Because the hard drive is only 80 GB (I don't store anything there), I did a cleanup and a disk defragment, but the situation is still the same. Do you know what the problem could be, and whether it's Photoshop or the PC? I'd appreciate any help.
    Thank you

    It could be that your PC is sometimes checking things on the server, and maybe the server or network is busy/bogged down and your PC gets caught in that cycle mess. So,
    I suggest that next time you use Ps, you set up the Windows performance monitor (perfmon.exe or perfmon.msc) to show a graph of disk use, network activity, and CPU usage; then you may be able to see and track down the problem area.
    On Win7, Task Manager does most of the same stuff.

  • Slow performance of JDBC - ODBC MS ACCESS

    I experience very slow performance with JDBC-ODBC using MS Access as the database. This program works fine on other computers (in terms of performance). However, the hard drive cranks away big time on this computer (which is the fastest among the computers I tested, and also has many gigabytes free). The database is very small. The other computers use exactly the same Java version and MS Access driver version. If anyone has found the same problem, or has any suggestions, please help. Thank you.

    I am having the same problem with one machine as well. Running MS Access 2000 (unfortunately), all machines run well with one exception, where DB reads take about 10 seconds each. If a solution has been found, please report.
    --Dave

  • Extremely slow performance on projects under version control using RoboHelp 11, PushOk, Tortoise SVN repository

    We are also experiencing extremely slow performance for RoboHelp projects under version control. We are using RoboHelp 11, PushOk, and a Tortoise SVN repository on a Linux server. We use a Linux server on our IT guys' advice, because we found SVN version control under Windows was unstable.
    When placing a Robohelp project under version control, and yes the project is on my local machine, it can take up to two hours to complete. We are using the RoboHelp sample projects to test.
    We have tried to put the project under version control from Robohelp, and also tried first putting the project under version control from Tortoise SVN, and then trying to open the project from version control in Robohelp. In both cases, the project takes a ridiculous amount of time to open. The Robohelp status bar displays Querying Version Control Status for about an hour before it starts to download from the repository, which then takes more than an hour to complete. In many cases Robohelp becomes unresponsive and we have to start the whole process again.
    If adding the project to source control completes successfully and the project is opened from version control, performing any function also takes a very long time, such as creating a topic. When I generated a printed documentation layout, it took an astonishing 218 minutes and 17 seconds to complete. Interestingly, when I generated the printed documentation layout again, it took 1 minute and 34 seconds. However, when I closed the project, opened it from version control, and tried to generate a printed documentation layout, it again took several hours to complete. The IT guys are at a loss and say it is not a network issue, and I am starting to agree that this is a RoboHelp issue.
    I see there are a few other discussions here related to this kind of poor performance, none of which seem to been answered satisfactorily. For example:
    Why does it take so long when adding a new topic in RH10 with PushOK SVN
    Does anybody have any ideas on what we can do or what we can investigate? I know that there are other options for version control, but I am reluctant to pursue them until I am satisfied that our current issues cannot be resolved.
    Thanks Mark

    Do other applications work fine with the source control repository? The reason I'm asking is that you must first rule out external factors causing this behaviour. It seems that your IT guys have already looked at it, but it's better to be safe than sorry.
    I have used both VSS and TFS and I haven't encountered such a performance issue. I would suggest filing it as a bug once you have ruled out external influences: https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform&loc=en
    Kind regards,
    Willam
