wait() too slow?

I have a client/server model that uses threads to handle input and output. Basically, I borrowed the idea behind AWT's event listeners to handle cases where something occurred that the program needed to deal with. Whenever a download of a byte array completes, the listener's downloadComplete(byte[] data) method is triggered. The method is synchronized and calls notify() to tell the program that there is data to be processed. The thread created by my receiver class is what executes the downloadComplete(byte[]) method.
public synchronized void downloadComplete(byte[] data) {
    this.data = data;
    notify();
}
I also have a command system that just makes transferring ints quick, in order to send 'commands' from client to server and vice versa.
public synchronized void commandRecieved(int command) {
    lastCommand = command;
    notify();
}
When I send a data request from the client to the server, this function executes:
public synchronized String getData(int location) {
    try {
        client.sendCommand(201);
        wait();
        client.upload(location);
        wait();
        // Convert is just a class with static methods for converting
        // primitives and objects into byte arrays and back.
        String temp = (String)(Convert.byteArrayToObject(data));
        return temp;
    } catch (InterruptedException e) {
        return null; // no data if the wait is interrupted
    }
}
Basically, the client sends the command '201' to the server, which tells it what kind of data the client wants. It then waits for the server to send back a confirmation code indicating that it is ready for the data's location (that triggers the notify() in the commandRecieved(int) method). The client then uploads the location integer and waits again. The server fetches the data and transmits it back. When the download completes, the data is stored in the global data variable and notify() is called again. The client converts the byte array into a String and returns it.
This all works fine exactly once: the first time it executes. The second time, I get hung up on the second wait() call. If I delay the server, it's fine. It's as if the server is fetching the data and downloading it to the client before the client can go from the client.upload() call to the wait() call, even though that's a single line of code. (In the client class, the final statement in the upload method is notify(), which starts the thread that handles the uploading.) I have been trying for two days to either fix this myself or find a solution online. The only thing that works is to put Thread.sleep(100) before the notify() in the downloadRecieved() method, but this slows the whole process down to unacceptable levels. Is there something about wait() I don't know?
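The symptom described above, a notify() that fires while no thread is yet inside wait(), is the classic lost-wakeup race: notify() on a monitor with no waiter does nothing, so the later wait() blocks forever. The standard fix is to pair each wait with a condition flag and loop on it. Below is a minimal sketch of that guarded-wait pattern; the class and method names (GuardedDownload, awaitData) are illustrative, not taken from the original code:

```java
// Guarded-wait pattern: the flag records that the event already happened,
// so a notify() that fires before the waiter reaches wait() is not lost.
class GuardedDownload {
    private byte[] data;
    private boolean dataReady; // guard flag, set by the notifying thread

    // Runs on the receiver thread when the download finishes.
    public synchronized void downloadComplete(byte[] d) {
        this.data = d;
        this.dataReady = true; // record the event before notifying
        notify();
    }

    // Runs on the requesting thread. Safe even if downloadComplete()
    // already ran, because the flag is checked before waiting.
    // The loop also protects against spurious wakeups.
    public synchronized byte[] awaitData() throws InterruptedException {
        while (!dataReady) {
            wait();
        }
        dataReady = false; // reset for the next request
        return data;
    }

    public static void main(String[] args) throws Exception {
        GuardedDownload g = new GuardedDownload();
        // Reproduce the race deliberately: the "server" finishes first...
        g.downloadComplete(new byte[] {1, 2, 3});
        // ...and the client only then starts waiting. A bare wait() here
        // would hang forever; the guarded wait returns immediately.
        System.out.println(g.awaitData().length); // prints 3
    }
}
```

With one such flag per event (one for the command confirmation, one for the downloaded data), the Thread.sleep(100) workaround becomes unnecessary.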

I'll add the books to my list, as I've been trying to learn concurrency through trial and error; and of course I had boolean controls. I originally had them in place, but as I went through tests trying to pinpoint my problem, I removed ALL the controls, and the program behaved exactly the same.
But my problem seems to be tied to my transmission threads. No download confirmation is sent back to the server when the client hangs, which means that, somehow, every single thread in the client (there are a minimum of four at any given time) gets stopped, all at the same time. I'm not going to go into my client/server code, as it would take all day to break it down. Thanks for those book suggestions.
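For what it's worth, hand-rolled wait()/notify() pairs like these can also be replaced wholesale with a java.util.concurrent.BlockingQueue, which buffers a result that arrives before anyone is waiting for it, so no flags or sleeps are needed at all. A sketch under that assumption (DownloadMailbox is a hypothetical name, not from the original code):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// The receiver thread put()s each completed download; the requesting
// thread take()s it. A put() that happens before the take() is simply
// buffered in the queue, so no notification can be lost.
class DownloadMailbox {
    private final BlockingQueue<byte[]> downloads = new ArrayBlockingQueue<>(16);

    // Called from the receiver thread when a download finishes.
    public void downloadComplete(byte[] data) throws InterruptedException {
        downloads.put(data);
    }

    // Called from the requesting thread; blocks until data is available.
    public byte[] awaitData() throws InterruptedException {
        return downloads.take();
    }

    public static void main(String[] args) throws Exception {
        DownloadMailbox box = new DownloadMailbox();
        box.downloadComplete(new byte[] {42}); // arrives before anyone waits
        System.out.println(box.awaitData()[0]); // prints 42
    }
}
```

One queue per event type (commands in one, downloaded data in another) maps directly onto the protocol described above.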

Similar Messages

  • 10 day wait with slow ADSL...

We had a line fault on the 21st December which took until last Friday, 4th Jan, to sort out. The audio phone line didn't work at all, but the ADSL did, albeit very, very slowly.
Now the phone line is fixed, but the ADSL is stuck giving the stats pasted below: a higher upload speed than download, how about that!
Having put the router back in its proper place this evening (I'd moved it when the fault started) and having read about slow ADSL on here, I guess I have to wait another 10 days before it either improves or I can try to get help from BT? That'll be just about a month without any usable broadband other than email or slowly browsing the web. Is it worth asking BT to manually reset the line at all?
    Thanks for any pointers.
DSL Connection
Link Information
Uptime: 0 days, 3:09:55
Modulation: G.992.3 annex A
Bandwidth (Up/Down) [kbps/kbps]: 890 / 239
Data Transferred (Sent/Received) [MB/MB]: 8.92 / 107.18
Output Power (Up/Down) [dBm]: 12.5 / 0.0
Line Attenuation (Up/Down) [dB]: 26.0 / 48.5
SN Margin (Up/Down) [dB]: 7.5 / 26.5
Vendor ID (Local/Remote): TMMB / IFTN
Loss of Framing (Local/Remote): 0 / 0
Loss of Signal (Local/Remote): 0 / 0
Loss of Power (Local/Remote): 0 / 0
Loss of Link (Remote): 0
Error Seconds (Local/Remote): 0 / 0
FEC Errors (Up/Down): 0 / 1
CRC Errors (Up/Down): 1 / 0
HEC Errors (Up/Down): 1 / 0
Line Profile: Fast

The line fault condition and associated interventions by BT have driven the DSL into a banded profile.
It's an automatic "normal" defensive failsafe situation.
Frankly, if the line and BB is a new setup, I'd email the moderators and ask them to reset the connection totally.
In which case it would need to undergo line training again for a 10 day period, and UNIMPEDED.
During that time, the router or hub will sometimes restart of its own volition as differing types of modulation are tried.
(Because it is a new connection it won't have any meaningful line connection history, which is beneficial for future use, as the interventions and problems will have stopped it accumulating any.)
... thus a total reset.

  • Pages won't load if another tab is waiting for a slow script which is on the same website

If there is a tab that is waiting for a slow script, other tabs on the same website will be affected, even though they should not be.
    Test case:
    1. Open a tab and run a slow script ( example: http://localhost/slow_script.php )
    2. Open another tab and run a normal script ( example: http://localhost/normal_script.php )
What should happen:
The normal script loads while the slow one is still waiting for the server to finish.
What happens:
The normal script won't start loading until the slow one finishes.
    Is there any way to work around this issue?
    Thanks !

    I have to add that the delay is not caused by the server itself. Running two web browsers or using multiple computers does the trick.
    Using a simple sleep function to make the script take longer seems enough to reproduce this bug. There is no need to consume actual resources on the web server.

  • Best practices; how to reduce the wait when making live changes

    I am getting tired of waiting 20 seconds or so every time
    that I save a change to one file. How can I put changes to the
    server, and disregard all unchanged documents? I would be thrilled
    if I could just get rid of this wait, it slows my whole thought
    process down.

Pacoan wrote:
> I am getting tired of waiting 20 seconds or so every time that I save a change to one file. How can I put changes to the server, and disregard all unchanged documents? I would be thrilled if I could just get rid of this wait, it slows my whole thought process down.
Are you working live from the server? In Site Management, set up your Local Info to point to your local hard disk, and your Remote Info to point to the live web site. The synchronize command will then do what you want.
See the help button within the Site Manager.
Harvey

  • Report generation toolkit (1.1.2) and 8.5

    I am not as crazy as I thought!
I wrote a program today that collects data from Excel reports in many directories using cell names, and it worked well. Until I saved it. I had been saving the sub VIs, but when I went to close them I got a message about unsaved changes, and I guess I clicked to save the changes today. The program never worked the same again. I would get errors like 41106 and 41110 at random (problems with Excel not opening and other things). I tried adding waits, which slowed everything down and mostly worked. But the only way to avoid dropping data and getting errors was to have a copy of Excel open while collecting my data.
Well, I just uninstalled/reinstalled the toolkit and rebuilt the VI that was having problems with the fresh install, and everything is great again. So my question is: why does the toolkit have problems when I save it in 8.5? The warning does say that the VIs are from 7.1, and I can't find any updates, so I am hoping this is the latest version that came with my 8.5 suite.

    Hi Bryan,
There should not be a problem with saving the VIs from the Report Generation Toolkit in LabVIEW 8.5. I have checked the readme files to make sure, and RGT 1.1.2 should work with any version of LabVIEW later than 7.0. However, the readme does discuss a specific upgrade issue. It says that if you previously created VIs that use the built-in Report Generation VIs in LabVIEW Professional, Full, or Base packages, or VIs from a previous version of the Report Generation Toolkit, installing the Report Generation Toolkit 1.1.2 might break the VIs you created. If the VIs you created contain any of the following VIs, edit them so they are compatible with the Report Generation Toolkit 1.1.2.
    Append Front Panel Image to Report VI—The Image Format input of the LabVIEW 6.1 version of this VI is a different enumerated type than the current toolkit version. Right-click the input and select Create»Control or Create»Constant from the shortcut menu to create a compatible control or constant.
    Append Table to Report VI—The connector pane position of the show grid lines input changed from the Report Generation Toolkit 1.0.1 version of the VI. Reconnect the original wire to the new input location.
    Get Report Settings VI—The cluster order of the font settings input changed from the Report Generation Toolkit 1.0.1 version of the VI. Right-click the output and select Create»Indicator from the shortcut menu to create a compatible indicator.
    Report Express VI—If a VI includes the LabVIEW built-in version of the Report Express VI on the block diagram, the VI breaks after you install the Report Generation Toolkit 1.1.2. Double-click the Report Express VI to launch its configuration dialog box, then click the OK button. Reconnect all wires to the Report Express VI.
    You might need to relink the following subVIs after installation because the connector panes changed in the Report Generation Toolkit 1.1.2. Right-click the VIs and select Relink to SubVI.
    New Report
    Dispose Report
    Append Front Panel Image to Report
    Append Hypertext Link Anchor to Report
    Append File to Report
    Append Horizontal Line to Report
    Set Report Font
One thing that might have happened is that if you had created the program in an earlier version of the toolkit, then saved it in the new version, some of the upgrade issues regarding the VIs listed above caused LabVIEW to start giving you errors. Also, which version of Excel are you using? After you re-did the VI program, have you tried saving it again to see if the same problem occurs?
    Carla
    National Instruments
    Applications Engineer

  • Multiple instances of a subVi to display data

What is the best method for creating and using a subVI that was created specifically for displaying results?
I created a subVI for displaying stress (tension & compression). I started with a Numeric Control/Vertical Pointer Slider, which I changed to a Numeric Indicator. I added 2 LEDs for out-of-bounds indicators (+ and -). I used the Property Node for the Slider to permit setting the minimum and maximum values for the scales. The subVI has only two inputs: the min/max scale value and the data to be displayed. I added a Decoration in the form of a graduated color scale to match that used in PTC's Pro/E.
When I use the subVI, each instance in the Block Diagram uses the same Front Panel. This makes the Slider or Needle flicker.
What I'd really like is for my subVI to work and act like the original Vertical Pointer Slider and appear directly on the main VI.
I know I can create multiple copies of the subVI's file in the folder on the disk, but that makes maintenance a pain because each change will require the duplication once again. This will still create multiple Front Panels to display the data.
I need a solution that will work with the data for about 20 strain gauges displayed on the screen at the same time.
Copying and pasting the Block Diagram for my stress subVI into the main VI will work, but will also create a wiring nightmare.
How do I create this type of display?

Dynamik,
Thanks for "Slider2.vi"; it was quite informative! It's taken me a while to get back to working on my VI.
I tried to create a modified version of "slider2.vi" so I would understand what you had done (I still don't understand everything), as well as creating the VI I needed for my testing. I succeeded except for being able to display the information on the front panel. I created a cluster with 20 clusters within it, each of them containing a slider (with a multi-colored scale) and hi/lo out-of-bounds indicators. I used "Cluster to Array" so I could use "Bundle by Name" within a "For Loop" to set the values. The problem comes when I attempt to take the array information and apply "Array to Cluster." The resulting cluster only has 9 elements. What happened to the other 11 elements? The "Help" has not been very helpful. With the broken wire deleted, I've inserted an indicator for the "For Loop" index and a "Wait" to slow down the loop so I can follow the values. The index goes from 0 to 19 as expected.
I'm at a loss as to how to proceed. I've attached my "Test-problem.vi" to this message. Is there something fundamental I've missed? Where is the problem?
Thanks
To answer your question: I've been programming for 25+ years, everything from assembly language, to procedural languages (FORTRAN, PL/I, PASCAL, C), to object-oriented languages (Java), to scripting languages (C Shell, Perl).
Message Edited by JoeSz on 10-05-2005 06:17 PM
    Attachments:
    Test-problem.vi ‏227 KB

  • BT Home Hub 3 is **bleep** !!!!!!!!!!!!!!!!!

Okay, so it's no secret that BT send out HH3s that are **bleep** and can't do the job of TalkTalk modem/routers, so I asked if I could have a Home Hub 2 sent out to me, because I need to run a server and the HH3 will not or cannot do "loopback".
They are saying the HH3 will work on my phone line but a HH2 will not, which sounds like **bleep**, because I know people who still use their HH2, and my telephone exchange is the old type, as I live in a very, very small village.
I'm leaving BT as soon as my contract is up, and I can't wait.
SLOW SLOW BT CONGESTED EXCHANGE THAT I SEEM STUCK WITH. HOWEVER, AT PEAK TIMES, WHEN I HIT A VERY SLOW 1MB ON MY BT LAND LINE THAT COSTS ME AN ARM AND A LEG, I CAN JUST DISCONNECT FROM IT AND CONNECT TO NEXT DOOR'S VIRGIN MEDIA UNSECURED D-LINK ROUTER AND GET A 2.6 CONNECTION. HOW? AND WHY? IF IT'S CONGESTED, HOW IS THAT EVEN POSSIBLE, OR DO VIRGIN PAY BT A **bleep** LOAD OF MONEY?
    Solved!
    Go to Solution.

New message from BT, and it seems they got things wrong. I'm happy to say they are sending out a HH2.
I don't like this part, though:
"I have also sent you a returns bag so you can return the hub 3 to us"
As I have been with them for 15 months, I would have thought I now own the HH3.
This is my good news for today from BT:
"Sorry BT asked me to remove the reply so i had to remove it"

  • What's keeping me from buying the W540

    I have $$ to spend and no Lenovo to spend it on.  
    Here's what's keeping me from buying the W540:
    1) Physical buttons have been removed above the trackpad. 
    Many, many people complain about this in the W540 reviews:
    "Touchpad: I've commented elsewhere that this is the worse I've even seen. Absolutely useless. I've disabled both the touchpad and touchstick (again, useless) and just use a mouse."
    "I'd try the touchpad on the W540 for a while before you decide you can live with it, it is truly as bad as all the reviews say. I've disabled mine and use a mouse."
    "...two issues with the touch-pad. The first has to do with having to clunk the whole pad as opposed to tapping on a much smaller (and quieter) button. The second has to do with the touch pad activating when you are typing."
    Member "Bauden" provided an excellent rationale for why this feature change is wrong for Thinkpads:
    "For a MAC, a "click-pad" may work because the ENTIRE OS is designed to be used by a single mouse button. You CANNOT force this on PC users. On PC, right-click is used frequently for many essential functions, dragging and dropping and context menus . . . . the habit of using it has been ingrained into PC users by years of Microsoft tradition and applications (NOT APPS!)."
I've used 10 Thinkpad laptops; I can't imagine being OK with that either. Needing to use an external mouse is unacceptable.
In fact, I'd bet the extra pressure needed to "tilt" or "clunk" the entire trackpad down, compared to pressing a gentle button before, will cause a greater frequency of RSI (Repetitive Stress Injury) in Thinkpad users.
I do not see a single person who feels this feature removal/change is an improvement.
Therefore, Lenovo, can you please change it back?
2) Bizarre Resolution (2880 x 1620)
    I need / want UHD. I am thrilled that you are finally selling higher quality screens on your laptops.
    (Thank you Apple for putting that competitive pressure on laptop vendors.)
    Lenovo, please provide the resolution on the W540 that you have provided on the new Y50. (3840x2160)
    Why did you choose (2880 x 1620) resolution?
    1920 x 1080 isn't an even divisor of the native dimensions at full resolution, so resolving scaling issues by going to 1920x1080 would never look exactly right.
    But **please** Lenovo, keep the screen anti-glare.
    If you make above changes, please keep the following features:
    1) Anti-Glare screens
    2) Trackpoint
    3)  2 physical drive capability, 1 SSD + 1 other
    although it doesn't sound as easy as I'd prefer:
    "Thus, no matter what anyone tells you, you CANNOT buy a W-540 and think you will easily swap a DVD drive with, say, a second hard drive. It’s minor surgery, best done with a jeweler’s screwdriver and some fine needle nose pliers.
    Be sure to read the User Guide to keep from breaking the hatch cover. When I ordered mine, they shipped the Ultrabay carrier and the hard drive from the US, so they arrived early. I had to install them myself and found manipulating the tiny screws and pulling the hatch cover to be a bit scary.
    ……Not exactly what you would call “convenient.”
    from
    http://forums.lenovo.com/t5/W-Series-ThinkPad-Lapt​ops/LIVING-WITH-A-W-540/td-p/1528698
    4) Thunderbolt port
    And add back in the following features:
    I especially agree with these minuses in the "+/-" very detailed review by "Bauden"
    http://forums.lenovo.com/t5/W-Series-ThinkPad-Lapt​ops/LIVING-WITH-A-W-540/td-p/1528698/page/2
    = Missing drive indicator light
    Why in the world remove that. Please add that back in.
    = Missing bluetooth indicator light
    = USB 2.0 vs USB 3.0 ports can be easily identified with industry standard 'Blue' for USB 3.0.
    Why did you decide to not do that?
    After a couple hours of reading, I'd agree with the conclusion of Bauden when s/he said:
    "Overall a nice speedy machine, but that is not by Lenovo R&D.
    Intel, nVidia, and m.2 SATA are all wrapped in a container that is very poor when expecting a product labelled "Workstation". This is NOT a workstation among the items mentioned above and the wrong screen aspect ratio for photography / graphic applications."
    http://forums.lenovo.com/t5/W-Series-ThinkPad-Lapt​ops/LIVING-WITH-A-W-540/td-p/1528698/page/2
    Idea
    It seems like the team who designed the Lenovo Y50 did a great job for that machine's intended purpose.
    http://shop.lenovo.com/us/en/laptops/lenovo/y-seri​es/y50-uhd/
    (but is it really shipping in any quantity? Not a single user review posted to the community.)
    Maybe take the team who implemented the Y50 and have them design the new Thinkpad W550.

    Update:
I'm glad those reasons above kept me from spending my own $ on this model, since work ended up upgrading my old laptop. Options were a W540 or a T440. I had about 15 seconds to decide. I went with the W540 to get a larger screen.
Thoughts after several months:
    Best thing on a day to day basis:  
    Overall, it's a screamer.  I can throw any workload at it and it doesn't flinch. 
    ( I7, 32gb memory, drive upgraded to a 1TB Samsung EVO SSD)
    Life is too short to wait on slow systems or slow hard drives.   This "best thing" overshadows everything else and makes me very happy with this system.  
    Worst thing on a day to day basis:  
    The keyboard layout. PageUp should be above PageDown.  Home should be above End. 
    Useless number key area. Why?  The NumLock key is right next to the Backspace key.  I'll accidentally toggle that Numlock key once a day or so when hitting backspace.  Then, the PageUp, Home, etc., keys on that keypad are useless and unpredictable.
    Mildly annoying:  
1) The trackpad, for reasons described above. It's tolerable but not great.
2) The new DVD button location. I accidentally open it twice a week when picking up / putting down the W540, and I've *never* used that silly DVD player.
3) The "B" key, the key directly below the red TrackPoint tip, isn't right. It doesn't press down cleanly. bbbbbbbbbbbb It's like it gets hung up on the red nub.

  • From 10.4.11 to Snow Leopard

    Hi
I know you can get Mac OS X Snow Leopard as an upgrade.
BUT if I get Mac OS X Snow Leopard, will it wipe out my applications, files, etc., off the face of the earth?
If no, then I don't have to worry.
If yes, then how will I back up my applications?
    Johnny

Personally, in the 20+ years I've been using a Mac, I've experienced no disasters, but I've had a few bumps in the road. There are two reasons I've had no disasters: 1) I've repaired my drive and then made a complete backup to an external drive that I could roll back to if disaster struck, and 2) prior to updating or upgrading, I've updated all my software, and any startup program I wasn't sure about I disabled before updating. The bumps in the road? They were programs or hardware that didn't work in the new OS (yet), but the backup of the old OS that I could retreat to kept them functional.
Snow Leopard can be installed two ways: 1) the default is to upgrade; it'll install the new OS and support programs while removing the old. It won't touch your software, your prefs, or your data. 2) use Disk Utility to erase the hard drive and then install SL, which will erase all your stuff.
So to directly answer your questions: no, SL doesn't erase your stuff by default, though it is an option. Whether you choose the erase-and-install method is your own choice, but backing up shouldn't be optional. SuperDuper! and CarbonCopyCloner are my two backup programs of choice at upgrade time, because they not only copy your files but give you a disk that will boot the computer and let you use it. In times past I've found this very valuable while I waited for slow vendors to update their software. (For me, SL has been a very painless upgrade, and I am waiting for nothing to become compatible.)

  • Need help with sync waveform generation to 60 Hz trigger.

I can't sync an output waveform to a 60 Hz trigger with LabVIEW 5.1, Win NT, PCI-6024E. I started with retriggerable_AO_easy_5.vi. I couldn't load the advanced version due to missing files. If I set a small number of data points per cycle, it seems to trigger fairly well, but skips every fourth cycle. If I set more than 100 points/cycle, it does not sync to the trigger at all. Any suggestions or examples of how to do this?
    Attachments:
    Noise_proj_cancel_temp_2.vi ‏132 KB

    Steve,
With a 60 Hz trigger you will have 16.67 milliseconds to complete your analog output operation of 100 points. What is probably happening is that these 100 points are taking longer than this amount of time, and the output operation is not completed in time to be set up to receive the next trigger.
Normally 100 points would not be a problem; however, I cannot verify this with LabVIEW 5.1, because I don't have it installed, as it is quite an old version. We are currently up to version 7.0, and a number of significant improvements have been made to both LabVIEW and our new NI-DAQ 7.0 driver, especially in regard to retriggerable operations in NI-DAQ 7.0, which it has made incredibly easy.
I found the retriggerable_AO_easy_5.vi you are referring to on our website. This is the first time I have seen this particular example for a retriggerable operation, and it is not what I would expect. Placing AO Start in the while loop, as well as AO Wait, will slow down the loop and is probably the reason you are being limited to a low number of samples. I would definitely recommend, at the very least, the advanced example if you wish to exceed 100 points per cycle. In order to run the advanced example you will probably have to upgrade your NI-DAQ driver. Try version 6.9.3. However, I would definitely recommend an upgrade to LabVIEW 7.0 and NI-DAQ 7.0.
    Regards,
    Justin Britten
    Applications Engineer
    National Instruments

  • Response time of query utterly upside down because of small where clause change

    Hello,
    I'm wondering why a small change on a where clause in a query has a dramatic impact on its response time.
    Here is the query, with its plan and a few details:
    select * from (
    SELECT xyz_id, time_oper, ...
         FROM (SELECT 
                        d.xyz_id xyz_id,
                        TO_CHAR (di.time_operation, 'DD/MM/YYYY') time_oper,
                        di.time_operation time_operation,
                        UPPER (d.delivery_name || ' ' || d.delivery_firstname) custname,
                        d.ticket_language ticket_language, d.payed,
                        dsum.delivery_mode delivery_mode,
                        d.station_delivery station_delivery,
                        d.total_price total_price, d.crm_cust_id custid,
                        d.bene_cust_id person_id, d.xyz_num, dpe.ers_pnr ers_pnr,
                        d.delivery_name,
                        TO_CHAR (dsum.first_travel_date, 'DD/MM/YYYY') first_traveldate,
                        d.crm_company custtype, UPPER (d.client_name) partyname,
                        getremark(d.xyz_num) remark,
                        d.client_app, di.work_unit, di.account_unit,
                        di.distrib_code,
                        UPPER (d.crm_name || ' ' || d.crm_firstname) crm_custname,
                       getspecialproduct(di.xyz_id) specialproduct
                   FROM xyz d, xyz_info di, xyz_pnr_ers dpe, xyz_summary dsum
                  WHERE d.cancel_state = 'N'
                 -- AND d.payed = 'N'
                    AND dsum.delivery_mode NOT IN ('DD')
                    AND dsum.payment_method NOT IN ('AC', 'AG')
                    AND d.xyz_blocked IS NULL
                    AND di.xyz_id = d.xyz_id
                    AND di.operation = 'CREATE'
                    AND dpe.xyz_id(+) = d.xyz_id
                    AND EXISTS (SELECT 1
                                  FROM xyz_ticket dt
                                 WHERE dt.xyz_id = d.xyz_id)
                    AND dsum.xyz_id = di.xyz_id
               ORDER BY di.time_operation DESC)
        WHERE ROWNUM < 1002
    ) view
    WHERE view.DISTRIB_CODE in ('NS') AND view.TIME_OPERATION > TO_DATE('20/5/2013', 'dd/MM/yyyy')
    plan with "d.payed = 'N'" (no rows, *extremely* slow):
    | Id  | Operation                          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                  |  1001 |  4166K| 39354   (1)| 00:02:59 |
    |*  1 |  VIEW                              |                  |  1001 |  4166K| 39354   (1)| 00:02:59 |
    |*  2 |   COUNT STOPKEY                    |                  |       |       |            |          |
    |   3 |    VIEW                            |                  |  1001 |  4166K| 39354   (1)| 00:02:59 |
    |   4 |     NESTED LOOPS OUTER             |                  |  1001 |   130K| 39354   (1)| 00:02:59 |
    |   5 |      NESTED LOOPS SEMI             |                  |   970 |   111K| 36747   (1)| 00:02:47 |
    |   6 |       NESTED LOOPS                 |                  |   970 |   104K| 34803   (1)| 00:02:39 |
    |   7 |        NESTED LOOPS                |                  |   970 | 54320 | 32857   (1)| 00:02:30 |
    |*  8 |         TABLE ACCESS BY INDEX ROWID| XYZ_INFO         |    19M|   704M| 28886   (1)| 00:02:12 |
    |   9 |          INDEX FULL SCAN DESCENDING| DNIN_IDX_NI5     | 36967 |       |   296   (2)| 00:00:02 |
    |* 10 |         TABLE ACCESS BY INDEX ROWID| XYZ_SUMMARY      |     1 |    19 |     2   (0)| 00:00:01 |
    |* 11 |          INDEX UNIQUE SCAN         | SB11_DSMM_XYZ_UK |     1 |       |     1   (0)| 00:00:01 |
    |* 12 |        TABLE ACCESS BY INDEX ROWID | XYZ              |     1 |    54 |     2   (0)| 00:00:01 |
    |* 13 |         INDEX UNIQUE SCAN          | XYZ_PK           |     1 |       |     1   (0)| 00:00:01 |
    |* 14 |       INDEX RANGE SCAN             | DNTI_NI1         |    32M|   249M|     2   (0)| 00:00:01 |
    |  15 |      TABLE ACCESS BY INDEX ROWID   | XYZ_PNR_ERS      |     1 |    15 |     4   (0)| 00:00:01 |
    |* 16 |       INDEX RANGE SCAN             | DNPE_XYZ         |     1 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
      1 - filter("DISTRIB_CODE"='NS' AND "TIME_OPERATION">TO_DATE(' 2013-05-20', 'syyyy-mm-dd'))
      2 - filter(ROWNUM<1002)
      8 - filter("DI"."OPERATION"='CREATE')
    10 - filter("DSUM"."DELIVERY_MODE"<>'DD' AND "DSUM"."PAYMENT_METHOD"<>'AC' AND "DSUM"."PAYMENT_METHOD"<>'AG')
    11 - access("DSUM"."XYZ_ID"="DI"."XYZ_ID")
    12 - filter("D"."PAYED"='N' AND "D"."XYZ_BLOCKED" IS NULL AND "D"."CANCEL_STATE"='N')
                  ^^^^^^^^^^^^^^
    13 - access("DI"."XYZ_ID"="D"."XYZ_ID")
    14 - access("DT"."XYZ_ID"="D"."XYZ_ID")
    16 - access("DPE"."XYZ_ID"(+)="D"."XYZ_ID")
plan without "d.payed = 'N'" (+/- 450 rows, less than two minutes):
    | Id  | Operation                          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                  |  1001 |  4166K| 58604   (1)| 00:04:27 |
    |*  1 |  VIEW                              |                  |  1001 |  4166K| 58604   (1)| 00:04:27 |
    |*  2 |   COUNT STOPKEY                    |                  |       |       |            |          |
    |   3 |    VIEW                            |                  |  1002 |  4170K| 58604   (1)| 00:04:27 |
    |   4 |     NESTED LOOPS OUTER             |                  |  1002 |   130K| 58604   (1)| 00:04:27 |
    |   5 |      NESTED LOOPS SEMI             |                  |  1002 |   115K| 55911   (1)| 00:04:14 |
    |   6 |       NESTED LOOPS                 |                  |  1476 |   158K| 52952   (1)| 00:04:01 |
    |   7 |        NESTED LOOPS                |                  |  1476 | 82656 | 49992   (1)| 00:03:48 |
    |*  8 |         TABLE ACCESS BY INDEX ROWID| XYZ_INFO         |    19M|   704M| 43948   (1)| 00:03:20 |
    |   9 |          INDEX FULL SCAN DESCENDING| DNIN_IDX_NI5     | 56244 |       |   449   (1)| 00:00:03 |
    |* 10 |         TABLE ACCESS BY INDEX ROWID| XYZ_SUMMARY      |     1 |    19 |     2   (0)| 00:00:01 |
    |* 11 |          INDEX UNIQUE SCAN         | AAAA_DSMM_XYZ_UK |     1 |       |     1   (0)| 00:00:01 |
    |* 12 |        TABLE ACCESS BY INDEX ROWID | XYZ              |     1 |    54 |     2   (0)| 00:00:01 |
    |* 13 |         INDEX UNIQUE SCAN          | XYZ_PK           |     1 |       |     1   (0)| 00:00:01 |
    |* 14 |       INDEX RANGE SCAN             | DNTI_NI1         |    22M|   168M|     2   (0)| 00:00:01 |
    |  15 |      TABLE ACCESS BY INDEX ROWID   | XYZ_PNR_ERS      |     1 |    15 |     4   (0)| 00:00:01 |
    |* 16 |       INDEX RANGE SCAN             | DNPE_XYZ         |     1 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("DISTRIB_CODE"='NS' AND "TIME_OPERATION">TO_DATE(' 2013-05-20', 'syyyy-mm-dd'))
       2 - filter(ROWNUM<1002)
       8 - filter("DI"."OPERATION"='CREATE')
      10 - filter("DSUM"."DELIVERY_MODE"<>'DD' AND "DSUM"."PAYMENT_METHOD"<>'AC' AND "DSUM"."PAYMENT_METHOD"<>'AG')
      11 - access("DSUM"."XYZ_ID"="DI"."XYZ_ID")
      12 - filter("D"."XYZ_BLOCKED" IS NULL AND "D"."CANCEL_STATE"='N')
      13 - access("DI"."XYZ_ID"="D"."XYZ_ID")
      14 - access("DT"."XYZ_ID"="D"."XYZ_ID")
      16 - access("DPE"."XYZ_ID"(+)="D"."XYZ_ID")
    XYZ.PAYED values breakdown:
    P   COUNT(1)
    Y   12202716
    N    9430207
    number of records per table:
    TABLE_NAME           NUM_ROWS
    XYZ                  21606776
    XYZ_INFO            186301951
    XYZ_PNR_ERS           9716471
    XYZ_SUMMARY          21616607
    Everything inside the "select * from (...) view" parentheses is defined in a view. We've noticed that the line "AND d.payed = 'N'" (commented above) is the guilty clause: the query takes one or two seconds to return between 400 and 500 rows if this line is removed; when it is included, the response time switches to *hours* (sic!), and then the result set is empty (no rows returned). The plan is exactly the same whether "d.payed = 'N'" is added or removed; I mean the number of steps, access paths, join order etc. are identical, only the Rows/Bytes/Cost column values change, as you can see.
    We've found no other way of solving this performance issue than taking the "d.payed = 'N'" condition out and applying it outside the view, along with view.DISTRIB_CODE and view.TIME_OPERATION.
    But we would like to understand why such a small change on the XYZ.PAYED column turns everything upside down that much, and we'd like to be able to tell the optimizer to perform the payed = 'N' check by itself at the end, just as we did, through a hint if possible...
    Has anybody encountered such behaviour before? Do you have any advice on using a hint to reach the same response time as we got by moving the payed = 'N' condition outside of the view definition?
    Thanks a lot in advance.
    Regards,
    Seb

    I am really sorry I couldn't get back earlier to this forum...
    Thanks to you all for your answers.
    First I'd just like to correct a small mistake I made when writing
    "the query takes one or two seconds": I meant one or two *minutes*. Sorry.
    > What table/columns are indexed by "DNTI_NI1"?
    aaaa.dnti_ni1 is an index ON aaaa.xyz_ticket(xyz_id, ticket_status)
    > And what are the indexes on xyz table?
    Too many:
    XYZ_ARCHIV_STATE_IND           ARCHIVE_STATE
    XYZ_BENE_CUST_ID_IND           BENE_CUST_ID
    XYZ_BENE_TTL_IND               BENE_TTL
    XYZ_CANCEL_STATE_IND           CANCEL_STATE
    XYZ_CLIENT_APP_NI              CLIENT_APP
    XYZ_CRM_CUST_ID_IND            CRM_CUST_ID
    XYZ_DELIVE_MODE_IND            DELIVERY_MODE
    XYZ_DELIV_BLOCK_IND            DELIVERY_BLOCKED
    XYZ_DELIV_STATE_IND            DELIVERY_STATE
    XYZ_XYZ_BLOCKED                XYZ_BLOCKED
    XYZ_FIRST_TRAVELDATE_IND       FIRST_TRAVELDATE
    XYZ_MASTER_XYZ_IND             MASTER_XYZ_ID
    XYZ_ORG_ID_NI                  ORG_ID
    XYZ_PAYMT_STATE_IND            PAYMENT_STATE
    XYZ_PK                         XYZ_ID
    XYZ_TO_PO_IDX                  TO_PO
    XYZ_UK                         XYZ_NUM
    For ex. XYZ_CANCEL_STATE_IND on CANCEL_STATE seems superfluous to me, as the column may only contain Y or N (or be null)...
    > Have you traced both cases to compare statistics? What differences did it reveal?
    Yes, but it only shows more of *everything* (more table blocks accessed, the same
    for index blocks, for almost all objects involved) for the slowest query!
    Grepping WAIT in the two trace files generated for each statement and counting the
    object ID accesses shows that the quicker query requires far fewer I/Os; overall,
    the slowest one needs many more blocks to be read (except for indexes such as
    DNSG_NI1 or DNPE_XYZ). Below I replaced obj# with the table/index
    name; the first column shows how many times the object was
    accessed in the 10053 file (I Ctrl-C'ed my second execution of course, so the
    figures should be much higher!):
    [login.hostname] ? grep WAIT OM-quick.trc|...|sort|uniq -c
        335 XYZ_SUMMARY
      20816 AAAA_DSMM_XYZ_UK (index on xyz_summary.xyz_id)
        192 XYZ
       4804 XYZ_INFO
        246 XYZ_SEGMENT
          6 XYZ_REMARKS
         63 XYZ_PNR_ERS
        719 XYZ_PK           (index on xyz.xyz_id)
       2182 DNIN_IDX_NI5     (index on xyz.xyz_id)
        877 DNSG_NI1         (index on xyz_segment.xyz_id, segment_status)
        980 DNTI_NI1         (index on xyz_ticket.xyz_id, ticket_status)
        850 DNPE_XYZ         (index on xyz_pnr_ers.xyz_id)
    [login.hostname] ? grep WAIT OM-slow.trc|...|sort|uniq -c
       1733 XYZ_SUMMARY
      38225 AAAA_DSMM_XYZ_UK  (index on xyz_summary.xyz_id)
       4359 XYZ
      12536 XYZ_INFO
         65 XYZ_SEGMENT
         17 XYZ_REMARKS
         20 XYZ_PNR_ERS
       8598 XYZ_PK
       7406 DNIN_IDX_NI5
         29 DNSG_NI1
       2475 DNTI_NI1
         27 DNPE_XYZ
    The overwhelmingly dominant wait event is by far 'db file sequential read':
    [login.hostname] ? grep WAIT OM-*elect.txt|cut -d"'" -f2|sort |uniq -c
         36 SQL*Net message from client
         38 SQL*Net message to client
    107647 db file sequential read
          1 latch free
          1 latch: object queue header operation
          3 latch: session allocation
    > It will be worth knowing the estimations...
    It shows the same plan with a higher cost when PAYED = 'N' is added:
    SQL> select * from sb11.dnr d
      2* where d.dnr_blocked IS NULL and d.cancel_state = 'N'
    SQL> /
    | Id  | Operation                   | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                      |  1002 |   166K|    40   (3)| 00:00:01 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| XYZ                  |  1002 |   166K|    40   (3)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | XYZ_CANCEL_STATE_IND |       |       |     8   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("D"."XYZ_BLOCKED" IS NULL)
       2 - access("D"."CANCEL_STATE"='N')
    SQL> select * from sb11.dnr d
      2  where d.dnr_blocked IS NULL and d.cancel_state = 'N'
      3* and d.payed = 'N'
    SQL> /
    Execution Plan
    Plan hash value: 1292668880
    | Id  | Operation                   | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                      |  1001 |   166K|    89   (3)| 00:00:01 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| XYZ                  |  1001 |   166K|    89   (3)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | XYZ_CANCEL_STATE_IND |       |       |    15   (0)| 00:00:01 |

  • Database producer consumer architecture

    Dear NI supporters,
    I am trying to develop a Producer/Consumer program to communicate with an MS SQL database.
    The Producer loop selects data from database1 and processes it. The Consumer loop inserts the processed data into database2.
    The problem is, the fast Producer loop must wait for the slow Consumer loop, and I have no idea why.
    I have attached a really simplified VI. There are two parallel processes (while loops). The first loop reads large data, so its execution time is high. The second loop reads really small data, so its execution time is low. So when I execute the VI, I would expect the fast while loop (FAST PROCESS) to get many more iterations than the slow one (SLOW PROCESS). But in reality, the FAST PROCESS gets the same number of iterations as the SLOW PROCESS. It looks like the processes don't run in parallel, or did I just miss something?
    Could you explain me, why this occur, and how can I improve my code?
    Thanks in advance.

    vasicekv wrote:
    Thank you Jim, it really helps me!
    Do you also know of any other library for communicating with a database in LabVIEW which is reentrant?
    I mean, I am focused on speeding up my LabVIEW-database communication.
    Perhaps you should be considering why your consumer loop is so slow?  Is your database running on a dedicated server?  Is it getting bogged down?
    Please share a little of your real code so we can try to improve the performance of your consumer loop.
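    The OP's VI is a LabVIEW diagram and can't be reproduced in text, but the producer/consumer decoupling being discussed can be sketched in Java for illustration (all class and method names here are mine, not from the thread): a bounded queue lets the fast loop run ahead of the slow loop instead of handing off synchronously, and the producer only blocks when the buffer fills.

    ```java
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ProducerConsumerDemo {

        // Runs a fast producer and a slow consumer decoupled by a bounded queue,
        // and returns the number of items that made it through.
        static int run(int items) throws InterruptedException {
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);
            int[] consumed = {0};

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < items; i++) {
                        queue.put(i); // blocks only when the 16-slot buffer is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < items; i++) {
                        queue.take();
                        Thread.sleep(2); // simulate a slow database insert
                        consumed[0]++;
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();        // producer finishes early...
            consumer.join();        // ...while the consumer drains the queue
            return consumed[0];
        }

        public static void main(String[] args) throws InterruptedException {
            System.out.println(run(50)); // prints 50: nothing is lost in the hand-off
        }
    }
    ```

    If the LabVIEW equivalent (a queue between the two while loops, rather than a direct wire or a shared non-reentrant subVI) still shows lockstep iteration counts, the bottleneck is likely a shared resource such as a non-reentrant database connection, as the reply above suggests.
    
    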

  • New software is junk.

    DONT BUY A CANON!
    I did because I liked my old Canon and loved the software. The new software is worse than junk. OMG, it's terrible.

    The image downloading and viewing software that comes with the cameras (powershot elph130is) vs the old version of software that came with my powershot 560.
    Downloading the images is 10 times more work.
    Before, I plugged in the camera and it did it all: saved images into folders by year/month (which it still does), then removed the images from the camera with absolutely ZERO interaction. Unplug the camera and ZoomBrowser would open with thumbnails of all the images you just loaded, again with ZERO interaction.
    With the new software.
    Plug in camera. The software requires me to OK the auto download or it does nothing. Not very automatic.
    When it's done, it requires me to OK that it is done. It's not like there is a choice to not OK it, so what am I OKing?
    Unplug camera. Now I have to OK that it will auto open the image browser. It does nothing till I OK. Again, not what I call automatic.
    Finally we are ready to view the pictures, except the new software for that takes 10+ minutes to thumbnail a dozen pictures. The old viewer did it in seconds.
    But wait, still not done. Since the new software won't auto delete images on the camera after download, now I need to go do that manually.
    Old software= zero interaction. time required= zero.
    New software= Have to sit here and keep clicking OK to progress, then wait for slow viewer, then manually delete off camera.
    The new tool is junk.
    Before you tell me it's the user, I am an IT administrator with 30 years of experience; I started on Unix before Windows existed. Which is why I find it so annoying that Canon would supply software that sucks when the old stuff was great. Change is fine if there is a reason to change, but at least don't change it to make it worse. And yes, I did try every possible way to get the new camera to work with the old software; it would not.
    In the thread right below this, someone else started complaining about the speed. If you search, there is another about the software not auto deleting after download.

  • Is this ram compatible with my Macbook pro

    I am thinking of upgrading the RAM in my MacBook Pro 13" 2.5GHz mid-2012 and I just wanted to make sure before I buy it that it is the correct RAM: http://www.amazon.co.uk/gp/product/B006EWUOL8/ref=olp_product_details?ie=UTF8&me =&seller=
    Please help if you have tried it or you know it's the correct RAM.
    thanks

    Fiachrag wrote:
    The specs of the Corsair Vengeance seem to be the same as what goes into it and it is at a great price; the Crucial memory is rather expensive, and I can't buy off macsales as I live in Ireland.
    Not the same timings - the one you want is at CL 9 - the one being sold by OWC is at CL 11 which is 2 wait states slower.  The same goes with Crucial and Corsair - they're all at CL 11
    The ones you want might work - you willing to take a chance?
    Why not spend the extra $$ and get the one that's Apple Certified and not have any issues when you install them.
    Here's a set of Kingston for your Macbook if you want to save some money.
    In any event, it's all up to you.
    Good luck with that - hope it goes smoothly for you.

  • Logging with threads

    Does anyone know how to write log information from multiple threads to the same log file without messing up the log file? I.e., is it possible to keep info from different threads in order without calling wait() to slow down the program?
    any thoughts will be appreciated.

    I don't think so (hence multithreading). If you don't want to synchronize the call to the .log() method within the threads, then I would guess your best bet would be to try to ensure that each individual thread only logs one message.
    You could have each thread add strings to a logging HashMap assigned to that thread, and then, just before the thread ends, walk through the HashMap, create a single log entry from its contents, and write it to the file.
    Not ideal, but depending on your circumstances, it might just work.
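    A minimal Java sketch of that buffering idea (class and method names are mine, a list is used instead of a HashMap for simplicity, and a StringBuilder stands in for the real file): each thread collects its messages locally while it works and writes them in a single synchronized call just before it ends, so lines from different threads never interleave.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class BufferedLogger {
        // Stands in for the shared log file.
        private final StringBuilder sink = new StringBuilder();

        // One synchronized call per thread keeps each thread's lines contiguous.
        synchronized void flush(String threadName, List<String> messages) {
            for (String msg : messages) {
                sink.append(threadName).append(": ").append(msg).append('\n');
            }
        }

        synchronized String contents() {
            return sink.toString();
        }

        static String demo() throws InterruptedException {
            BufferedLogger logger = new BufferedLogger();
            Runnable worker = () -> {
                // Buffer locally while working; no lock is held here.
                List<String> buffer = new ArrayList<>();
                buffer.add("started");
                buffer.add("finished");
                // Single locked write just before the thread ends.
                logger.flush(Thread.currentThread().getName(), buffer);
            };
            Thread a = new Thread(worker, "worker-A");
            Thread b = new Thread(worker, "worker-B");
            a.start(); b.start();
            a.join(); b.join();
            return logger.contents();
        }

        public static void main(String[] args) throws InterruptedException {
            System.out.print(demo());
        }
    }
    ```

    The trade-off is the one the reply mentions: messages only become visible when the thread flushes, so a crash mid-thread loses its buffered entries.
    
    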
