Performance problems in IE: tried Fix It, but it fails

Constant problems with IE 8 since upgrading 4-6 weeks ago: very slow, freezes, terminates, "Error on page" symbol.
Running on XP SP3. Had been up to date on software updates, defrag, and other maintenance.
Spent hours trying to resolve it with no improvement. Gave up for a while and used the laptop.
But now I'm in crunch time, trying different suggestions from the troubleshooting section, but no success.
Tried to use "Fix It" at least 6 times; it uploads fine, but then terminates having "encountered an unexpected error".
I normally use Security Essentials; today I followed a suggestion to use Defender, and it found no issues.
It is imperative that I get some level of performance for a work project on this box (an old, upgraded Dell) and get ready for delivery of a new PC.
Recommendations, please! Thanks

Have you tried a clean boot? http://support.microsoft.com/kb/310353/en-us
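It may also be worth ruling out a misbehaving add-on while you are at it. IE can be started with all extensions disabled (this is a standard IE command-line switch, not something from the KB article; run it from Start > Run):
iexplore.exe -extoff
If IE behaves normally in that mode, a toolbar or browser helper object is the likely culprit; disable them one at a time under Tools > Manage Add-ons to find it.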
Arnav Sharma | http://arnavsharma.net/ Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

Similar Messages

  • I ran an iTunes update. It failed with the message "System Error. The program can't start because MSVCR80.DLL is missing from your computer. Try reinstalling the program to fix this problem." I tried reinstalling but get the same message.

    I ran an iTunes update. It failed with the message "System Error. The program can't start because MSVCR80.DLL is missing from your computer. Try reinstalling the program to fix this problem." I tried reinstalling but get the same message. I looked in my Recycle Bin for that file name, but it isn't there. Is this a new file that iTunes wants?

    Click here and follow the instructions. You may need to completely remove and reinstall iTunes and all related components, or run the process multiple times; this won't normally affect your library, but it should be backed up anyway.

  • I have an iPhone 4s with iOS 6. My problem is I can no longer update my apps, since "cannot connect to iTunes" always shows on my screen when I try to. Tried fixing it by changing the dates and turning off Location Services as suggested in some forums, but neither worked!

    I have an iPhone 4s with iOS 6. My problem is I can no longer update my apps, since "cannot connect to iTunes Store" always shows on my screen when I try to. Tried fixing it by changing the dates and turning off Location Services as suggested in some forums, but neither worked! Please help!! Thanks.

    You cannot merge accounts.
    Apps are tied to the Apple ID used to download them; you cannot transfer them.

  • What does this mean: "The program can't start because MSVCR80.dll is missing from your computer.  Try reinstalling the program to fix this problem"?  I tried reinstalling, but I get an error message saying something about permission to launch services.

    What does this mean: "The program can't start because MSVCR80.dll is missing from your computer.  Try reinstalling the program to fix this problem"?  I tried reinstalling, but I get an error message saying something about permission to launch services.

    Solving the MSVCR80 issue and Windows iTunes install issues.

  • TS1717 My iTunes won't open - it says: 'The program can't start because MSVCR80.dll is missing from your computer. Try reinstalling the program to fix this problem.' I have tried uninstalling and then reinstalling iTunes and I still get the same message.

    My iTunes won't open - it says: 'The program can't start because MSVCR80.dll is missing from your computer. Try reinstalling the program to fix this problem.' I have tried uninstalling and then reinstalling iTunes and I still get the same message. Can anyone help?

    I have the exact same problem.
    I have uninstalled the program, rebooted and downloaded the update again. Same problem, same message.
    Please, where is that missing MSVCR80.dll? I don't remember misplacing it.
    I'm a Windows 7 user.

  • I bought an album on iTunes and 3 of the songs only play for 30 seconds. I have tried re-downloading the album 3 times and the problem has not been fixed. I would like some help please and thank you.

    I bought an album on iTunes and 3 of the songs only play for 30 seconds. I have tried re-downloading the album 3 times and the problem has not been fixed. I would like some help please and thank you.
    Artist: Imagine Dragons
    Album: Night Vision

    Report it here:
    iTunes Store Support
    http://www.apple.com/emea/support/itunes/contact.html

  • My iPod touch won't play any music; when I try to play a song, it just keeps pausing. No sound comes out of the headphones or internal speakers. What can I do to fix this problem? I've tried charging it and resetting it, but nothing works.

    My iPod touch won't play any music; when I try to play a song, it just keeps pausing. No sound comes out of the headphones or internal speakers. What can I do to fix this problem? I've tried charging it and resetting it, but nothing works.

    My iPod touch has the same problem. I restored it three times, but that didn't help.

  • [svn] 3363: Fix performance problem when changing multiple DisplayObject-dependent properties .

    Revision: 3363
    Author: [email protected]
    Date: 2008-09-25 11:58:56 -0700 (Thu, 25 Sep 2008)
    Log Message:
    Fix performance problem when changing multiple DisplayObject-dependent properties. The call to assignDisplayObjects() is now batched up into the next commitProperties() call.
    Bugs: SDK-17033
    Reviewer: Deepa
    Ticket Links:
    http://bugs.adobe.com/jira/browse/SDK-17033
    Modified Paths:
    flex/sdk/trunk/frameworks/projects/flex4/src/flex/core/Group.as

    Hello sir,
    I need your help.
    I installed a fresh copy of Windows 7 from the CD-ROM drive and then installed all the software.
    Now, after one day, the customer complained that the CD-ROM drive does not read any CD. I checked it myself: when I insert a CD it is not read, and when I double-click the CD-ROM icon it just ejects. What should I do? Please reply to my email address.
    [text removed for privacy]
    VIMAL

  • How I tried to fix my partitions and restore a Dell Diagnostic Partition.

    UPDATED: 2 September 2012
    UPDATE: Now that I reflect back on the incident, I realize that the Dell Utility partition had lost its ability to boot long before I messed up my partition table; I only noticed it while testing all my partitions after fixing the partition table. But as I have already written this long post, let's just keep it here for future reference for anyone else stumbling across a similar problem. Read on to learn about my experience.
    RECOMMENDATION: I do not recommend trying out all the steps below, as they did not completely solve the problem for me. If you need the Diagnostic Utility, download the update package from Dell's support website for your model. This package can be used to create a bootable USB drive and/or CD/DVD. These work fine and are also pretty fast.
    Something strange happened to me, and I am now reporting my experience in trying to solve it (somewhat unsuccessfully!).
    First of all, my setup is: Dell Studio 1555 laptop. I dual boot Windows 7 and Archlinux. So here's how it went:
    After using the partitioning tool Gparted under Archlinux to resize a partition, I found a problem had occurred. The partition was NTFS-formatted, and all of my data files were stored on it. The partition worked fine under Archlinux, as I was able to access my files under it. But in Windows, although the partition was listed in Windows Explorer, Windows wanted to format it! When I tried to access the partition, it gave an error that it was not formatted (
    The drive is not formatted, do you want to format it now?
    ). Of course, that was not right; Gparted had messed something up. I fixed that using Testdisk under Archlinux (see the details). So the partition problem under Windows was fixed. But then another problem cropped up under Archlinux. When I booted into Archlinux and started Gparted to confirm everything was fine, I saw something strange: the whole space on my hard disk was marked as "unallocated" in Gparted. Windows and Archlinux could still "see" the partitions; by this I mean that I was able to boot fine into both my OSes and could access all my files on all my partitions. But somehow Gparted could not "see" them and reported my whole disk as unallocated. After that I researched a lot, and a lot of experimenting happened trying to fix the problem. I used many utilities, but only one actually fixed it: fixparts from the gptfdisk package. Still, it was a lot of work getting the problem fixed (see the details).
    Now we get to the point of this discussion. I got my partitions back under Gparted, but I lost the Dell Diagnostics Utility partition's ability to boot. [Actually, now that I reflect back on the incident, I realize the Dell Utility partition had lost its ability to boot long before I messed up my partition table; I only noticed it while testing all my partitions after fixing the partition table. But more on this later.] It gave an error that the partition was not found.
    So, in short: after restoring the partitions' visibility under Gparted, I realized that the Dell Utility partition on my Studio 1555 was not booting. To explain: when I press F-12 while starting the laptop, select Diagnostics from the menu to run the Diagnostic Utility, and consent to boot the Diagnostic Utility partition after the Pre-boot System Assessment tests have run, it gives me the error that the partition was not found. When I tried to run the "Dell 32 Bit Diagnostics (Graphical User Interface version)" update package under Windows, it resulted in a similar error: partition not found.
    For some background on what makes the Dell Utility partition so special, please read this thread and the third post on this thread.
    WARNING: You and only you are responsible for your data. Please make a backup before performing any of the partitioning steps below.
    NOTE: Please read the entire post before actually performing the steps.
    So, to try and fix this I did the following:
    Boot into Windows 7.
    Open Disk Management under the Computer Management console (To open the Computer Management console, right click on Computer in the Start menu and select Manage).
    Reformat the Dell Diagnostic Utility partition as FAT (not FAT32). This is the first partition on the drive (marked as Healthy (OEM Partition) under the Status column). [This step may not be required; however, I had done it. See Notes below.]
    UPDATE: After reading around a bit, I found that these steps to format the partition might not actually be necessary. Simply changing the type of the partition (as detailed below) might also work. However, as I had already done it, let's just keep these steps here. (A hypothetical command-line equivalent is sketched below.)
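    For the command-line inclined, here is a hypothetical diskpart equivalent of the Disk Management steps above (the disk and partition numbers are assumptions; verify them with list disk and list partition before formatting anything):
    C:\> diskpart
    DISKPART> select disk 0
    DISKPART> select partition 1
    DISKPART> format fs=fat quick
    DISKPART> rem On MBR disks, diskpart can also set the partition type directly,
    DISKPART> rem which may make the later fdisk step unnecessary:
    DISKPART> set id=de override
    Treat this purely as a sketch; formatting the wrong partition will destroy data.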
    Reboot into a Linux distribution Live CD (I had Ubuntu 10.10). Or, if you dual boot with a Linux distribution that does not complain about the now-inconsistent fstab entry, you can also boot into that distribution directly. I had to boot into a Live CD to fix my /etc/fstab.
    UPDATE: After considering all the aspects from start to end I have come to a conclusion regarding the efficacy of this method on dual boot machines with Windows and Linux installed. I doubt anyone with a dual boot Windows/Linux setup would be able to boot into the Dell Diagnostic Utility even with the Utility Partition restored. This is detailed below.
    Fix the fstab entry. (This step applied only to me; YMMV. As noted above, my Archlinux install did not boot after I reformatted the Dell Diagnostic Utility partition. This is because I was mounting the Dell Utility partition at boot using fstab inside Archlinux, and I was using the UUID to mount it. After reformatting the partition, its UUID changed, so it wouldn't mount, and because of how my fstab was set up, Archlinux wouldn't boot. So I had to boot into a live environment to fix this.)
    (This step also applied to me.) Boot into the repaired Linux distribution and open a Terminal.
    In the open terminal run fdisk on your drive, e.g.,
    fdisk /dev/sda
    This is how it looks:
    [abhishek@Nitaichand ~]$ sudo fdisk /dev/sda
    Password:
    Command (m for help):
    To change the partition type give the appropriate command, i.e.,
    Command (m for help): t
    Specify the partition, i.e.,
    Partition number (1-10): 1
    Type L to see available codes:
    Hex code (type L to list codes):L
    0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
    1 FAT12 27 Hidden NTFS Win 82 Linux swap / So c1 DRDOS/sec (FAT-
    2 XENIX root 39 Plan 9 83 Linux c4 DRDOS/sec (FAT-
    3 XENIX usr 3c PartitionMagic 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
    4 FAT16 <32M 40 Venix 80286 85 Linux extended c7 Syrinx
    5 Extended 41 PPC PReP Boot 86 NTFS volume set da Non-FS data
    6 FAT16 42 SFS 87 NTFS volume set db CP/M / CTOS / .
    7 HPFS/NTFS/exFAT 4d QNX4.x 88 Linux plaintext de Dell Utility
    8 AIX 4e QNX4.x 2nd part 8e Linux LVM df BootIt
    9 AIX bootable 4f QNX4.x 3rd part 93 Amoeba e1 DOS access
    a OS/2 Boot Manag 50 OnTrack DM 94 Amoeba BBT e3 DOS R/O
    b W95 FAT32 51 OnTrack DM6 Aux 9f BSD/OS e4 SpeedStor
    c W95 FAT32 (LBA) 52 CP/M a0 IBM Thinkpad hi eb BeOS fs
    e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a5 FreeBSD ee GPT
    f W95 Ext'd (LBA) 54 OnTrackDM6 a6 OpenBSD ef EFI (FAT-12/16/
    10 OPUS 55 EZ-Drive a7 NeXTSTEP f0 Linux/PA-RISC b
    11 Hidden FAT12 56 Golden Bow a8 Darwin UFS f1 SpeedStor
    12 Compaq diagnost 5c Priam Edisk a9 NetBSD f4 SpeedStor
    14 Hidden FAT16 <3 61 SpeedStor ab Darwin boot f2 DOS secondary
    16 Hidden FAT16 63 GNU HURD or Sys af HFS / HFS+ fb VMware VMFS
    17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fc VMware VMKCORE
    18 AST SmartSleep 65 Novell Netware b8 BSDI swap fd Linux raid auto
    1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid fe LANstep
    1c Hidden W95 FAT3 75 PC/IX be Solaris boot ff BBT
    1e Hidden W95 FAT1 80 Old Minix
    Type the desired code, i.e.,
    Hex code (type L to list codes): de
    Write the partition table with:
    Command (m for help): w
    The partition table has been altered!
    Calling ioctl() to re-read partition table.
    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
    The kernel still uses the old table. The new table will be used at
    the next reboot or after you run partprobe(8) or kpartx(8)
    Syncing disks.
    [abhishek@Nitaichand ~]$
    Download the required Diagnostics Update Package from the Drivers Download page for your model. Got mine from here.
    Run the downloaded package under the OS you downloaded it for. That is, run the .exe on Windows. Or, if you downloaded the .bin file for Linux, then first make it executable:
    chmod u+x CL1367A0.bin
    And now run it under a Linux distribution with an older version of Python installed (I think < 2.7). I say this because the .bin package didn't run on an up-to-date Archlinux for me, probably because it ships the latest Python. I ran it from the Ubuntu 10.10 Live CD and it worked fine there.
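    Condensed, the Linux route is just the following (file name as in the example above; running it as root is my assumption, since the package writes to the utility partition):
    chmod u+x CL1367A0.bin
    sudo ./CL1367A0.bin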
    On Windows, if you are not automatically prompted with an option to update your Utility Partition, then you need to browse to the location where the package was extracted (for me it was C:\dell\drivers\R239866).
    Now you need to manually run the extracted file (for me it was DDDP.exe). Most probably you'll need to right-click it and run it as an Administrator. And if all went well, it will extract/update the diagnostic utilities to/on the Dell Utility Partition.
    I believe the above steps should be sufficient for someone who is lucky and whose update package is smart enough. However, these steps were not sufficient for me. My "Partition not found" error was gone because I had changed the partition type, so the update package was able to recognize the partition and extract the necessary files to it. But I was still not able to boot the utility partition: after the Pre-boot System Assessment I no longer got the "Partition not found" error, but I was just dropped onto the GRUB boot menu prompt.
    [UPDATE: As stated above, I realize that the recovery partition had lost its ability to boot long before I messed up my partition table; I only noticed it while testing all my partitions after fixing the partition table. Please refer to this forum thread for further details. I no longer think it is possible to boot from the Dell Utility partition on my setup, which has GRUB installed to the MBR. But the rest of the post documents my attempts to solve the problem without the knowledge from the forum post.]
    Anyway, it was a pain to set up/update the partition again and again and test it after waiting half an hour or so for the Pre-boot System Assessment to complete. But I was determined to solve the problem at least partially, until next time. So I created a GRUB entry to boot the Utility partition. Assuming the partition is the first partition on the drive (which is the case here), the GRUB entry is simply:
    title Dell Utility
    rootnoverify (hd0,0)
    chainloader +1
    I tried downloading an older update package, updated my partition with it, and tested. Still unsuccessful. I researched a little more and found this link. Out of frustration, I decided to use brute force this time. So below are the steps that let me have at least a glimpse of the Dell Diagnostic Utility booting up from the partition:
    Backup your partition table using the sfdisk command (not fdisk); a minimal sfdisk sketch follows this list.
    Follow the instructions in the link I gave above (i.e., http://community.spiceworks.com/how_to/show/1123) and build your Utility Partition from scratch.
    Now, when you try to boot into the OS, you'll be presented with a blue bar on top. This is because the mkup batch file from the Dell Diagnostic/Drivers CD/DVD wiped your partition table and rewrote it with only one partition on it: the Dell Utility partition.
    Boot into a Live environment and restore your partition table from the backup created earlier using sfdisk.
    Now boot with a Windows disc to repair your Windows boot problem. This applied to me but may not apply to you.
    Again boot into a live environment and restore GRUB to MBR.
    After a reboot press F-12 to get to the BIOS boot menu and select Diagnostics.
    Let the Pre-boot Assessment run; after it's complete, it will ask you to press any key to boot the Dell Utility partition. Do that.
    You'll notice you're still dropped into GRUB instead of getting the Diagnostics GUI.
    Now, when at the GRUB prompt, don't boot any other OS.
    Press any key (other than <Enter>, that is) to stop the timer if you have one set.
    Now look carefully at the boot menu.
    Remember I told you that I had created a GRUB menu entry to boot the Dell Utility partition. Select that. And if you are lucky, you might just be able to boot the partition. This worked for me (finally!).
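    For reference, a minimal sketch of the sfdisk backup and restore used in the steps above (the device name /dev/sda is assumed, as in the earlier fdisk example; keep the dump file on another volume or a USB stick):
    # as root: dump the current partition table to a file
    sfdisk -d /dev/sda > sda-table.backup
    # ...later, after mkup has rewritten the table, restore it from the backup
    sfdisk /dev/sda < sda-table.backup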
    After this initial run, I was unable to run the Diagnostics GUI from the GRUB menu entry again. I haven't tried re-running the Pre-boot Assessment and waiting to see if I'm able to boot it from there. But for now, I'm satisfied that at least the files there are in a running condition.
    Also, the update package can be used to create bootable USB drives or bootable CDs/DVDs which run the Diagnostics just fine. They are almost as fast as the partition (especially the USB drive, which seems even faster). They are recommended instead of going through all this trouble to recreate the partition. That is, unless you are a purist/perfectionist.
    Notes:
    At first I panicked and tried a lot of steps that are not documented above, for the sake of convenience to others who might refer to this.
    I have thus rewritten the post to make it fairly general in nature, as it did not prove very fruitful for me. If you attempt to use this guide, use common sense where necessary.
    Of course, if you are trying to build a Utility partition on a bare hard drive, or you're feeling adventurous, you can always follow this link.
    Last edited by bhadotia (2012-10-08 19:03:18)

    bhadotia wrote: Anyway, the file downloaded from Dell to update the partition for the Studio 1555 is corrupted (checksums don't match). My partition still doesn't boot. I'm working to fix this and will update my post when I'm done.
    The file seems to create the CD/DVD image and USB just fine. So I used it only to create a CD image, which I then wrote to a blank CD; that seems to work fine. Also, I played around a bit and had some partial success in booting the partition. I've updated my original opening post with the new findings.
    Whew! What a waste of time! I never want to do all of this again.
    Last edited by bhadotia (2012-03-03 00:05:22)

  • How to fix failed volume structure

    I ran Tech Tool Deluxe and got the diagnostic that the volume structure failed the test. Tech Tool Deluxe doesn't offer an option to "fix" this problem.

    Does it boot to Single User Mode (hold CMD+S at bootup)? If so, try...
    /sbin/fsck -fy
    Repeat until it shows no errors fixed.
    (The space between fsck and -fy is important.)
    Resolve startup issues and perform disk maintenance with Disk Utility and fsck...
    http://docs.info.apple.com/article.html?artnum=106214
    Just recently I ran into a problem when I tried to Verify my hard disk and when it tried to verify the catalog, it responded "Invalid sibling link." Repair Disk didn't work. I searched the web and Apple's site, and couldn't find anything useful except to buy DiskWarrior or reformat the drive. Knowing that OS X is built on Unix gave me a few clues on how to proceed. The solution is pretty simple:
    Boot off the OS X CD (reboot, hold C while booting).
    The installer will load up, go to Utilities in the menu and run Terminal.
    Type df and look for the drive that has your Mac system mounted; you'll have to unmount this. On my MacBook Pro, it was /dev/disk0s2.
    Type umount /dev/disk0s2, replacing disk0s2 with whatever disk your OS lives on.
    Type fsck_hfs -r /dev/disk0s2. If you umounted the wrong thing, it will complain that you can't repair a mounted drive. Go back and umount the right thing and repeat this step.
    Just for fun, you might want to run another fsck_hfs on your disk (use the -f option because your drive is probably journaled). Hope this helps someone so they don't buy a program that's going to do pretty much what we did with fsck_hfs, and so they don't waste time searching for an answer to no avail. By the way, TechTool Deluxe (3.1.1) didn't find the Catalog problem for some reason (you'll have this on a CD if you have AppleCare), which is why I resorted to fsck.
    http://hints.macworld.com/article.php?story=20070204093925888
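    Condensed, the sequence from that hint is roughly the following (the disk identifier disk0s2 is just the example's; check yours with df first, and run this from the installer CD's Terminal, not the booted system):
    df                          # find the device your system volume is mounted from
    umount /dev/disk0s2         # unmount it
    fsck_hfs -r /dev/disk0s2    # rebuild the catalog B-tree
    fsck_hfs -f /dev/disk0s2    # re-check, forcing the check on a journaled volume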
    Your best bet is DiskWarrior; you need the CD/DVD though.
    http://www.alsoft.com/DiskWarrior/
    But others that may work…
    Drive Genius…
    http://www.prosofteng.com/products/drive_genius.php
    TechTool Pro…
    http://www.micromat.com/index.php?option=com_content&task=view&id=31&Itemid=83

  • Performance problems with Leopard 10.5.1

    Hello,
    I use an aluminum 24" iMac (2.8 GHz) and upgraded to Leopard. There are some major performance problems and bugs in the recent version of Leopard:
    1. While accessing USB devices, display speed, window moving, animations, etc. slow down.
    2. Adobe CS3 Photoshop 10.0.1 and Flash CS3 are sometimes extremely slow. I tried the recent demo packages from Adobe:
    2.1. The Photoshop "Save for Web" dialog slows down the system completely, and the problem persists after quitting Photoshop. A restart is necessary then.
    2.2. Flash CS3 movie preview is very slow and stuttering. It's so slow you cannot judge how the real movie will flow.
    2.3. The recent Flash Player 9,0,115,0 with hardware acceleration enabled doesn't really work with QuartzGL-enabled Leopard: the movies slow down a lot. Try www.neave.tv for example.
    3. Safari, Mail and other bundled software sometimes hang. You have to force quit them then. It doesn't matter whether QuartzGL is enabled or not. This happens to my system especially if it has been online for some hours.
    4. A lot of Apple applications don't seem to work with 2D Extreme enabled. Why is this? Apple supporters told me there would be much better 2D Extreme support in Leopard. Also, Quartz 2D Extreme in OS X 10.4 worked with all applications, and I guess it's the same feature as "QuartzGL" in Leopard. So Leopard isn't finished here. It would be nice if Apple could make its own software QuartzGL-compatible.
    5. Very often the desktop slows down or lags. This is the main reason I still often switch to a Windows XP PC to get work done in a faster, less annoying way.
    6. Safari sometimes crashes randomly. It is still unstable. It also crashes more often if you resize/move the window a lot, so I guess it is a graphics-extension-related problem.
    I hope you people from Apple will fix these annoying points and optimize your new system in the next update release.
    Best regards

    Thanks for your answer. I repaired it in the way described above. There were some errors; some file index was wrong (I don't remember the exact phrase), and now Disk Utility reports that the partition was successfully repaired / the volume appears to be OK.
    The crashes in Safari are gone, but all the other described problems still exist. Adobe CS3 is not really usable for me.
    By the way, iMacSoftwareUpdate 1.3, which was replaced by the OS X 10.5.1 update, contains one extension called AppleVADriver.kext that does not exist in the OS X 10.5.1 update. Is it an important extension?

  • Performance problems with DFSN, ABE and SMB

    Hello,
    We have identified a problem with DFS-Namespace (DFSN), Access Based Enumeration (ABE) and SMB File Service.
    Currently we have two Windows Server 2008 R2 servers providing the domain-based DFSN at the Windows Server 2008 R2 functional level, with ABE activated.
    The DFSN servers have the most current hotfixes for DFSN and SMB installed, according to http://support.microsoft.com/kb/968429/en-us and http://support.microsoft.com/kb/2473205/en-us
    We have only one AD-site and don't use DFS-Replication.
    Servers have 2 Intel X5550 4 Core CPUs and 32 GB Ram.
    Network is a LAN.
    Our DFSN looks like this:
    \\contoso.com\home
        Contains 10,000 links
        Drive mapping on clients to subfolder \\contoso.com\home\username
    \\contoso.com\group
        Contains 2,500 links
        Drive mapping on clients directly to \\contoso.com\group
    On \\contoso.com\group we serve different folders for teams, projects and other groups, with different access permissions based on AD groups.
    We have to use ABE so that users see only the links (folders) they can access.
    We sometimes encounter enterprise-wide performance problems, multiple times a day, for 30 seconds at a time when accessing our namespaces.
    After six weeks of researching and analyzing, we were able to identify the exact problem.
    Administrators create a new DFS-Link in our Namespace \\contoso.com\group with correct permissions using the following command line:
    dfsutil.exe link \\contoso.com\group\project123 \\fileserver1\share\project123
    dfsutil.exe property sd grant \\contoso.com\group\project123 CONTOSO\group-project123:RX protect replace
    This is done a few times a day.
    There is no way to create the folder and set the permissions in one step.
    The DFSN process on our DFSN servers creates the new link and the corresponding folder in C:\DFSRoots.
    At this time we have, for example, 2000+ clients holding an active session to the root of the namespace \\contoso.com\group.
    An active session means a Windows Explorer window opened to the mapped drive or to any subfolder.
    The file server process (Lanmanserver) sends a change notification (SMB protocol) to each client with an active session to \\contoso.com\group.
    All the clients that receive the notification then start to refresh the folder listing of \\contoso.com\group.
    This was identified by a network trace on our DFSN servers and different clients.
    Due to ABE, the servers have to compute the folder listing for each request.
    The DFS service on the servers doesn't respond to any additional requests for probably 30 seconds. CPU usage increases significantly over this period and goes back to normal afterwards; on our hardware, from about 5% to 50%.
    Users can't access the DFS namespaces during this time, and applications using data from the DFS namespace stop responding.
    Side effect: Windows reports a slow-link detection on clients for \\contoso.com\home, which can be made offline-available for users (described here for WAN connections: http://blogs.technet.com/b/askds/archive/2011/12/14/slow-link-with-windows-7-and-dfs-namespaces.aspx)
    The problem doesn't occur when creating a link in \\contoso.com\home, because users have mappings only to subfolders.
    Currently the problem also doesn't occur for \\contoso.com\app, because users usually don't access this mapping with Windows Explorer.
    Disabling ABE reduces the DFSN freeze time but doesn't solve the problem.
    The problem also occurs with Windows Server 2012 R2 as the DFSN server.
    There is a registry key available for clients to avoid the response to the change notification (NoRemoteChangeNotify, see http://support.microsoft.com/kb/812669/en-us).
    This might fix the problem with DFSN, but it results in other problems for the users. For example, they have to press F5 to refresh every remote directory on change.
    Is there a possibility to disable the SMB change notification on the server side?
    TIA and regards,
    Ralf Gaudes

    Hi,
    Thanks for posting in the Microsoft TechNet Forums.
    I am trying to involve someone familiar with this topic to take a further look at this issue. There might be some time delay; I appreciate your patience.
    Thank you for your understanding and support.
    Regards.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record, using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records when passing a batch of records (terrible), or 160 seconds for one record (unacceptable!). How can it take 16 times longer to process one-tenth the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBs: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”: We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables: We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date-range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns, plus an XMLTYPE metadata column: We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but we were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs: We tried this outside PL/SQL, both with XMLFOREST and with XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable, though it interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
      <Root>
        <Id>123456789</Id>
        {for $e in $r/Element
         return
          <Element>
            <Subelement1>
              {$e/Subelement1/Code}
              <Description>
                {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
              </Description>
            </Subelement1>
            <Subelement2>
              {$e/Subelement2/Code}
              <Description>
                {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
              </Description>
            </Subelement2>
            <Subelement3>
              {$e/Subelement3/Code}
              <Description>
                {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
              </Description>
            </Subelement3>
          </Element>
        }
      </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization when joining the code tables. I'm not sure if registering a schema would help. Using structured storage probably would. But should that be necessary, given we're working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can by constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
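    One further refinement that may be worth trying (my suggestion, not something confirmed in this thread): Oracle's XMLIndex supports path subsetting, which keeps the index path table small by indexing only the paths the lookups actually touch. A hypothetical version restricted to the Code elements used above:
    -- drop the general XMLIndex created earlier, then recreate it path-subsetted
    drop index records_record_xml;
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex
    parameters ('PATHS (INCLUDE (/Root/Element/Subelement1/Code
                                 /Root/Element/Subelement2/Code
                                 /Root/Element/Subelement3/Code))');
    Whether this actually improves the plans above would need to be verified with another explain plan.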

  • LR3 "Extra Processing in Develop" Performance Problem

    I have been investigating a specific LR3 performance problem.  It may explain a small subset of the problems people have reported in the "Why is LR3 So Slow?" thread.   I'm starting this thread to focus on this particular problem.  I hope others will confirm/refute/refine my findings.
    The Problem
    In Develop, when I make an adjustment, normally the following happens: The CPU usage (as shown in Activity Monitor's bar graph) jumps to between 50 and 75% for all four cores, the updated image appears, and the CPU usage settles back down.  This all happens in less than half a second.  Note: this is with the image at the Fit size.  However, sometimes I instead get the following after an adjustment: the CPU usage jumps to 50 to 75% for all four cores and the updated image appears as usual, however, instead of settling back down, the CPU usage jumps up to 90 to 100% for all cores and stays there for 3 to 5 seconds before settling down. Thus it appears that LR is doing some kind of "extra processing" since a lot of computation is happening AFTER the updated image has already appeared.  I will refer to this problem as "EP".  Obviously, when you are getting EP, editing in Develop becomes very balky.
    Dependency on ratio between image size and displayed size
    It appears that EP only happens when the displayed size of the image (at Fit zoom level, and perhaps also Fill zoom level) is above a certain percentage of the actual image size (as currently cropped).  Evidence: when editing full 21MP 5D2 images, I don't experience EP.  If I crop the 5D2 image fairly significantly, then I can get EP.  When editing 10MP images from my Canon S90, I usually get EP for landscape-orientation pictures but not for portrait-orientation pictures (since in Fit mode, landscape images display at a higher zoom level than portrait images).  If I am getting EP, I can eliminate it by sufficiently reducing the size at which LR displays the image: resizing the LR window smaller, opening additional panels (I normally edit with only the right panel open), displaying the toolbar, etc.  It appears that EP is enabled when the displayed image is about 50% or larger w.r.t. the actual image (as currently cropped).  For example, EP becomes enabled when a 3648-pixel-wide S90 image is displayed at least 17 7/8 inches wide on my 100 ppi monitor (i.e. about 1787 pixels).
    Dependency on HOW an adjustment is invoked
    Even when the displayed image size is large enough w.r.t. the actual image size to enable EP, whether you get it on a given adjustment depends on how you invoke it:
    - If you CLICK (i.e. press the mouse button down and quickly release it) on the track of one of the sliders (a technique I use often to make big jumps), EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, and quickly release the mouse button, EP will happen
    - If you press the mouse button down on a slider handle, drag it to a new position, but continue to hold the mouse button down until the displayed image is updated, EP does NOT happen (either before or after you then release the mouse button).
    - If you highlight the numeric field at the end of a slider and use the arrow keys (possibly along with Shift) to increment or decrement the value, EP does NOT happen.
    - EP will happen if you resize the LR window such that the displayed image size is above the threshold.  (In fact, I determined the threshold by making a series of window width increases until I saw EP indicated by the CPU bar graphs.)
    - EP can happen with local adjustment brush applications, but as with the sliders, it depends on HOW you perform the brush stroke.  Single click and drags with immediate mouse release cause EP, drags with delayed mouse button release don't.
    - Clicking an earlier History state causes EP
    - More exploration could be done.  For example, I haven't looked at Graduated Filter and Spot Removal adjustments.
    My theory of what's happening
    With LR2, my understanding is that in Develop mode when the displayed image is below 1:1 zoom level, after an adjustment is invoked, LR calculates the new version of the image to display using a fast, simplified algorithm that doesn't include the more computationally intensive adjustments like Sharpening and Noise Reduction (and perhaps works on a lower rez version of the image with multiple sensels binned together?).  It appears that in conditions described above, LR3 calculates the initial, fast image update and then goes on to do the full update of the image, including the computationally intensive adjustments.  Evidence:   setting Sharpening Amount and Luminance and Color Noise Reduction to zero eliminates EP (or reduces the amount of time it takes to be barely noticeable).  I'm not sure whether the displayed image is updated with the results of the extra processing.  I think the answer is Yes since when I tried an adjustment of changing Sharpening Amount from 0 to 90, the initial update of the displayed image showed sharpening but after the EP, the displayed image was updated again to show somewhat different sharpening. Perhaps Adobe felt that it would be useful to see the more accurate version of the image when it is at or above 50% zoom.  Maybe the UI is supposed to cancel the EP if you start to make another adjustment before it has completed but the canceling doesn't happen unless you invoke the adjustment in one of the ways described above that doesn't cause EP.  
    Misc
    - EP doesn't seem to happen for Process 2003
    - As others have mentioned, I'm surprised that LR (both version 2 and 3) in 64bit mode doesn't use more available RAM.  I don't think I've seen LR go above 4GB of virtual memory or above 3GB "Real Memory" (as reported by Activity Monitor) even though I have several GB free.
    - It should be obvious from the above that if you experience EP, there are workarounds: reduce the size of the displayed image (e.g. by window resizing), invoke adjustments in ways that don't cause EP, turn off Sharpening and Noise Reduction until the end of editing an image.
    System specs
    First generation Intel Mac Pro with two dual-core CPUs at 2.66 Ghz
    OS 10.5.8
    21GB RAM
    ACR cache on volume striped across 3 internal SATA drives
    LR catalog and RAWs on an internal SATA drive
    30" HP LP3065 monitor (2560 pixels wide)
    NVIDIA GeForce 7300 GT

    I'm impressed by your thorough analysis.
    Clearly, the programmers haven't figured out the best way to do intelligent caching and/or parallel rendering at a reduced size yet.
    In my experience reducing the settings in the "Details" panel doesn't help.
    What really bugs me is that the lag (or increasing lack of interactivity) depends on the number of adjustments one has made.
    This shouldn't be the case. If a cache is produced, then every further adjustment should only cost the effort for that latest adjustment and not include the adjustments before it. There are two things that stand in the way of straightforward edit application:
    1. If you work below 1:1 preview, adjustments have to be shown in a reduced form. If you don't have a way to faithfully mimic the adjustments at the reduced size, you have to do them on the original image and then scale down. That's expensive.
    2. To the best of my knowledge, LR uses a fixed image pipeline. Hence, independently of the order in which you apply edits, they are always performed in the same fixed order. Say all spot removal operations are done first. If you have a lot of adjustment brush edits and then add a spot removal operation, it means that all the adjustment brush operations have to be replayed each time you make a little adjustment to your spot removal edit.
    I believe what you are seeing is mostly related to point 1.
    I also believe that the way LR currently handles a moderate number of edits is unacceptable and incompatible with the notion that it is usable in a commercial setting for more than trivial edits. I suspect there is something else going on: if everyone saw the deterioration in performance after a number of edits that I see, I don't think LR would be as accepted as it is. Having said that, I've read that the problem of repeated applications of the adjustment brush slowing LR down has existed for a long time. I truly hope that this doesn't mean we'll have to live with it for the foreseeable future.
    There are two ways I can see to address point 2:
    1. Combine the effects of a set of operations into one bitmap operation: instead of replaying all adjustment brush strokes one after the other (speed-wise it feels like this is happening), compute a single bitmap operation that combines all effects.
    2. Give up the idea that there is an image pipeline with a fixed execution order.
    Some might argue that the second point is at odds with the whole idea of parametric editing, but I dispute that. Either edit operations are commutative, in which case the order is immaterial, or they are not. If they are not, the user applies the edits in a way he/she sees fit and will thus compensate for any effect of a changed ordering.
    N.B.: currently the doctrine of "fixed ordering of edit applications" means that even if you convert an image to B&W, all your adjustment brush edits that applied colour tints will still show through. The reasoning is that the user should be able to locally tint a B&W image. I agree with the latter, but this could be achieved by only applying those tinting brush strokes that were created after the B&W conversion. All the ones that happened before should only be used to obtain the correct luminance values for the B&W conversion, but obviously they shouldn't cause tinted areas.
    The above example demonstrates to me that users naturally expect operations to occur in the order they were introduced, not in a fixed predefined order. If that principle were followed, I see no reason why the speed of a single edit should depend on the number of edits done to the image before.
    I hope the programmers can (and management wants to) address the performance issues. While I find LR usable for pretty modest edits, the performance on my system in no way approaches what I would expect from an industrial-strength application.
    P.S.: Your message reminded me of the following: when I experience serious lag with LR showing the strokes I make with an adjustment brush, it helps to pause a moment after the first click before starting to move. This allows LR to catch up, and then one can see the effect of the application pretty much interactively. Otherwise, there is terrible lag and the feedback showing where you have brushed comes way too late.

  • Serious performance problem - SELECT DISTINCT x.JDOCLASSX FROM x

    I am noticing a huge performance problem when trying to access a member that
    is lazily loaded:
    MonitorStatus previousStatus = m.getStatus();
    This causes the following query to be executed:
    SELECT DISTINCT MONITORSTATUSX.JDOCLASSX FROM MONITORSTATUSX
    This table has 3 million records and this SQL statement takes 3 minutes to
    execute! Even worse, my app heavily uses threads, so this statement is
    executed in each of the 32 threads. As a result the application stops.
    Is there any way that I can optimize this? And more importantly, can Kodo
    handle a multithreaded app like this with a huge database? I've been having
    a lot of performance problems since I've started doing stress & load
    testing, and I'm thinking Kodo isn't ready for this type of application.
    Thanks,
    Michael

    You can prevent this from happening by explicitly enumerating the valid persistent types in a property. See
    http://docs.solarmetric.com/manual.html#com.solarmetric.kodo.PersistentTypes
    for details.
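    For what it's worth, a minimal kodo.properties sketch of that setting (the class names here are hypothetical; list every persistent class your application uses):
    # kodo.properties
    com.solarmetric.kodo.PersistentTypes=com.example.Monitor,com.example.MonitorStatus
    As I understand it, once the persistent types are enumerated up front, Kodo no longer needs the SELECT DISTINCT JDOCLASSX probe to discover which subclasses are stored in the table.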
    Inconveniently, this nugget of performance info is not listed in the optimization guide. I'll add an entry for it.

    This setting did in fact prevent the query from running, which fixed the problem. It definitely belongs in the optimization guide.
    > And more importantly, can Kodo handle a multithreaded app like this with a huge database? I've been having a lot of performance problems since I've started doing stress & load testing, and I'm thinking Kodo isn't ready for this type of application.

    I'd like to find out more details about your issues. We do a decent amount of stress/load testing internally, but there are always use cases that we don't test. Please send me an email (I'm assuming that [email protected] is not really your address) and let's figure out some way to do an analysis of what you're seeing.

    This email is just for posting to Usenet, to avoid spam. I'm now running my app through stress/load testing, so I hope to discover any remaining issues before going into production. As of this morning the system seems to be performing quite well. Now the biggest performance problem for me is the lack of what I think is called "outer join". I know you'll have this in 3.0, but I'm surprised you don't have it already, because not having it really affects performance. I already had to code one query by hand with JDBC due to this: it was taking 15+ minutes with Kodo, and my JDBC version only takes a few seconds. There are lots of anti-JDO people, and performance issues like this really give them ammunition. Overall I just have the impression that Kodo hasn't been used on many really large-scale projects with databases that have millions of records.
    Thanks for the configuration fix,
    Michael

Maybe you are looking for

  • How Do I Save the Background Color with my File

    I am operating on a MacBook Air with the new Adobe Creative Cloud. I am new to Illustrator, so do not know how to use it well. I saved my document as a PDF; however, in the preview, it does not have the background color, but is just plain white. Ho

  • Adobe X Pro Authenticity

    Is it possible to verify that the Acrobat X Pro software I purchased is not counterfeit BEFORE I install it on my computer?  I'm suspicious because the seller's auction and history were removed by the site a few days after I paid for the software.  Th

  • Authentication to CR and Xcelsius

    Hi All, What authentication mechanisms are supported by Crystal Reports and Xcelsius when connected to Web Services, i.e. WSE3.0/WCF etc.? In what scenarios will Win AD work for Crystal Reports and Xcelsius? Is it possible to set up WinAD SSO for custom

  • CF Builder closes after Firefox preview

    I'm using CF Builder (not 2) on Apache 2.2 with CF9. My project is set up just fine.  I have Firefox and IE tabs set up in the IDE for previewing. When I preview a cfm page in my project, the correct behavior for my site is to first require a log

  • When I select a city from the search results I get...

    Hi Everyone, I have a Nokia 5800 XpressMusic. I bought the phone last month in the US, I have downloaded the latest OVI Maps (3.03), and the software is up to date. Now I am in India and have downloaded the Indian map from OVI Suite. The problem is, whenever I select