Poor OEL 5.3 performance when using over 1 GB of RAM

Hi, I have a Linux distribution (Oracle Enterprise Linux 5.3, i.e. Red Hat) installed. It works fine when I use 2 x 512 MB DIMMs or replace them with a single 1 GB DIMM. However, when I try to go above 1 GB, boot-up and general performance deteriorate badly. The BIOS picks up the memory changes fine and I am using the same type of memory sticks. It makes no difference whether I load a single memory channel with the two sticks or balance them across the two channels - I still get poor performance. When I return to 1 GB of RAM, performance is great.
In terms of the install, I just followed the Oracle release notes and readme on the DVD itself. To be honest, they are incredibly basic. There does not appear to be any specific OEL 5.3 installation manual on Technet. I simply followed the prompts on screen and chose a default install.
I have seen quite a few hits about performance problems above 1 GB on Linux, such as having to enable HighMem support in the kernel - however, I have not seen any instructions on how to check or do this. Any ideas?

I still might have a concern about your hardware:
Your two 512 MB sticks are undoubtedly well matched and live comfortably together. Hopefully they did actually use dual-channel mode.
Your 1 GB stick lives well on its own as well.
However, for the greater-than-1 GB case, i.e. 1 GB + 0.5 GB, you may be using two sticks that do not live well together and are causing issues with delayed response. (At worst, only 0.5 GB may be recognised - use grep MemTotal /proc/meminfo to check if necessary.) In general these issues seem less common with DDR2 than with earlier memory. I suppose timing a pass of memtest86 might be an interesting metric. I must admit I'd prefer to use 1 GB + 1 GB or even 2 GB + 2 GB (I know that on 32-bit I might not see it all) rather than 1 GB + 2 GB or 1 GB + 0.5 GB.
Anyway, what I am trying to say is that hardware should not be eliminated as a possible cause at this stage.
Rgds - bigdelboy
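On the HighMem question itself: a quick way to check is to look at the kernel config file and at what the kernel actually sees. A rough sketch for RHEL/OEL 5 (file names follow the Red Hat convention of /boot/config-<kernel version>; adjust if yours differ):

    uname -r                                  # which kernel is running (e.g. 2.6.18-128.el5, or the PAE build)
    grep -i HIGHMEM /boot/config-$(uname -r)  # CONFIG_HIGHMEM4G / CONFIG_HIGHMEM64G show what was built in
    grep MemTotal /proc/meminfo               # how much RAM the kernel actually sees
    free -m                                   # same information, in MB

The stock RHEL 5 i386 kernel is already built with HighMem support for up to 4 GB, so at 1.5 GB a missing HighMem option is unlikely to be the cause; if you ever need more than 4 GB on 32-bit, the kernel-PAE package is the usual route (yum install kernel-PAE, then boot it).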

Similar Messages

  • Poor performance when using kde desktop effect

    Hey,
    I'm having trouble when using KDE effects (System Settings -> Desktop -> Desktop Effects).
    I have a dual-core E5200 3 GHz, 2 GB of PC8500 memory and an HD4850 using the fglrx driver, but I get incredibly bad performance when using desktop effects and watching video: I can barely watch an 800x600 video in full-screen mode without stuttering, with X getting up to 40% CPU usage.
    It really looks like my graphics card isn't handling the rendering, yet 3D acceleration is working - I can play 3D games without problems so far (as long as the desktop effects aren't enabled, because the CPU has a hard time handling both for recent games).
    So I guess it's some trouble with 2D acceleration or something like that. I read that some people had this issue, but I didn't find a way to fix it.
    Here is my xorg.conf, in case something is wrong with it:
    Section "ServerLayout"
    Identifier "X.org Configured"
    Screen 0 "aticonfig-Screen[0]-0" 0 0
    InputDevice "Mouse0" "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
    EndSection
    Section "Files"
    ModulePath "/usr/lib/xorg/modules"
    FontPath "/usr/share/fonts/misc"
    FontPath "/usr/share/fonts/100dpi:unscaled"
    FontPath "/usr/share/fonts/75dpi:unscaled"
    FontPath "/usr/share/fonts/TTF"
    FontPath "/usr/share/fonts/Type1"
    EndSection
    Section "Module"
    Load "dri2"
    Load "extmod"
    Load "dbe"
    Load "record"
    Load "glx"
    Load "dri"
    EndSection
    Section "InputDevice"
    Identifier "Keyboard0"
    Driver "kbd"
    EndSection
    Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/input/mice"
    Option "ZAxisMapping" "4 5 6 7"
    EndSection
    Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
    EndSection
    Section "Monitor"
    Identifier "aticonfig-Monitor[0]-0"
    Option "VendorName" "ATI Proprietary Driver"
    Option "ModelName" "Generic Autodetecting Monitor"
    Option "DPMS" "true"
    EndSection
    Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
    ### [arg]: arg optional
    #Option "ShadowFB" # [<bool>]
    #Option "DefaultRefresh" # [<bool>]
    #Option "ModeSetClearScreen" # [<bool>]
    Identifier "Card0"
    Driver "vesa"
    VendorName "ATI Technologies Inc"
    BoardName "RV770 [Radeon HD 4850]"
    BusID "PCI:8:0:0"
    EndSection
    Section "Device"
    Identifier "aticonfig-Device[0]-0"
    Driver "fglrx"
    BusID "PCI:8:0:0"
    EndSection
    Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    Monitor "Monitor0"
    SubSection "Display"
    Viewport 0 0
    Depth 1
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 4
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 8
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 15
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 16
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 24
    EndSubSection
    EndSection
    Section "Screen"
    Identifier "aticonfig-Screen[0]-0"
    Device "aticonfig-Device[0]-0"
    Monitor "aticonfig-Monitor[0]-0"
    DefaultDepth 24
    SubSection "Display"
    Viewport 0 0
    Depth 24
    EndSubSection
    EndSection
    Thank you for any help.

    Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
    ### [arg]: arg optional
    #Option "ShadowFB" # [<bool>]
    #Option "DefaultRefresh" # [<bool>]
    #Option "ModeSetClearScreen" # [<bool>]
    Identifier "Card0"
    Driver "vesa"
    VendorName "ATI Technologies Inc"
    BoardName "RV770 [Radeon HD 4850]"
    BusID "PCI:8:0:0"
    EndSection
    and
    Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
    EndSection
    I see no reason for those to be there.
    Make a backup of your xorg.conf and remove or comment out those lines.

  • Is anyone able to explain really poor performance when using 'If Exists'?

    Hello all. We've recently had a performance spike when using the 'if exists' construct, which we use throughout much of our code. The problem is that it appears illogical, since it can be removed via a tiny modification that does not change the core code.
    I can demonstrate.
    This is the (simplified) form of the original code. Its purpose is to identify when a column value has changed, comparing a main table against a complex view:
    select 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    This is doing a table scan; however, the table only has 17,000 rows while the view only has 7,000 rows. The SQL executes in approximately 3 seconds.
    However, if we wrap the 'if exists' construct around the original query, like so:
    if exists (
    select 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    )
    print 1
    The SQL now takes over 2 minutes to run. Note that the core SQL is unchanged; all I have done is wrap it with 'if exists'.
    I can't fathom why the 'if exists' construct takes so much longer, especially since the core code is unchanged. More importantly, I would like to understand why, since we commonly use this type of syntax.
    Any advice would be greatly appreciated

    OK, that's interesting.  Adding the top 1 clause greatly affects the performance (in a bad way).
    The original query (as below) still runs in a few seconds.
    select 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    The 'Top 1' query (as below) takes almost 2 minutes however.  It's exactly the same query, but with 'top 1' added to it.
    select top 1 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    I suspect that the top 1 is performing a very similar operation to the exists, in that it is 'supposed' to exit as soon as it finds a single row that satisfies the criteria.
    That still doesn't get me any closer to understanding what is causing the issue, however.
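    If it is useful to anyone hitting the same thing: the usual explanation is that EXISTS and TOP 1 add a "row goal", so the optimizer picks a plan it expects to finish after one row, and with a complex view that guess can be badly wrong. A hedged workaround sketch (table and column names taken from the posts above, not tested against the real schema) is to assign into a variable, which keeps the original full-query plan, and then test the variable:

        DECLARE @found int;          -- stays NULL if no row matches
        SELECT @found = 1
        FROM MainTable m
        INNER JOIN ComplexView v ON m.col2 = v.col2
        WHERE m.col3 <> v.col3;
        IF @found = 1
            PRINT 1;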

  • Poor Performance when using Question Pooling

    I'm wondering if anyone else out there is seeing Captivate run very slowly when using question pooling. We have about 195 questions, some of which use screenshots in JPEG format.
    Looking at the Windows Task Manager, Captivate is using anywhere between 130 and 160 K worth of memory. What is going on here? It's hammering the system pretty hard. It takes a large effort just to reposition a screenshot or even move a distractor.
    I'm running this on a 3.20GHz machine with 3GB of RAM.
    Any Captivate Gurus out there care to tackle this one?
    Help.

    MtnBiker1966,
    I have noticed the same problem. I only have 60 slides with 43 questions, and the question pool appears to be a big drain on performance. Changing the buttons from Continue to Go to next slide helped a little, but performance still drags compared to not using a question pool. I even tried reducing the number of question pools, but that did not affect performance at all. The search continues.
    Darin

  • ZBook 17 g2 - poor DPC Latency performance when running from z Turbo Drive PCIe SSD

    I'm setting up a new zBook 17 g2 and am getting very poor DPC latency performance (> 6000 us) when running from the PCIe SSD. I've re-installed the OS (Win 7 64-bit) on both the PCIe SSD and a SATA HDD; DPC latency is fine when running from the HDD (50 - 100 us) but horrible when running from the PCIe SSD (> 6000 us). I've updated the BIOS and tried every combination of driver and component enabling/disabling I can think of. The DPC latency is extremely high from the initial Windows install with no drivers installed, and adding drivers seems to have no effect on it.
    Before purchasing the laptop I found this review: http://www.notebookcheck.net/Review-HP-ZBook-17-E9X11AA-ABA-Workstation.106222.0.html where the DPC latency measurement (middle of the page) looks OK. Of course, this is the prior version of the laptop and I believe it does not have the PCIe SSD. Combining that with the fact that I get fine performance when running from the HDD, I am led to believe that the PCIe SSD is the cause of the problem.
    Has anyone found a solution to this problem? As it stands right now my zBook is not usable for digital audio work when running from the PCIe SSD. But it cost me a lot of money, so I'd sure like to use it...!
    Thanks, rgames

    Hi mooktank,
    No solution yet but, as of about six weeks ago, HP at least acknowledged that it's a problem (finally). I reproduced it perfectly on another zBook 17 g2 and another PCIe SSD in the same laptop, and HP was able to reproduce the problem as well. So the problem is clearly in the BIOS or with some driver related to the PCIe SSD. It could also be with the firmware in the drive itself, but I can't find any other PCIe drives in the 60 mm form factor, so there's no way to see if a different type of drive would fix the problem.
    My suspicion is that it's related to the PCIe sleep states - those are known to cause exactly these types of problems, because the drive takes quick "naps" to save power and there's a delay when it is told to wake back up. That delay causes a delay in the audio buffer that results in pops/crackles/stutters that would never be noticed doing other tasks like video editing or CAD work. So it's a problem specific to folks who need low-latency audio performance (very few apps require low-latency audio - video editing, for example, uses huge buffers with relatively high latency). A lot of desktops offer a BIOS option to disable those sleep states, but no such option exists in HP's BIOS for this laptop. In theory you can do it from within Windows, but it doesn't have an effect on my system. That might be one of those options that Windows allows you to change but that actually has no effect.
    One workaround is to disable CPU throttling. That makes the CPU run at full speed all the time and, I believe, also disables the PCIe and other sleep states. When I disable CPU throttling, DPC latency goes back to normal. However, the CPU is then running at full speed all the time, so your battery life basically goes to nothing and the laptop gets *very* hot. Clearly that shouldn't be necessary, because the laptop runs fine from the SATA SSD. HP needs to fix the latency problem associated with the PCIe drive. The next logical step is to provide a BIOS update that allows disabling the PCIe sleep states without disabling CPU throttling, like on many desktop systems.
    The bad news is that HP tech support is not very technical, so it takes forever for them to figure out what I'm talking about. It took a couple of months for them to start using the DPC Latency Checker. Hopefully there will be a fix at some point... in the meantime, I hope HP sends me a check for spending so much time educating their techs on how computers work, and for the countless hours lost re-installing different OSes only to show that the performance is exactly the same as shown in the DPC Latency Checker.
    rgames
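    For reference, the Windows-side knob mentioned above (the one that had no effect on rgames' machine) is the PCI Express Link State Power Management setting; a hedged sketch of turning it off from an elevated command prompt, using the standard powercfg aliases:

        REM Turn PCI Express Link State Power Management off (0 = Off) for AC and battery,
        REM then re-apply the active power plan:
        powercfg /setacvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
        powercfg /setdcvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 0
        powercfg /setactive SCHEME_CURRENT

    Whether the firmware actually honours the setting on this particular machine is, as described above, another matter.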

  • Optimizing EtherCAT Performance when using Scan Engine

    Hello everyone,
    This week I have been researching and learning about the limitations of LabVIEW's Scan Engine. Our system is an EtherCAT (cRIO 9074) with two slaves (NI 9144). We have four 9235s, two 9237s and two 9239 modules per chassis. That means we have a total of 144 channels. I have read that a conservative estimate for each channel scan is 10 usec. This means that with our setup, assuming we only scan, it would take 1.44 ms, which would yield a rate of roughly 694 Hz. I know that when using a shared variable, the biggest bottleneck is transmitting the data. For instance, if you scan at 100 Hz, it'll be difficult to transmit that quickly, so it's best to send packets of scans (which you can see in my code).
    With all of that said, I'm having difficulty scanning any faster than 125 Hz without railing out my CPU. I can record at 125 Hz at 96% CPU usage, but if I go down to 100 Hz, I'm at 80%. I noticed that the biggest factor in performance comes when I change my top loop's (the scan loop's) period. Scanning every period is much more demanding than scanning every other period. I have also adjusted the scan period in the EtherCAT's preferences and I have the same performance issues. I have also tried varying the transmission frequency (bottom loop), and this doesn't affect the performance at all.
    Basically, I have a few questions:
    1. What frequency can I reasonably expect to obtain from the EtherCAT system using the Scan Engine with 144 channels?
    2. What percent of the CPU should be used when running a program (just because it can do 100%, I know you shouldn't go for the max. Is 80% appropriate? Is 90% too high?)
    3. Could you look through my code and see if I have any huge issues? Does my transmission loop need to be a timed structure? I know that transmitting is not as important as scanning, so if the queue doesn't get sent it's not a big deal. This is my first time dealing with a real-time system, so I wouldn't be surprised if that were the case.
    I have looked through almost every guide I could find on using the scan engine and programming the cRIO (that's how I learned the importance of synchronizing the timing to the scan engine and other useful facts) and haven't really found a definitive answer. I would appreciate any help on this subject.
    P.S. I attached my scan/transmit loop, the host program and the VI where I get all of the shared variables (I use the same one three times to prevent 144 shared variables from being on the screen at the same time).
    Thanks,
    Seth
    Attachments:
    target - multi rate - variables - fileIO.vi ‏61 KB
    Get Strain Values.vi ‏24 KB
    Chasis 1 (Master).vi ‏85 KB

    Hi,
    It looks like you are using a 9074 chassis and two 9144 chassis, all three full of modules, and you are trying to read all the I/O channels in one scan?
    First of all, if you set your Scan Engine speed on the controller (9074), then you have to synchronize your timed loop to the Scan Engine and not use a different timebase as you do in your scan VI.
    Second, the best performance can be achieved with I/O variables, not shared variables, and you should make sure not to allocate memory in your timed loop. Memory will be allocated if an input of a variable is not connected (the error cluster, for example) or if you create arrays from scratch, as you do in your scan VI.
    If you resolve all these issues you can time the code inside your loop to see how long it really takes and adjust your scan time accordingly. The 9074 does not have that much power, so you should not expect microsecond timing. 500 Hz is probably a good estimate for the maximum performance with 144 channels, depending on how much time the additional microstrain calculation takes.
    The EtherCAT driver ships with examples that show how to program these kinds of applications. Another way of avoiding the variables would be the programmatic approach using the variable API.
    DirkW

  • Outlook prompts for credentials when used over DirectAccess

    Hello; I have recently setup a DirectAccess Server for our network.  The DirectAccess Server is running Windows Server 2012R2. It all appears to be running just fine. The clients are connecting fine and have access to all of our internal network resources,
    including the Exchange Server. 
    However, when using MS Outlook to connect to the internal Exchange Server, we are seeing a strange behavior.  The Outlook client initially connects just fine.  It says "CONNECTED TO: Microsoft Exchange" down in the status bar, and the
    user's mailbox gets updated just fine.  But then, after just a few minutes, MS Outlook will prompt the user for their credentials.  If the user enters their credentials, the pop up goes away, but then comes right back after another minute or two. 
    As far as I can tell, the DirectAccess connection is stable, as the user has uninterrupted access to our other internal resources. I've verified that Outlook's connection to Exchange is using RPC/TCP.
    Can anyone shed some light on to what is going on here and how I may resolve this issue?

    Hi,
    As a simple first step, would clearing the credentials from Windows Credential Manager work?
    1. Launch the Credential Manager (from [Control Panel] and [User Settings])
    2. In the Generic Credentials section you’ll see a setting for [MS Outlook]. Click the downward-pointing arrow to the right of that value
    3. The section will expand to show further details. Under those details is a link to Remove from vault. Click this and Outlook will no longer have a stored copy of your password
    The next time the user is prompted for a password, select the option that lets Outlook remember the credentials.
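    The same cleanup can be done from a command prompt, as a rough sketch (the exact target name varies per machine, so list first and delete whatever Outlook/Exchange entry shows up; the name below is only an example):

        REM Show all stored credentials and note the Outlook / Exchange target name:
        cmdkey /list
        REM Remove that entry (example target name, yours will differ):
        cmdkey /delete:MS.Outlook.15:user@contoso.com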
    Thanks,
    Melon Chen
    Forum Support
    Come back and mark the replies as answers if they help and unmark them if they provide no help.
    If you have any feedback on our support, please click here.

  • Poor Lightroom 5.7 Performance when using part repair tool

    Hi Lightroom pros,
    I've been using Lightroom 5.7 for about half a year and am very happy with it, except when I'm working on photos where I have to use the repair tool a lot.
    I'm specialized in car photography and often have to remove dust from the car's paintwork and chrome. When I have to work on huge areas, Lightroom becomes slower and slower. I read that the repair tool costs a lot of performance, but I'm wondering if I could optimize something or if there is a workaround, because right now it's not usable - one click and then waiting 2 minutes...
    I'm using the newest 64-bit Lightroom version on a Windows 8.1 Enterprise 64-bit laptop with an internal SSD. LR is installed on that drive and the catalogue is also stored there. I have 70 GB of free space on that drive, my catalogue is around 5 GB, and my Camera Raw cache size is 20 GB.
    The machine has 16 GB of RAM and an Intel Core i7 3 GHz quad-core processor. My RAW files are stored on an external USB 3.0 drive (not SSD).
    When working on that kind of picture I usually close as many programs as possible, so that most of the time LR is the only desktop program running. What seems strange to me is that even while I'm waiting minutes for LR to respond, my processor and RAM are only at about 40% usage. The rest doesn't seem to be used by LR?!
    I would be really thankful for tips or ideas on how to improve this, because right now I'm really wasting a lot of time.
    Kind regards
    Torsten

    I guess this depends on what you are doing, what type of original files you are using, and your own perception of quality. I would believe that in most cases, if you do all the editing in LR except for the spot removal, get the photo to appear the way you want (except for the spot removal), and then lastly move it to PSE and remove the spots, that there isn't really any noticeable loss of quality.
    I must admit I'm not a pro concerning color management. But I already downloaded the PSE test version and realized that it is not capable of dealing with 16-bit color depth. Wouldn't that be noticeable after the final spot removal step?
    So, you should tell Adobe this, not me. This is a forum for users (like me) to help other users (like you).
    I know. I just wanted to emphasize why switching to PS wouldn't be the perfect solution for me (even if I'm considering it).
    Thanks so far!

  • BIBean poor performance when using Query.setSuppressRows()

    Does anyone have experience suppressing N/A cell values using BIBean? I was experimenting with Query.setSuppressRows(DataDirector.NA_SUPPRESSION). It does hide the rows that contain N/A values in a crosstab.
    The problem is that performance degrades significantly when I start drilling down the hierarchy. Without calling the method, I was able to drill into the pre-aggregated hierarchy in a few seconds. With setSuppressRows(), it took almost 15 minutes to do the same drill.
    Just for comparison, I used DML to report on the same data that I wanted to drill into. With 'zerorow' set to either yes or no, the data was fetched in less than a second.
    Thanks for any help.
    - Wei

    At the moment we are hoping this will be fixed in a 10g database patch which is due early 2005. However, if you are using an Analytic Workspace then you could use OLAP DML to filter the zero and NA rows before they are returned to the query. I think this involves modifying the OLAP views that return the AW objects via SQL commands.
    Hope this helps
    Business Intelligence Beans Product Management Team
    Oracle Corporation

  • Poor performance when using skip-level hierarchies

    Hi there,
    currently we have big performance issues when drilling in a skip-level hierarchy (each drill takes around 10 seconds).
    OBIEE produces 4 physical SQL statements when drilling, e.g. into the 4th level (one SQL statement per level). The statements run in parallel and are pretty fast (the database doesn't need more than 0.5 seconds to produce the result), but - and here is probably where the problem lies - putting all 4 results together in OBIEE takes another 8 seconds.
    These are not big datasets the database is returning - around 5-20 records for each select statement.
    The question is: why does it take so long to put the data together on the server? Do we have to reconfigure some parameters to make it faster?
    Please guide.
    Regards,
    Rafael

    If you really and exclusively want "OBIEE can handle such queries" - i.e. not touch the database - then you had best put a clever caching strategy in place.
    The first angle of attack should be the database itself, though. Best sit down with a data architect and/or your DBA to find the best physical setup possible, and then, once you have optimized that (with regard to the kind of queries emitted against it), you can move up to the OBIS. Always try to fix the issue as close to the source as possible.

  • Poor performance when using drop down box on web report

    We are using drop-down box functionality in web reporting to allow easy selection of characteristic values. We have 4 drop-down boxes
    that represent Region, Area, Country and Division.
    We need to use booked_values = 'Q' to show only relevant values for selection in the drop-down. However, the issue is that it takes a long time for results to appear
    on the template. The read from the fact table is quick, but the process of deriving the drop-down values is very slow.
    <object>
             <param name="OWNER" value="SAP_BW"/>
             <param name="CMD" value="GET_ITEM"/>
             <param name="NAME" value="DROPDOWNBOX_1"/>
             <param name="ITEM_CLASS" value="CL_RSR_WWW_ITEM_FILTER_DDOWN"/>
             <param name="DATA_PROVIDER" value="DATAPROVIDER_1"/>
             <param name="BORDER_STYLE" value="BORDER"/>
             <param name="GENERATE_CAPTION" value=""/>
             <param name="IOBJNM" value="ZPC_ORG14"/>
             <param name="BOOKED_VALUES" value="Q"/>
             <param name="TARGET_DATA_PROVIDER_1" value="DATAPROVIDER_1"/>
             ITEM:            DROPDOWNBOX_1
    </object>
    Do you have any suggestions on how to improve performance on drop down?
    Thanks, Jay

    Dear Jayant Dixit,
    1) If the values in the drop-down box are NOT dependent on user selection, then the web server can send them along with the first page and cache them on the client machine for multiple client/server dialogs.
    2) If the values change based on what the user selected from a previous input, then you need to do some research on how the SAP web server could optimize the network traffic. For example: a) the Apache web server has a module that compresses the content on the server side before sending it to the client; b) a client browser plugin decompresses the received packet and displays it appropriately.
    3) We may need to research the latest SAP web server capabilities.
    Good luck, BB

  • Poor Video performance when using ver 9.0.2 of Jdeveloper

    I am using the 9.0.2 version of JDeveloper under SuSE Linux 8.0. Does anyone know why, when I use this product under this operating system, the screens either do not refresh quickly or do not refresh completely? I am using a 16 MB video card.

    I tried this test:
    drop table foo;
    create table foo (numeric number, text varchar2(2000));
    begin
      for i in 1 .. 100000 loop
        insert into foo values (i, 'the cat sat on the mat');
      end loop;
    end;
    /
    create index fooindex on foo(text) indextype is ctxsys.context
    filter by numeric;
    EXEC dbms_workload_repository.create_snapshot;
    begin
      for i in 1 .. 100000 loop
        delete from foo where numeric = i;
      end loop;
    end;
    /
    commit;
    EXEC dbms_workload_repository.create_snapshot;
    @?/rdbms/admin/awrrpt.sql
    In the AWR report, I can see 100,000 inserts into CTXSYS.DRV$DELETE2 and 100,000 deletes from CTXSYS.DRV$SDATA_UPDATE2. On my desktop PC these take 4.06 seconds and 2.85 seconds of CPU time respectively, compared to 5.43 seconds for the delete from the $K table. Elapsed times are almost identical to CPU times.
    That doesn't seem unreasonable to me - 70 microseconds per row updated. I added "alter system flush shared_pool" before starting the delete, but it made very little difference. If your figures are much worse than this then perhaps we should look further into it.

  • Poor query performance when using date range

    Hello,
    We have the following ABAP code:
    select sptag werks vkorg vtweg spart kunnr matnr periv volum_01 voleh
          into table tab_aux
          from s911
          where vkorg in c_vkorg
            and werks in c_werks
            and sptag in c_sptag
            and matnr in c_matnr
    that is translated to the following Oracle query:
    SELECT
    "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN 000000000100000000 AND 000000000999999999;
    Because the field SPTAG is not enclosed in apostrophes, the Oracle query has very bad performance. Below are the execution plans and their costs, with and without the apostrophes. Please help me understand why I am getting this behaviour.
    ##WITH APOSTROPHES
    SQL> EXPLAIN PLAN FOR
      2  SELECT
      3  "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN '20101201' AND '20101231' AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
    Explained.
    SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
    PLAN_TABLE_OUTPUT
    | Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU) |
    |  0 | SELECT STATEMENT            |          |  932 | 62444 |   150   (1) |
    |  1 | TABLE ACCESS BY INDEX ROWID | S911     |  932 | 62444 |   149   (0) |
    |  2 | INDEX RANGE SCAN            | S911~VAC |  55M |       |     5   (0) |
    Predicate Information (identified by operation id):
       1 - filter("VKORG"='D004' AND "SPTAG">='20101201' AND
                  "SPTAG"<='20101231')
       2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
                  "MATNR"<='000000000999999999')
    ##WITHOUT APOSTROPHES
    SQL> EXPLAIN PLAN FOR
      2  SELECT
      3  "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
    Explained.
    SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
    PLAN_TABLE_OUTPUT
    | Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU) |
    |  0 | SELECT STATEMENT            |          | 2334 |  152K |   150   (1) |
    |  1 | TABLE ACCESS BY INDEX ROWID | S911     | 2334 |  152K |   149   (0) |
    |  2 | INDEX RANGE SCAN            | S911~VAC |  55M |       |     5   (0) |
    Predicate Information (identified by operation id):
       1 - filter("VKORG"='D004' AND TO_NUMBER("SPTAG")>=20101201 AND
                  TO_NUMBER("SPTAG")<=20101231)
       2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
                  "MATNR"<='000000000999999999')
    Best Regards,
    Daniel G.

    Volker,
    Answering your question regarding the explain plan from ST05: as a quick workaround I created an index (S911~Z9), but I'd still like to solve this issue without the extra index, as the primary index would work fine as long as the date were sent to Oracle as a string and not as a number.
    SELECT                                                                         
      "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" ,        
      "PERIV" , "VOLUM_01" , "VOLEH"                                               
    FROM                                                                           
      "S911"                                                                       
    WHERE                                                                          
      "MANDT" = :A0 AND "VKORG" = :A1 AND "SPTAG" BETWEEN :A2 AND :A3 AND "MATNR"  
      BETWEEN :A4 AND :A5                                                          
    A0(CH,3)  = 003              
    A1(CH,4)  = D004             
    A2(NU,8)  = 20101201  (NU means number correct?)       
    A3(NU,8)  = 20101231         
    A4(CH,18) = 000000000100000000
    A5(CH,18) = 000000000999999999
    SELECT STATEMENT ( Estimated Costs = 10 , Estimated #Rows = 6 )                                                              
        5  3 FILTER                                               
             Filter Predicates                                                                               
    5  2 TABLE ACCESS BY INDEX ROWID S911                 
                 ( Estim. Costs = 10 , Estim. #Rows = 6 )         
                 Estim. CPU-Costs = 247.566 Estim. IO-Costs = 10                                                                               
    1 INDEX RANGE SCAN S911~Z9                     
                     ( Estim. Costs = 7 , Estim. #Rows = 20 )     
                     Search Columns: 4                            
                     Estim. CPU-Costs = 223.202 Estim. IO-Costs = 7
                     Access Predicates Filter Predicates          
    The table originally includes the following indexes:
    ###S911~0
    MANDT
    SSOUR
    VRSIO
    SPMON
    SPTAG
    SPWOC
    SPBUP
    VKORG
    VTWEG
    SPART
    VKBUR
    VKGRP
    KONDA
    KUNNR
    WERKS
    MATNR
    ###S911~VAC
    MANDT
    MATNR
    Number of entries: 61.303.517
    DISTINCT VKORG: 65
    DISTINCT SPTAG: 3107
    DISTINCT MATNR: 2939
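    For completeness, a hedged ABAP-side sketch of the usual fix for this: type the SPTAG range against the S911 field itself (a character-like date field), so the interface binds it as CH rather than NU and Oracle no longer wraps the indexed column in TO_NUMBER(). Names follow the original snippet; the real declarations in the program may look different:

        " Hedged sketch - declare the range like the database column so the
        " bind variables are passed as characters (CH), not numbers (NU).
        DATA: c_sptag  TYPE RANGE OF s911-sptag,
              wa_sptag LIKE LINE OF c_sptag.

        wa_sptag-sign   = 'I'.
        wa_sptag-option = 'BT'.
        wa_sptag-low    = '20101201'.
        wa_sptag-high   = '20101231'.
        APPEND wa_sptag TO c_sptag.

        SELECT sptag werks vkorg vtweg spart kunnr matnr periv volum_01 voleh
          INTO TABLE tab_aux
          FROM s911
          WHERE vkorg IN c_vkorg
            AND werks IN c_werks
            AND sptag IN c_sptag
            AND matnr IN c_matnr.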

  • Poor performance when using bind variable in report

    I have a report that takes 1 second to run if I 'hardcode' a particular value into the where clause of the report. However, if I replace the hardcoded value with a bind variable and set the default value for the bind variable to the (previously) hardcoded value, the report now takes 50 seconds to run instead of 1 second!
    Has anyone else seen this behaviour? Any suggestions for a workaround will be gratefully received.

    More info
    SELECT patch_no, count(*) frequency
    FROM users_requests
    WHERE patchset IN (SELECT arps2.patchset_name
    FROM aru_bugfix_relationships abr, aru_bugfixes ab, aru_status_codes ac,
    aru_patchsets arps, aru_patchsets arps2
    WHERE arps.patchset_name = '11i.FIN_PF.E'
    AND abr.bugfix_id = ab.bugfix_id
    AND arps.bugfix_id = ab.bugfix_id
    AND abr.relation_type = ac.status_id
    AND arps2.bugfix_id = abr.related_bugfix_id
    AND abr.relation_type IN (601, 602))
    AND included ='Y'
    GROUP BY patch_no
    order by frequency desc, patch_no
    Runs in under 1 second from SQL Navigator and from Portal (if I hardcode the value for fampack).
    Takes ~50 seconds if I replace it with :fampack and set the default value to 11i.FIN_PF.D.

  • Improving performance when using LineStripArray?

    I'm rendering approximately 680 LineStripArrays to represent an airport on a situation display. I read the data from a DXF file, and I imagine I can strip that down by removing parts of the airport I don't want to show.
    However, performance is poor - I'm probably only hitting 50 fps when I'm barely rendering anything. Apart from not displaying some LineStripArrays, what can I do to improve performance? Should I merge some LineStripArrays? Is there another geometry class I should use?
    My code is:
                   // This array will hold the vertices.
                   float[] vertices = new float[count*3];
                   // Create the colour.
                   Color3f colour = new Color3f(1.0f,1.0f, 1.0f);
                   // iterate over all vertices of the polyline
                   for (int i = 0; i < count; i++) {
                        vertices[(i*3)+0] = (float)vertex.getX();
                        vertices[(i*3)+1] = (float)vertex.getY();
                        vertices[(i*3)+2] = (float)vertex.getZ();
                   }
    And then:
    layerData = new LineStripArray(count, LineArray.COORDINATES, strip_counts);
    layerData.setCoordinates(0, vertices);
    Thanks
    Edited by: BobCrivens on Sep 18, 2008 7:39 AM
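    On the merging question: one common approach is to collapse all the polylines into a single LineStripArray using per-strip vertex counts, so the scene graph holds one geometry instead of ~680. A rough, untested sketch - getPolylines(), getVertexCount() and getVertex() are hypothetical stand-ins for whatever the DXF reader actually provides:

        // Merge every polyline into one LineStripArray; stripCounts tells
        // Java 3D where each individual polyline starts and ends.
        List<Polyline> polylines = getPolylines();        // hypothetical DXF accessor
        int[] stripCounts = new int[polylines.size()];
        int total = 0;
        for (int i = 0; i < polylines.size(); i++) {
            stripCounts[i] = polylines.get(i).getVertexCount();
            total += stripCounts[i];
        }
        float[] vertices = new float[total * 3];
        int v = 0;
        for (Polyline p : polylines) {
            for (int i = 0; i < p.getVertexCount(); i++) {
                vertices[v++] = (float) p.getVertex(i).getX();
                vertices[v++] = (float) p.getVertex(i).getY();
                vertices[v++] = (float) p.getVertex(i).getZ();
            }
        }
        LineStripArray merged =
            new LineStripArray(total, GeometryArray.COORDINATES, stripCounts);
        merged.setCoordinates(0, vertices);
        Shape3D airport = new Shape3D(merged);   // one Shape3D instead of ~680

    Fewer geometry objects generally means fewer per-shape draw calls and state changes, which is usually where this kind of scene loses its frame rate.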

    Yes, it will cause performance issues.  Whether you notice it or not may be a different story.
    LabVIEW drawing engine starts at the bottom layer and works its way up.  So, it has to redraw the image and then redraw the control when you update the control/indicator.
    It's been a while since I benchmarked this on a project, but in LabVIEW 6.1, I looked into why my tests ran so slow, and saw a 10-15% decrease in test time by removing the background decorations I used to make the window pretty.  If I didn't show the GUI feedback for the test at all (no GUI windows for each test), I saw a 30% decrease in test time.
    You will also find that better video cards will have a positive effect on this, as they redraw the screen faster.  In the same benchmark, I was able to outperform the early PXI controllers with a slower PC because NI was using a lower end video chip for their onboard graphics.
