Annoying little performance problem...

I just changed the font from the default to something else, just for the heck of it. Now when I type code, there is an amazing amount of lag between my typing and the characters showing up in the editor, and CPU usage while typing shoots up by quite a bit. I am sure this is not a normal problem in JDeveloper. My machine has a 3.4 GHz P4 with 2 GB of DDR400 RAM, and I am using JDK 5 Update 7. Any ideas what could be causing this?

Hi Frank,
Thanks for giving it a shot. No, it's not something that happens after a while; it happens right away. I don't know what it is. Tomorrow morning I will revert to the old font and see whether I still have the problem. It is rather weird; I don't remember experiencing this before. Would you suggest I do a clean install of JDeveloper?
Thanks.

Similar Messages

  • Annoying Apex performance problem

    Hello,
    We are facing a strange issue with our application's performance. Some pages (also APEX-internal ones) can take up to 20 seconds to load. Yet when a page is loading and I issue another request to the server (on another web page), both requests are executed immediately.
    The problem is not tied to complex pages - even very simple pages (only showing some text) may take 20 seconds to load.
    Has anyone experienced the same, or found a solution to this?
    Please let me know if you need any additional info - greatly appreciate some help here!
    Brgds
    Christian

    Hello Joel,
    First of all, I misunderstood the setup - we are using the PL/SQL gateway... really sorry about this confusion.
    This is however interesting - we were at an Advanced APEX course in London last week and had the same experience there: when experiencing a "hang" on a page, it was resolved by pressing Ctrl+N to bring up a new page, or by refreshing another page going against the same database. I understood from the instructor that Oracle also used the PL/SQL gateway on the student PC setup.
    I did a new test today - a page took about ten seconds to load, while the debug mode time says 0.74 seconds.
    Regarding your questions:
    1) This morning NO users connected as APEX_PUBLIC_USER and no idle
    2) DAD uses DNS address
    "SQL> show parameter dispatcher
    NAME TYPE VALUE
    dispatchers string (PROTOCOL=TCP) (SERVICE=P119XD
    B)
    max_dispatchers integer
    SQL> show parameter local_listener
    NAME TYPE VALUE
    local_listener string (ADDRESS = (PROTOCOL = TCP)(HO
    ST = p119.hydro.com)(PORT = 51
    119))
    3) We don't know of any network issues. Several other databases and listeners are running on the same server without experiencing any network problems.
    Brgds
    Christian

  • MOVED: [Athlon64] Annoying little problem! Please help

    This topic has been moved to Operating Systems.
    [Athlon64] Annoying little problem! Please help

    Hi Ben.
    Thank you very much for your reply, but I still can't get it. Here is the code:
    on testAlphaChannels sourceImage, cNewWidth, cNewHeight, pRects
      cSourceAlphaImage = sourceImage.extractAlpha()
      newImage = image(cNewWidth, cNewHeight, 32)
      newImage.useAlpha = FALSE
      newAlphaImage = image(cNewWidth, cNewHeight, 8)
      repeat with i = 1 to pRects.count
        destRect = ......
        newImage.copyPixels(sourceImage, destRect, pRects[i])
        newAlphaImage.copyPixels(cSourceAlphaImage, destRect, pRects[i], [#ink:#darkest])
      end repeat
      newImage.useAlpha = TRUE
      newImage.setAlpha(newAlphaImage)
      textMember = new(#bitmap)
      textMember.image = newImage
    end
    But the result is not correct. In my example
    http://www.lvivmedia.com/fontPr/Fontproblems3.jpg
    the image on the left is created on a background image, and the image on the right with the code above.
    What is wrong in the code I quoted above?
    Any help will be appreciated.
    Jorg
    "duckets" <[email protected]> wrote in
    message
    news:ekhekq$c6g$[email protected]..
    > I think this is what you'll have to do:
    >
    >
    >
    > Do the copypixels command as per your 2nd result example
    (where "no
    background
    > image is used") using destImage.useAlpha = false.
    >
    > Create a new image as a blank alpha channel image (8
    bit, #greyscale)
    >
    > Repeat the same copypixels commands for each number, but
    this time the
    source
    > image is 'sourceAlphaImage', and the dest Image is this
    new alpha image.
    And
    > the crucial part, use: [#ink:#darkest] for these
    operations. This is
    because
    > you are merging greyscale images which represent the
    alpha channels of
    each
    > letter. The darker parts are more opaque, and the
    lighter parts are more
    > transparent, so you always want to keep the darkest
    pixels with each
    copypixels
    > command.
    >
    > hope this helps!
    >
    > - Ben
    >
    >
    >
    >
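    Ben's [#ink:#darkest] trick is just a per-pixel minimum over the greyscale alpha images: whichever sample is darker (more opaque) wins. For readers outside Director, here is a sketch of the same idea in Java, with BufferedImage standing in for Director's image objects - all names are hypothetical, and this only illustrates the merge rule, not Director's API:
    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    // Sketch of the #darkest merge: keep the darker (more opaque) sample per pixel.
    public class DarkestMerge {
        // Copies src into dest at (dx, dy), keeping the minimum of the two samples.
        static void mergeDarkest(BufferedImage dest, BufferedImage src, int dx, int dy) {
            for (int y = 0; y < src.getHeight(); y++) {
                for (int x = 0; x < src.getWidth(); x++) {
                    int s = src.getRaster().getSample(x, y, 0);
                    int d = dest.getRaster().getSample(dx + x, dy + y, 0);
                    dest.getRaster().setSample(dx + x, dy + y, 0, Math.min(s, d));
                }
            }
        }

        public static void main(String[] args) {
            // The combined alpha starts fully light (transparent), like the blank 8-bit image above.
            BufferedImage alpha = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_GRAY);
            Graphics2D g = alpha.createGraphics();
            g.setColor(Color.WHITE);
            g.fillRect(0, 0, 100, 100);
            g.dispose();
            // Each letter's extracted alpha is merged in at its destination rect.
            BufferedImage letterAlpha = new BufferedImage(20, 20, BufferedImage.TYPE_BYTE_GRAY);
            mergeDarkest(alpha, letterAlpha, 10, 10);
        }
    }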

  • Performance problems with Leopard 10.5.1

    Hello,
    I use an iMac 24 Alu 2.8 GHz and upgraded to Leopard. There are some major performance problems and bugs in the recent version of Leopard:
    1. While accessing USB devices, the display speed, windows moving, animations etc. slow down
    2. Adobe CS3 Photoshop 10.0.1 and Flash CS3 are sometimes extremely slow. I tried the recent demo packages from Adobe:
    2.1. The Photoshop dialogue "save for web" slows down the system completely, and the problem persists after quitting Photoshop. A restart is necessary then.
    2.2. Flash CS3 movie preview is very slow and stuttering. It's so slow you cannot judge how the real movie will flow.
    2.3. The recent Flash Player 9,0,115,0 with hardware acceleration enabled doesn't really work with QuartzGL-enabled Leopard: the movies slow down a lot. Try www.neave.tv for example.
    3. Safari, Mail and other bundled software hang sometimes. You have to force quit them then. It doesn't matter whether QuartzGL is enabled or not. This especially happens to my system if it is online for some hours.
    4. A lot of Apple applications don't seem to work with 2D Extreme enabled. Why is this? Apple supporters told me that Leopard would have much better 2D Extreme support. Quartz 2D Extreme in OS X 10.4 worked with all applications, and I guess it's the same feature as "QuartzGL" in Leopard. So Leopard isn't finished here. It would be nice if Apple could make its own software QuartzGL compatible.
    5. Very often the desktop slows down or lags. This is the main reason I often still switch to a Windows XP PC to do work in a faster, less annoying way.
    6. Safari crashes randomly sometimes. It is unstable still. Also it crashes more often if you resize/move the window a lot, so I guess it is a graphics extension-related problem.
    I hope you people from Apple will fix these annoying points and optimize your new system in the next update release.
    Best regards

    Thanks for your answer. I repaired the disk as described above. There were some errors - some file index was wrong (I don't remember the exact phrase) - and now DU reports that the partition was successfully repaired / the volume appears to be OK.
    The crashes in Safari are gone, but all other described problems still exist. Adobe CS3 is not really usable for me.
    By the way, iMacSoftwareUpdate 1.3, which was replaced by the OS X 10.5.1 update, contains one extension, AppleVADriver.kext, that does not exist in the OS X 10.5.1 update. Is it an important extension?

  • Curious performance problem

    Hello,
    I have a very curious performance problem. I have a query which returns 0 rows and takes around 9 seconds to execute through TopLink. If I execute the generated SQL for that ReadAllQuery (taken from the log) directly through JDBC, it takes only 70 ms. I use TopLink 9.0.3 with Oracle9i 9.2.0.3. I've traced through the sources and identified that the problem is not in TopLink directly but in the call to the Oracle JDBC driver. But then I don't understand why it is so fast in my plain JDBC case. The problem is the same whether I use the thin or the OCI driver.
    I've prepared a little test to show it:
    import com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO;
    import java.util.Vector;
    import oracle.toplink.expressions.Expression;
    import oracle.toplink.expressions.ExpressionBuilder;
    import oracle.toplink.queryframework.ReadAllQuery;
    import oracle.toplink.queryframework.SQLCall;
    import oracle.toplink.sessions.DatabaseSession;
    import oracle.toplink.sessions.DefaultSessionLog;
    import oracle.toplink.sessions.Project;
    import oracle.toplink.tools.profiler.PerformanceProfiler;
    import oracle.toplink.tools.workbench.XMLProjectReader;

    /** @author mstraka */
    public class ToplinkTest {
        public static void main(String[] args) {
            try {
                // Pure JDBC test
                String sql =
                    "SELECT object_type, MESSAGENUMBER, object_id, MESSAGETYPE, TIMESTAMP, VALUE1, POTORDER, " +
                    "VALUE2, VALUE3, ORDERNUMBER, VALUE4, POTNAME, ISINCOMINGMESSAGE " +
                    "FROM POTMESSAGELOG " +
                    "WHERE " +
                    "((((TIMESTAMP >= TO_DATE('2003-07-21 15:00:00', 'YYYY-MM-DD HH24:MI:SS')) " +
                    "AND (TIMESTAMP <= TO_DATE('2003-07-21 16:00:00', 'YYYY-MM-DD HH24:MI:SS'))) " +
                    "AND ((POTORDER >= 1) AND (POTORDER <= 172))) AND " +
                    "(object_type = 'com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO')) " +
                    "ORDER BY TIMESTAMP ASC";
                Class.forName("oracle.jdbc.driver.OracleDriver");
                java.sql.Connection con = java.sql.DriverManager.getConnection("jdbc:oracle:oci8:@katka", "sco", "sco");
                long time = System.currentTimeMillis();
                java.sql.PreparedStatement ps = con.prepareStatement(sql);
                java.sql.ResultSet rs = ps.executeQuery();
                int rows = 0;
                while (rs.next()) {
                    rows++;
                }
                System.out.println("*** Pure JDBC test ****");
                System.out.println("Rows: " + rows);
                System.out.println("JDBC Time: " + String.valueOf(System.currentTimeMillis() - time) + " ms");
                rs.close();
                ps.close();
                con.close();
                // TopLink test
                XMLProjectReader xmlReader = new XMLProjectReader();
                Project project = xmlReader.read("./config/bc/tlproject.xml");
                project.getLogin().setUserName("sco");
                project.getLogin().setPassword("sco");
                DatabaseSession dbSession = project.createDatabaseSession();
                dbSession.logMessages();
                DefaultSessionLog log = (DefaultSessionLog) dbSession.getSessionLog();
                log.logDebug();
                log.logExceptions();
                log.logExceptionStackTrace();
                log.printDate();
                dbSession.login();
                java.util.Calendar cal = java.util.Calendar.getInstance();
                cal.set(java.util.Calendar.YEAR, 2003);
                cal.set(java.util.Calendar.MONTH, 6);
                cal.set(java.util.Calendar.DAY_OF_MONTH, 21);
                cal.set(java.util.Calendar.HOUR_OF_DAY, 15);
                cal.set(java.util.Calendar.MINUTE, 0);
                cal.set(java.util.Calendar.SECOND, 0);
                cal.set(java.util.Calendar.MILLISECOND, 0);
                ExpressionBuilder eb = new ExpressionBuilder();
                Expression ex = eb.get("timestamp").greaterThanEqual(new java.sql.Date(cal.getTimeInMillis()));
                cal.set(java.util.Calendar.HOUR_OF_DAY, 16);
                ex = ex.and(eb.get("timestamp").lessThanEqual(new java.sql.Date(cal.getTimeInMillis())));
                Expression pot = eb.get("potOrder").greaterThanEqual(1);
                pot = pot.and(eb.get("potOrder").lessThanEqual(172));
                dbSession.setProfiler(new PerformanceProfiler());
                ReadAllQuery rq = new ReadAllQuery(PotMessageLogJDO.class);
                rq.setSelectionCriteria(ex.and(pot));
                rq.addAscendingOrdering("timestamp");
                time = System.currentTimeMillis();
                Vector result = (Vector) dbSession.executeQuery(rq);
                System.out.println("*** TopLink ReadAllQuery test ****");
                System.out.println("Rows: " + result.size());
                System.out.println("TopLink Time: " + String.valueOf(System.currentTimeMillis() - time) + " ms");
                time = System.currentTimeMillis();
                result = (Vector) dbSession.executeSelectingCall(new SQLCall(sql));
                System.out.println("*** TopLink direct SQL test ****");
                System.out.println("Rows: " + result.size());
                System.out.println("TopLink SQL Time: " + String.valueOf(System.currentTimeMillis() - time) + " ms");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    ...and here is the output from a run:
    *** Pure JDBC test ****
    Rows: 0
    JDBC Time: 62 ms
    2003.07.21 06:07:44.127--DatabaseSession(30752603)--Connection(20092482)--TopLink, version:TopLink - 9.0.3 (Build 423)
    2003.07.21 06:07:44.736--DatabaseSession(30752603)--Connection(20092482)--connecting(DatabaseLogin(
         platform => OraclePlatform
         user name => "sco"
         datasource URL => "jdbc:oracle:oci8:@katka"
    2003.07.21 06:07:44.799--DatabaseSession(30752603)--Connection(20092482)--Connected: jdbc:oracle:oci8:@katka
         User: SCO
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.3.0 - Production
         Driver: Oracle JDBC driver Version: 9.2.0.1.0
    2003.07.21 06:07:44.971--DatabaseSession(30752603)--#executeQuery(ReadAllQuery(com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO))
    Begin Profile of{ReadAllQuery(com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO)
    2003.07.21 06:07:45.002--DatabaseSession(30752603)--Connection(20092482)--SELECT object_type, MESSAGENUMBER, object_id, MESSAGETYPE, TIMESTAMP, VALUE1, POTORDER, VALUE2, VALUE3, ORDERNUMBER, VALUE4, POTNAME, ISINCOMINGMESSAGE FROM POTMESSAGELOG WHERE ((((TIMESTAMP >= {ts '2003-07-21 15:00:00.0'}) AND (TIMESTAMP <= {ts '2003-07-21 16:00:00.0'})) AND ((POTORDER >= 1) AND (POTORDER <= 172))) AND (object_type = 'com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO')) ORDER BY TIMESTAMP ASC
    Profile(ReadAllQuery,
         class=com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO,
         total time=9453,
         local time=9453,
         query prepare=15,
         sql execute=9422,
    } End Profile
    *** TopLink ReadAllQuery test ****
    Rows: 0
    TopLink Time: 9468 ms
    2003.07.21 06:07:54.439--DatabaseSession(30752603)--#executeQuery(DataReadQuery())
    Begin Profile of{DataReadQuery()
    2003.07.21 06:07:54.439--DatabaseSession(30752603)--Connection(20092482)--SELECT object_type, MESSAGENUMBER, object_id, MESSAGETYPE, TIMESTAMP, VALUE1, POTORDER, VALUE2, VALUE3, ORDERNUMBER, VALUE4, POTNAME, ISINCOMINGMESSAGE FROM POTMESSAGELOG WHERE ((((TIMESTAMP >= TO_DATE('2003-07-21 15:00:00', 'YYYY-MM-DD HH24:MI:SS')) AND (TIMESTAMP <= TO_DATE('2003-07-21 16:00:00', 'YYYY-MM-DD HH24:MI:SS'))) AND ((POTORDER >= 1) AND (POTORDER <= 172))) AND (object_type = 'com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO')) ORDER BY TIMESTAMP ASC
    Profile(DataReadQuery,
         total time=0,
         local time=0,
    } End Profile
    *** TopLink direct SQL test ****
    Rows: 0
    TopLink SQL Time: 16 ms
    Thanks a lot!
    Marcel

    Marcel,
    TopLink supports native SQL generation that will use the TO_DATE operators. You can turn on native SQL in a couple of ways.
    1. SESSIONS.XML
              <login>
                   <platform-class>oracle.toplink.internal.databaseaccess.OraclePlatform</platform-class>
                   <user-name>user</user-name>
                   <password>password</password>
                   <uses-native-sequencing>true</uses-native-sequencing>
              </login>
    2. Through DatabaseLogin API:
    After the project is read in or instantiated:
    project.getLogin().useNativeSQL();
    This should get the SQL you need and address your performance issue.
    Doug
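    For option 2, a minimal sketch of where the call goes in Marcel's test program above (same classes and imports as that listing; only the useNativeSQL() line is new, and the expected effect is a hedged guess, not a verified fix):
    // Inside the try block of ToplinkTest above, before login():
    XMLProjectReader xmlReader = new XMLProjectReader();
    Project project = xmlReader.read("./config/bc/tlproject.xml");
    project.getLogin().setUserName("sco");
    project.getLogin().setPassword("sco");
    project.getLogin().useNativeSQL(); // emit TO_DATE(...) instead of {ts ...} escapes
    DatabaseSession dbSession = project.createDatabaseSession();
    dbSession.login();
    If the {ts ...} escape was what kept Oracle from using the index on TIMESTAMP, the ReadAllQuery timing should then come close to the direct SQL run.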

  • LR3 "Extra Processing in Develop" Performance Problem

    I have been investigating a specific LR3 performance problem.  It may explain a small subset of the problems people have reported in the "Why is LR3 So Slow?" thread.   I'm starting this thread to focus on this particular problem.  I hope others will confirm/refute/refine my findings.
    The Problem
    In Develop, when I make an adjustment, normally the following happens: The CPU usage (as shown in Activity Monitor's bar graph) jumps to between 50 and 75% for all four cores, the updated image appears, and the CPU usage settles back down.  This all happens in less than half a second.  Note: this is with the image at the Fit size.  However, sometimes I instead get the following after an adjustment: the CPU usage jumps to 50 to 75% for all four cores and the updated image appears as usual, however, instead of settling back down, the CPU usage jumps up to 90 to 100% for all cores and stays there for 3 to 5 seconds before settling down. Thus it appears that LR is doing some kind of "extra processing" since a lot of computation is happening AFTER the updated image has already appeared.  I will refer to this problem as "EP".  Obviously, when you are getting EP, editing in Develop becomes very balky.
    Dependency on ratio between image size and displayed size
    It appears that EP only happens when the displayed size of the image (in Fit zoom level and perhaps also Fill zoom level) is above a certain percentage of the actual image size (as currently cropped).  Evidence: When editing full 21MP 5D2 images, I don't experience EP.  If I crop the 5D2 image fairly significantly, then I can get EP.  When editing 10MP images from my Canon S90, I usually get EP for landscape orientation pictures but not for portrait orientation pictures (since in Fit mode, landscape images display at a higher zoom level than portrait images).  If I am getting EP, I can eliminate it by sufficiently reducing the size that LR is displaying the image by resizing the LR window smaller, opening additional panels (I normally edit with only the right panel open), displaying the toolbar, etc.  It appears that EP is enabled when the displayed image is about 50% or larger w.r.t. the actual image (as currently cropped).  For example, EP becomes enabled when a 3648 pixel wide S90 image is displayed at least 17 and 7/8 inches wide on my 100 ppi monitor (i.e. about 1787 pixels).
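    (Checking the arithmetic on that example: 17 7/8 in x 100 ppi is about 1788 displayed pixels, and 1788 / 3648 is about 49%, consistent with the ~50% threshold.)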
    Dependency on HOW an adjustment is invoked
    Even when the displayed image size is large enough w.r.t. the actual image size to enable EP, whether you get it on a given adjustment depends on how you invoke it:
    - If you CLICK (i.e. press the mouse button down and quickly release it) on the track of one of the sliders (a technique I use often to make big jumps), EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, and quickly release the mouse button, EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, but continue to hold the mouse button down until the displayed image is updated, EP does NOT happen (either before or after you then release the mouse button).
    - If you highlight the numeric field at the end of a slider and use the arrow keys (possibly along with Shift) to increment or decrement the value, EP does NOT happen.
    - EP will happen if you resize the LR window such that the displayed image size is above the threshold.  (In fact, I determined the threshold by making a series of window width increases until I saw EP indicated by the CPU bar graphs.)
    - EP can happen with local adjustment brush applications, but as with the sliders, it depends on HOW you perform the brush stroke.  Single click and drags with immediate mouse release cause EP, drags with delayed mouse button release don't.
    - Clicking an earlier History state causes EP
    - More exploration could be done.  For example, I haven't looked at Graduated Filter and Spot Removal adjustments.
    My theory of what's happening
    With LR2, my understanding is that in Develop mode when the displayed image is below 1:1 zoom level, after an adjustment is invoked, LR calculates the new version of the image to display using a fast, simplified algorithm that doesn't include the more computationally intensive adjustments like Sharpening and Noise Reduction (and perhaps works on a lower rez version of the image with multiple sensels binned together?).  It appears that in conditions described above, LR3 calculates the initial, fast image update and then goes on to do the full update of the image, including the computationally intensive adjustments.  Evidence:   setting Sharpening Amount and Luminance and Color Noise Reduction to zero eliminates EP (or reduces the amount of time it takes to be barely noticeable).  I'm not sure whether the displayed image is updated with the results of the extra processing.  I think the answer is Yes since when I tried an adjustment of changing Sharpening Amount from 0 to 90, the initial update of the displayed image showed sharpening but after the EP, the displayed image was updated again to show somewhat different sharpening. Perhaps Adobe felt that it would be useful to see the more accurate version of the image when it is at or above 50% zoom.  Maybe the UI is supposed to cancel the EP if you start to make another adjustment before it has completed but the canceling doesn't happen unless you invoke the adjustment in one of the ways described above that doesn't cause EP.  
    Misc
    - EP doesn't seem to happen for Process 2003
    - As others have mentioned, I'm surprised that LR (both version 2 and 3) in 64bit mode doesn't use more available RAM.  I don't think I've seen LR go above 4GB of virtual memory or above 3GB "Real Memory" (as reported by Activity Monitor) even though I have several GB free.
    - It should be obvious from the above that if you experience EP, there are workarounds: reduce the size of the displayed image (e.g. by window resizing), invoke adjustments in ways that don't cause EP, turn off Sharpening and Noise Reduction until the end of editing an image.
    System specs
    First generation Intel Mac Pro with two dual-core CPUs at 2.66 GHz
    OS 10.5.8
    21GB RAM
    ACR cache on volume striped across 3 internal SATA drives
    LR catalog and RAWs on an internal SATA drive
    30" HP LP3065 monitor (2560 pixels wide)
    NVIDIA GeForce 7300 GT

    I'm impressed by your thorough analysis.
    Clearly, the programmers haven't figured out the best way to do intelligent caching and/or parallel rendering at a reduced size yet.
    In my experience reducing the settings in the "Details" panel doesn't help.
    What really bugs me is that the lag (or increasing lack of interactivity) depends on the number of adjustments one has made.
    This shouldn't be the case. If a cache is produced, then every further adjustment should only cost the effort for that latest adjustment, not for the adjustments before it. There are two things that stand in the way of straightforward edit application:
    1. If you work below 1:1 preview, adjustments have to be shown in a reduced form. If you don't have a way to faithfully mimic the adjustments at the reduced size, you have to do them on the original image and then scale down. That's expensive.
    2. To the best of my knowledge LR uses a fixed image pipeline. Hence, independently of the order in which you apply edits, they are always performed in the same fixed order. Say all spot removal operations are done first. If you have a lot of adjustment brush edits and then add a spot removal operation, it means that all the adjustment brush operations have to be replayed each time you do a little adjustment on your spot removal edit.
    I believe what you are seeing is mostly related to 1.
    I also believe that the way LR currently handles a moderate number of edits is unacceptable and incompatible with the notion that it is usable in a commercial setting for more than trivial edits. I suspect there is something else going on. If everyone saw the deterioration in performance after a number of edits that I see, I don't think LR would be as accepted as it is. Having said that, I've read that the problem of repeated applications of the adjustment brush slowing LR down has existed for a long time. I truly hope that this doesn't mean we'll have to live with it for the foreseeable future.
    There are two ways I can see to address 2.:
    - Combine the effects of a set of operations into one bitmap operation. Instead of replaying all adjustment brush strokes one after the other (speed-wise it feels like this is happening), compute a single bitmap operation that combines all their effects.
    - Give up the idea that there is an image pipeline with a fixed execution order.
    Some might argue that the second point is at odds with the whole idea of parametric editing, but I dispute that. Either edit operations are commutable in which case the order is immaterial, or they are not. If they are not, the user applies the edits in a way as he/she sees fit and will thus compensate for any effect of a changed ordering.
    N.B.: currently the doctrine of "fixed ordering of edit applications" means that even if you convert an image to B&W, all your adjustment brush edits that applied colour tints will still show through. Reasoning: the user should be able to locally tint a B&W image. I agree with the latter, but this could be achieved by only applying those tinting brush strokes that were created after the B&W conversion. All the ones that happened before should only be used to obtain the correct luminance values for the B&W conversion, but obviously they shouldn't cause tinted areas.
    The above example demonstrates to me that users naturally expect operations to occur in the order they were introduced, not in a fixed predefined order. If that principle were followed, I see no reason why the speed of a single edit should depend on the number of edits made to the image before.
    I hope the programmers can (and management wants to) address the performance issues. While I find LR usable for pretty modest edits, the performance on my system in no way approaches what I would expect from an industrial-strength application.
    P.S.: Your message reminded me of the following: When I experience serious lag with LR showing the strokes I make with an adjustment brush, it helps to pause a moment after the first click before one starts moving. This allows LR to catch up and then one can see the effect of the application pretty much interactively. Otherwise, there is terrible lag and the feedback where you have brushed an effect comes way too late.

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try and extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (reporting now 3.3GB), but when I dropped into WoW (pretty much the only game I have), the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usage).  Basically WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennigs' worth.  I know, I know, it's Euros now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    PAE, or Physical Address Extension, will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled.  But since its original function (address extension) no longer exists in the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM switch: in Windows 32-bit operating systems, every process is limited to 2GB of memory.  The point of the switch is to allow certain applications (or their run-time processes) to occupy more than 2GB of system memory.  However, the culprit here is that only applications that have been programmed (or compiled) accordingly can use this ability: a special flag ("large address aware") has to be set.  Otherwise, these applications will be restricted to 2GB even though the switch has been set to extend the 2GB limit to 3GB.  Most 32-bit applications come without the "large address aware" flag, and that is why setting the switch usually won't change anything.
    In any case, it is unlikely that /PAE (even if it were not castrated) and /MAXMEM would have an impact on your actual issue, because I doubt it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason that the system chooses to automatically set the memory speed to DDR2-667 even though DDR2-800 modules are installed is that, by design, the memory controller of the Intel 975X chipset does not natively support DDR2-800 modules:
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means that, from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, this design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4).  This can work of course, but in practice it does not necessarily do so under all circumstances.
    This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that utilize a wide range of different memory chips (Elpida, ProMOS, Micron, Infineon, Powerchip, Qimonda, Samsung, etc.).  Even though the superficial specifications of these chips appear pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way.  There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs, which may affect the operating behaviour of the memory controller (which, by the way, also includes the PCI-Express interface your video card is hooked up to).
    And again:  If running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore: It is also possible that the memory controller's design limitations and the potential compatibility problems that may be attributed to mixing different modules types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been a known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS releases may come out as reduced system performance when using the latest BIOS release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask what amount of video memory your card has onboard?
    Fortunately, there is a BIOS version that you could consider trying in this matter.  It is not only the last BIOS release that can be used to avoid the corruption issue, but it is (in my opinion) the best BIOS version that was ever released for the 975X Platinum PUE mainboard: W7246IMS.716 [v7.1b6].  I have been using this mainboard for almost two years and have tested almost every BIOS release that ever came out, and I always went back to v7.1b6 as "ground zero".
    It will properly support your E6600 (so you don't have to worry about that) and as far as I remember, there are no known compatibility issues with other components.  So maybe, you want to give this a shot.
    The bottom line is that in a worst-case scenario, the problem you describe could be caused by all of the above at the same time.  You cannot really do anything about the 975X chipset specifications, and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS release is something you should consider; it may be the only way to test the BIOS hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • Performance problem in Mapping Designer using UDF with external imports

    Hello,
    we have a big performance problem when developing (not executing) graphical mappings as soon as we use user-defined functions (UDFs) with import entries referencing JAR files that are imported as "imported archives".
    For example, executing an invoice mapping with a somewhat bigger test file in the Mapping Designer:
    - after opening, not in change mode: 6 seconds
    - after switching to change mode: 37 seconds (that's clear - now everything is compiled first)
    - after adding "com.seeburger.functions.permstore.CounterFactory;" to the "import" field of one UDF, no other change: 227 seconds
    - after saving and submitting the change list (no longer in change mode): 6 seconds
    - after switching to change mode: 227 seconds
    So the execution time of testing (and also of watching queues) increases in change mode to more than three minutes when using UDFs with imports referencing external JAR files. It doesn't depend on the Seeburger functions (we use XI also for EDIFACT, so we use some Seeburger functions); I can reproduce it with any other JAR file used from a UDF.
    Using Java built-in functions like "java.text.NumberFormat;" in "Import" doesn't slow down the testing.
    Can anybody reproduce this? We are using XI 3.0 SP19 on an AIX machine, so we also have to use the Java version from IBM.
    cu
    Manfred

    The problem was fixed by an upgrade of the JDK.

  • How to finally get rid of those annoying little plus sign boxes in a table

    Hello everyone!
    Pages treats table cells as a TEXT BOX.
    If those annoying little plus sign boxes are popping up, do this to get rid of them:
    1. Open Inspector
    2. _*Select all*_ on your table
    3. Click on T
    4. Click TEXT
    5. Go to INSET MARGIN and SLIDE until the little plus sign boxes disappear.
    FINALLY the boxes are gone!

    Hej Fruhulda
    Obviously you are one of the fortunate ones who has never had to deal with the dreaded little box with an X in it!
    All of us here were seeing it all the time up until yesterday and it was driving us CRAZY!!
    Imagine this:
    You are trying to get some work done in Pages
    In a table
    You add some text to a cell
    Then you find out that you have to make the table smaller because
    suddenly you have to insert another table on the page
    As you are making the table smaller
    AHHHHHHHHHHHHH
    Inside the cell with the text a little box with an X in it appears and will not go away.
    If you are working with MANY cells in a table
    You are looking at A LOT OF LITTLE BOXES with X's in them.
    Suddenly you feel SEA SICK looking at all those little boxes
    You read the entire iWork Manual line by line BUT cannot find anything to HELP
    Your project is not a numbers candidate
    You have been a mac user for YEARS and YEARS
    You know your macs, you know your mac issues
    WHAT TO DO
    I am sure you can see the dilemma we were in!
    Then all of us realized that this is similar to a text box and cell padding issue from our Adobe CS suite programs - and sure enough it WAS!!
    That is basically the problem ALL of US were having.
    So if our solution can help anyone - WE ARE GLAD -
    because they were probably getting SEA SICK TOO trying to get some work done
    Thank your lucky stars you did not ever have this issue pop up on a deadline!!
    Have a great day
    ( I miss my Östermalmstorg FIKA in the afternoon)
    Big Al

  • Performance problem with CR SDK

    Hi,
    I'm currently at a customer site and I have the following problem:
    The client has a performance problem with a J2EE application which calls a Crystal report through the CR SDK. To reproduce the problem on the local machine (the CR server), I developed a little JSP page which uses the Crystal SDK to open a Crystal report on the server (the report is based on an XML data source), set a new data source (a new XML data flow) and refresh the report in PDF format.
    The problem is that the first 2 steps take about 5 seconds each (5 s to open the report and 5 s to set the data source). In total the process takes about 15 seconds to open and refresh the document, which is very long for a little document.
    The document is a 600 KB file, the XML source an 80 KB file.
    My JSP page is deployed directly on the Tomcat of the Crystal Reports Server (CR XI R2 without Service Pack).
    The Filestore and the MySQL database are on the CR server.
    The server has 16 processors (4 quad-core CPUs) and 16 GB of RAM and is totally dedicated to Crystal Reports. For the moment there is no activity on the server (it is only used for this test).
    The main JSP calls are the following:
    IEnterpriseSession es = CrystalEnterprise.getSessionMgr().logon("administrator", "", "EDITBI:6400", "secEnterprise");
    IInfoStore infoStore = (IInfoStore) es.getService("", "InfoStore");
    IInfoObjects infoObjects = infoStore.query("SELECT * FROM CI_INFOOBJECTS WHERE SI_NAME='CPA_EV' AND SI_INSTANCE=0 ");
    IInfoObject report = (IInfoObject) infoObjects.get(0);
    IReportAppFactory reportAppFactory = (IReportAppFactory) es.getService("RASReportFactory");
    ReportClientDocument reportClientDoc = reportAppFactory.openDocument(report.getID(), 0, null);
    IXMLDataSet xmlDataSet = new XMLDataSet();
    xmlDataSet.setXMLData(new ByteArray(ligne_data_xml));
    xmlDataSet.setXMLSchema(new ByteArray(ligne_schema_xml));
    DatabaseController db = reportClientDoc.getDatabaseController();
    db.setDataSource(xmlDataSet, "", "");
    ByteArrayInputStream bt = (ByteArrayInputStream) reportClientDoc.getPrintOutputController().export(ReportExportFormat.PDF);
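    To see which of the three calls dominates, here is a minimal timing wrapper around the same statements (a sketch: the t0..t3 variables are mine, the API calls are exactly those above):
    long t0 = System.currentTimeMillis();
    ReportClientDocument reportClientDoc = reportAppFactory.openDocument(report.getID(), 0, null);
    long t1 = System.currentTimeMillis();
    reportClientDoc.getDatabaseController().setDataSource(xmlDataSet, "", "");
    long t2 = System.currentTimeMillis();
    ByteArrayInputStream bt = (ByteArrayInputStream) reportClientDoc.getPrintOutputController().export(ReportExportFormat.PDF);
    long t3 = System.currentTimeMillis();
    System.out.println("open=" + (t1 - t0) + " ms, setDataSource=" + (t2 - t1) + " ms, export=" + (t3 - t2) + " ms");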
    My question is: is this method the right one for the job?
    Thanks in advance for your help.
    Best regards
    Emmanuel

    Hi,
    My problem is not resolved and I haven't had any news from support.
    If you have any idea/info, don't forget me.
    Thanks in advance
    Emmanuel

  • Performance problem while CPU is 80% idle?

    Hi,
    My end users are complaining about a performance problem during execution of a batch process.
    As you can see, there are 1,745 statements executing each second.
    The AWR report shows 98.1% of DB time spent on CPU.
    The AWR report also shows that the host CPU is 79.9% idle.
    The second wait event shows only 212 seconds of waits on db file sequential read.
    Yet 4 minutes in a 1 hour period seems not to be an issue.
    Please advise
    DB Name         DB Id    Instance     Inst Num Startup Time    Release     RAC
    QERP          xxx        erp                 1 21-Jan-13 15:40 11.2.0.2.0  NO
    Host Name        Platform                         CPUs Cores Sockets Memory(GB)
    erptst           HP-UX IA (64-bit)                  16    16       4     127.83
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:     40066 22-Jan-13 20:00:52       207       9.6
      End Snap:     40067 22-Jan-13 21:00:05       210       9.6
       Elapsed:               59.21 (mins)
       DB Time:              189.24 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     8,800M     8,800M  Std Block Size:         8K
               Shared Pool Size:     1,056M     1,056M      Log Buffer:    49,344K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                3.2                0.1       0.00       0.05
           DB CPU(s):                3.1                0.1       0.00       0.05
           Redo size:          604,285.1           27,271.3
       Logical reads:          364,792.3           16,463.0
       Block changes:            3,629.5              163.8
      Physical reads:               21.5                1.0
     Physical writes:               95.3                4.3
          User calls:               68.7                3.1
              Parses:              212.9                9.6
         Hard parses:                0.3                0.0
    W/A MB processed:                1.2                0.1
              Logons:                0.3                0.0
            Executes:            1,745.2               78.8
           Rollbacks:                1.2                0.1
        Transactions:               22.2
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:   99.99    In-memory Sort %:  100.00
                Library Hit   %:   99.95        Soft Parse %:   99.85
             Execute to Parse %:   87.80         Latch Hit %:   99.99
    Parse CPU to Parse Elapsd %:   74.76     % Non-Parse CPU:   99.89
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   75.37   76.85
        % SQL with executions>1:   95.31   85.98
      % Memory for SQL w/exec>1:   90.33   82.84
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    DB CPU                                           11,144          98.1
    db file sequential read              52,714         214      4    1.9 User I/O
    SQL*Net break/reset to client        29,050           6      0     .1 Application
    log file sync                         2,536           6      2     .0 Commit
    buffer busy waits                     4,338           2      1     .0 Concurrency
    Host CPU (CPUs:   16 Cores:   16 Sockets:    4)
    ~~~~~~~~         Load Average
                   Begin       End     %User   %System      %WIO     %Idle
                    0.34       0.33     19.7       0.4       1.8      79.9

    Nikolay Savvinov wrote:
    if the users are complaining about performance of the batch process, then that's what you should be looking at, not the entire system.
    I find it strange to see "end users" and "the batch process" in the same sentence (as it was in the first post). "End users" gives me the feeling of a significant number of concurrent sessions with people waiting for results in real time at the far end, while "batch process" carries the image of a small number of large-scale processes running overnight to prepare the data for the following morning.
    I mention this because my first view of the AWR output was: you've got 16 CPUs, only three in use, virtually no users, and doing very little work, how can the users complain. (One answer, of course, is that the 13 CPUs could be locked out of use as far as Oracle is concerned). On the second read I decided that the "users" had gone home, and the complaint was simply that the batch process wasn't completing in time.
    In this case I think "the entire system" IS "the batch process"
    Determine which stored procedures and/or SQL statements took longer than usual and then find out why. Most likely you'll be able to find everything you need in the AWR views (DBA_HIST_SQL%) and the ASH archive (DBA_HIST_ACTIVE_SESS_HISTORY).
    If the batch process has changed dramatically and recently, then a simple first step might be to look at the current AWR report, find the few most time-consuming SQL statements, and use the awrsqrpt.sql script to find their history of execution plans.
    But I'd also just look at the expensive SQL - bearing in mind, particularly, that there are very few user calls per second, yet many hundred executions per second: it strikes me that there could be quite a lot of PL/SQL going on doing something a little bit expensive many times or some PL/SQL function that calls some SQL that used to be called rarely from an SQL statement but is now (due, perhaps to a change in plan) being called much more frequently - so check SQL Ordered by Executions.
    Regards
    Jonathan Lewis

  • Performance problem counting occurrences

    Hi,
    I have an InfoCube with 5 characteristics (region, company, distribution center, route, customer) and 3 key figures. I have set one of the KFs to average (values different from 0), and I am loading data for 16 months and 70 weeks. In my query I have a calculated KF which counts occurrences of the lowest characteristic, so that whatever the granularity level, I always count the lowest detail (customer). There are approx. 500K customers, so my web templates take more than 10 minutes to display the 12 months. I have looked at building aggregates, however the query is not using them anyway. Has anyone had this kind of performance problem with such a low volume of data (6 million records for 12 months)? Has anyone found a workaround to improve performance? I really hope someone has this experience and could help me out; this will determine the life of BW in the organization.
    Please help me out!
    Thanks in advance!

    Hi,
    First of all, thanks for your advice; I have taken part of both suggestions into my solution. I am no longer considering the avg defined in the ratio, however I am still considering it in the query; it is answering, at least for now, taking up to 10 mins. Now, my exact requirement is to display the count of distinct customers grouped by the upper levels. I have populated my InfoCube with 1 in my key figure; however, it may be duplicated for a distribution center, company or region, therefore I have to find the distinct customers. With SAP's "How to count occurrences" I managed that, but it is not performing at an acceptable level. I have run tests without the division between CKF customer / CKF avg customer and found this is what is now slowing the query. I think the boolean evaluation might be more useful and less costly; if you could hint a little more at how to do it, I would appreciate it with points. Also, a change in the model could be costly on the front-end side because of dependencies with queries and web templates; I would rather have it solved in the BW workbench by partitioning, aggregation or new InfoCubes. One solution I have already analyzed is disaggregating the characteristics by totals into different InfoCubes with the same KF and then selecting the appropriate one per query. I was wondering if an initial routine could do the count distinct and group by with the same ratio for different characteristics, so I do not rework the other configuration I already have.

  • Performance Problems PrPro CS5 - Upgrade to CS6?

    Hi
    we often notice massive performance problems with PrPro CS5 here.
    e.g. last week we were shooting with a Sony NEX-VG10. (MediaInfo:)
    Format: AVC
    Format profile: [email protected]
    Format settings, CABAC: Yes
    Format settings, ReFrames: 2 frames
    Format settings, GOP: M=2, N=13
    Bit rate: 16.0 Mbps
    Width: 1 920 pixels
    Height: 1 080 pixels
    Display aspect ratio: 16:9
    Frame rate: 25 fps
    After linking footage into PrPro, playback stutters.
    PrPro is very often not responsive to the keyboard at all.
    Sometimes PrPro needs a few seconds before playback starts.
    This way editing is a pain...
    No FX in the timeline, only one or two clips. Same behaviour if I play a clip in the source window.
    Project and sequence settings are correct. Playback quality set to 1/4.
    These problems occur in several projects with footage from different cameras.
    PrPro works ok  with this (PAL/SD-)footage for example - MediaInfo:
    Format: DV
    Commercial name: DVCPRO
    Width: 720 pixels
    Height: 576 pixels
    Display aspect ratio: 16:9
    About the computer:
    Dell Precision WorkStation T5500
    24 GB RAM
    2 processors w/ 8 cores together: Intel Xeon CPU E5620 @ 2.40GHz
    System: 250 GB (SSD)
    Media: 2 TB (this is one normal internal hard drive, no raid)
    NVIDIA Quadro 4000 (driver version 297.03)
    connected to internet (is a must in this company)
    connected to intranet, server (is a must in this company, too)
    optimized for performance
    windows firewall active
    AVIRA Porfessional
    What would PrPro need to work well with HD footage?
    Add a RAID? Internal/external?
    Upgrade to CS6?
    Encode the footage to another codec before linking it into PrPro? (Which codec? Encode with AME?)
    Buy a new processor? (i7? Which one?)
    Buy other hardware?
    TIA for your guidance and best regards.

    Take a look at the Dell T7400, currently at rank #568, 'Base2008PT1', in the Benchmark Results. It is somewhat similar to your own system, but has more memory, a better video card and some raid0 arrays. Nevertheless, it is around 3.4 times slower than a fast system. My guess is that your system is even slower.
    The material you try to edit is very demanding and requires a beefy computer. Even though there is nothing wrong with the dual Xeon E5620's, their clock speed works against you (I have the same CPUs in a file server, but that is far less demanding than editing), as does the amount of memory in the system. You can be helped with more disks, but don't expect miraculous performance improvements. But all little things help.
    The alternative is a complete new system, but that can be pricey. Even if you build it yourself - see Planning & Building a NLE system - it can be costly. It gives you very fast performance, as demonstrated on the Reflections page, and you should not get such a system from Dell or HP unless you have unlimited funds.

  • Annoying little O2 Icon

    So, two days into my S6 experience, and this is annoying me! If you click on it you get a screen of various numbers - just out of interest, the Customer Srv number is for PAYG, and I am on Refresh. Despite my best efforts I can't delete this O2 icon: if I select edit on the screen, the little delete icon that appears on other icons isn't on the O2 icon, and if I press and hold there is no delete option to drag it to. Not a big deal I know - but does anyone have any ideas how to remove it?

    djbsuffolk wrote:
    Toby wrote:
    Did you ever solve this problem @djbsuffolk?
    No @Toby! I moved the icon into a folder and it keeps moving itself out of the folder - it's very annoying! Any ideas appreciated!!
    http://community.o2.co.uk/t5/Android-Devices-Samsung-HTC-Sony/Annoying-little-O2-Icon/m-p/896853#M49770

  • PL/SQL Performance problem

    I am facing a performance problem with my current application (a PL/SQL packaged procedure).
    My application takes data from 4 temporary tables, does a lot of validation and puts it into permanent tables (updates if present, else inserts).
    One of the temporary tables is the parent table, and it can have 0 or more corresponding rows in the other tables.
    I have analyzed all my tables and indexes and checked all my SQLs; they all seem to be using the indexes correctly.
    There are 1.6 million records combined in all 4 tables.
    I am using Oracle 8i.
    How do I determine what is causing the problem and which part is taking the time?
    Please help.
    The skeleton of the code which we have written looks like this
    MAIN LOOP (255308 records) -- parent temporary table
      ----- lots of validation -----
      update permanent_table1
      if sql%rowcount = 0 then
        insert into permanent_table1
      Loop2 (0-5 records) -- child temporary table1
        ----- lots of validation -----
        update permanent_table2
        if sql%rowcount = 0 then
          insert into permanent_table2
      end loop2
      Loop3 (0-5 records) -- child temporary table2
        ----- lots of validation -----
        update permanent_table3
        if sql%rowcount = 0 then
          insert into permanent_table3
      end loop3
      Loop4 (0-5 records) -- child temporary table3
        ----- lots of validation -----
        update permanent_table4
        if sql%rowcount = 0 then
          insert into permanent_table4
      end loop4
      -- COMMIT after every 3000 records
    END MAIN LOOP
    Thanks
    Ashwin N.

    Do this instead of ditching the PL/SQL.
    DECLARE
      TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
      TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
      pnums NumTab;
      pnames NameTab;
      t1 NUMBER(5);
      t2 NUMBER(5);
      t3 NUMBER(5);
    BEGIN
      FOR j IN 1..5000 LOOP -- load index-by tables
        pnums(j) := j;
        pnames(j) := 'Part No. ' || TO_CHAR(j);
      END LOOP;
      t1 := dbms_utility.get_time;
      FOR i IN 1..5000 LOOP -- use FOR loop
        INSERT INTO parts VALUES (pnums(i), pnames(i));
      END LOOP;
      t2 := dbms_utility.get_time;
      FORALL i IN 1..5000 -- use FORALL statement
        INSERT INTO parts VALUES (pnums(i), pnames(i));
      t3 := dbms_utility.get_time;
      dbms_output.put_line('Execution Time (secs)');
      dbms_output.put_line('---------------------');
      dbms_output.put_line('FOR loop: ' || TO_CHAR(t2 - t1));
      dbms_output.put_line('FORALL:   ' || TO_CHAR(t3 - t2));
    END;
    Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723
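    The same round-trip-cutting idea applies when the rows are inserted from Java rather than PL/SQL: JDBC statement batching plays the role of FORALL. Here is a minimal sketch against the same parts table (driver availability, connect string and credentials are hypothetical):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsertSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connect string and credentials; parts matches the PL/SQL
            // example above (pnum NUMBER(4), pname CHAR(15)).
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@host:1521:orcl", "user", "password");
            con.setAutoCommit(false);
            PreparedStatement ps = con.prepareStatement("INSERT INTO parts VALUES (?, ?)");
            for (int i = 1; i <= 5000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "Part No. " + i);
                ps.addBatch(); // queue the row instead of issuing one INSERT per row
            }
            ps.executeBatch(); // one bulk round trip, analogous to FORALL
            con.commit();
            ps.close();
            con.close();
        }
    }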
