Burst your IDE ATA133 performance question

This may sound stupid, but I read an article on this that says I should install the VIA 4-in-1 drivers (is 4.45 OK?), the VIA RAID patch, or the PCI latency patch.
The article also says this works for plain IDE drives (non-RAID).
Should I install the RAID patch, or can I just install the PCI latency patch?
Thanx!

Just install 4.45 and the PCI latency patch!
You don't need the RAID patch unless you have a RAID card or onboard RAID!

Similar Messages

  • J2ME Interview Questions - Please share your ideas. It's useful for everyone.

    Hi friends,
    Recently, I attended a J2ME interview. I could answer the basic questions, but I was doubtful about a few of them. I've listed the tougher questions below. Please share your ideas on these questions. I hope it will be useful for everyone.
    Questions:
    Which mobile operating system did you use?
    What do you mean by porting of applications?
    If a MIDP application runs in an emulator, is there a guarantee that it will also work fine on the device? If not, what issues need to be taken care of?
    What design issues need to be taken care of while developing games that should be portable across all vendor phones?
    I have a 1 MB PNG file on the server but the mobile supports only 512 KB. How can you download/access the image?
    The mobile device has only 512 KB of heap memory. An application needs 1 MB of heap memory. How can this application run without any problem?
    If there are 3 applications, each needing 512 KB of heap memory, how is memory management done?
    How do you load an Image from a Servlet to a MIDlet?
    When garbage collection is invoked, is there any guarantee that memory will be released?
    There are 5 forms and 1 Image. How do you access the same image across the 5 forms efficiently, without creating the image each time with createImage?
    What is the use of TiledLayer?
    What is the difference between Canvas and GameCanvas?
    When do you need to use more than one thread in J2ME?
    Is threading internally implemented in the same way in Java and J2ME, or is there any difference?
    What is the default value of a boolean variable in destroyApp() when it is invoked?
    What is the difference between notifyDestroyed() and destroyApp()?
    What are the ways to port applications to mobile?
    How do you detect button clicks in J2ME, similar to EventListeners in AWT?
    Thanks,
    Ravi.

    Hi,
    1) With 11g recently becoming available, should we upgrade to 11g or to 10g? You should upgrade to 10g, and apply the latest 10g patches.
    2) There is a white paper on Metalink (313418.1) that discusses using 10g with EBS. Is there anything similar for 11g? 11g is not available with EBS. It has a different architecture and is not supported with EBS.
    3) Can anyone share how long the actual upgrade process takes (please specify whether your timings are for 10g or 11g)? To upgrade you need to do a new install of Discoverer 10g and upgrade the EUL. Make sure you have a good backup of the EUL before starting. The Discoverer install is straightforward and the EUL upgrade only takes a minute. BUT although all the workbooks are upgraded, they are changed in ways that you might not have expected: parameters become optional, occasionally worksheets disappear, titles get corrupted, the generated SQL can be different, may perform more slowly, and may occasionally return different results. It is sorting out these issues and regression testing the workbooks that takes the time.
    Rod West

  • Seeking your ideas

    I would surely appreciate your ideas on the design of my first JMS app. As this is my FIRST JMS app, I have been trying to sort out the proper way of doing it. If my questions sound funny, I don't pretend to know everything, and your ideas/corrections will be highly appreciated.
    What I am trying to do is deliver reports to my clients, about 200 of them. I would publish all the reports to a topic (reports) and let my clients subscribe to that topic. To ensure that everyone receives all reports and none are missed, I need to go with durable subscriptions. Then I realized that a durable subscription only allows one active subscriber at a time, so sharing one durable subscription does not seem to work for 200 people. I would then have to create one topic for each of them. Is this proper? It seems such a waste. Also, is there a limit on how many topics can be created? If the number of clients increases in the future, to 2,000 for example, would I then need 2,000 topics?

    ...especially since I have to give each user a durable topic; I thought a topic could be durably subscribed to by more than one subscriber concurrently, or am I missing something?
    Yes, you are missing something :-)
    You can have 1 topic and as many durable subscribers as you want. Each client would have its own durable subscription, but you only need one topic.
    Oops, you are right, I wasn't clear enough. You can share one topic durably among more than one subscriber, but the problem is that there can only be one active subscriber at a time; this won't work for what I am trying to do. I need to let many people watch something concurrently; they cannot wait in turn. So I guess I have to give each one of them a topic of their own. I do feel this doesn't seem right, especially with lots of subscribers: imagine having to create lots of topics (and I am not sure if there is a limit on how many topics you can create), and you would have to publish the same message to lots of topics. Will there be a performance issue?

  • Simple performance question

    Simple performance question, put as simply as possible: assume I have an int[][][][] matrix and a boolean add. The array has several dimensions.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract a constant value from each element in the array.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition check? I present the two scenarios.
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][iiii] += constant;
                        else
                            matrix[i][ii][iii][iiii] -= constant;
    }

    private void process() {
        if (add)
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][iiii] += constant;
        else
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][iiii] -= constant;
    }
    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case n^d conditions are checked, whereas in the second only one is. It is less elegant, however, but I am willing to do it for a significant improvement.

    erjoalgo wrote:
    I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro-optimization?
    Almost certainly not; the main reason being that
    matrix[i][ii][iii][iiii] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    but I will follow amickr's advice and not worry about it.
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston
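    If you do want a rough empirical answer before deciding, a small timing harness is enough. This is only a sketch: the array size, constant, and repetition count are made-up values, and a serious comparison would use a harness such as JMH to control JIT warm-up and dead-code elimination.

    public class RoughLoopTimer {
        static final int N = 48;          // made-up size, roughly 21 MB of int data
        static final int constant = 3;    // made-up constant
        static boolean add = true;
        static int[][][][] matrix = new int[N][N][N][N];

        // variant 1: branch inside the innermost loop
        static void branchInside() {
            for (int i = 0; i < N; i++)
                for (int ii = 0; ii < N; ii++)
                    for (int iii = 0; iii < N; iii++)
                        for (int iiii = 0; iiii < N; iiii++)
                            if (add) matrix[i][ii][iii][iiii] += constant;
                            else matrix[i][ii][iii][iiii] -= constant;
        }

        // variant 2: branch hoisted out of the loops
        static void branchHoisted() {
            if (add)
                for (int i = 0; i < N; i++)
                    for (int ii = 0; ii < N; ii++)
                        for (int iii = 0; iii < N; iii++)
                            for (int iiii = 0; iiii < N; iiii++)
                                matrix[i][ii][iii][iiii] += constant;
            else
                for (int i = 0; i < N; i++)
                    for (int ii = 0; ii < N; ii++)
                        for (int iii = 0; iii < N; iii++)
                            for (int iiii = 0; iiii < N; iiii++)
                                matrix[i][ii][iii][iiii] -= constant;
        }

        static long timeMillis(Runnable work, int reps) {
            work.run();                               // one warm-up pass so the JIT compiles it
            long start = System.nanoTime();
            for (int r = 0; r < reps; r++) work.run();
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) {
            System.out.println("branch inside : " + timeMillis(RoughLoopTimer::branchInside, 50) + " ms");
            System.out.println("branch hoisted: " + timeMillis(RoughLoopTimer::branchHoisted, 50) + " ms");
        }
    }

    On a typical JIT the two often come out close, which is consistent with Winston's point that the array update dominates the branch.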

  • BPM performance question

    Guys,
    I do understand that ccBPM is very resource hungry, but what I was wondering is this:
    Once you use BPM, does an extra step decrease the performance significantly? Or does it just need slightly more resources?
    More specifically, we have quite complex mappings in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from the performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
    In SXMB_ADM you can set the timeout higher for sync processing.
    Go to Integration Processing in SXMB_ADM and set the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more waiting now: SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
    Make sure that your calling system does not have a timeout lower than the one you set in XI; otherwise yours will go on and finish, and your partner may end up sending it twice.
    When you go for BPM, the whole workflow has to come into action. For example, if your mapping lasts < 1 second without BPM, then in a BPM the transformation step can last 2 seconds plus one second of mapping (that's just an example). So the workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than the same scenario without BPM.
    See the links below:
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
    BPM performance - data aggregation persistence
    Regards
    Chilla..

  • Swing performance question: CPU-bound

    Hi,
    I've posted a Swing performance question to the java.net performance forum. Since it is a Swing performance question, I thought readers of this forum might also be interested.
    Swing CPU-bound in sun.awt.windows.WToolkit.eventLoop
    http://forums.java.net/jive/thread.jspa?threadID=1636&tstart=0
    Thanks,
    Curt

    You obviously don't understand the results, and the first reply to your posting on java.net clearly explains what you missed.
    The event queue is using Thread.wait to sleep until it gets some more events to dispatch. You have incorrectly diagnosed the sleep waiting as your performance bottleneck.

  • Xcontrol: performance question (again)

    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans / second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I also only have 0 to 1% CPU load.
    Is there a way to reduce the cpu-load when using xcontrols? 
    If there isn't and if this is not a problem with my installation but a known issue, I think this would be a potential point for NI to fix in a future update of LV.
    Regards,
    soranito
    Message Edited by soranito on 04-04-2010 08:16 PM
    Message Edited by soranito on 04-04-2010 08:18 PM
    Attachments:
    XControl_performance_test.zip ‏60 KB

    soranito wrote:
    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans / second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I also only have 0 to 1% CPU load.
    Okay, I think I understand the question now. You want to know why an equivalent XControl boolean consumes 10x more CPU resources than the base LabVIEW boolean?
    Okay, try opening the project from my reply yesterday. I don't have access to LabVIEW at my desk, so let's try this. Open up your XControl facade.vi. Notice how I separated your data event into two events? Go to the data change event: when looping back the action, set isDataChanged (part of the data change cluster) to FALSE; for the data input (the one displayed on your facade.vi front panel), set isDataChanged to TRUE. This will limit the number of times the facade loops. It will not drop your CPU from 10% to 0%, but it should drop a little, just enough to give you a short-term solution. If that doesn't work, just play around with the loopback statement; I can't remember the exact method.
    Yeah, I agree an XControl shouldn't be over-consuming system resources. I think XControl is still in a primitive form and I'm not sure if NI is planning on investing more time in bug fixes or enhancements. IMO, XControl is not quite ready for prime time yet. Just too many issues that need improvement.
    Message Edited by lavalava on 04-06-2010 03:34 PM

  • What's YOUR Idea of an "Ideally Organized HD?"

    What's YOUR Idea of an "Ideally Organized HD?"
    I've been giving this a lot of thought lately. Whereas it is obvious that OSX organizes your hard drive better than anything on Windoze, especially when you consider the power derived from using Spotlight, I have been wondering exactly WHAT, WHAT does an Ideally Organized Hard Drive look like? What are its properties? I don't mean how it should look specifically to YOU, the single user. I mean what does an ideally organized Hard Drive look like to everyone running OSX? (which is everyone). What are some of the components of an ideally organized hard drive? What does it look like/feel like? Not necessarily in order of importance, I'll start this one off:
    An Ideally Organized Hard Drive Has These Properties (feel free to add your ideas):
    1) All the music, documents, apps, pictures and movies go into their designated locations, just for starters. You may even want to create another main Category such as I did, and call it "All Talk & Sound FX". Here's where I stick my voice, and talk radio, and verbal jokes etc. for example.
    2) There are NO identical (duplicate) files, but the thorough and profuse use of Alias files is implemented. {{{if you have duplicates, and you update one, you necessarily have to update the other, otherwise you don't have duplicates anymore, right? But if you use an Alias, no matter which file, original or Alias, that you update, BOTH files are updated.}}}
    3) The HD is organized for EASY Backup on a daily basis: Everything new gets placed into an "Everything New" folder (call it what you want) on the Desktop, then this one folder is backed up daily and saved onto an external HD, then loaded back and actually saved onto the HD as new stuff just once a week (in accordance with #1); this is the outcome of doing a Restore from this backed-up "Everything New" folder. Everything goes into this "Everything New" folder on a daily basis; however, Applications are installed immediately whereas everything else just gets popped into the "Everything New" folder for holding.
    4) Many files are annotated in the Get Info Window with easy to find key words and comments. Spotlight will do the rest my friends!
    5) A DMG of the HD (a perfect Clone which is achieved using your Tiger Disk--Disk Utility) is done on a weekly basis (heck, all you have to do is launch the software at night, go to bed, have an automatic shutdown on your Mac for about 3.5 hours later (for a 23GB DMG Disk Image)). {{Note that a Restore from the "Everything New" folder must be done first!, prior to making the DMG}} When this Disk Image is made, it will have All of your Preferences, All of your newly installed applications, All of your Bookmarks, All of your new additions to iCal, All of your new Addresses, EVERYTHING, and therefore these specific folders do NOT have to be backed up **separately** by using this process as I describe.
    Once a week you will Restore from this DMG (which takes an hour if you have previously verified/mounted this image), then delete the week-old Backup of the "Everything New" folder, because your HD now has all these files added to it (remember, the key here is to do a Restore from the "Everything New" folder first, before you make the most recent DMG). You can now also delete any old Disk Images that you want, because you will be making more! (I always keep 2 or 3 on hand). You can now also delete any old "Everything New" backups from your External, because you will be making more of these backups as well!
    6) Your Hard Drive should use copious custom icons, in order to quickly spot and identify files/folders.
    7) You have created shortcuts (Aliases) on the HD, which point to spots on the External HD (which is not only used for Backup as recently described) to facilitate the transfer of large files (example: AIFFs) to/from the external HD. My External HD has a working "Powerbook" folder where these files are saved, keeping my internal HD at a bare minimum of growing size, yet the files are easily uploaded/downloaded between the external and internal, and viewed, when the External is attached (of course) to the internal.
    8) The hard drive lacks any sensitive material whatsoever, i.e. passwords are kept on an external hard drive, and new ones are backed up daily to the Everything New folder. Using a free program such as Password Vault also strengthens this area of security and organization. If the Passwords are kept to an external location, and yet are easily accessed by an Alias, then they are 100% safe to reside on the External, since the External would have to be attached in order for the passwords to be read.
    9) Maintenance is run routinely on the HD, using a program such as Onyx, especially before and after the disk image process. You can also schedule Onyx to run the Apple maintenance scripts automatically, when you are asleep. Also part of this maintenance would be running a program such as Disk Warrior, before and after the disk image process. Onyx and Disk Warrior go hand in hand, and although you will not "see" (visually) HOW your HD has been organized more efficiently, you will experience the benefits of using Disk Warrior (faster/more responsive), which organizes your HD Directory automatically.
    10) Another nice little Utility is SpeedTools, which has a great program for Defraging files. Yes, I've found that Disk Defrag does work. Point #10 does nothing for "organizing", however I make this point because Disk Defrag does indeed help your HD to run more efficiently (thus faster).
    *** Ohh by the way, maybe I'm saying the following as a joke, maybe I'm not. But if you follow my suggestions above, you wouldn't be so paranoid about downloading the latest update to Tiger (or Leopard when that comes out) because the old "Archive & Install" option becomes obsolete. If you run into trouble NOW, using my methods, you now have the peace of knowing that you have a perfectly Cloned Disk Image of your valuable, ideally organized Mac HD, residing on an external drive and just waiting to be called into action! ***
    Finally, please note that I am not telling you how to organize your hard drive, I am only suggesting this as one way to do it, and the way that I do it. If you have something totally different from this, but it works for you, please post that. If you want to add to what I've said, go right ahead! But if you don't agree with something I've said, then by all means offer your own suggestion and be civil about it! Thanks!
    ~ Vito

    You and everyone else that takes the time to read, and understand what I said, and can benefit from this, is WELCOME! ; )
    By the way, I forgot to mention: I use "Micon", a terrific little freeware program (from VersionTracker), to make (initialize) my custom icons. I also use Graphic Converter to make my own original icons of anything I like. Don't underestimate the value of making your own custom icons -- they really stand out from the "standard old blue".
    ~ Vito

  • Got a feature idea? Post your idea or vote on an idea at the ideas site

    We have just launched a new site (http://ideas.acrobat.com) so that we can hear new ideas and have the community vote on which ideas are important. Post your idea or vote on an idea. We hope you’ll check it out.
    Thanks,
    The Acrobat.com Team
    Buzzword is a part of Acrobat.com

    Installing groups is actually already implemented in the beta of 2.6, which is in ftp.archlinux.org/other/pacman
    From the ChangeLog itself:
    Added group handling, so one can run 'pacman -S kde' and
                    install all files from the KDE group
    Not sure about the other things though, those sound like some very good ideas that should be considered and hopefully they aren't too hard to implement.
    Kritoke

  • Please tell me your idea about my manner of using proxy user

    Hello
    Please tell me your opinion of my way of using a proxy user. I don't know whether my method is right; doesn't it have a security weakness?
    Let me explain what I want to do:
    I want my application users to be authenticated by the database. Therefore, in my Oracle database I create a user for every application user; for example, if I have 10 application users, I create 10 users in the database for them and grant them the necessary privileges.
    Now I create a proxy user and grant my users the right to connect through the proxy user,
    for example
    alter user user1 grant connect through user_proxy with role role1 authentication required;
    alter user user2 grant connect through user_proxy with role role1 authentication required;
    alter user user10 grant connect through user_proxy with role role1 authentication required;
    And now in WebLogic I create a DataSource that connects to the database as that proxy user.
    My client application (it's a standalone application) obtains the DataSource from WebLogic, gets the username and password from the operator, and then creates a proxy session.
    Below I've written the application's code:
    // imports needed by this snippet
    import java.util.Hashtable;
    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import oracle.jdbc.OracleConnection;

    // ds, conn, stmt and rs are fields of the surrounding class
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    env.put(Context.PROVIDER_URL, "t3://127.0.0.1:7001");
    try {
        Context context = new InitialContext(env);
        ds = (javax.sql.DataSource) context.lookup("OracleConnection2");
        conn = (OracleConnection) ds.getConnection();
        Properties prop = new Properties();
        String username = getUserNameFromOperator();   // operator enters user1
        String password = getPasswordFromOperator();   // operator enters their password
        prop.put(OracleConnection.PROXY_USER_NAME, username);
        prop.put(OracleConnection.PROXY_USER_PASSWORD, password);
        // open a proxy session as the real end user on top of the connection obtained as the proxy user
        conn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, prop);
        stmt = conn.createStatement();
        rs = stmt.executeQuery("select SYS_CONTEXT('USERENV','PROXY_USER') || '--->' || user from dual");
    My concern is about this section:
    String username=getUserNameFromOperator();  //Operator enter user1
    String password =getPasswordFromOperator(); //Operator enter its password
    prop.put(OracleConnection.PROXY_USER_NAME, username);
    prop.put(OracleConnection.PROXY_USER_PASSWORD,password);
    The operator enters a real database username and password.
    Don't you think this causes a security weakness?
    Do you have a better suggestion for me?
    Thank you

    Gmail offers POP, which means you can use Mail.app like a regular ISP account. If you need an invite to GMail email me at the address on http://Gnarlodious.com/
    Once you have a Gmail address you need to enable POP (which includes SMTP). There is a help link on the Gmail page that allows you to enable POP and other features.
    Once that is turned on, you need to set up the Mail.app for downloading your mail (and uploading). There are excellent instruction pages on Gmail, which I believe someone else gave you.

  • I have a problem after updating to version 6.0.1; calls are not smooth. What are your ideas?

    I have a problem after updating to version 6.0.1; calls are not smooth. What are your ideas?

    Hi all
    I'd like to share that my AV Adapter problem is solved ... I brought it to an Apple Service Center as a last resort after the new iOS v5.1 did not solve mine the way it solved some others'.
    In fact, the Apple Service Center tried my AV Adapter, found that it did not work with their iPhone and iPad either, and flagged it for a replacement.
    Just this week I collected the replacement and tested that the replaced adapter works!
    So this "unsupported device" error msg could also be due to a faulty adapter ... although it is unbelievable that a "simple" wired connector could break down so easily after one use! I never did believe that it could break so easily, as we hardly used it after the first test with our TV over HDMI.
    Closing this chapter of mine finally.

  • I already updated to iOS 6.0.1, but it does not include McTube or MxTube. The free McTube I was using allows only 10 video downloads. I need McTube Pro for free! What's your idea?

    I already updated to iOS 6.0.1, but it does not include McTube or MxTube. The free McTube I was using allows only 10 video downloads. I need McTube Pro for free! What's your idea?


  • HT5699 I forgot the answers to my Apple ID security questions, and I don't see the link or have access to my rescue address

    I forgot the answers to my Apple ID security questions, and I don't see the link or have access to my rescue address.

    Alternatives for Help Resetting Security Questions and/or Rescue Mail
         1. If you have a rescue email address or a Security Questions issue, then see:
             If you forgot the answers to your Apple ID security questions - Apple Support.
             Manage your Apple ID primary, rescue, alternate, and notification email addresses - Apple Support
         2. Fill out and submit this form. Select the topic, Account Security. You must
             have a Rescue Email to use this option.
         3. This is the only option if you do not already have a valid Rescue Email.
             These are telephone numbers for contacting Apple Support in your country.
             Apple ID- Contacting Apple for help with Apple ID account security. Select
             the appropriate country and call. Ask to speak to the Account Security Team.
         4. Account security issues almost always require you to speak directly to an
             Apple representative to securely establish your identity as the account holder.
             You can set it up so that Apple calls you, either immediately or at a time
             convenient to you.
                1. Go to www.apple.com/support.
                2. Choose Contact Support and click Contact Us.
                3. Choose Other Apple ID Topics and choose the appropriate topic for
                    your issue.
                4. Follow the onscreen instructions.
             Note: If you have already forgotten your security questions, then you cannot
             set up a rescue email address in order to reset them. You must set up
             the rescue email address beforehand.
    Your Apple ID: Manage My Apple ID.
                            Apple ID- All about Apple ID security questions.

  • PL/SQL performance questions

    Hi,
    I am responsible for a large, computation-intensive PL/SQL program that performs some batch processing on a large number of records.
    I am trying to improve the performance of this program and have a couple of questions that I am hoping this forum can answer.
    I am running Oracle 11.1.0.7 on Windows.
    1. How does compiling with DEBUG information affect performance?
    I found that my program units (packages, procedures, object types, etc.) run significantly slower if they are compiled with debug information.
    I am trying to understand why this is so. Does debug information instrument the code and result in more code that needs to be executed?
    Does adding debug information prevent compiler optimizations? Both?
    The reason I ask this question is to understand if it is valid to compare the performance of two different implementations if they are both compiled with debug information. For example, if one approach is 20% faster when compiled with debug information, is it safe to assume that it will also be 20% faster in production (without debug information)? Or, as I expect, does the presence of debug information change the performance profile of the code?
    2. What is the best way to measure how long a PL/SQL program takes?
    I want to compare two approaches, such as using a VARRAY vs. a TABLE variable. I have been doing this by creating two test procedures that perform the same task using the two approaches I want to evaluate.
    How should I measure the time an approach takes so that it is not affected by other activity on my system? I have tried using CPU time (dbms_utility.get_cpu_time) and elapsed time. CPU time seems to be much
    more consistent between runs, however, I am concerned that CPU time might not reflect all the time the process takes.
    (I am aware of the profiler and have used that as well, however, I am at the point where profiling is providing diminishing returns).
    3. I tried recompiling my entire system to be natively compiled but, to my great surprise, did not notice any measurable difference in performance!
    I compiled all specifications and bodies in all schemas to be natively compiled. Can anyone explain why native compilation would not result in a significant performance improvement for a process that seems to be CPU-bound when it is running? Are there any other settings or additional steps that need to be performed for native compilation to be effective?
    Thank you,
    Eric

    Yes, debug must add instrumentation; I think that is the point of it. Whether it lowers the compiler optimisation level I don't know (I haven't read anywhere that it does), but surely if you are stepping through code manually to debug it then you don't care.
    I don't know of a way to measure pure CPU time independently of other system activity. One common approach is to write a test program that repeats your sample code a large enough number of times for a pattern to emerge. To find how much time individual components contribute, dbms_profiler can be quite helpful (most conveniently via a button press in IDEs such as PL/SQL Developer, but it can also be invoked from the command line.)
    It is strange that native compilation appears to make no difference. Are you sure everything is actually using it? e.g. is it shown as natively compiled in ALL_PLSQL_OBJECT_SETTINGS?
    I would not expect a PL/SQL VARRAY variable to perform any differently to a nested table one - I expect they have an identical internal implementation. The difference is that VARRAYs have much reduced functionality and a normally unhelpful limit setting.
    Edited by: William Robertson on Nov 6, 2008 11:49 PM

  • Named Searches - Performance Questions

    Dear MDM Pros,
    I have question regarding the performance of Named Searches.
    I have a repository with 600,000 datasets (and various lookup tables); now I need to set up named searches for restricting access to the data.
    I have one field with classifications (a number of 8 digits / 15,000 different classes) on which I want to restrict access. The restriction should work on the first 2 digits of a class.
    Example Classification:
    21010509
    21010503
    21010504
    21010507
    19050711
    19050912
    31020530
    Rule:
    LEFT(CLASSIFICATION,2) >= 19 AND LEFT(CLASSIFICATION,2) <= 21
    So my idea is to use this
    LEFT(CLASSIFICATION,2) >= 19 AND LEFT(CLASSIFICATION,2) <= 21
    Expression in the search and save this as Named Search.
    As I wrote before, I think this is really slow.
    Can anybody give me a hint on how to find a performance-optimized solution for this problem?
    Best regards
    Roman

    Hi Christian,
    here is what SAP said:
    07.10.2008 - 16:22:51 CET - Reply from SAP
    Dear Mr Becker,
    I have looked at this issue that you described.
    I have attached 2 notes: 1077701 and 1138862.
    Note 1077701 describes under " 6) Named Search" that the use of
    expressions is not supported for named search.
    The reason for this is that an expression can result in extensive search operations, as you are experiencing. The response time in MDM Data Manager
    was in the range of 2-3 minutes when I executed a free-form search
    with the expression you used in my environment. My observation in
    our SAPconnect session yesterday was that you have a similar response
    time with MDM Data Manager in your environment.
    Note 1138862 describes under "Situation 2:" that the SRM-MDM Catalog Search UI will show a 5-7 times slower response time than the MDM Data Manager for the same search.
    This is why you experience response times of 10-15 minutes in
    SRM-MDM Catalog Search UI when calling it with a named search that
    uses an expression.
    Kind regards,
    Alexander Ohlgart
    I am running MDM 5.5 SP3 with the latest fix.
    HTH
    Roman
