Performance gain?

We have a package with constant definitions (about 1,400 constants).
A lot of views use these constants and need about 8 seconds to execute.
One of the developers found a performance gain of about a factor of 4 when he replaced the constants in a view with their literal values.
Is there a way to get that performance gain without hardcoding the values?
Thanks in advance

You cannot reference packaged constants directly in SQL.
If you mean you are calling packaged functions that return constants, then yes, I would expect some small overhead for calling them from SQL, which can be magnified to noticeable levels of degradation if the function is called repeatedly (for example once per row). The overhead has been reduced somewhat in later versions of Oracle.
There are also reasons why the optimizer might choose a more efficient plan when literal values are substituted up front for values it cannot see, since you are giving it more information.
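As a rough illustration of where the per-row overhead comes from and a common workaround, here is a minimal sketch. The package, table and column names (const_pkg, orders, status) are made up for the example, not taken from the original poster's schema; the point is the scalar-subquery wrapper, which lets Oracle cache the function result instead of re-evaluating it for every row.

-- Hypothetical package: a constant plus a tiny function wrapper, since SQL
-- cannot see the constant itself but can call the function.
CREATE OR REPLACE PACKAGE const_pkg AS
  c_status_open CONSTANT VARCHAR2(10) := 'OPEN';
  FUNCTION status_open RETURN VARCHAR2;
END const_pkg;
/
CREATE OR REPLACE PACKAGE BODY const_pkg AS
  FUNCTION status_open RETURN VARCHAR2 IS
  BEGIN
    RETURN c_status_open;
  END;
END const_pkg;
/
-- Slow pattern: the function may be re-executed for every row the view touches.
--   SELECT ... FROM orders o WHERE o.status = const_pkg.status_open;
-- Usually faster: wrap the call in a scalar subquery so Oracle caches the
-- result and calls the function only a handful of times per statement.
SELECT *
FROM   orders o
WHERE  o.status = (SELECT const_pkg.status_open FROM dual);

Depending on the release, marking the wrapper function DETERMINISTIC (or, on 11g and later, using the PL/SQL function result cache) attacks the same per-row call overhead. Neither approach gives the optimizer the literal value at parse time, though, so the plan-quality effect mentioned above can still favour the hardcoded version.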

Similar Messages

  • Not able to get performance gain using Multidimensional Clustering

    Hi All,
    We are trying to test the effect of Multidimensional Clustering (MDC) on the
    performance of queries, infocube/DSO loading, and deletion of requests
    from BW objects.
    Unfortunately, we are not able to see any performance gain when we use
    MDC.
    We are using the following steps to test MDC:
    1> We created a copy infocube ZPOS10_CP from the original infocube
    ZPOS10_V5.
    2> In the new infocube ZPOS10_CP, we switched on the MDC settings.
    3> We are using the following dimensions as MDC dimensions:
           1> SID_0CALMONTH Calendar Year/Month
           2> Dimension ZPOS10_CP5 (this dimension is based on Site, Dist.
    Channel, etc.).
    Dimensions were selected based on the consideration that we should use
    those dimensions which are often used in query restrictions.
    When we load the same file into both the original infocube with
    no MDC and the new infocube with MDC, there does not seem to be any major
    improvement. Both loads take almost the same time.
    Can you please tell us how we can use MDC effectively? Is there any
    setting we need to make regarding the extent size of the tablespace, etc.?
    Kindly help us resolve this issue.
    Regards,
    Nilima Rodrigues

    Hi,
    In MDC we do not create any new dimensions.
    Basically, we just select some dimensions of the infocube as MDC dimensions.
    Based on the suggestions provided by SAP on MDC, we have selected 0CALMONTH as one of the MDC dimensions.
    The other dimensions were chosen on the following recommendations provided by SAP (a rough sketch of what MDC means at the database level follows after this post):
    ●      Select dimensions for which you often use restrictions in queries.
    ●      Select dimensions with a low cardinality.
    Regards,
    Nilima Rodrigues
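    For reference, and purely as an illustration (the table and column names below are invented, not the real ZPOS10 objects), an MDC-enabled fact table on DB2 is simply a table created with an ORGANIZE BY DIMENSIONS clause, which is roughly what BW generates behind the MDC setting:

    -- Hypothetical DB2 DDL sketch: rows are physically clustered into blocks
    -- by the chosen MDC columns, e.g. calendar month and a site/channel key.
    CREATE TABLE fact_zpos10_cp (
      sid_0calmonth   INTEGER       NOT NULL,
      dim_site_dc     INTEGER       NOT NULL,
      amount          DECIMAL(17,2)
    )
    ORGANIZE BY DIMENSIONS (sid_0calmonth, dim_site_dc);

    Because the clustering mainly enables block-level elimination when queries restrict on the MDC columns (and faster deletion of whole requests or months), it does little for plain load throughput, which would explain why both loads take about the same time.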

  • Materialized views performance gain  estimation

    Hi;
    I have to estimate the performance gain of a materialized view for a particular query without creating it.
    That means I have:
    - a query with its initial execution plan
    - a SELECT statement which is considered a probable MV
    I need to show what the performance gain of creating the MV would be for that query.
    I have looked at DBMS_MVIEW.EXPLAIN_REWRITE, but it shows the performance gain only for an MV that has already been created.
    Any ideas?
    Thanks

    Hi Bidi,
    "For the first (aggregation ones), I don't know how to estimate the performance gain."
    It's the difference in elapsed time between running the original aggregation and fetching the pre-computed summary, which is usually a single logical I/O (if you index the MV).
    "I want to determine the impact of creating materialized views on the performance of a workload of queries."
    Perfect! Real-world workload tests are always better than contrived test cases.
    If you have the SQLAccess Advisor, you can define a SQL Tuning Set and run a representative benchmark with dbms_sqltune (a rough sketch of the advisor route follows at the end of this post):
    http://www.dba-oracle.com/t_dbms_sqltune.htm
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference":
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
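    A minimal sketch of the advisor route, under the assumption that the SQL Access Advisor is licensed; the task name and the query text are placeholders, not objects from the original poster's system:

    -- Ask the SQL Access Advisor to analyse one statement and recommend
    -- access structures (including materialized views) without creating them.
    BEGIN
      DBMS_ADVISOR.QUICK_TUNE(
        advisor_name => DBMS_ADVISOR.SQLACCESS_ADVISOR,
        task_name    => 'MV_ESTIMATE_TASK',
        attr1        => 'SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id');
    END;
    /
    -- The estimated benefit of each recommendation (e.g. a proposed MV).
    SELECT rec_id, rank, benefit
    FROM   user_advisor_recommendations
    WHERE  task_name = 'MV_ESTIMATE_TASK';
    -- The full DDL the advisor proposes, if you want to inspect the MV definition.
    SELECT DBMS_ADVISOR.GET_TASK_SCRIPT('MV_ESTIMATE_TASK') FROM dual;

    The BENEFIT column gives the advisor's estimated cost improvement, which is about as close to "performance gain without creating the MV" as the built-in tooling gets.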

  • Performance Gain for IRIX servers if Personal flag is removed from magnus.conf file.

    As shipped by SGI, some of the server products have a flag set in the
    magnus.conf file that gives the servers a small footprint, generally
    less than 1 megabyte of memory. This flag is the Personal flag, and
    looks something like:
    MaxProcs 1
    MinThreads 1
    MaxThreads 8
    Personal on
    For one's own personal use, this is fine. But if CGIs are called, or
    if the site sees more traffic, then the flag may need to be removed,
    so that it looks like:
    MaxProcs 1
    MinThreads 1
    MaxThreads 8
    A significant performance gain (and a corresponding increase in
    memory used) would also be seen by increasing MaxThreads.
    For more complete tuning recommendations on SGI/IRIX, please see SGI's
    "Tuning IRIX 6.2 for a Web Server" page.

    That's a comment in the file. It has no effect at all.

  • LDOMs Manager v3.1: vnet performance gain

    Hi all,
    From this blog: https://blogs.oracle.com/raghuram/entry/ldoms_virtual_network_performance_greatly1 I can see that there are certain prerequisites to be met before I can get the vnet performance gain with LDOMs Manager v3.1. I cannot find anything anywhere about the patches mentioned for Solaris 10, except a patch for the control domain. Are the driver changes incorporated in a Solaris 10 kernel patch, or is it sufficient to set extended-mapin-space=on in the guest domains to get the advertised performance gain?

    Hi, DGC,
    You'll need both the setting and the kernel patch, because the patch replaces the network drivers with faster ones.
    regards, Jeff

  • What EBS performance gains can I expect moving non-x86 (sun?) to x86?

    Hi,
    I was hoping some of you would share any general performance gains you encountered by moving your EBS from non-x86 to x86. I'm familiar with the benchmarks from tpc.org and spec.org; the users, however, measure performance by how long it takes for a request to complete. For example, when we moved our EBS from a two-node Sun E3500 (4 x 450 MHz SPARC II, 8 GB memory) to a two-node V440 (4 x 1.28 GHz SPARC IIIi, 8 GB memory), performance doubled across the board with a three-year payback.
    I am trying to guesstimate what performance increase we might encounter, if any, by moving from Sun SPARC to x86. We'll be doing our first dev/test migration in the first half of '08, but I thought I'd get a reading from all of you about what to expect.
    Right now we're planning on going with a single node with six dual-core 3 GHz x86 CPUs and 16 GB of RAM. The storage is external RAID 10. We process approximately 1,000 payroll checks bi-weekly. Our 'Payroll Process' takes 30 minutes to complete; similarly, 'Deposit Advice' takes about 30 minutes. Our EBS database is a tiny 200 GB, we have a mere 80 concurrent users, and we run HRMS, PAY, PA, GL, FA, AP, AR, PO, OTL, and Discoverer.
    Thanks for your feedback. These forums are great.
    L5

    Markus and David,
    First let me thank you for your posts. :-).
    Markus:
    Thank you for the tip. However, I usually do installations with a domain adm user. It does a lot of user switching, yes, but it only switches to users created by SAPINST; most of the time it is switching to <sid>adm, which sounds perfect. At the time of my post I had been setting some environment variables to try to get the procedure to distribute the various pieces and bits (saparch, sapbackup, saptrace, origlogs and mirror logs, datafiles, etc.) exactly where I wanted them and not where the procedure wants them, so I ended up using <sid>adm to perform the DB instance installation and not the domain adm user I had installed the CI with (I forgot to change back). When I noticed, I figured it wouldn't make a difference since it usually switches to <sid>adm anyway. However, for the next attempts I settled on my initially created domain adm user, with no change in the results. OracleService<SID> usually logs on as a system account, so the issue doesn't arise, I think.
    and
    David:
    The brackets did it. Thank you so much. It went further and only crashed later. I don't usually potter around SDN, so I'm not familiar with the workings of this: I don't know how to reply separately to the posts, and I don't know how to include a properly formatted post (I've seen the Plain Text help but I hate to bother with sidetrack details), so I apologize to all for the probably-too-compact jumble that will come out when I post this. I am now looking at the following problem (same migration to 64-bit), so I fear I may have to close this post and come back with a new one if I can't solve the next issue.

  • RAID 1 for performance gain??

    In contrast to what I would have thought, it seems that the OS X implementation of RAID 1 doesn't give a performance gain, according to this article:
    http://docs.info.apple.com/article.html?artnum=106594
    Since the article is very old, does anyone know whether this info is still up to date? Posts I've found in the archives indicate it is not.
    thanks
    arri

    Hi arri;
    If you are looking to improve performance for an application that is limited by how quickly it can get data to or from a disk, then RAID 0 (striping) is the solution. For applications that are limited by disk I/O, RAID 0 can help because it allows the system to do reads and writes to multiple disks in sequence, overlapping them for increased performance. This is only true if the performance of the application is limited by its access to data.
    Allan

  • Performance gain 7800 compared to 6600 card

    Hi,
    It seems that some people here have upgraded from the 6600 card to the 7800 card. I would like to know about the performance gain in Aperture 1.5 compared to the 6600 card.
    Is it worthwhile to upgrade the card, or is it better to wait for the announced (rumoured?) X1900 G5 edition? Let's assume that the X1900 G5 arrives, of course.
    It looks like welovemacs.com is selling the 7800 cards. That seems like a more stable option than buying one of the reflashed cards.
    Some info on how the 6600 card performs:
    - D2X NEF images: sliders and cursor are not close to real-time. Some sliders, like the shadow/highlight tool, are very slow.
    - D70 NEF images: sliders still not real-time, but usable.
    Previews are all turned off
    Jochem

    We're talking about performance now (visible FPS), not tearing. In fact, I hadn't witnessed any perceptible tearing until I turned off VSync, and with SNA enabled I still couldn't watch 1080p without some lag. This is all with the environment variable you mentioned.
    Everyone I know with a Sandy Bridge GPU has perfect stability and performance; I'm sure you're having an amazing time with it, SNA or not. The premise of this post is that even the previous generation can play 1080p videos and run GNOME Shell smoother than silk. :\ Unfortunately, it seems Intel on Linux hasn't had as much attention for Ironlake. I really don't want to shell out even more money for an Ivy Bridge laptop this year, but I may just end up doing that if the situation doesn't improve. It was a big letdown considering Intel's amazing support in the past.
    I guess it's really not too bad, but as a Multimedia Designer, I need my desktop compositing to be totally slick, on principle. I figured an i5 with higher specs than my old laptop would be enough. Oh well, I guess I can't always have my GNOME and eat it. I will continue to investigate and maybe get in contact with the mailinglist.
    UPDATE: Tried to use JHBuild with no success. Aside from the not-so-necessary packages that needed Python2, there are some utterly necessary packages that I just couldn't build due to unrelated build errors. I don't even think a GNOME developer could make out how to fix them very easily, so I'm gonna' give up on that road for the moment. I'm going to see if there's a way to upgrade just gnome-shell, mutter, and clutter through the AUR.
    Last edited by ScionicSpectre (2012-02-16 05:39:28)

  • Add FW 800 PCI to Quicksilver 2002 -- Performance Gain over BuiltIn 400?

    Greetings!
    I just got an external FireWire 800 enclosure and installed two Seagate IDE ATA hard drives in it (and am running it off its FireWire 400 6-pin connection).
    I am wondering if adding a FireWire 800 PCI card will give me much read/write performance over my built-in FireWire 400. It's common for me to move 4 GB video files from the internal ATA drives to my external FireWire drives.
    Quicksilver 2002, 10.4.3.
    Any comments welcome especially if you've compared my options on your own hardware.
    Finally:
    Can you recommend a FW 800 PCI card that works (I only need one FW 800 connection on the card)?
    I am using a new enclosure, a PDS2GO35D FireATA Enhance FWB2ATA35D with an Oxford 912 chip and 1.2 firmware (it was recommended on another thread and seems to work fine).

    FW800 is about twice as fast as FW400
    Thanks for your help!
    Still, running a FW 800 hard drive off a PCI card, on a slow (33 MHz) PCI bus, adds some mystery to the formula compared with running the same hard drive off the built-in FW 400 (with other FW 400 HDs attached)...
    I would guess some performance gain is likely; the question then becomes how much improvement, and whether it is worth the time, effort and money to find out for myself.
    I'm sure someone has done it and could save me from a fire drill if the gain is not that much, or if it doesn't work.
    Dual 1GHz Quicksilver 02    

  • What is the performance gain on MBP (1) 1GB SODIMM vs (2) 1GB SODIMMS

    I just added a second 1 GB SODIMM to my 1 GB MacBook Pro (for a total of 2 GB of ram), but I am not sure that I gained any performance.
    I ran several benchmarks (XBench 1.2, CineBench 9.5, and GeekBench) but I only detected a small change in performance with (2) 1GB modules vs (1) 1GB module. With CineBench no difference was detected.
    Is there a special firmware update that will enable additional performance (i.e. Dual Channel support) with (2) matched SODIMMs installed?
    What program will best show the performance increase?
    How much of a performance increase is expected?
    MacBook Pro 2.0GHz   Mac OS X (10.4.6)   Upgrade 1 GB to 2 GB of ram

    Matching or paired RAM modules will provide about a 3-5 percent improvement in RAM based operations. Graphic operations may show little or no improvement because operations are performed in VRAM rather than core RAM.

  • No performance gain when using local interfaces

    Hello,
    I'm doing some tests to compare performance between remote EJB interfaces and local EJB interfaces.
    I have two stateless session beans, EJB1 and EJB2. EJB1 calls a method on EJB2; this method receives one object as its only parameter and returns it immediately. The parameter is a big object (~700 KB). My test consists simply of making 1000 calls from EJB1 to EJB2, once with remote interfaces and once with local interfaces. For both tests, the EJBs run in the same container, same VM.
    The results show absolutely no difference between the remote and the local interface!
    As I found these results a bit surprising, I changed the serialization method of my parameter object this way:
    private void writeObject(java.io.ObjectOutputStream out) throws IOException {
        System.out.println("writeObject(MyBigObject)");  // trace line: printed only if the object is actually serialized
        out.defaultWriteObject();
    }
    just to check whether my object is serialized when using the remote interface. And the answer is no.
    So the question is: is there an "undocumented optimization" of the stub/skeleton generated by WebLogic which makes local calls when a remote method is called inside the same VM?
    Some precisions:
    - I am using WebLogic 8.1 SP2.
    - When calling my EJB2 remotely from an external batch (running in a separate VM), I do see the message "writeObject(MyBigObject)", so the serialization is done in that case.

    <Frédéric Chopard> wrote in message news:[email protected]...
    > So the question is: is there an "undocumented optimization" of the stub/skeleton generated by WebLogic which makes local calls when a remote method is called inside the same VM?
    >
    > Some precisions:
    > - I am using WebLogic 8.1 SP2.
    > - When calling my EJB2 remotely from an external batch (running in a separate VM), I do see the message "writeObject(MyBigObject)", so the serialization is done in that case.
    WebLogic 5.x, 6.x and 7.x do call-by-reference for co-located EJBs by default. 8.1 has this behavior turned off by default. You may experience the call-by-reference optimization in 8.1 only if it has been turned on explicitly in the deployment descriptor.
    Hope this helps.
    Regards,
    Slava Imeshev

  • Any performance gains after upgrade to Oracle 10 ?

    Hello,
    We have been using EBS 11.5.9 (with Oracle 9i). Has anybody seen any improvement from a performance point of view after upgrading to Oracle 10gR2 (but without upgrading EBS to R12)?
    Regards

    Some things have been faster for us and some things have been slower; in general it seems the same or better. When we get a chance to look at the jobs that are performing poorly, I'm sure we will be able to make them perform. Make sure you find and apply all of the performance patches for the modules you have implemented; we are missing one right now and it is hurting our Configurator developers' instance. I have actually set optimizer_features_enable back to 9i. This is something I would not recommend doing in a production environment, and we are only doing it until we can get the patch that fixes the optimizer in 10g applied.
    One surprise for us was the increase in memory required. In our test and development instances we have found that a 1 GB SGA is as small as you can go with 10g and still have reasonable performance; we had a number of 9i EBS databases around 300-400 MB SGA that performed just fine. This limits the number of environments we can squeeze onto a single server (not really a big deal if you only have a few dev, test and training environments).
    In our production environment, more memory was consumed in order to keep the same size data cache. Where we could get away with a shared pool of 4 GB under 9i, it grows to over 6 GB in 10g. It looks like more PGA is also being used. 10g, along with RUP5 and upgrading Java from 1.3 to 1.5, has increased our memory consumption. We increased memory from 64 GB to 96 GB and we are using a large portion of the additional 32 GB. We were right at the limit with 64 GB in our 9i production instance, but we had never experienced the paging that we did with 10g. Now at 96 GB we no longer see any paging.

  • Are there large performance gains in labview with a dual processor?

    I have an application that is very CPU intensive, using four NI 6052E boards with lots of data acquisition going on. Does anyone have experience with dual processors and LabVIEW? What processors? What OS? What are the complications of switching?

    You might be interested in the following documents:
    Using LabVIEW to Create Multithreaded Applications for Maximum Performance and Reliability
    Optimizing Test with Multiprocessor Machines
    LabVIEW and Hyperthreading
    You'll want to look at multithreading along with multiprocessing. On multiple-processor systems, multithreading is almost always beneficial, because it allows multiple CPU-intensive threads to run simultaneously. However, it is the operating system's responsibility to schedule threads on the processors, and it may not always schedule the threads of your application on separate processors.
    On either single- or multiple-processor systems, there is often a benefit if you are sharing time between CPU-intensive threads and I/O-intensive threads. While one thread is reading from or writing to the network (or GPIB, or hard drive, or DAQ device), other CPU-intensive threads can keep running. In a single-threaded system, the CPU would sit idle at times, waiting for the I/O to complete.
    Hope this helps.

  • Maximum Performance gain for 10g Rel 2 on Win2003

    Hi All,
    For a 4-CPU Windows 2003 Advanced Server machine with 10g Release 2, 8 GB RAM, SGA_TARGET = 1.4G and PGA_TARGET = 1.4G, which runs procedures involving expensive queries and lots of INSERTs (ETL jobs), what should the value of PARALLEL_MAX_SERVERS be? Currently I have set it to 4 (as against the default of 80), and through the 10g DB Control dashboard I can see only 26% CPU utilization. All my data files are in just one partition; do I need to have more disk partitions, etc.?
    Do advise how I can get the maximum performance gain out of this setup (a rough sketch of the usual checks follows at the end of this thread).
    Regards,
    Gaurav S.

    Just to add, we have implemented RAID 6 on the server machine (it's an 8-disk, 1 TB hard disk array).
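    A rough sketch of the kind of checks this usually involves; the numbers and object names below are illustrative assumptions, not a recommendation for this particular box, and the ALTER SYSTEM assumes an spfile is in use:

    -- Let the parallel server pool grow beyond 4; something on the order of
    -- 2-4 x CPU count is a common starting point on a 4-CPU machine.
    ALTER SYSTEM SET parallel_max_servers = 16 SCOPE = BOTH;

    -- Parallel DML has to be enabled per session before ETL INSERTs can use it.
    ALTER SESSION ENABLE PARALLEL DML;

    -- Direct-path, parallel insert (target/staging tables are placeholders).
    INSERT /*+ APPEND PARALLEL(t, 4) */ INTO target_table t
    SELECT /*+ PARALLEL(s, 4) */ * FROM staging_table s;
    COMMIT;

    -- Verify that parallel servers are actually being started and used.
    SELECT statistic, value FROM v$pq_sysstat WHERE statistic LIKE 'Servers%';

    Whether any of this helps also depends on the I/O layout: with all data files on a single RAID 6 volume, the parallel slaves can easily end up waiting on the same disks, so the 26% CPU figure may simply reflect an I/O-bound load.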

  • Performance gain at merging parent/child - any experience?

    Table A has PK = (ID) and one attribute, Type, that tells in which child table to find the record.
    The child tables B1, B2, ..., B4 have PK = (TABLE_A_ID_FK, By_ID) and other attributes (around 10, no LOBs or long strings).
    I know B1 is about to have 8 million rows and B2 will have 3 million rows.
    I do not know the reason for having table A, because its ID is hidden from the end user.
    The tables C1, ..., Cx reference TABLE_A_ID_FK.
    Do you think it would be better to add a "Type" attribute and "TABLE_By_ID_FK" to the tables C1, ..., Cx (replacing TABLE_A_ID_FK) and then drop table A?
    The situation is not yet in production, and I think table A will be a bottleneck, as B1, ..., B4 are frequently accessed and A will have plenty of rows. On top of that, it is not very meaningful to keep this table (it came out of an "abstract" class in the UML diagram of the application developers).

    I thought about that as an option. Create several regions on page 0 that were defaulted to not display and then have them appear as needed. That may be the answer.
    My level of sophistication with APEX is good when it comes to db design, app design and UI, but I am not a web developer, so the finer points of HTML or Java are somewhat a mystery to me.
    I saw some write-ups on htmldb_get as being able to call shared processes or application pages. I know I can redirect to an APEX URL; I wondered if there was a way to call a region as an option.
    Thanks for the input,
    Sam
