The plot gets sicker (hardwired performance issues)

I figured something else out that may be of use to a bunch of people. The radio channel has an effect on performance. I was able to get near-normal throughput from my AirPort Extreme (draft-N, gigabit) in b/g compatibility mode, with an AirPort Express client for AirTunes. The secret: Channel 11. For some bizarre reason, the other channels limit the hardwired throughput to sub-megabit. If someone told me their hardware performed like this, I would think they were on drugs. The effects are repeatable and reversible.
I should clarify: I'm talking about hardwired performance, not wireless performance.

Still not working. I got the device authorized for use on the campus network and had it checked out to make sure it was set up correctly. It worked fine on my friend's computer when they were in my room; it picked up my signal and connected just fine. My computer, on the other hand, still gets the blinking signal indicator, both in the AirPort Express icon on the toolbar and in Internet Connect, and because it sees the signal as blinking on and off, it's never online long enough to connect. So we've narrowed the problem down to "not the college network" and "not the AirPort Express," and therefore "just my computer," but I can't figure out for the life of me what the problem is. I also downloaded the AirPort Express firmware update to make sure that wasn't the problem. Not helpful.

Similar Messages

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really, I'm not sure what I'm going to do with my issue, but I have some options.  I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report; the users want to see all agreements and all conditions related to the updating of rebates and the affected invoices. From a technical perspective: ENT6038-KONV-KONP-KONA-KNA1. These are the tables I have to hit. The problem is that when they retroactively update rebate conditions, they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
    I've tried everything around the code.  If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report. This would work, but I won't know what data needs to be loaded until report run time. They put in a date. I simply can't preload everything. I don't like this option much.
    2) Write a function module to do this work. When the user clicks the button to get this particular data, it will launch the FM in the background and e-mail them the results. As you know, a background job won't time out. So far this is my favored option.
    Any other ideas?
    Oh... nope, BI is not an option; we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents: firstly, I totally agree with Derick that it's probably a good idea to go back to the business and justify their requirement in regards to reporting and "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements much; in my experience they neither understand (much of the) technology nor want to hear about the technical limitations of a system. They want what they want, if possible yesterday!
    So, about dealing with performance issues within ABAP: I'm sure you are already using efficient programming techniques like hashed internal tables with unique keys and accessing table rows via field symbols, but what I would suggest is to look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to do this a couple of times in the past when dealing with massive amounts of data, and I found Extracts to be very efficient in terms of performance. A good point to remember when using Extracts, quoting from SAP Help: "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.

  • How to resolve most Oracle SQL and PL/SQL performance issues with the help of a quick checklist/guidelines?

    Please go through the important checklist/guidelines below to identify the issue in any performance case and reach a resolution quickly.
    Checklist for Quick Performance Problem Resolution
    ·         get the trace, code and other information for the given performance case (a trace-collection sketch follows this checklist)
              - Latest Code from Production env
              - Trace (sql queries, statistics, row source operations with row count, explain plan, all wait events)
              - Program parameters & their frequently used values
              - Run Frequency of the program
              - existing Run-time/response time in Production
              - Business Purpose
    ·         Identify the most time-consuming SQL (anything taking more than 60% of program time) using trace and code analysis
    ·         Check that all mandatory parameters/bind variables map directly to index columns of large transaction tables, without any functions applied
    ·         Identify the most time-consuming operation(s) using the Row Source Operation section
    ·         Study the program parameter inputs directly mapped to the SQL
    ·         Identify all input bind parameters used by the SQL
    ·         Is the SQL query returning a large number of records for the given inputs?
    ·         What are the large tables, and which of their columns are mapped to input parameters?
    ·         Which operation scans the highest number of records in the Row Source Operation/Explain Plan?
    ·         Is the Oracle cost-based optimizer using the right driving table for the given SQL?
    ·         Check the time-consuming indexes on large tables and measure index selectivity
    ·         Study the WHERE clause for input parameters mapped to tables and columns, to confirm correct/optimal index usage
    ·         Is the correct index being used for all large tables?
    ·         Is there any full table scan on large tables?
    ·         Is there any unwanted table being used in the SQL?
    ·         Evaluate the join conditions on large tables and their columns
    ·         Is a full table scan on a large table caused by the use of non-indexed columns?
    ·         Is there any implicit or explicit conversion preventing an index from being used?
    ·         Are the statistics of all large tables up to date?
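    For the trace-gathering step at the top of this checklist, a minimal SQL*Plus sketch (the tracefile identifier is only an example; this traces the current session):
    -- Tag the trace file so it is easy to find in the diagnostic directory
    ALTER SESSION SET tracefile_identifier = 'perf_case';
    -- Enable extended SQL trace for the current session, with waits and binds
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
    -- ... run the slow program or statement here ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
    -- Then format the raw trace with tkprof, e.g.
    --   tkprof <tracefile>.trc report.txt sort=exeela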
    Quick Resolution tips
    1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing (see the sketch below)
    2) Use data caching techniques/options to cache static data
    3) Use pipelined table functions whenever possible
    4) Use global temporary tables or materialized views to process complex records
    5) Avoid multiple network trips for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
    6) Use an EXTERNAL table to build an interface rather than creating a custom table and program to load and validate the data
    7) Understand Oracle's cost-based optimizer and tune the most expensive SQL queries with the help of the explain plan
    8) Follow Oracle PL/SQL best practices
    9) Review the tables and indexes used in the SQL queries and avoid unnecessary table scanning
    10) Avoid costly full table scans on big transaction tables with huge data volumes
    11) Use appropriate filter conditions on index columns of seeded Oracle tables directly mapped to program parameters
    12) Review the join conditions in the existing query's explain plan
    13) Use Oracle hints to guide the cost-based optimizer to choose the best plan for your custom queries
    14) Avoid applying SQL functions to index columns
    15) Use appropriate hints to guide the Oracle CBO to choose the best plan to reduce response time
    Thanks
    Praful
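    A minimal sketch of tip 1, assuming hypothetical stage_orders and orders tables (all names here are made up for illustration):
    DECLARE
      -- Hypothetical staging cursor; table and column names are illustrative only
      CURSOR c_src IS
        SELECT id, amount FROM stage_orders;
      TYPE t_ids  IS TABLE OF stage_orders.id%TYPE;
      TYPE t_amts IS TABLE OF stage_orders.amount%TYPE;
      l_ids  t_ids;
      l_amts t_amts;
    BEGIN
      OPEN c_src;
      LOOP
        -- Fetch in batches of 1000 rows instead of one row at a time
        FETCH c_src BULK COLLECT INTO l_ids, l_amts LIMIT 1000;
        EXIT WHEN l_ids.COUNT = 0;
        -- One bulk UPDATE per batch instead of one UPDATE per fetched row
        FORALL i IN 1 .. l_ids.COUNT
          UPDATE orders
             SET amount = l_amts(i)
           WHERE id = l_ids(i);
      END LOOP;
      CLOSE c_src;
      COMMIT;
    END;
    /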

    I understand you were trying to post something helpful to people, but sorry, this list is appalling.
    1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
    No, use pure SQL (see the sketch after this post).
    2) Use data caching techniques/options to cache static data
    No, use pure SQL, and the database and operating system will handle the caching.
    3) Use pipelined table functions whenever possible
    No, use pure SQL.
    4) Use global temporary tables or materialized views to process complex records
    No, use pure SQL.
    5) Avoid multiple network trips for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
    No, use pure SQL.
    6) Use an EXTERNAL table to build an interface rather than creating a custom table and program to load and validate the data
    Makes no sense.
    7) Understand Oracle's cost-based optimizer and tune the most expensive SQL queries with the help of the explain plan
    What about using the execution trace?
    8) Follow Oracle PL/SQL best practices
    Which are?
    9) Review the tables and indexes used in the SQL queries and avoid unnecessary table scanning
    You mean design your database and queries properly? And table scanning is not always bad.
    10) Avoid costly full table scans on big transaction tables with huge data volumes
    It depends on whether that is necessary or not.
    11) Use appropriate filter conditions on index columns of seeded Oracle tables directly mapped to program parameters
    No. Consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan. There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of the data, as well as the volumes of data being searched and the most common search requirements.
    12) Review the join conditions in the existing query's explain plan
    Well, if you don't have your join conditions right then your query won't work, so that's obvious.
    13) Use Oracle hints to guide the cost-based optimizer to choose the best plan for your custom queries
    No. Oracle recommends that you do not use hints for query optimization (it says so in the documentation). Only certain hints, such as APPEND, which relate to specific operations like inserting data, are generally acceptable. Oracle recommends you use the query optimization tools to help optimize your queries rather than hints.
    14) Avoid applying SQL functions to index columns
    Why? If there's a need for a function-based index, then it should be used.
    15) Use appropriate hints to guide the Oracle CBO to choose the best plan to reduce response time
    See 13.
    In short, there are no silver bullets for dealing with performance. Each situation is different and needs to be evaluated on its own merits.
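    To make the "use pure SQL" objection concrete against the bulk-processing sketch above (same hypothetical tables), the whole PL/SQL loop collapses into a single statement, and the database does the batching itself:
    MERGE INTO orders o
    USING stage_orders s
       ON (o.id = s.id)
    WHEN MATCHED THEN
      UPDATE SET o.amount = s.amount;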

  • Getting map extent / performance issues

    Hi all!
    We're considering migrating to Oracle Maps and MapViewer. While testing the system I found two major problems, and any help with either would be appreciated:
    1. Our map page makes use of MapViewer and the Oracle Maps JavaScript API. Is there any way to get the extent of the current map view? Let's say the user zooms somewhere or drags the map around. How can I then find out the coordinates of, e.g., the lower left and upper right corners of the map image (the center coordinates alone are not enough)?
    2. Some of our ThemeBasedFOIs consist of numerous (>500) objects. Switching them on (.setVisible(true)) takes a long while, which is not acceptable for our needs. Point and polygon shapes are especially affected. Is there any way to improve the performance at that point? I tried using the undocumented .setMaxWholeImageLevel method, but this doesn't help, as I often get errors ("image size is too big"). Displaying them as additional BaseMaps is not an option either, as we want to be able to catch clicks on individual objects.
    I hope my questions were not too annoying...
    Thanks in advance,
    Scotty

    Hi Scotty,
    on this post:
    Re: thema MVThemeBasedFOI
    you may find some information including an undocumented method MVMapView.getMapWindowBBox to retrieve the map area.
    Joao

  • Performance Issue in Dashboard using SAP BW NetWeaver Connection

    HI Experts ,
    We developed a dashboard based on BW queries; however, it takes a considerable amount of time to execute.
    We are using BO Dashboards SP2, BO 4.1 and BI system 7.01.
    We are looking for a few clarifications on SAP BO Xcelsius dashboards.
    Though we know the limitations on component count and data volume that can badly affect the performance of a dashboard, we do have a requirement to handle huge data volumes and multiple components. Our source data lies in the SAP BI system, and we are using BICS connectivity / WebI with QAAWS to update data in the BO dashboard.
    Our requirement is complex: we need to meet user expectations for a complete view of 75 KPIs in a single dashboard.
    We have scenarios like YTD, QTD and MTD and other complex calculations in the BEx query.
    Here are my questions,
    Is there any way to provide complete functionality using large data sets to the users with the current architecture without any performance issues?
    Are there any third party tools which can be used with BO Dashboard for the performance improvement and handling huge volumes?
    Do you suggest any alternate solution for complete functionality?
    Many thanks for your inputs in advance!
    Regards
    Venkat

    Hi Rajesh,
    Thank you so much for your response.
    I have tried the way you suggested, but my issue is that I have a prompt in my WebI report based on month selection, and it is a single-valued prompt,
    so I was able to schedule my report for only one month, whereas my dashboard needs to show one year of data based on the months the user selects in the dashboard.
    Though I use the WebI instance data in the dashboard, I am getting only one month's value, and I am not able to associate the selected month with the WebI instance in the dashboard.
    Is there any option to schedule the WebI report for different months, so that the dashboard picks up the 12 monthly instances and the combo box selection in the dashboard is associated with them?
    Please help me with your thoughts.

  • How should I report forum performance issues?

    The forums rely heavily on the caching features of browsers to improve the speed of page rendering. Performance of these forums should greatly improve after a few pages, because more and more of the images, CSS and JavaScript are cached in the browser. As a consequence, when reporting forum performance issues the report should include some information on the state of the browser cache, to determine whether the issue is a browser issue or a server issue. Such detailed information is generally not available from just watching the browser screen; it needs to come from specialized tools such as performance monitor plugins and recording proxies.
    The preferred method for reporting performance issues is to use the speed-reporting features built into, or available as a plugin for, a browser, for both the page you want to report a problem with and several reference pages on the site. Detailed instructions are listed below, separated out by browser. If possible, please use Firefox for submitting the report, because it provides an export format that can be read back electronically.
    Known performance issues
    The performance issues with any screen that has a Rich Text Editor, such as the Reply window and the compose Private Message window, have been acknowledged, and improvements are being implemented.
    Mozilla Firefox (preferred)
    Warning: it is currently not recommended to generate a speed report while logged in. The speed report has enough detail for somebody else to hijack your session and impersonate you on the forums. If you really must report while logged in, make sure you log your browser out after generating the speed report and wait at least 4 hours before posting.
    Install the Firebug plugin
    Install the NetExport 0.6 extension for Firebug
    Enable all Firebug panels
    Switch to the "Net" panel in Firebug
    Click on this link
    Export the data from the Firebug Net panel
    Click on this link
    Export the data from the Firebug Net panel
    Browse to the page where you are experiencing the performance problem.
    Export the data from the Firebug Net panel
    Click on this link
    Export the data from the Firebug Net panel
    Click on this link
    Export the data from the Firebug Net panel
    Browse to the page where you are experiencing the performance problem.
    Export the data from the Firebug Net panel
    When you report a performance problem, please attach the 6 exports from the Firebug Net panel and an explanation of how you are experiencing the issues (for instance, how much slower it is than normal), and include a description of your internet connection (dial-up, DSL, cable, etc.) and the country from which you are connecting. If you have non-standard tweaks to your Firefox configuration (such as pipelining enabled) or are running any plugins, please include that information in your report as well.
    Google Chrome
    Open the Developer Tools (Ctrl-Shift-J)
    Navigate to the resources tab
    Enable resource tracking.
    Click on this link
    Export the resource loading data.
    Reset the data by disabling and enabling resource tracking
    Click on this link
    Export the data
    Reset the data by disabling and enabling resource tracking
    Navigate to the page where you experience the performance problem
    Export the data
    Reset the data by disabling and enabling resource tracking
    Click on this link
    Export the data
    Reset the data by disabling and enabling resource tracking
    Click on this link
    Export the data
    Reset the data by disabling and enabling resource tracking
    Navigate to the page where you experience the performance problem
    Export the data
    Since Google Chrome does not have an export format for the resource tracking information, best current practice is to take a screenshot and note the hover details for any resource with a tail longer than 25% of the total load time. When you report a performance problem, please attach the screenshots and an explanation of how you are experiencing the issues (for instance, how much slower it is than normal), and include a description of your internet connection (dial-up, DSL, cable, etc.) and the country from which you are connecting.
    Apple Safari
    The Apple Safari Web Inspector has a Resources panel similar to the one in the Google Chrome developer tools. To get there, follow these steps:
    Show the menu bar.
    Go to Preferences.
    Go to the Advanced tab.
    Check "Show Develop menu in menu bar".
    From the Develop menu, select "Show Web Inspector".
    Collecting the performance information and exporting it works exactly the same as in Google Chrome. Please refer to the instructions for Google Chrome.
    Microsoft Internet Explorer
    IE does not have native features to analyze web traffic. No plugins have been found that produce the required information (please let us know if we missed any). For now, please reproduce the issue with Firefox, Chrome or Safari.
    Please note that, due to the reliance on JavaScript for the interactive effects, the performance of these forums will be much better on MS IE 8 than on previous versions of MS IE.

    Hi
    It works, check once again...
    regards
    Swami

  • Dashboard Performance Issue

    HI Ingo,
    Thanks a lot for the wonderful postings in SDN and for your blogs on SAP BI/BO Solution architecture.
    I am looking for a few clarifications on SAP BO Xcelsius dashboards.
    Though I know the limitations on component count and data volume that can badly affect the performance of a dashboard, we do have a requirement to handle huge data volumes and multiple components, for drill-down and complete analysis by the users. Our source data lies in the SAP BI system, and we are using BICS connectivity / WebI with Live Office to update data in Xcelsius.
    Our requirement is complex: we need to meet user expectations for complete multidimensional analysis by different criteria.
    We have scenarios like delivery performance, where we need Case Fill Rate, Line Fill Rate, OTIF (On Time In Full), etc., along with alerts, and based on those we should provide short-, medium- and long-term analysis (we also have scenarios from Sales, Inventory and Supply Chain which are interlinked).
    Here are my questions,
    1.     Is there any way to provide complete functionality using large data sets to the users with the current architecture without any performance issues?
    2.     Are there any third party tools which can be used with Xcelsius for the performance improvement and handling huge volumes?
    3.     Do you suggest any alternate solution for complete functionality?
    Awaiting your response.
    Thanks & Regards,
    Paramesh Kumar Bada

    hi,
    Generally, it would be better to display summarized information in Xcelsius.
    So at the database/source level, maintain aggregated information which will serve as the input to Xcelsius (a sketch of this follows the post).
    You can use Crystal Reports and Xcelsius together, because Crystal Reports can embed Xcelsius components within a report.
    So charts can be built in Xcelsius and embedded in Crystal, whereas table-related content can be developed directly within the Crystal report. Also, try providing URLs (the Open Document concept) from Xcelsius to Crystal Reports to drill down from summary data to detailed data.
    Regards,
    Vamsee
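    As a rough illustration of that pre-aggregation advice, expressed as generic SQL (in a BW landscape this would be an aggregate or summary InfoProvider rather than a database object; the sales table and its columns are hypothetical):
    CREATE MATERIALIZED VIEW mv_sales_summary
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT region,
           TRUNC(sale_date, 'MM') AS sale_month,   -- pre-bucket detail rows into months
           SUM(amount)            AS total_amount,
           COUNT(*)               AS order_count
    FROM   sales
    GROUP  BY region, TRUNC(sale_date, 'MM');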

  • Performance issue: Adding a new product line to an existing Quote calls the pricing engine for existing lines as well

    Hi All,
    I need some assistance with the query below...
    If there are already some product/quote lines on a quote and we then try to add another product/quote line, it takes more time to add the product. As per my understanding, the pricing engine is being called for the existing lines as well. How can we avoid the pricing engine call for the existing lines?
    There are some parameters which we are setting as mentioned below :
    l_control_rec.header_pricing_event := 'BATCH'; -- what does this mean when set to BATCH?
    l_control_rec.price_mode := 'ENTIRE_QUOTE'; -- (possible values could be CHANGE_LINES , QUOTE_LINE)
    l_header_rec.pricing_status_indicator := 'C';
    l_control_rec.calculate_freight_charge_flag := 'Y';
    l_control_rec.calculate_tax_flag := 'Y';
    l_header_rec.tax_status_indicator := 'C';
    Question: Could someone tell us whether these parameters can be altered or set to some other value (for PRICE_MODE, we see the parameter can take other values such as CHANGE_LINES or QUOTE_LINE besides ENTIRE_QUOTE)?
    That is, can we make the pricing engine call only for the newly added quote line, and not for the entire quote again and again?
    The other question is how we would then finally sync the line-level price values for all the quote lines up to the quote header level as totals (TOTAL_LIST_PRICE, TOTAL_TAX, TOTAL_SHIPPING_CHARGE, SURCHARGE, TOTAL_QUOTE_PRICE in the aso_quote_headers_all table).
    Also, is there a way to skip the freight charge and tax calculations completely while adding products to the quote, and do them at a later point during the Submit to Order functionality?
    Could someone please help with the pricing-related parameters and modes to use in order to get around this performance issue?
    Thanks
    Mithun

    Dear Expert,
    Activate your controlling area as usual, along with cost centers and profit centers. You can assign an internal order to the particular product line you are referring to and collect the costs of that product line exclusively.
    Regards,
    Shankar K B

  • Performance issue adding a new product line to an existing Quote

    Hi All,
    Morning; we need some assistance with this, as we are currently stuck...
    Using the seeded API call mentioned here, aso_quote_pub.update_quote, we are trying to add new product/item lines to an existing quote in the Sales Online module, but it is taking a lot of time (there is a performance issue).
    Also, if there are already some product/quote lines on the quote and we then try to add another product/quote line, that also takes more and more time.
    There are some parameters which we are setting as mentioned below :
    l_control_rec.header_pricing_event := 'BATCH'; -- what does this mean when set to BATCH?
    l_control_rec.price_mode := 'ENTIRE_QUOTE'; -- (possible values could be CHANGE_LINES , QUOTE_LINE)
    l_header_rec.pricing_status_indicator := 'C';
    l_control_rec.calculate_freight_charge_flag := 'Y';
    l_control_rec.calculate_tax_flag := 'Y';
    l_header_rec.tax_status_indicator := 'C';
    Question: Could someone tell us whether these parameters can be altered or set to some other value (for PRICE_MODE, we see the parameter can take other values such as CHANGE_LINES or QUOTE_LINE besides ENTIRE_QUOTE)?
    That is, can we make the pricing engine call only for the newly added quote line, and not for the entire quote again and again?
    Question: The other question is how we would then finally sync the line-level price values for all the quote lines up to the quote header level as totals (TOTAL_LIST_PRICE, TOTAL_TAX, TOTAL_SHIPPING_CHARGE, SURCHARGE, TOTAL_QUOTE_PRICE in the aso_quote_headers_all table).
    Also, is there a way to skip the freight charge and tax calculations completely while adding products to the quote, and do them at a later point during the Submit to Order functionality?
    Could someone please help with the pricing-related parameters and modes to use in order to get around this performance issue.
    Thanks

    Dear Expert,
    Activate your controlling area as usual, along with cost centers and profit centers. You can assign an internal order to the particular product line you are referring to and collect the costs of that product line exclusively.
    Regards,
    Shankar K B

  • Performance issues with Bapi BAPI_MATERIAL_AVAILABILITY...

    Hello,
    I have a Z program to check ATP, which works with the BAPI BAPI_MATERIAL_AVAILABILITY.
    As we are on a retail system, we have performance issues with this BAPI due to the huge number of articles for which we need to calculate ATP.
    Is there any way to improve the performance?
    Thanks and best regards
    L

    The BAPI appears to execute for only one plant/material, etc., at a time, so I would concentrate on making data retrieval and post-BAPI processing as efficient as possible. In your trace output, how much of your overall time is consumed by the BAPI? How much by the other code? You might find improvements there...

  • Performance issue with report

    Hello
    I am making a change to an existing custom report. I have to pull all orders except CANCELLED orders for the parameters passed by the user.
    I made the change as FLOW_STATUS_CODE <> 'CANCELLED' in the RDF. The not-equal is causing performance issues, and the report is taking a lot of time to complete.
    Can anyone suggest what would be best to use in place of the not-equal?
    Thanks

    Is there an index on column FLOW_STATUS_CODE?
    Run your query in sqlplus through explain plan, and check the execution plan if a query has performance issues.
    set pages 999
    set lines 400
    set trimspool on
    spool explain.lst
    explain plan for
    <your statement>;
    select * from table(dbms_xplan.display);
    spool off
    For optimization questions you'd better go to the SQL forum.
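    On the original question of what to use in place of the not-equal: if the set of status codes is small and known, one common rewrite (a sketch only; the table name and status values below are assumptions, not taken from the post) is to enumerate the statuses you do want, which gives the optimizer equality predicates that an index on FLOW_STATUS_CODE can actually use:
    select order_number, flow_status_code
    from   oe_order_headers_all   -- assumed source table
    where  flow_status_code in ('ENTERED', 'BOOKED', 'CLOSED');   -- everything except CANCELLED; values are examples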

  • Performance issue with FDM when importing data

    In the FDM web console, a performance issue has been detected when importing data (.txt).
    In less than 10 seconds, the .txt and .log files are created in the INBOX folder (the .txt file) and in OUTBOX\Logs (the .log file).
    At that moment, the system shows the message "Processing, please wait" for 10 minutes. Eventually the information is displayed; however, if we want to see the second page, we have to wait more than 20 seconds.
    It seems to be a performance issue when the system tries to show the imported data in the web page.
    It has also been noted that when a user tries to import a txt file directly by clicking on the tab "Select File From Inbox", the user also has to wait another 10 minutes before the information is displayed on the web page.
    Thx in advance!
    Cheers
    Matteo

    Hi Matteo
    How much data is being imported/displayed when users are interacting with the system?
    There is a report that may help you to analyse this but unfortunately I cannot remember what it is called and don't have access to a system to check. I do remember that it breaks down the import process into stages showing how long it takes to process each mapping step and the overall time.
    I suspect that what you are seeing is normal behaviour but that isn't to say that performance improvements are not possible.
    The copying of files is the first part of the import process before FDM then starts the import so that will be quick. The processing is then the time taken to import the records, process the mapping and write to the tables. If users are clicking 'Select file from Inbox' then they are re-importing so it will take just as long as it would for you to import it, they are not just asking to retrieve previously imported data.
    Hope this helps
    Stuart

  • SQL Performance issue: Using user defined function with group by

    Hi Everyone,
    I'm new here, and I could really use some help with a weird performance issue. I hope this is the right topic for SQL performance issues.
    Well, OK: I created a function for converting a date from timezone GMT to a specified timezone.
    CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
    IS
    tz_name VARCHAR2(100);
    date_out date;
    BEGIN
    SELECT
    to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
    TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
    INTO date_out
    FROM dual;
    RETURN date_out;
    END fnc_user_rep_date_to_local;
    The following statement is just an example; the real statement is much more complex. So I select some date values from a table and aggregate a little.
    select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp
    This statement selects ~70000 rows and needs ~70 ms.
    If I use the function, it selects the same number of rows ;-) and takes ~4 sec...
    select
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin')
    I understand that the DB has to execute the function for each row.
    But if I execute the following statement, it takes only ~90ms ...
    select
    fnc_user_rep_date_to_gmt(stp_end_stamp,'Europe/Berlin','ny21654'),
    noi
    from
    (
    select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp
    )
    The execution plan for all three statements is EXACTLY the same!!!
    Normally I would just use the third statement and the world would be in order. BUT I'm working on a BI project with a tool called Business Objects, and it generates the SQL, so my hands are tied and I can't make the tool generate the SQL as a subselect.
    My questions are:
    Why is the second statement sooo much slower than the third?
    and
    How can I force the optimizer to do whatever it is doing to make the third statement so fast?
    I would really appreciate some help on this really weird issue.
    Thanks in advance,
    Andi

    Hi,
    The execution plan for all three statements is EXACTLY the same!!!
    Not exactly. The plans are the same, true, but they use slightly different approaches to calling the function. See:
    drop table t cascade constraints purge;
    create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
    exec dbms_stats.gather_table_stats(user, 't');
    create or replace function test_fnc(p_int number) return number is
    begin
        return trunc(p_int);
    end;
    explain plan for select id from t group by id;
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from t group by test_fnc(id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from (select id from t group by id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    Output:
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL>
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "TEST_FNC"("ID")[22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$F5BB74E1
       2 - SEL$F5BB74E1 / T@SEL$2
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
          OUTLINE(@"SEL$2")
          OUTLINE(@"SEL$1")
          MERGE(@"SEL$2")
          OUTLINE_LEAF(@"SEL$F5BB74E1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    37 rows selected.
    Note the Column Projection Information: in the second plan, the HASH GROUP BY (operation 1) projects "TEST_FNC"("ID"), so the function is evaluated as part of the grouping, once per row; in the third plan the grouping projects plain "ID", and the function is applied afterwards, once per group.

  • Oracle Advance Compression Deletion Performance issue in 11g R1

    Hi,
    We have implemented OAC in our data warehouse environment to enable table and index compression. We tested it on our Test machine and gained almost 600GB thanks to Advanced Compression, without any issues, and all the Informatica loads ran fine. Hence we implemented the same in production, but unfortunately two sessions which involve deletion of data are now taking more time (3 times the original duration) to complete, which affects our production environment.
    The tables causing the issue are all non-partitioned tables.
    I need to know whether Oracle Advanced Compression can decrease delete performance, and whether there is any way to disable Advanced Compression on those particular tables.
    Our environment details:
    DB earlier version: 11.1.0.6
    DB current version : Oracle 11.1.0.7
    Applied PSU: 11.1.0.7.6
    Operating system: Solaris 5.9
    Syntax used for compression:
    ALTER TABLE TABLE_NAME MOVE COMPRESS FOR ALL OPERATIONS;
    Thanks in Advance.
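    On the second question, a minimal sketch of turning compression off again for one table (names are placeholders; note that MOVE rebuilds the table and leaves its indexes in an unusable state until they are rebuilt):
    ALTER TABLE table_name MOVE NOCOMPRESS;
    -- MOVE changes rowids, so every index on the table must then be rebuilt
    ALTER INDEX index_name REBUILD;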

    Hi,
    Thanks for your reply.
    The note is about an update performance issue, and I have already applied the necessary patches to improve update performance.
    The update sessions are all working fine; only the deletion sessions are causing problems.
    Could someone help me resolve this problem?
    Thanks,
    VBK

  • Performance Issue With Displaying Candidate Details for iRecruitment

    We are on EBS R12.1.3. When we try to display the Candidate Details page in iRecruitment (iRecruitment > Vacancies > Applicants > click on an applicant), the page spins for quite a while before showing the results, and sometimes we see 500 errors.
    We have also applied the Patch.10427777:R12.IRC.B patch. Are there any tuning steps for the iRecruitment page /oracle/apps/irc/candidateSearch/webui/CmAplSrchPG?

    You have already applied the patch mentioned in note: Performance Issue With Displaying Candidate Details Page in 12.1.3 (Doc ID 1293164.1).
    Check this note also: Performance Issue when Clicking on Candidate Name (Doc ID 1575164.1).
    thanks
