Ncomp'ed code is SLOW

Hi there,
we have a JSP which converts text to PDF,
reading the text from a database CLOB. It
works fine but is slow.
We have successfully ncomp'ed the code,
but found the JSP to be even 10% slower
than before. That is a surprise, since we
actually expected a significant boost in
performance.
We are using Oracle 8.1.7.1.0 with Sun's
"WorkShop Compilers 5.0 98/12/15 C 5.0"
compiler, which I believe is certified.
We would be interested to hear if anyone
has got ncomp'ed code working and what
the performance is.
Has anyone encountered the same problem?
Thx.
- Stefan

This is an example of faster code to count the weekdays between two dates.
It takes less than 1 second on my PC:

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class Test {
    public static void main(String[] args) {
        try {
            long a_day = 24L * 60 * 60 * 1000; // one day in milliseconds
            SimpleDateFormat sdf = new SimpleDateFormat("dd-MM-yyyy");
            long date1 = sdf.parse("01-10-1992").getTime();
            long date2 = sdf.parse("19-03-2004").getTime();
            GregorianCalendar gc = new GregorianCalendar();
            gc.setTimeInMillis(date1);
            // DAY_OF_WEEK: 1 = Sunday ... 7 = Saturday
            int day = gc.get(Calendar.DAY_OF_WEEK);
            int no_of_day = 0;
            while (date1 <= date2) {
                if (day > 1 && day < 7) // Monday through Friday
                    no_of_day++;
                if (++day > 7)
                    day = 1;
                date1 += a_day; // note: adding 24h can drift across DST changes
            }
            System.out.println("No of Weekday = " + no_of_day);
        } catch (Exception ex) {
            System.out.println(ex);
        }
    }
}
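For what it's worth, the day-by-day loop above can be replaced by a constant-time calculation. Here is a sketch using the modern java.time API (Java 8+); the class name `WeekdayCount` is mine, and it counts Monday through Friday inclusively between the two dates, like the loop does:

```java
import java.time.LocalDate;

public class WeekdayCount {
    // Count Monday-Friday days in [start, end] inclusive, without looping over days.
    static long weekdays(LocalDate start, LocalDate end) {
        long totalDays = end.toEpochDay() - start.toEpochDay() + 1;
        long fullWeeks = totalDays / 7;
        long count = fullWeeks * 5;                // every full week has 5 weekdays
        int dow = start.getDayOfWeek().getValue(); // ISO numbering: 1 = Monday ... 7 = Sunday
        for (long i = 0; i < totalDays % 7; i++) {
            int d = (int) ((dow - 1 + i) % 7) + 1;
            if (d <= 5) count++;                   // leftover day falls on Mon-Fri
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("No of Weekday = "
                + weekdays(LocalDate.of(1992, 10, 1), LocalDate.of(2004, 3, 19)));
    }
}
```

Unlike millisecond arithmetic, epoch-day arithmetic is immune to DST transitions, so the count cannot drift by an hour mid-range.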

Similar Messages

  • Query in PL/SQL code is slower

    I have a very strange issue within my development database, I'm using Oracle 10g.
    I created a query to read some data from a large collection of tables (OFDM tables). I have tested and tuned the performance of the query from PL/SQL Developer 7; it takes only 2 minutes to finish, but when I use the same query as a cursor in a database procedure it consumes a lot of time, around 15 minutes, to get the data.
    Your help is appreciated

    [url http://groups.google.de/group/comp.databases.oracle.server/browse_frm/thread/df893cf9be9b2451/54f9cf0e937d7158?hl=de&tvc=1&q=%22Beautified%22+code+runs+slower#54f9cf0e937d7158]Recently somebody complained about slow performance after code was beautified in PL/SQL Developer; after recompilation without the flag "Add Debug Information" it ran faster...
    (just a guess)
    Best regards
    Maxim

  • Sample code for slow moving material report?

    Hi all,
    I have a requirement to display a slow moving material report by goods issue date and by material.
    The basic requirements are:
    1. Movement type '261' and material, plant are select-options.
    2. Last goods movement date based on a parameter.
    3. Display the quantity available.
    But here I am getting a problem: I couldn't find the last goods movement for that particular material with a specific storage location, as one material document contains more than one item. So please help. Also, what logic do I have to use to display the quantity available?
    Please specify the sample code or logic for the above requirement.
    Any custom slow moving report code would also be very helpful.
    Thanks,
    Vamshi

    VAMSHI KRISHNA wrote:
    > Solved.

  • What changed in v32, that makes emscripten compiled JS code awfully slow?

    Today I updated to v32.0.1 of Firefox. I'm currently working on the JS version of our face tracking library (Beyond Reality Face), which is emscripten-compiled code. In v31 I got 36 FPS, while in v32 and v33 (beta) I get 10 to 12 FPS.
    Does anyone know if there was a change that could cause that slowdown?
    Thanks in advance.

    Edit: After a lot more of testing I got:
    Emscripten + Google Closure Compiler (--compilation_level 'SIMPLE')
    seems to generate something that Firefox v32 handles differently than v31 (and thus slower). It must be something basic, like array access or similar.

  • Collaboration On Shared Code File Slow

    I heard about all of the hype of collaboration, and I had a Java project that my friend and I had been working on. We both set up Java Studio Enterprise 8 on Fedora Core 5 workstations, and one of us ran the collaboration server.
    After signing in and beginning a shared code file, we noticed that the code line synchronization is extremely slow, with update times in the range of 5,000 - 10,000 milliseconds. Is this normal? Is there any way to configure the server so that changes made by one user are reflected more quickly to the other users? Or is it just "Java slow" and there isn't anything that can be done? I personally was expecting something a bit more realtime (e.g. 50-250 msecs) than a few thousand milliseconds.

    The following are known performance issues that may be relevant to your situation:
    http://collab.netbeans.org/issues/show_bug.cgi?id=62291
    http://collab.netbeans.org/issues/show_bug.cgi?id=69392
    You can try using later versions of the IDE:
    NetBeans 5.0: http://www.netbeans.org/downloads/index.html (Collab is not bundled with NB5 product; you will need to connect to the update center and download and install the module).
    Or Sun Java Studio Enterprise 8.1 Beta (http://developers.sun.com/prodtech/javatools/jsenterprise/downloads/index.jsp)

  • Crystal report run from ASP code significantly slower than when run in CRS

    We have CRS XI R2.  I developed a report that contains several on-demand sub-reports.  The report and sub-reports are very fast when run directly from CRS.  However, when I run from ASP (users run a link from the intranet), it takes 4 times longer (1 second on CRS vs. 5-10 seconds on the intranet).  The report takes longer, bringing up a sub-report takes longer, and paging through sub-reports takes longer.  What can be done to improve the speed of the report that is using the ASP code?  I used a sample program provided at this support site to develop the report and it is pretty basic code.  The only time I have a problem is when I have many sub-reports.  Since they are on-demand, I do not know why this would matter.

    This has been created as an incident with SAP support.  One thing you will want to check is making sure that you handle the postback in your code, as you do not need to run your entire code on every postback.  This will help performance after the original load.
    One other thing is to compare performance with the viewer within Infoview that uses the same backend server as your application, ie PSReportFactory uses the page server, so you'll want to test with the DHTML viewer.  RAS code (reportclientdocument) uses the Report Application Server so you will want to test with the Advanced DHTML viewer.

  • Matlab code much slower in Labview MatlabScript than in Matlab

    I was using an algorithm written in Matlab that executed in less than 2 seconds. Now I need to use it within a Labview program, so I am using a Matlab script node with the same algorithm, but it takes 10 times longer than in Matlab. Is this normal? Thank you!

    How do you know that it is 10 times? Did you create a benchmark?
    What are the conditions for the benchmark? Can you provide the code for it?
    I am asking this because if you create the VI and put the m-script in the script node, and then press run (looking on the watch) and wait for the VI to finish, you "benchmarked" compile times, memory allocation as well as the complete VI's execution (including the script node of course). I doubt that this will ever be comparable to the "benchmark" in MatLab.
    If you provide your VI, we might also be able to give some hints to increase performance.
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.
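    Norbert's point, that a single cold run lumps compile time and memory allocation in with execution, applies to any JIT-compiled runtime, not just the MATLAB script node. A minimal Java sketch of the idea (the class name and workload are illustrative only, not from the thread):

```java
public class BenchSketch {
    // Some CPU-bound work standing in for "the algorithm".
    static long work() {
        long s = 0;
        for (int i = 0; i < 5_000_000; i++) s += i % 7;
        return s;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        work();                              // first call: includes JIT warm-up cost
        long cold = System.nanoTime() - t0;

        for (int i = 0; i < 10; i++) work(); // let the JIT compile the hot loop

        long t1 = System.nanoTime();
        work();                              // now we measure mostly the work itself
        long warm = System.nanoTime() - t1;

        System.out.println("cold: " + cold + " ns, warm: " + warm + " ns");
    }
}
```

    On most JVMs the warm run is noticeably faster; the same effect makes stopwatch comparisons of a freshly loaded VI against native MATLAB misleading.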

  • A remote server runs code terribly slow

    Hello,
    I'm in an odd development situation. I write code on my local
    dev server, which gets handed off to someone else's dev server that
    tests it before putting it up live. I have NO direct access to the
    remote servers and upload my files by emailing them over
    individually with instructions on where to place them. (I warned
    you it was odd).
    I've written a small search page which does the following:
    it does a Verity search; if a result is not cached, then the file is
    parsed and the information is cached. It sorts the results, then
    displays them. It caches by serializing a struct of structs using
    WDDX.
    On my local server this works almost instantaneously,
    literally half a second for the largest "all" search, but on the
    remote server it TIMES OUT! Doing searches (that run near instantly
    locally) for less items takes upwards of 2 full minutes.
    The error that coldfusion throws is that cffile write is
    timing out. (It rewrites the cache if it finds new files in the
    results).
    The code is identical on both my local, and the remote
    server.
    I'm a little clueless as to why this is happening, and my
    background has never been in coldfusion before. If anyone could
    shed any light on why this is happening, or where to look to figure
    out why it's happening, it would be greatly appreciated!
    Thank you so much for any help you can give.

    Kronin, you bring up a good point, and that has been the main
    focus I've had so far. In my local environment I am searching
    through roughly 1100 files, and the cache file I'm making comes out to
    about 800KB. In the remote environment I'm searching through
    roughly 2000 files, and I am assuming that the cache is growing at
    the same rate.
    These aren't terribly big numbers I'm working with, even the
    oldest hardware should be able to pull this off without a hitch.
    Even if cffile loads it into memory first, there would surely be
    1MB of memory available, and if not a swap shouldn't take long
    enough to cause a time out.
    Keep the ideas coming though :)
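    The cache-by-serialization scheme this thread describes (cffile plus WDDX, rewriting the whole cache file when new results appear) can be sketched language-neutrally. Here is a rough Java equivalent using plain object serialization; the class name and data shape are invented for illustration, and WDDX is swapped for Java's built-in serialization:

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class ResultCache {
    private final File file;

    public ResultCache(File file) {
        this.file = file;
    }

    // Load the cached map; start empty if no usable cache file exists yet.
    @SuppressWarnings("unchecked")
    public Map<String, String> load() throws IOException, ClassNotFoundException {
        if (!file.exists() || file.length() == 0) return new HashMap<>();
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (Map<String, String>) in.readObject();
        }
    }

    // Rewrite the whole cache file, as the search page does when it sees new files.
    public void save(Map<String, String> cache) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new HashMap<>(cache));
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("search-cache", ".ser");
        ResultCache cache = new ResultCache(f);
        Map<String, String> results = cache.load();
        results.put("all", "first 50 hits...");   // hypothetical search result
        cache.save(results);
        System.out.println("cached entries: " + cache.load().size());
    }
}
```

    If a write of this size times out on one server but not another, the usual suspects are disk contention or a network-mounted directory, not the size of the data itself.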

  • AppleScript code runs slower as an application than running from the editor

    I discovered today that AppleScript runs much faster when I run the code from the run button in AppleScript Editor 2.3 than when I save the source code as an application and then run the application. I really expected the opposite. I would prefer not having to open AppleScript Editor 2.3 each time I run a program, but the time difference is significant (see below). What causes this? Am I doing something wrong? I save the source code by going to File -> Save As -> File format: Application. I then run the application.
    Examples (both programs manipulate data in lots of Excel spread sheet files):
    Program 1- run time is 5 min, 40 sec when selecting run in Applescript Editor 2.3, and 23m, 25s when running as an application.
    Program 2- run time is 7m, 12s when selecting run in Applescript Editor 2.3, and 12m, 54s when running as an application.

    I tried the code "Hiroto came up with as solution that was essentially 100% effective" provided in the link. My run time went from 5m 30s when run from source code to 5m 50s when run as an application (differences in seconds could easily be my crude method of measuring time). Previously, it took 23m 25s when run as an application. Thanks, this is the solution I was looking for.
    However, in searching this issue, I found similar code in a discussion topic titled "Applet speed -- Hiroto's solution followed by issues" posted Jul 27, 2008. That code would randomly end with "The variable o is not defined". I hope the new code solves this problem. The new code certainly worked great the first time I used it.

  • JDBC inside db slower than outside

    Why would running a java app inside the database be over 5x slower than running outside?
    Connection code sets thin client when running outside db, set defaultConnection when running inside
    JavaPool is 100Meg
    Connection statement cache is set to 30,000; cache is enabled
    When running outside db, autoCommit is false - same as inside db
    - java application calls 5300 pl/sql procs in a package (all bound variable calls, statements are closed after execution to eliminate cursor issues)
    Round trip for 5300 calls outside the database takes 20 seconds
    Round trip for 5300 calls inside the database takes over 100 seconds
    There is no question it has something to do with either the JVM itself or JDBC stored procedure calls when running in the db... the same code runs in 18 seconds (outside the db) with no JDBC calls executed.
    The docs say that JDBC inside the db is supposed to be remarkably faster??? Any clues?
    Last resort will be to ncomp the code.

    I'd like to follow up with some questions:
    1.)     Oracle documentation states that PL/SQL is the preferred tool for data intensive jobs, and Java stored procedures are preferred for more algorithmic jobs. This seems contrary to what you state. Can you clarify?
    2.)     When you say general purpose and CPU bound Java code, do you mean algorithmic code?
    3.)     I assume points 1 and 2 in your response suggest that it takes so long to get the Java environment up and running that it is killing the performance of my simple algorithm... is that what you mean?
    4.)     My real goal is to use intelligent agents (using AI algorithms that are very expensive computationally) that are invoked upon updates and continually train on newly inserted data. The fact that these guys can live in the Oracle JVM, be invoked upon an update using a trigger, and call an EJB client to warn of a particular condition is what made me fall in love with this solution that Oracle provides. But do you think I am asking too much, or am I going a bit beyond what the JVM is for?

  • Report in Designer fast, in Viewer extremely slow

    Hi.
    I have a report which connects to a SQL Server backend, calling 3 stored procs which deliver the data needed for the report. However, when I execute the report in the Designer (the web app uses CR 9, but I'm testing it with CR 2008 that came with VS 2008) it takes approx. 20 seconds to return with the data - yes, the query takes rather long...
    When I run our web application and call up the same report, using the same parameters and connected to the same database, the Viewer sits there for about 10 minutes before finally showing the report. I've been trying to determine the cause of this but have come up empty so far.
    The report itself is a fairly simple report: headers, a parameter overview (the report uses parameterized queries), the data, and no subtotals, no subreports, no formulas.
    Why is this taking so long using the Viewer? Apparently it can be fast(er), since the Designer comes back within 20 secs WITH the correct data!
    I've tried a couple of things to see if I could determine the cause of the bad performance, but so far I've had no luck in improving performance whatsoever. The only thing left would be redesigning the underlying stored proc, but this is a rather complex stored proc and rewriting it would be no small task.
    Anybody has any idea on what to do next? Our customers are really annoyed by this (which I can understand) since they sometimes need to run this report a couple of times a day...

    Ludek Uher wrote:
    >
    > Troubleshooting slow performance
    >
    > First thing to do with slow reports would be consulting the article “Optimizing Reports for the Web”. The article can be downloaded from this location:
    >
    > https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/701c069c-271f-2b10-c780-dacbd90b2dd8
    >
    Interesting article. Unfortunately, trying several of the suggestions made, it didn't improve the report's performance. No noticeable difference in either Designer or Viewer.
    >
    > Next, determine where is the performance hit coming from? With Crystal Reports, there are at least four places in your code where slow downs may occur. These are:
    >
    > Report load
    > Connection to the data source
    > Setting of parameters
    > Actual report output, be it to a viewer, export or printer
    >
    This part is not relevant. Loading the report isn't the problem (first query being executed under 0.5 seconds after starting the report); as I'll explain further at the end of this reply.
    > A number of report design issues, report options and old runtimes may affect report performance. Possible report design issues include:
    >
    > • OLE object inserted into a report is not where the report expects it to be. If this is the case, the report will attempt to locate the object, consuming potentially large amounts of time.
    The only OLE object is a picture with the company logo. It is visible in design time though, so I guess that means it is saved with the report?
    > • The subreport option "Re-import when opening" is enabled (right click the subreport(s), choose format subreport, look at the subreport tab). This is a time consuming process and should be used judiciously.
    The report contains no subreports.
    > • Specific printer is set for the report and the printer does not exist. Try the "No printer" option (File | Page setup). Also, see the following resources regarding printers and Crystal reports;
    Tried that. It was set to the Microsoft XPS Document writer, but checking the 'No printer' option only made a slight difference (roughly 0.4 seconds in Designer).
    > • The number of subreports the report contains and in which section the subreports are located will impact report performance. Minimize the number of subreports used, or avoid using subreports if possible. Subreports are reports within a report, and if there is a subreport in a detail section, the subreport will run as many times as there are records, leading to long report processing times. Incorrect use of subreports is often the biggest factor why a report takes a long time to preview.
    As stated before, the report has no subreports.
    > • Use of "Page N of M" or "TotalPageCount". When the special field "Page N of M" or "TotalPageCount" is used on a report, it will have to generate each page of the report before it displays the first page. This will cause the report to take more time to display the first page of the report.
    The report DOES use the TotalPageCount and 'Page N of M' fields. But, since the report only consists of 3 pages, of which only 2 contain database-related data (read further below), I think this would not be a problem.
    > • Remove unused tables, unused formulas and unused running totals from the report. Even if these objects are not used in a report, the report engine will attempt to evaluate the objects, thus affecting performance.
    > • Suppress unnecessary report sections. Even if a report section is not used, the report engine will attempt to evaluate the section, thus affecting performance.
    > • If summaries are used in the report, use conditional formulas instead of running totals whenever possible.
    > • Whenever possible, limit records through the Record Selection Formula, not suppression.
    > • Use SQL expressions to convert fields to be used in record selection instead of using formula functions. For example, if you need to concatenate 2 fields together, instead of doing it in a formula, you can create a SQL Expression Field. It will concatenate the fields on the database server, instead of doing it in Crystal Reports. SQL Expression Fields are added to the SELECT clause of the SQL query sent to the database.
    > • Using one command table, Stored Procedure, or Table View as the datasource can be faster if you return only the desired data set.
    > • Perform grouping on the database server. This applies if you only need to return the summary to your report but not the details. It will be faster as less data will be returned to the report.
    > • Local client as well as server computer processor speed. Crystal Reports generates temp files in order to process the report. The temp files are used to further filter the data when necessary, as well as to group, sort, process formulas, and so on.
    All of the above points become moot if you know the structure of the report:
    3 pages, no subreports, 3 stored procs used, which each return a dataset.
    - Page 1 is just a summary of the parameters used for the report. This page also includes the TotalPageCount  field;
    - Page 2 uses 2 stored procs. The first one returns a dataset consisting of 1 row containing the headings for the columns of the data returned from stored proc 2. There will always be the same number of columns (only their heading will be different depending on the report), and the dataset is simply displayed as is.
    - The data from stored proc 2 is also displayed on Page 2. The stored proc returns a matrix, always the same number of columns, which is displayed as is. All calculations, groupings, etc. are done on the SQL Server;
    - Page 3 uses the third stored proc to display totals for the matrix from the previous page. This dataset too will always have the same number of columns, and all totaling is done on the database server. Just displaying the dataset as is.
    That's it. All heavy processing is done on the server.
    Because of the simplicity of the report I'm baffled as to why it would take so much more time when using the Viewer than from within the Designer.
    > Report options that may also affect report performance:
    >
    > • “Verify on First Refresh” option (File | Report Options). This option forces the report to verify that no structural changes were made to the database. There may be instances when this is necessary, but once again, the option should be used only if really needed. Often, disabling this option will improve report performance significantly.
    > • “Verify Stored Procedure on First Refresh” option (File | Report Options). Essentially the same function as above, however this option will only verify stored procedures.
    Hm. Both options WERE selected, and deselecting them caused the report to run approx. 10 seconds slower (from the Designer)...
    >
    >
    > If at all possible, use the latest runtime, be it with a custom application or the Crystal Reports Designer.
    >
    > • The latest updates for the current versions of Crystal Reports can be located on the SAP support download page:
    >
    > https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/bobj_download/main.htm
    >
    I've not done that (yet). Mainly because CR 10.5 came with VS2008, so it was easier to test to see if I can expect an improvement regarding my problem. Up till now, I see no improvement... ;-(
    > • Crystal Reports version incompatibility with Microsoft Visual Studio .NET. For details of which version of Crystal Reports is supported in which version of VS .NET, see the following wiki:
    >
    > https://wiki.sdn.sap.com/wiki/display/BOBJ/CrystalReportsassemblyversionsandVisualStudio+.NET
    >
    >
    According to that list I'm using a correct version with VS2008. I might consider upgrading it to CR 12, but I'm not sure what I would gain with that. Because I can't exactly determine the cause of the performance problems I can't tell whether upgrading would resolve the issue.
    > Performance hit is on database connection / data retrieval
    >
    > Database fine tuning, which may include the installation of the latest Service Packs for your database must be considered. Other factors affecting data retrieval:
    >
    > • Network traffic
    > • The number of records returned. If a SQL query returns a large number of records, it will take longer to format and display than if it was returning a smaller data set. Ensure you only return the necessary data on the report, by creating a Record Selection Formula, or basing your report off a Stored Procedure or a Command Object that only returns the desired data set.
    The amount of network traffic is extremely minimal. Two datasets (sp 1 and sp 3) return only 1 row containing 13 columns. Sp 2 returns the same number of columns, and (in this client's case) a dataset of only 22 rows, mainly numeric data!
    > • The amount of time the database server takes to process the SQL query. Crystal Reports sends the SQL query to the database, the database processes it, and returns the data set to Crystal Reports.
    Ah. Here we get interesting details. I have been monitoring the queries fired using SQL Profiler and found that:
    - ALL queries are executed twice!
    - The 'data' query (sp 2) which takes the largest amount of time is even executed 3 times.
    For example, this is what SQL profiler shows (not the actual trace, but edited for clarity):
    Query                Start time       Duration (ms)
    sp 1 (headers)       11:39:31.283     13
    sp 2 (data)          11:39:31.330     23953
    sp 3 (totals)        11:39:55.313     1313
    sp 1 (headers)       11:39:56.720     16
    sp 2 (data)          11:39:56.890     24156
    sp 3 (totals)        11:40:21.063     1266
    sp 2 (data)          11:40:22.487     24013
    Note that in this case I didn't trace the queries for the Viewer, but I have done just that last week. For sp2 the values run up to 9462 seconds!!!
    > • Where is the Record Selection evaluated? Ensure your Record Selection Formula can be translated to SQL, so that the data can be filtered down on the server. If a selection formula cannot be translated into the correct SQL, the data filtering will be done on the local client computer, which in most cases will be much slower. One way to check if a formula function is being translated into SQL is to look at “Show SQL Query” in the CR designer (Database -> Show SQL Query). Many Crystal Reports formula functions cannot be translated into SQL because there may not be a standard SQL equivalent. For example, a control structure like IF THEN ELSE cannot be translated into SQL. It will always be evaluated on the client computer. For more information on IF THEN ELSE statements see note number 1214385 in the notes database:
    >
    > https://www.sdn.sap.com/irj/sdn/businessobjects-notes
    >
    Not applicable in this case I'm afraid. All the report does is fetch the datasets from the various stored procs and display them; no additional processing is taking place. Also, no records are selected as this is done using the parameters which are passed on to the stored procs.
    > u2022 Link tables on indexed fields whenever possible. While linking on non indexed fields is possible, it is not recommended.
    Although the stored procs might not be optimal, that is beside the point here. The point is that performance of a report when run from the Designer is acceptable (roughly 30 seconds for this report) but when viewing the same report from the Viewer the performance drops dramatically, into the range of 'becoming unusable'.
    The report has its dataconnection set at runtime, but it is set to the same values it had at design-time (hence the same DB server). I'm running this report connected to a stand-alone SQL Server which is a copy of the production server of my client, I'm the only user of that server, meaning there are no external disturbing factors I have to deal with. And still I'm experiencing the same problems my client has.
    I really need this problem solved. So far, I've not found a single thing to blame for the bad performance, except maybe that queries are executed multiple times by the CrystalReports engine. If it didn't do that, the time required to show the report would drop by approx. 60%.
    ...Charles...

  • Flash Builder 4 super slow when Workspace and Project is on Network Drive

    Hi, I was wondering if maybe it is not common practice to create a workspace and project on a network drive? When I do this, Flash Builder runs extremely slowly (building the workspace is slow, saving is slow, code hinting is slow, undo is slow, etc.). Very painful!
    When I place the same project locally on my C drive everything runs fine. So should I not be running things off the network? I would really like to since all my files are on the network with backup system. Can anyone offer some advice? Or maybe I need to optimize something? Thanks
    - D

    Hi.
    Having your workspace and project on a network drive is not recommended as Flash Builder writes and reads a lot from the workspace and that will be slow.
    A version control system such as svn or git with a repository hosted on your network drive should be perfectly fine. (You only take the hit when you check in and check out)
    -Anirudh

  • Avoid Distributed query in PL/SQL cursor code

    Hi,
    I have to avoid a distributed query in my cursor code in PL/SQL.
    The query is like this:
    cursor c1
    is
    select a.test,b.test1,a.test2
    from apple a,
    [email protected] b,
    bat c
    where a.listid = b.listid
    and a.list_name = c.list_name;
    Now I need to split the above cursor into two:
    (1) I need to query apple and bat, which are from the local database, in one cursor, and
    (2) have to do something so that the values from [email protected] are stored in a temp table or PL/SQL table, so that I can use the PL/SQL table or temp table in the join in my cursor instead of a distributed query.
    By doing so, will the performance be hit badly?
    [Note: Imagine this scenario is taking place in Oracle 11i Apps]
    Regards,
    Prasanna Natarajan,
    Oracle ERP Tech Team.

    [url http://groups.google.de/group/comp.databases.oracle.server/browse_frm/thread/df893cf9be9b2451/54f9cf0e937d7158?hl=de&tvc=1&q=%22Beautified%22+code+runs+slower#54f9cf0e937d7158]Recently somebody complained about slow performance after code was beautified in PL/SQL Developer; after recompilation without the flag "Add Debug Information" it ran faster...
    (just a guess)
    Best regards
    Maxim

  • Same select max is very slow in one program but fast in another

    Hi,
    I have a report that has become very slow these past few months. I used an SQL trace for the report and found out it is this code that slows down the report:
    SELECT MAX( mkpf~budat )
                  FROM mkpf
        INNER JOIN mseg
                       ON mseg~mblnr = mkpf~mblnr AND mseg~mjahr = mkpf~mjahr
                    INTO posting_date
               WHERE mseg~werks  = w_matl-batch_reservations-werks
                     AND mseg~charg  = w_matl-batch_reservations-charg
                     AND mseg~bwart  IN ('261', 'Z61').
    The thing is, this code has been used in different systems: DEV, QAS, and PRD. But only in PRD is it very slow; the other systems are pretty fast.
    I even created a local copy of that report in PRD, with the same code. The local copy runs fast and perfectly. But the original code is just slow.
    Just wondering if anybody has met this problem too? Any ideas?

    Hi Liu,
    Creating a new index is not an advisable solution. Instead, make use of the existing indexes by adding the MANDT field.
    Try it like this:
    SELECT MAX( mkpf~budat )
                  FROM mkpf
        INNER JOIN mseg
                       ON mseg~mblnr = mkpf~mblnr AND mseg~mjahr = mkpf~mjahr
                    INTO posting_date
               WHERE mseg~mandt = sy-mandt
                      AND mkpf~mandt = sy-mandt
                      AND mseg~werks  = w_matl-batch_reservations-werks
                     AND mseg~charg  = w_matl-batch_reservations-charg
                     AND mseg~bwart  IN ('261', 'Z61').
    Hope it will be helpful.
    Regards,
    Kannan

  • Performance: call packages from APEX is slow

    Hi
    we have a complex authorization, authentication, and policy concept. We have encapsulated a lot of functions in a separate package. Functions and procedures of this package get called in different places in three different applications. I know that if a package function gets called in a report query, for example, it can be very slow, because it does not get called once per row; it sometimes gets called 5 or 10 times per row. It's always better to join tables and information into reports.
    The problem is, we have two instances, and in one instance the application is "running". In the other instance, the application is f*****g slow...
    So I read about pinning of packages and problems with session pooling and session cached cursors.
    The first question is: what exactly happens if I pin a package into memory, and what are the prerequisites for doing that? Could this really improve performance dramatically?
    The second question is: could there be problems with session pooling, or can we increase the number of sessions that can be pooled so that packages get cached?
    Do you have more information, links, or ideas for increasing performance, or for caching packages? Or some other tricks, like overriding the V(xxx) syntax?
    Thanks in advance!

    Greetings,
    As mentioned in the above post, pinning packages can help with the overhead of packages being loaded and reloaded after being swapped out. If you follow the link and read the blog post, you'll get a good idea of what happens when you pin a package and how to actually do it.
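    As a sketch of the mechanics (the schema and package names below are placeholders, and DBMS_SHARED_POOL must be installed, e.g. via dbmspool.sql):

```sql
-- Pin a package into the shared pool so it is not aged out.
-- MYSCHEMA.MY_AUTH_PKG is a hypothetical name.
BEGIN
  DBMS_SHARED_POOL.KEEP('MYSCHEMA.MY_AUTH_PKG', 'P');  -- 'P' = PL/SQL object
END;
/

-- Check which objects are currently kept:
SELECT owner, name, type
  FROM v$db_object_cache
 WHERE kept = 'YES';
```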
    But more to the point you need to know where your code is "failing to be performant". Luckily there are a number of things that you can do to pinpoint what portion of the code is slow and why.
    DBMS_PROFILER allows you to get information about the execution of your PL/SQL packages, including how many times a particular line of code was executed and how long it took to run. This is very useful information when trying to diagnose PL/SQL. However, DBMS_PROFILER doesn't give you a 100% picture of what is going on, as it lumps the execution of SQL in with the execution of PL/SQL.
    DBMS_HPROF (available in 11g) is similar to DBMS_PROFILER in that it tells you what your code is doing, but it adds the ability to look at the data hierarchically (with parents and children) and to split the execution of PL/SQL from the execution of SQL queries.
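    A minimal DBMS_PROFILER session might look like this (the profiler tables must already exist, created by proftab.sql, and the routine name is a placeholder):

```sql
-- Profile one run of a suspect routine; MY_PKG.SLOW_PROC is hypothetical.
BEGIN
  DBMS_PROFILER.START_PROFILER(run_comment => 'apex page test');
  myschema.my_pkg.slow_proc;
  DBMS_PROFILER.STOP_PROFILER;
END;
/

-- Lines ranked by total time spent:
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data d
    ON d.runid = u.runid AND d.unit_number = u.unit_number
 ORDER BY d.total_time DESC;
```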
    There is an Open Source project called the Instrumentation Library for Oracle (ILO) that allows you to instrument your code and collect data about its response time, as well as capture accurately scoped 10046 trace data. Originally authored by the performance experts at HOTSOS (where, I should say, I used to work), the project is now stewarded by Method R and continues to be expanded.
    Speaking of 10046 trace, APEX is actually instrumented so that with a small addition to the URL, you can generate a 10046 level 12 trace of the session that processes and renders a page. Run that through TKPROF, SQL Developer, or the Method R/Hotsos Profiler, and you'll see a full accounting of the time taken.
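    If I recall the mechanism correctly, the URL addition is the p_trace parameter appended to the standard f?p URL (host, application, page, and session values here are placeholders):

```
http://host:port/apex/f?p=100:1:1234567890&p_trace=YES
```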
    Another thing you can do is to make sure you're not executing the same code over and over to get the same result. If the result of a particular function or query will be the same for the life of a session, then execute it only once and store the result in a Package Variable. Accessing the package variable is far less overhead than continually executing a query or function even though the result might be cached.
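    A minimal sketch of that pattern (the package, table, and column names are all hypothetical; note that in APEX, package state may not survive across page views, so this helps most within a single request):

```sql
CREATE OR REPLACE PACKAGE auth_cache AS
  FUNCTION get_org_id RETURN NUMBER;
END auth_cache;
/

CREATE OR REPLACE PACKAGE BODY auth_cache AS
  g_org_id NUMBER;  -- package state: lives for the database session

  FUNCTION get_org_id RETURN NUMBER IS
  BEGIN
    IF g_org_id IS NULL THEN
      SELECT org_id INTO g_org_id
        FROM app_users
       WHERE username = USER;  -- executed only when the cache is empty
    END IF;
    RETURN g_org_id;           -- subsequent calls return the cached value
  END get_org_id;
END auth_cache;
/
```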
    Although there are myriad approaches to tuning code, it all basically starts with identifying where the problem is. The above methods will definitely help you get to the root cause and once you have something more specific to attack, please let us know and I'm sure many of us would be happy to jump in and help.
    Hope That Helps,
    - Doug Gault -
    http://sumnertech.com/
