Performance Problem Issue

Hi Experts,
    I am working on a development project. Our project has gone live for only 2 stores, but even so we are facing major performance issues: almost all the reports time out for a single store's data. We have already done performance tuning on the ABAP side. Are there any other parameters, like buffer or database tuning, that can be adjusted? I initially checked ST02, where the number of swaps is very high. Is there any problem with that? Is there any role for buffer tuning in this problem?

We are using SAP ECC 6.0 and OS Windows XP 2002 with Service Pack 3. However, I don't know the database version.
While checking in ST02 I found the swap values below, and everything other than the 0s is shown in red. Is there anything wrong? If so, what steps can I suggest the Basis person perform?
Buffer                           Swaps
Nametab (NTAB)                       0
  Table definition              29,289
  Field definition             159,353
  Short NTAB                         0
  Initial records              102,282
Program                        455,804
CUA                            402,312
Screen                           6,403
Calendar                             0
OTR                                  0
Tables
  Generic key                    2,587
  Single record                    224
Export/import                3,616,944
Exp./Imp. SHM                        0
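A high swap count means a buffer is too small (or has too few directory entries) for the workload, so objects keep displacing each other. As a hedged starting point for the discussion with your Basis team, these are the standard instance profile parameters behind the ST02 buffers shown above; the values below are purely illustrative, not recommendations. With 3.6 million swaps, the export/import buffer would be the first candidate:

    # Instance profile (RZ10) parameters sizing the ST02 buffers -- illustrative values only
    abap/buffersize               = 600000     # program (PXA) buffer, in KB
    rsdb/cua/buffersize           = 10000      # CUA (menu) buffer, in KB
    zcsa/presentation_buffer_area = 20000000   # screen buffer, in bytes
    zcsa/table_buffer_area        = 100000000  # generic key table buffer, in bytes
    rtbb/buffer_length            = 30000      # single record table buffer, in KB
    rsdb/obj/buffersize           = 40000      # export/import buffer, in KB
    rsdb/obj/max_objects          = 20000      # max. objects in the export/import buffer

Keep in mind that every buffer increase costs shared memory on the application server, so the new sizes must fit into the available RAM.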

Similar Messages

  • Performance problem in Zstick report...

    Hi Experts,
    I am facing a performance problem in a custom stock report of Material Management.
    In this report I fetch all the materials with their batches to get the desired output; in one run this report processes 36,000+ unique combinations of material and batch.
    The report takes around 30 minutes to execute, and it has to be viewed regularly, every 2 hours.
    To read the batch characteristic values I am using FM '/SAPMP/CE1_BATCH_GET_DETAIL'.
    Is there any way to increase the performance of this report? The output of the report is in ALV.
    Can I have a refresh button in the report so that the data is refreshed automatically without executing it again, or is there any cache memory concept?
    Note: I have declared all the itabs as sorted tables, and all the SELECT queries fetch by key and index.
    Thanks
    Rohit Gharwar

    Hello,
    SE30 is old. Switch on a trace in ST12 while running this program and identify where exactly most of the time is being spent. If you see high CPU time, the problem is in the ABAP code, and the ST12 trace shows you exactly in which program or function module the time is spent. If you see high database time in ST12, the problem is database related, so you have to analyze the SQL statements from the performance traces in ST12. This should resolve your issue.
    Yours Sincerely
    Dileep

  • Performance problem with WPF Viewer CRVS2010

    Hi,
    We are using Crystal Reports 2010 and the new WPF Viewer. Last week, when we set up a test machine to run our integration tests (several hundred), all report tests (about 30) failed with a timeout exception.
    The test machine setup:
    HP DL 580 G5
    VMware ESXi 4.0
    Guest OS: Windows 7 Enterprise 64-bit
    Memory (guest OS): 3GB
    CPU: 1
    Visual Studio 2010
    Crystal Reports for Visual Studio 2010 with 64 bit runtime installed
    Visual Studio 2008 installed
    Microsoft Office 2010 installed
    McAfee antivirus
    There are about 10 other virtual machines on the same HW.
    I think the performance problem is related to text objects on a report document viewed in the WPF Viewer. I made a simple WPF GUI with 2 buttons: the first button executes a very simple report that has only a text object with a few words in it, and the other button executes a similarly simple report with only 1 text object of approx. 100 words (about 800 characters).
    The first report executes and displays almost instantly; the second report executes instantly but displays only after approx. 1 min 30 sec.
    "Execute" in this context means that all the VB.Net code runs without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
    I did another test on the second report and replaced the text object with a formula field containing the same text as the text object, and this test executed and displayed the report instantly.
    So the performance problem seems to have something to do with the rendering of text objects in the WPF Viewer on a virtual machine with the above setup.
    I've made several tests on local machines with Windows XP (32-bit) or Windows 7 (64-bit) installed, and none of them show this performance problem. It's not a critical issue for us, because our users will run this application on their local PCs with Windows 7 64-bit, but it is a bit problematic for our project not being able to run all of our integration tests. I will probably solve this by using a local PC instead.
    Here is the VB.Net code I'm using to view the reports:
        Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
        Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
    The reports are 2 empty default reports with only 1 text object in the details section.
    // Thomas

    See if the KB [1448013 - Connecting to Oracle database. Error; Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
    Also the following may not hurt to have a look at (if only for ideas):
    [1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
    [1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
    [1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup

  • JRC 2: Performance Problem

    Hi.
    Our reporting component used JRC 1.x before we upgraded to JRC 2.x. We ran into two issues after upgrading.
    The first issue I have already solved with a workaround, which I published on stackoverflow.com. (1) Does anyone know where I can find the issue management system to report this issue?
    The second issue is a big performance problem in our project. We opened a report with 6 subreports (each including 1 to 3 tables) in 2-4 seconds using JRC 1. If we open the same report using JRC 2, we wait up to 60 seconds.
    These methods require more time with JRC 2 compared to JRC 1:
    ReportClientDocument#open(String, int);
    SubreportController#setTableLocation(String, ITable, ITable)
    DatabaseController#setTableLocation(ITable, ITable)
    Each invocation of one of these methods takes 2-4 seconds.
    Thank you in advance.
    Best regards
    Thomas
    (1) http://stackoverflow.com/questions/479405/replace-a-database-connection-for-subreports-with-jrc

    Hello ....
    My report is "Crystal Reports 11" => "OLE DB" => "Add Command (select * from table)".
    Code (JRC): Eclipse + Crystal Reports for Eclipse version 2 => "cr4e-all-in-one-win_2.0.1.zip"
    <%@ page contentType="text/html; charset=UTF-8"
    import="
    com.crystaldecisions.report.web.viewer.CrystalReportViewer,
    com.crystaldecisions.reports.sdk.ReportClientDocument,
    com.crystaldecisions.sdk.occa.report.lib.ReportSDKExceptionBase,
    java.sql.Connection,
    java.sql.DriverManager,
    java.sql.ResultSet,
    java.sql.SQLException,
    java.sql.Statement" %>
    <%
         try {
              String reportName = "report.rpt";
              ReportClientDocument clientDoc = new ReportClientDocument();
              clientDoc.open(reportName, 0);
              String tableAlias = "Command";
              clientDoc.getDatabaseController().setDataSource(myResult("SELECT * FROM table"), tableAlias,tableAlias);
              CrystalReportViewer crystalReportPageViewer = new CrystalReportViewer();
              crystalReportPageViewer.setReportSource(clientDoc.getReportSource());
              crystalReportPageViewer.processHttpRequest(request, response, application, null);
     } catch (ReportSDKExceptionBase e) {
          e.printStackTrace();
          out.println(e);
     }
    %>
    I simplified the code; myResult("SELECT * FROM table") is absolutely no problem,
    and this code runs with absolutely no problem in Crystal Reports for Eclipse version 1,
    but in version 2 it fails with this error:
    com.crystaldecisions.sdk.occa.report.lib.ReportSDKException: 無法預期的資料庫連線器錯誤 (unexpected database connector error) ---- Error code:-2147467259 Error code name:failed
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.if(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.call(Unknown Source)
         at com.crystaldecisions.reports.common.ThreadGuard.syncExecute(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.for(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.int(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.request(Unknown Source)
         at com.businessobjects.sdk.erom.jrc.a.a(Unknown Source)
         at com.businessobjects.sdk.erom.jrc.a.execute(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.RemoteAgent$a.execute(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.CommunicationChannel.a(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.RemoteAgent.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.if(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.new(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.b9.onDataSourceChanged(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.setDataSource(Unknown Source)
         at org.apache.jsp.No_005f1.Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp._jspService(Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp.java:106)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
         at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
         at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
         at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
         at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
         at java.lang.Thread.run(Unknown Source)
    Caused by: com.crystaldecisions.reports.common.QueryEngineException: 無法預期的資料庫連線器錯誤 (unexpected database connector error)
         at com.crystaldecisions.reports.queryengine.Connection.bf(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.z3(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.bL(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.zM(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Connection.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.if(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.try(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.u7(Unknown Source)
         at com.crystaldecisions.reports.datafoundation.DataFoundation.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.dfadapter.DFAdapter.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.dfadapter.CheckDatabaseHelper.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.datafoundation.CheckDatabaseCommand.new(Unknown Source)
         at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
         at com.crystaldecisions.reports.common.Document.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.VerifyDatabaseCommand.new(Unknown Source)
         at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
         at com.crystaldecisions.reports.common.Document.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.f.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.if(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.do(Unknown Source)
         ... 39 more
    Please help me and tell me why....

  • Report painter performance problem...

    I have a client which runs a report group consisting of 14 reports... When we run this program, it takes about 20 minutes to get results... I was assigned to optimize this report...
    This is what I've done so far
    (this is a SAP generated program)...
    1. I've checked the tables that the program uses... (a customized table with more than 20,000 entries, and many others)
    2. I've created secondary indexes on the main customized table (the one with 20,000 entries) - it improved the performance a bit (results in about 18 minutes)...
    3. I divided the report group into 4... 3 reports per report group... It greatly improves the performance... (but this is not what the client wants)...
    4. I've read an article about report group performance saying that it is a bug
    (SAP support recognized the fact that we are dealing with a bug in the SAP standard functionality):
    http://it.toolbox.com/blogs/sap-on-db2/sap-report-painter-performance-problem-26000
    Anyone have the same problem as mine?

    Report Painter/Writer always creates performance issues; I have never preferred them, since I have the option of a Z report.
    For now you can do only one thing: put more checks on the selection screen for filtering the data. I think that's the only way.
    Amit.

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding the thread "Interactive report performance problem over database link" that was posted by Samo.
    The issue that I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by 45 seconds.
    The query is like this (due to sensitivity issues, I cannot disclose the real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote (non-Oracle) tables which are connected using Oracle Gateway.
    Now if I run the above query without the apex_item.checkbox function, the response is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have resolved the issue by creating a collection (sketched below), but it's not a good practice.
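    For reference, the collection workaround mentioned above can be sketched roughly like this (hedged: APEX_COLLECTION is the documented APEX API, but the collection name here is hypothetical). The remote join runs once, without apex_item.checkbox, and the report region then reads from the collection:
    BEGIN
      IF NOT apex_collection.collection_exists('REMOTE_ROWS') THEN
        apex_collection.create_collection_from_query(
          p_collection_name => 'REMOTE_ROWS',
          p_query           => 'SELECT b.col3, a.col1, a.col2
                                  FROM table_one a, table_two b
                                 WHERE a.col3 = 12345
                                   AND a.col4 = 100
                                   AND b.col5 = a.col5');
      END IF;
    END;
    The report query then becomes SELECT apex_item.checkbox(1, c001), c002, c003 FROM apex_collections WHERE collection_name = 'REMOTE_ROWS', so the expensive remote join is paid only once per session.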
    I would like to get ideas from people on how to resolve or speed up the query.
    Any idea how to use sub-factoring for the above scenario? Other methods (creating a view or a materialized view) are not an option.
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source? It looks like they're possibly not), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with the WITH blah AS (SELECT... syntax. In most circumstances this 'materialises' the results of the inner select statement: we 'get' the results, then do something with them afterwards. It's a handy trick when dealing with remote sites, as sometimes you want the remote database to do the work. The reason I ask you to use the MATERIALIZE hint for testing is just to force this; in 99.99% of cases it can be removed later. The WITH statement is also handled differently from an inline view like SELECT * FROM (SELECT..., but the same result can be mimicked with a NO_MERGE hint, illustrated below.
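    As a quick hedged illustration of that last point (reusing your hypothetical table_one), the inline-view equivalent would look like this; the NO_MERGE hint stops Oracle from merging the view back into the outer query:
    SELECT /*+ NO_MERGE(v) */ v.col1, v.col2
    FROM (SELECT col1, col2, col5
          FROM table_one) v
    WHERE v.col5 = 100;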
    Looking at your case, I would be interested to see what the explain plans and results would be for something like the following two statements (sorry - you're going to have to check them, it's late!):
    WITH a AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_one),
    b AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_two),
    sourceqry AS
      (SELECT b.col3 x
             , a.col1 y
             , a.col2 z
       FROM a
           , b
       WHERE a.col3 = 12345
       AND   a.col4 = 100
       AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1, x), y, z
    FROM sourceqry
    and the second:
    WITH a AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_one),
    b AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_two)
    SELECT apex_item.checkbox(1, b.col3), a.col1, a.col2
    FROM a
        , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5
    If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results as each other, but different to the original query.
    We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This normally hinders tuning, but I don't think it is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus, etc.?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Performance problems with abap report SAPLPRGN_STRUCTURE during users login

    Hello,
    after importing the patches for SAP BASIS level 21 and SAP ABAP level 21, our SAP production system (ECC6 Unicode, kernel 240) shows performance problems. A lot of our users must wait 10-12 minutes during the logon phase before the navigation menu is displayed. The problem is caused by the slow performance of the standard SAP report SAPLPRGN_STRUCTURE, which requires a lot of time. Any idea?
    Thanks

    Hi Pat,
    I would stop SAP and use DLTR3PKG. If this doesn't help, you need to investigate in ST03 whether it is a CPU or DB-time issue, and then proceed accordingly ...
    Regards
    Volker Gueldenpfennig, consolut international ag
    http://www.consolut.net - http://www.4soi.de - http://www.easymarketplace.de

  • Performance problems with SAP GUI 7.10 and BEx 3.5 Patch 400?

    Hi everybody,
    we installed SAP GUI 7.10 and BEx 3.5 Patch 400 and detected huge performance problems with this version in comparison to SAP GUI 6.40 with BEx 3.5, or with BEx 7.0 Patch 800.
    Has anybody noticed the same problems?
    Best regards,
    Ulli

    The most important question when you are talking about performance issues:
    which OS are you working on, and which Excel version?
    ciao
    Joke

  • Performance problems with DFSN, ABE and SMB

    Hello,
    We have identified a problem with DFS-Namespace (DFSN), Access Based Enumeration (ABE) and SMB File Service.
    Currently we have two Windows Server 2008 R2 servers providing the domain-based DFSN in functional level Windows Server 2008 R2 with activated ABE.
    The DFSN servers have the most current hotfixes for DFSN and SMB installed, according to http://support.microsoft.com/kb/968429/en-us and http://support.microsoft.com/kb/2473205/en-us
    We have only one AD-site and don't use DFS-Replication.
    Servers have 2 Intel X5550 4 Core CPUs and 32 GB Ram.
    Network is a LAN.
    Our DFSN looks like this:
    \\contoso.com\home
        Contains 10,000 Links
        Drive mapping on clients to subfolder \\contoso.com\home\username
    \\contoso.com\group
        Contains 2500 Links
        Drive mapping on clients directly to \\contoso.com\group
    On \\contoso.com\group we serve different folders for teams, projects and other groups with different access permissions based on AD groups.
    We have to use ABE, so that users see only accessible Links (folders)
    We encounter enterprise-wide performance problems, sometimes multiple times a day, for about 30 seconds when accessing our namespaces.
    After six weeks of researching and analyzing we were able to identify the exact problem.
    Administrators create a new DFS-Link in our Namespace \\contoso.com\group with correct permissions using the following command line:
    dfsutil.exe link \\contoso.com\group\project123 \\fileserver1\share\project123
    dfsutil.exe property sd grant \\contoso.com\group\project123 CONTOSO\group-project123:RX protect replace
    This is done a few times a day.
    There is no possibility to create the link and set the permissions in one step.
    The DFSN process on our DFSN servers creates the new link and the corresponding folder in C:\DFSRoots.
    At this time, we have for example 2000+ clients having an active session to the root of the namespace \\contoso.com\group.
    Active session means a Windows Explorer opened to the mapped drive or to any subfolder.
    The file server process (Lanmanserver) sends a change notification (SMB protocol) to each client with an active session to \\contoso.com\group.
    All the clients which receive the notification then start to refresh the folder listing of \\contoso.com\group.
    This was identified by a network trace on our DFSN servers and on different clients.
    Due to ABE, the servers have to compute the folder listing for each request.
    The DFS service on the servers doesn't respond to any additional requests for probably 30 seconds. CPU usage increases significantly over this period and goes back to normal afterwards: on our hardware from about 5% to 50%.
    Users can't access all DFS-Namespaces during this time and applications using data from DFS-Namespace stop responding.
    Side effect: Windows reports on clients a slow-link detection for \\contoso.com\home, which can be offline available for users (described here for WAN-connections: http://blogs.technet.com/b/askds/archive/2011/12/14/slow-link-with-windows-7-and-dfs-namespaces.aspx)
    The problem doesn't occur when creating a link in \\contoso.com\home, because users only have mappings to subfolders.
    Currently, the problem also doesn't occur for \\contoso.com\app, because users usually don't use Windows Explorer to access this mapping.
    Disabling ABE reduces the DFSN freeze time, but doesn't solve the problem.
    Problem also occurs with Windows Server 2012 R2 as DFSN-server.
    There is a registry key available for clients to avoid the response to the change notification (NoRemoteChangeNotify, see http://support.microsoft.com/kb/812669/en-us).
    This might fix the problem with DFSN, but results in other problems for the users. For example, they have to press F5 for refreshing every remote directory on change.
    Is there a possibility to disable the SMB change notification on server side ?
    TIA and regards,
    Ralf Gaudes

    Hi,
    Thanks for posting in Microsoft Technet Forums.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Regards.

  • Performance problems with File Adapter and XI freeze

    Hi NetWeaver XI geeks,
    We are deploying an XI-based product and are encountering some huge performance problems. Here are the scenario and the issues:
    - NetWeaver XI 2004
    - SAP 4.6c
    - Outbound Channel
    - No mapping used and only the iDocs Adapter is involved in the pipeline processing
    - File Adapter
    - message file size < 2 KB
    We have zeroed the problem down to the IDoc adapter's performance.
    We are using a file channel: every 15 seconds a file in valid IDoc format is placed in a folder; the IDoc adapter picks up the file from this folder and sends it to the SAP R/3 instance.
    For a few minutes (approx. 5 mins) it works (the CPU usage is less than 20%, even though the processing time seems huge: 5 sec/msg), but after this time the application gets blocked and the CPU is overloaded at 100% (2 disp+work.exe processes at 50% each).
    If we inject several files into the source folder at the same time, or if we decrease the time gap between the creation of 2 IDoc files (from 15 seconds to 10 seconds), the process blocks after posting 2-3 IDocs to SAP R/3.
    Could you point us to some reasons that could provoke this behavior?
    Basically we are looking for help in improving the performance of the IDoc adapter.
    Thanks in advance for your help and regards,
    Adalbert

    Hi Bhavesh,
    Thanks for your suggestions. We will test...
    We wonder whether the hardware is the cause of this extremely poor performance.
    Our XI server is:
    •     Windows 2003 Server
    •     Processors: 2x3GHZ
    •     RAM: 4GB (the memory is not saturated)
    The messages are well-formed IDocs = single-line INVOICEs.
    Some posts talk about 2000 messages processed in a few seconds... whereas we get 5 sec per message.
    Thanks for your help.
    Adalbert

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record, using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records passed as a batch (terrible), or 160 seconds for one record (unacceptable!). How can it take so much longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but we were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL, both with XMLFOREST and with XMLTABLE + ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs rather than processing all records at once within the query; a rough sketch follows this list. The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
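    As mentioned two bullets above, here is a rough, hedged sketch of the explicit-cursor conversion workaround (legacy table and column names are hypothetical stand-ins for our 10 relational tables):
    DECLARE
      CURSOR emp_ids IS SELECT DISTINCT emp_id FROM legacy_emp;
    BEGIN
      FOR r IN emp_ids LOOP
        -- one employee per iteration avoids the hang seen beyond 800 records
        INSERT INTO records (ssn, xmlrec)
        SELECT e.emp_id,
               XMLELEMENT("Root",
                 XMLFOREST(e.emp_id AS "Id", e.hire_date AS "HireDate"))
        FROM legacy_emp e
        WHERE e.emp_id = r.emp_id;
      END LOOP;
      COMMIT;
    END;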
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
      <Id>123456789</Id>
      {for $e in $r/Element
       return
       <Element>
         <Subelement1>
           {$e/Subelement1/Code}
           <Description>
             {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
           </Description>
         </Subelement1>
         <Subelement2>
           {$e/Subelement2/Code}
           <Description>
             {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
           </Description>
         </Subelement2>
         <Subelement3>
           {$e/Subelement3/Code}
           <Description>
             {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
           </Description>
         </Subelement3>
       </Element>}
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record (see the sketch after the plan below). There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization when joining the code tables. I am not sure if registering a schema would help; using structured storage probably would. But should that be necessary, given we're working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
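    For comparison, here is a hedged sketch of the functionally-similar relational query mentioned above, assuming the coded values lived in a hypothetical relational RECORD_CODES table instead of inside the XML; the optimizer is then free to hash-join CODES:
    SELECT rc.ssn, rc.code, c.description
    FROM record_codes rc, codes c
    WHERE rc.ssn = '10000'
    AND   c.code = rc.code;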
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I've done the best I can, constraining the main record to a single row passed to the XMLQUERY. Given Mark's post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too? That will make the query much more complicated, but right now we're more concerned about performance than complexity.
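    Following up on that last question, a hedged sketch of what joining the code table in the SQL WHERE clause could look like, using XMLTABLE against the test schema from this post (shown for Subelement1 only), so that the optimizer sees a plain relational join to CODES:
    SELECT x.code, c.description
    FROM records r,
         XMLTABLE('/Root/Element/Subelement1/Code'
                  PASSING r.xmlrec
                  COLUMNS code VARCHAR2(4) PATH '.') x,
         codes c
    WHERE r.ssn = '10000'
    AND   c.code = x.code;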

  • Performance Problems - Index and Statistics

    Dear Gurus,
    I am having problems with indexes and statistics on cubes. It seems my indexes are flagged as too old, although in fact they are not (they were created just a month back); we check the indexes daily and the Manage tab returns RED.
    please help

    Dear Mr Syed,
    The solution steps I mentioned in my previous reply already explain the so-called re-org of tables; however, to clarify more on that issue:
    Occasionally, the Oracle Cost-Based Optimizer may calculate the estimated costs for a full table scan lower than those for an index scan, although the actual runtime of an access via an index would be considerably lower than the runtime of the full table scan. Some points should be considered in order to improve performance in problem areas such as extensive run times for change runs and aggregate activation and fill-ups.
    Performance problems based on a wrong optimizer decision indicate that something serious is missing at the database level, and we need to re-org the degenerated indexes in order to improve the overall performance and avoid daily manual (RSRV + RSNAORA) activities on almost identical indexes.
    For re-organizing degenerated indexes, 3 options are available:
    1) DROP INDEX ..., and CREATE INDEX ...
    2) ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)
    3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]
    Each option has its pros & cons; option 2 seems to have the most advantages.
    Advantages of option 2:
    1) Fast storage in a different tablespace is possible
    2) Creates a new index tree
    3) Gives the option to change storage parameters without deleting the index
    4) As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
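    Since the thread is about indexes and statistics, it is worth adding that stale optimizer statistics are the other common cause of the wrong CBO decision described above, and refreshing them is far cheaper than a rebuild. A hedged example (the schema and fact table names are hypothetical placeholders):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SAPR3',
        tabname => '/BIC/FZSALES',
        cascade => TRUE);  -- TRUE refreshes the statistics of the table's indexes as well
    END;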
    I would still let the database tech team be the judge and take the call on these.
    This modus operandi could be institutionalized for all the affected cubes and their indexes as well.
    However, I leave the thoughts with you.
    Hope it Helps
    Chetan
    @CP..

  • Curious performance problem

    Hello,
    I have a very curious performance problem. I have a query which returns 0 rows and takes around 9 seconds to execute in TopLink. If I execute the generated SQL for that ReadAllQuery (taken from the log) directly through JDBC, it takes only 70 ms. I use TopLink 9.0.3 with Oracle9i 9.2.0.3. I've traced through the sources and identified that the problem is not in TopLink directly, but in the call to the Oracle JDBC driver. But then I don't understand why it is so fast in my plain JDBC case. The problem is the same no matter whether I use the thin or the OCI driver.
    I've prepared a little test to show it:
    import com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO;
    import java.util.Vector;
    import oracle.toplink.expressions.Expression;
    import oracle.toplink.expressions.ExpressionBuilder;
    import oracle.toplink.queryframework.ReadAllQuery;
    import oracle.toplink.queryframework.SQLCall;
    import oracle.toplink.sessions.DatabaseSession;
    import oracle.toplink.sessions.DefaultSessionLog;
    import oracle.toplink.sessions.Project;
    import oracle.toplink.tools.profiler.PerformanceProfiler;
    import oracle.toplink.tools.workbench.XMLProjectReader;
    /** @author mstraka */
    public class ToplinkTest {
    public static void main(String[] args) {
    try {
    // Pure JDBC test
    String sql =
    "SELECT object_type, MESSAGENUMBER, object_id, MESSAGETYPE, TIMESTAMP, VALUE1, POTORDER, " +
    "VALUE2, VALUE3, ORDERNUMBER, VALUE4, POTNAME, ISINCOMINGMESSAGE " +
    "FROM POTMESSAGELOG " +
    "WHERE " +
    "((((TIMESTAMP >= TO_DATE('2003-07-21 15:00:00', 'YYYY-MM-DD HH24:MI:SS')) " +
    "AND (TIMESTAMP <= TO_DATE('2003-07-21 16:00:00', 'YYYY-MM-DD HH24:MI:SS'))) " +
    "AND ((POTORDER >= 1) AND (POTORDER <= 172))) AND " +
    "(object_type = 'com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO')) " +
    "ORDER BY TIMESTAMP ASC";
    Class.forName("oracle.jdbc.driver.OracleDriver");
    java.sql.Connection con = java.sql.DriverManager.getConnection("jdbc:oracle:oci8:@katka", "sco", "sco");
    long time = System.currentTimeMillis();
    java.sql.PreparedStatement ps = con.prepareStatement(sql);
    java.sql.ResultSet rs = ps.executeQuery();
    int rows = 0;
    while (rs.next()) {
        rows++;
    }
    System.out.println("*** Pure JDBC test ****");
    System.out.println("Rows: " + rows);
    System.out.println("JDBC Time: " + String.valueOf(System.currentTimeMillis() - time) + " ms");
    rs.close();
    ps.close();
    con.close();
    // TopLink test
    XMLProjectReader xmlReader = new XMLProjectReader();
    Project project = xmlReader.read("./config/bc/tlproject.xml");
    project.getLogin().setUserName("sco");
    project.getLogin().setPassword("sco");
    DatabaseSession dbSession = project.createDatabaseSession();
    dbSession.logMessages();
    DefaultSessionLog log = (DefaultSessionLog) dbSession.getSessionLog();
    log.logDebug();
    log.logExceptions();
    log.logExceptionStackTrace();
    log.printDate();
    dbSession.login();
    java.util.Calendar cal = java.util.Calendar.getInstance();
    cal.set(java.util.Calendar.YEAR, 2003);
    cal.set(java.util.Calendar.MONTH, 6);
    cal.set(java.util.Calendar.DAY_OF_MONTH, 21);
    cal.set(java.util.Calendar.HOUR_OF_DAY, 15);
    cal.set(java.util.Calendar.MINUTE, 0);
    cal.set(java.util.Calendar.SECOND, 0);
    cal.set(java.util.Calendar.MILLISECOND, 0);
    ExpressionBuilder eb = new ExpressionBuilder();
    Expression ex = eb.get("timestamp").greaterThanEqual(new java.sql.Date(cal.getTimeInMillis()));
    cal.set(java.util.Calendar.HOUR_OF_DAY, 16);
    ex = ex.and(eb.get("timestamp").lessThanEqual(new java.sql.Date(cal.getTimeInMillis())));
    Expression pot = eb.get("potOrder").greaterThanEqual(1);
    pot = pot.and(eb.get("potOrder").lessThanEqual(172));
    dbSession.setProfiler(new PerformanceProfiler());
    ReadAllQuery rq = new ReadAllQuery(PotMessageLogJDO.class);
    rq.setSelectionCriteria(ex.and(pot));
    rq.addAscendingOrdering("timestamp");
    time = System.currentTimeMillis();
    Vector result = (Vector)dbSession.executeQuery(rq);
    System.out.println("*** TopLink ReadAllQuery test ****");
    System.out.println("Rows: " + result.size());
    System.out.println("TopLink Time: " + String.valueOf(System.currentTimeMillis() - time) + " ms");
    time = System.currentTimeMillis();
    result = (Vector)dbSession.executeSelectingCall(new SQLCall(sql));
    System.out.println("*** TopLink direct SQL test ****");
    System.out.println("Rows: " + result.size());
    System.out.println("TopLink SQL Time: " + String.valueOf(System.currentTimeMillis() - time) + " ms");
    } catch (Exception e) {
        e.printStackTrace();
    }
    }  // end main
    }  // end class ToplinkTest
    ...and here is the output from the run:
    *** Pure JDBC test ****
    Rows: 0
    JDBC Time: 62 ms
    2003.07.21 06:07:44.127--DatabaseSession(30752603)--Connection(20092482)--TopLink, version:TopLink - 9.0.3 (Build 423)
    2003.07.21 06:07:44.736--DatabaseSession(30752603)--Connection(20092482)--connecting(DatabaseLogin(
         platform => OraclePlatform
         user name => "sco"
         datasource URL => "jdbc:oracle:oci8:@katka"
    2003.07.21 06:07:44.799--DatabaseSession(30752603)--Connection(20092482)--Connected: jdbc:oracle:oci8:@katka
         User: SCO
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.3.0 - Production
         Driver: Oracle JDBC driver Version: 9.2.0.1.0
    2003.07.21 06:07:44.971--DatabaseSession(30752603)--#executeQuery(ReadAllQuery(com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO))
    Begin Profile of{ReadAllQuery(com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO)
    2003.07.21 06:07:45.002--DatabaseSession(30752603)--Connection(20092482)--SELECT object_type, MESSAGENUMBER, object_id, MESSAGETYPE, TIMESTAMP, VALUE1, POTORDER, VALUE2, VALUE3, ORDERNUMBER, VALUE4, POTNAME, ISINCOMINGMESSAGE FROM POTMESSAGELOG WHERE ((((TIMESTAMP >= {ts '2003-07-21 15:00:00.0'}) AND (TIMESTAMP <= {ts '2003-07-21 16:00:00.0'})) AND ((POTORDER >= 1) AND (POTORDER <= 172))) AND (object_type = 'com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO')) ORDER BY TIMESTAMP ASC
    Profile(ReadAllQuery,
         class=com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO,
         total time=9453,
         local time=9453,
         query prepare=15,
         sql execute=9422,
    } End Profile
    *** TopLink ReadAllQuery test ****
    Rows: 0
    TopLink Time: 9468 ms
    2003.07.21 06:07:54.439--DatabaseSession(30752603)--#executeQuery(DataReadQuery())
    Begin Profile of{DataReadQuery()
    2003.07.21 06:07:54.439--DatabaseSession(30752603)--Connection(20092482)--SELECT object_type, MESSAGENUMBER, object_id, MESSAGETYPE, TIMESTAMP, VALUE1, POTORDER, VALUE2, VALUE3, ORDERNUMBER, VALUE4, POTNAME, ISINCOMINGMESSAGE FROM POTMESSAGELOG WHERE ((((TIMESTAMP >= TO_DATE('2003-07-21 15:00:00', 'YYYY-MM-DD HH24:MI:SS')) AND (TIMESTAMP <= TO_DATE('2003-07-21 16:00:00', 'YYYY-MM-DD HH24:MI:SS'))) AND ((POTORDER >= 1) AND (POTORDER <= 172))) AND (object_type = 'com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO')) ORDER BY TIMESTAMP ASC
    Profile(DataReadQuery,
         total time=0,
         local time=0,
    } End Profile
    *** TopLink direct SQL test ****
    Rows: 0
    TopLink SQL Time: 16 ms
    Thanks a lot!
    Marcel

    Marcel,
    TopLink supports native SQL generation that will use the TO_DATE operators. You can turn on native SQL in a couple of ways:
    1. SESSIONS.XML
              <login>
                   <platform-class>oracle.toplink.internal.databaseaccess.OraclePlatform</platform-class>
                   <user-name>user</user-name>
                   <password>password</password>
                   <uses-native-sequencing>true</uses-native-sequencing>
              </login>
    2. Through DatabaseLogin API:
    After the project is read in or instantiated:
    project.getLogin().useNativeSQL();
    This should get you the SQL you need and address your performance issue.
    Doug

  • Performance problem with Mavericks.

    Performance problem with Mavericks. My Mac is extremely slow after upgrading to Mavericks. What can I do to solve that?

    If you are still experiencing slowdown issues, it may be because of a few other reasons.
    Our experience with OS X upgrades, and Mavericks is no exception, is that users have installed a combination of third-party software and/or hardware that is incompatible and/or outdated, and that causes many negative performance issues when upgrading to a new OS X version.
    Your Mac's hard drive may be getting full.
    Do you run any antivirus software on your Mac? Commercial Antivirus software can slow down and negatively impact the normal operation of OS X.
    Do you have apps like MacKeeper, or other maintenance apps like CleanMyMac 1 or 2 or TuneUpMyMac, installed on your Mac? These types of apps, while they appear to be helpful, can do too good a job of data "cleanup", with the potential to cause serious data corruption or data deletion and render a perfectly running OS completely dead and useless, leaving you with a frozen, non-functional Mac.
    Your Mac may have way too many applications launching at startup/login.
    Your Mac may have old, non-updated or incompatible software installed.
    Your Mac could have incompatible or outdated web browser extensions, plugins or add-ons.
    Your Mac could have connected third party hardware that needs updated device drivers.
    It would help us to help you if we could have some more technical info about your iMac.
    If you so choose, please download, install and run Etrecheck.
    Etrecheck was developed as a simple Mac diagnostic report tool by a regular Apple Support forum user and technical support contributor named Etresoft. Etrecheck is a small, unobtrusive app that compiles a static snapshot of your entire Mac hardware system and installed software.
    This is a free app that has been honestly created to provide help in diagnosing issues with Macs running the new OS X 10.9 Mavericks.
    It is not malware and can be safely downloaded and installed onto your Mac.
    http://www.etresoft.com/etrecheck
    Copy/paste and post its report here in another reply so that we have a complete profile of your Mac's hardware and installed software and can all continue to help with your Mac's performance issues.
    Thank you.

  • (new?) performance problem using jDriver after a Sql Server 6.5 to 2000 conversion

    Hi,
    This is similar - yet different - to a few of the old postings about performance
    problems with using JDBC drivers against SQL Server 7 & 2000.
    Here's the situation:
    I am running a standalone java application on a Solaris box using BEA's jdbc driver
    to connect to a Sql Server database on another network. The application retrieves
    data from the database through joins on several tables for approximately 40,000
    unique ids. It then processes all of this data and produces a file. We tuned
    the app so that the execution time for a single run through the application was
    24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
    a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
    Sql Server 2000 version. I ran the app and got an alarming execution time of
    5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
    and set the "useVarChars" property to "true" on the driver. The execution time
    for a single run through the application is now 56 minutes.
    56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
    over twice the execution time that I was seeing against the 6.5 database. Theoretically,
    I should be able to switch out my jdbc driver and the DBMS conversion should be
    invisible to my application. That would also mean that I should be seeing the
    same execution times with both versions of the DBMS. Has anybody else seen a
    similar situation? Are there any other settings or fixes that I can put into place
    to get my performance back down to what I was seeing with 6.5? I would rather
    not have to go through and perform another round of performance tuning after having
    already done this when the app was originally built.
    thanks,
    mike

    Mike wrote:
    Joe,
    This was actually my next step. I replaced the BEA driver with
    the MS driver and let it run through with out making any
    configuration changes, just to see what happened. I got an
    execution time of about 7 1/2 hrs (which was shocking). So,
    (comparing apples to apples) while leaving the default unicode
    property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
    I then set the 'SendStringParametersAsUnicode' to 'false' on the
    MS driver and ran another test. This time the application
    executed in just over 24 minutes. The actual runtime was 24 min
    16 sec, which is still ever so slightly above the actual runtime
    against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
    56 minutes that BEA's driver was giving me.
    I think that this is very interesting. I checked to make sure that
    there were no outside factors that may have been influencing the
    runtimes in either case, and there were none. Just to make sure,
    I ran each driver again and got the same results. It sounds like
    there are no known issues regarding this?
    We have people looking into things on the DBMS side and I'm still
    looking into things on my end, but so far none of us have found
    anything. We'd like to continue using BEA's driver for the
    support and the fact that we use Weblogic Server for all of our
    online applications, but this new data might mean that I have to
    switch drivers for this particular application.
    Thanks. No, there is no known issue, and if you put a packet sniffer
    between the client and DBMS, you will probably not see any appreciable
    difference in the content of the SQL sent be either driver. My suspicion is
    that it involves the historical backward compatibility built in to the DBMS.
    It must still handle several iterations of older applications, speaking obsolete
    versions of the DBMS protocol, and expecting different DBMS behavior!
    Our driver presents itself as a SQL7-level application, and may well be treated
    differently than a newer one. This may include different query processing.
    Because our driver is deprecated, it is unlikely that it will be changed in
    future. We will certainly support you using the MS driver, and if you look
    in the MS JDBC newsgroup, you'll see more answers from BEA folks than
    from MS people!
    Joe
    Mike
    The next test you should do, to isolate the issue, is to try another JDBC driver.
    MS provides a type-4 driver now, for free. If it is significantly faster, it would
    be interesting. However, it would still not isolate the problem, because we still
    would need to know what query plan is created by the DBMS, and why.
    Joe Weinstein at BEA
    PS: I can only tell you that our driver has not changed in its semantic function.
    It essentially sends SQL to the DBMS. It doesn't alter it.

Maybe you are looking for

  • Can I set up a Non .mac Email Pop mail client Alias for sending SMTP mail?

    I have a yahoo.com POP mail account - [email protected] When I'm lgged into Yahoo mail, this account can receive and send mail from serveral aliases - [email protected], [email protected], etc. Using Tiger's mail.app, I can log into the Yahoo smtp se

  • How Do I Share Mac Volumes on Windows 7?

    Out of necessity in my business, I have a new Dell with Windows 7 on my network. I have networked it with two Macs running 10.4 and two running 10.5. The user volume for all 4 Macs shows up on the Windows machine, but the main volume that contains ev

  • Audio only works half the time

    Product name: HP 2000 Notebook PC Product number: C2N26UA#ABA Windows 8 64-bit  Sometimes my audio works, sometimes it doesn't. It was just working fine on a youtube video, then when I went to a different video I suddenly had no audio. Itunes doesn't

  • Downloaded a Song that doesnt work after 1 min

    I downloaded a song and it just cuts off and stops playing about 1 minute into the song. The song is over 3 mins long. What can i do to get this fixed?

  • Leadtec v1.07 Nforce driver query

    Hi all, I recently installed the Leadtec v1.07 drivers as I believe they are the newest Nforce drivers around and they are all WHQL'd. However, now in Device Manager under the System Devices section, all the Nvidia items (Memory controllers, AGP to P