Strange performance issue

Hi,
I've been programming Java professionally for 10 years and I've seen some weird sh*t over the years, but this one takes the cake:
I have a huge, complex application that does all kinds of funny stuff with doubles. In a specific test case, one method is called ~3000 times and accounts for 25% of total execution time, so I figured I'd optimize it a bit.
To cut a long story short, I discovered by accident that ADDING one single line to this one method consistently reduced total test time from 3.9 sec to 2.6 sec:
m_Limits = new double[1];
m_Limits is a private member variable that is NEVER accessed anywhere else - this is the only place this member is ever touched.
Changing it to
if (m_Limits == null) m_Limits = new double[1];
took off an additional 0.2 sec, but moving it to the constructor removed the optimization entirely. The same happens if m_Limits is declared as a local variable.
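For reference, here is a minimal sketch of the shape of the change being described. The class and method names are invented for illustration; this is not the actual application code:
public class HotMethodSketch {
    // Written once here and never read anywhere else.
    private double[] m_Limits;

    // Hot method, called ~3000 times in the test case.
    void compute() {
        // The single added line that unexpectedly cut total test time:
        m_Limits = new double[1];
        // ... the rest of the double-heavy work would go here ...
    }

    // The guarded variant that was reported as slightly faster still:
    void computeGuarded() {
        if (m_Limits == null) m_Limits = new double[1];
        // ... same work ...
    }
}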
Have I run into some kind of HotSpot fluke - has anyone else ever experienced anything like this?

The code is not all that big, but it would take me quite a while to isolate it - time that I haven't really got. Anyway, I doubt you'd see the same behaviour out of context.
The thing is that this is part of a larger optimization and it needs to be there for other reasons, so I don't really need to know why it's faster. I'm happy for whatever speed increases I can get.
Not sure why I even posted this, as I have neither the time nor the motivation to pursue it. Spur of the moment I guess.
Thanks for reading though ;)

Similar Messages

  • Strange performance issue with 3510/3511 SAM-FS disk cache

    Hi there!
    I'm running a small SAM-QFS environment and have a strange performance issue on the disk storage part, which somebody here might be able to explain.
    Configuration: one 3510, dual controller, RAID-5 9+1, one hot spare and one disk not configured for whatever reason. The R5 logical drive hosts a 150GB LUN for SAM-QFS metadata (mm in SAM-FS speak) and a 1TB LUN for data (mr in SAM-FS speak). Further, there are two small LUNs (2GB, 100GB) for some other purpose. Those two LUNs have nearly no I/O. All disks are SUN146G. Host connection is 2GBit, multipathing enabled and working.
    Then the disk cache became too small, and the customer added a 3511 expansion unit with SUN300G disks. One logical drive is a RAID-1, 1+1, used for NetBackup catalog. The other is a RAID-5, 8+1, providing two LUNs: 260GB SAM-FS metadata (mm) and 1.999TB SAM-FS data (mr).
    For SAM-FS, the LUNs form two file systems: one "residing" in the 3510, the other "residing" in the 3511 expansion. Cabling is according to the manual and was checked several times by several independent people. The operating system is Solaris 10, the hardware is a V880.
    The problem we observe: SAM-FS I/O on LUNs on disks inside the 3510 is fine. With iostat, I see 100MB/s read and 50MB/s write at the same time. On the SAM-FS file system which is running on the two LUNs in the 3511, the limit seems to be 40MB/s read/write. Both SAM-FS file systems are configured the same with regard to block size.
    When I have activity on both SAM-FS file systems, I see 100MB/s+ on the LUNs running inside the controller shelf and another 40MB/s on the disks running in the 3511 expansion chassis. So, the controller is easily capable of handling 150MB/s.
    Cache settings in the 3510 controller are default I think (wasn't installed by me), batteries are fine.
    Is this 40MB/s we experience a limitation of the expansion shelf? I don't think so. Does anybody have any ideas on this? What parameters should I check or change? Any hint appreciated. I can also provide further details if needed. Thank you.
    wolfgang

    SUN300G disks sound like 300GB FC disks.
    Depending on how many files are in the SAMFS file system, sharing the mm and mr devices on the same RAID array can be a pretty horrible idea. In my opinion and experience, it's almost always better to NEVER put more than one LUN on a RAID array. Period. Putting more than one LUN on an array results in IO contention on that array. And large, unnaturally configured (9+1? Why?) RAID arrays will have problems from the start.
    What are the block sizes used on the RAID arrays? It wouldn't surprise me to see that the RAID array on the expansion tray has a very large block size. Larger block sizes are, in general, not better. Especially for SAMFS metadata - which IIRC is something like 8k or 16k blocks.
    I suspect what is happening is most of the metadata updates are going to the mm device on the new array, contending with the IO operations on the file data.
    How much space is left on each mm device? What does "iostat -sndxz 2" show when you're having the IO problems?

  • Can't access root share sometimes and some strange performance issues

    Hi :)
    I'm sometimes getting error 0x80070043 "The network name cannot be found" when accessing \\dc01 (the root), but can access shares via \\dc01\share.
    When I get that error, I also don't get the network drive hosted on that server (set via Group Policy); it fails with this error:
    The user 'W:' preference item in the 'GPO Name' Group Policy Object did not apply because it failed with error code '0x80070008 Not enough storage is available to process this command.' This error was suppressed.
    The client is Windows Server 2012 Remote Desktop and the file server is 2012 too, on a VMware host.
    Then I log off and back on, and there are no issues.
    Maybe related, and maybe where the problem is: when I have the issue above, and sometimes when I don't (the network drive is added fine), I have some strange performance issues on the share/network drive: Word, Excel and PDF files open very slowly. Office says "Contacting \\dc01\share..." for 20-30 seconds and then opens them. Text files don't have that problem.
    I have a DC02 server, also 2012, with no issues like this.
    Any tips on how to troubleshoot?

    Hi,
    Based on your description, you could access shares on the DC via \\dc01\share, but you couldn't access shares via \\dc01.
    Please check the Network Path in the Properties of the shared folders first. If the network path is \\dc01\share, you should access the shared folder by using \\dc01\share.
    And when you configure Drive Maps via domain group policy, you should also type the Network Path of the shared folders in the Location edit box.
    About opening Office files very slowly, there are some possible reasons:
    File validation can slow down the opening of files.
    The problem may be caused by the issue mentioned above.
    Here is a similar thread about slow opening of Office files from a network share:
    http://answers.microsoft.com/en-us/office/forum/office_2010-word/office-2010-slow-opening-files-from-network-share/d69e8942-b773-4aea-a6fc-8577def6b06a
    For File Validation, please refer to the article below:
    Office 2010 File Validation
    http://blogs.technet.com/b/office2010/archive/2009/12/16/office-2010-file-validation.aspx
    Best Regards,
    Tina

  • Strange performance issue in bex report

    Hello Experts,
    I have a performance issue on my bex report.
    I'm running the report with below selection criteria and getting 'too much data' error.
    Country :  equals EMEA
    Category: not equals 13
    Date : 02/2010 to 12/2010.
    But when I run the report for smaller date ranges, the number of records does not exceed 13,000:
    02.2010 - 06.2010 - 6,555 rows
    07.2010 - 09.2010 - 3,671 rows
    10.2010 - 12.2010 - 2,780 rows
    I know Excel can't fit more than 65,000 records, but I'm expecting about 13,000 records for my wide date range, which Excel can easily fit.
    Any ideas on this one will be appreciated.
    Regards,
    Brahma Reddy

    Hi,
    For Question 1:
    In the Query Designer, go to the Query Properties and select the tab "Variable Sequence"; here you can set the order of variables as per your requirement.
    For Question 2:
    There is an option "Hide Repeated Key Values"; if you uncheck this option, you will have the values for each row even though the material values are the same.
    Note: if you are viewing the report in the web or in a WAD report, you need to make the same changes in the web template as well, because the settings in the Query Designer will be overridden when you run the query in the web.
    Hope this helps.
    Regards,
    Rk.

  • Experiencing strange performance issues after a hard drive failure - Help!

    I bought my mid-2012 i5 Macbook Pro in December of 2012. I realized when shopping for computers that I wanted an SSD installed, but that it would be a lot cheaper if I bought the SSD and installed it rather than customizing it in the Apple Store. So I bought a nice Samsung 128GB SSD (820 or 840 - can't remember which) and did the installation. I went ahead and installed two 4GB sticks of RAM while I was at it. Everything was just dandy: my boot time was just under 9 seconds, and all of my data-heavy apps booted in no-time at all. Then all **** broke loose.
    About two weeks ago, I opened my computer and I got the dreaded "? File Folder" notification with a gray screen. I immediately thought hard drive failure. No matter how many times I tried to boot, the computer just would not talk to the SSD anymore. I used Internet Recovery to get into my Disk Utility, and the entire partition was gone. I assumed the worst but wanted to be sure - I bought a hard drive enclosure and hooked the SSD up to an older Macbook, and lo and behold: it worked perfectly. I was not only able to recover data, but I could write data to the drive. Nothing appeared wrong with the drive when I plugged it into the old Macbook, but my newer Macbook still would not recognize it. Even my fiance's Windows 7 PC recognized the drive as "?" (since it was formatted for Mac, but hey - it recognized that it existed!).
    I decided to re-install the original HDD that came with the 2012 Macbook Pro (the one I removed in favor of the SSD). I was able to re-install the OS and I can boot up at will, but everything is different. The performance issues are extremely noticeable. I can't have more than two programs running at one time without the spinning wheel of death appearing. My boot time went from 9 seconds to 2 minutes. I know that SSDs increase performance, so there is some slight performance downgrade to be expected since I am using a mechanical drive now -- but these are not normal issues. Sometimes I can't even type a web address into Safari without the wheel appearing. iTunes, and specifically the App Store, take minutes to open - and I have no media on iTunes.
    Here's the thing: I have tried just about everything to fix this problem that Google can pull up. I've verified the HDD, booted into Safe Mode, reset RAM and cache, run benchmarks and other performance tests, entered all sorts of weird language into the command prompt, and studied Activity Monitor - I can't find a single red flag that would indicate anything being wrong. It appears to be a perfectly functioning, updated computer.
    I'm thinking a piece of hardware failed that triggered the error with the SSD. I'm not really sure though since all of my performance tests indicate perfectly functioning hardware. I'm a little afraid to take it to the Apple store because I know they'll tell me it's my fault for opening the computer and replacing the hard drive in the first place.
    Any ideas? At this point anything to salvage this computer would be helpful.

    Spin Cycle,
    were those other computers which were able to recognize your SSD in its external enclosure also Macs? Do you know if your SSD has its most recent firmware revision installed? (If it doesn’t, its installer can be downloaded from the Samsung SSD firmware page for burning onto a bootable DVD.) I haven’t used the 830 myself, so I don’t know what its reputation is with Macs. I have an 840 PRO in my MacBook Pro, which has been trouble-free for me, but my understanding is that the 840 EVO has had trouble with Macs in its earlier firmware revisions — so I’m wondering if the 830 has a known track record with Macs, good or bad.

  • Very strange performance issue

    Hi!
    I've got a very very strange performance problem and I have no more ideas what it might be:
    The following code fragment contains the useless line int count=0;
    However, deleting the line reduces(!) the execution speed of the program by ~30%. This is not only weird because the line is absolutely useless, but also because the whole function takes <1% of total execution time in the program.
        public void simuliere_bis_soll_erreicht() {
            int count = 0;  // ????
            while (welt.soll_simulationszeit > welt.ist_simulationszeit && simu_active) {
                simuliere_einen_schritt();
            }
        }
    The problem occurs both under Java 1.5 and 1.4.2, using the HotSpot client VM.
    Cleaning and rebuilding the project does not help.
    It occurs on different computers in the same way, too.
    Thank you very much in advance! :-)
    Mirko Seithe

    Well, this is what you get:
    1.) runs totally interpreted until the compilation threshold is reached (about 1,500 invocations for the client JVM, 10,000 for the server JVM)
    2.) starts background compilation and keeps running in interpreted mode until compilation finishes (depends on the code, but about 1,000-10,000 invocations)
    3.) runs compiled code
    So this is what you get:
    ~20,000 invocations interpreted (about 70-90% of total time)
    ~80,000 invocations compiled (about 10-30% of total time)
    Maybe your int takes much longer to optimize, so the loop executes much longer in interpreted code.
    I would not bet on such microbenchmarks, since they do not tell you real stories under real circumstances. Believe me, I am the programmer working on performance optimizations at the company where I am employed.
    Best regards, Clemens
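    To make the warm-up effect concrete, here is a minimal timing sketch in the spirit of what Clemens describes. It is an illustration only - the class and method names are invented, and it is not code from either poster:
        // Minimal sketch: measure a method only after giving HotSpot time to compile it.
        public class WarmupTimingSketch {
            static long work(int n) {
                long sum = 0;
                for (int i = 0; i < n; i++) {
                    sum += i * 31L;
                }
                return sum;
            }
            public static void main(String[] args) {
                long sink = 0;
                // Warm-up phase: enough invocations that the JIT compiles work() before we measure.
                for (int i = 0; i < 50000; i++) {
                    sink += work(1000);
                }
                // Measured phase: timings now mostly reflect compiled code.
                long start = System.nanoTime();
                for (int i = 0; i < 50000; i++) {
                    sink += work(1000);
                }
                long elapsed = System.nanoTime() - start;
                // Print sink so the JIT cannot discard the loop as dead code.
                System.out.println("measured: " + (elapsed / 1000000) + " ms (sink=" + sink + ")");
            }
        }
    A harness such as JMH automates this kind of warm-up handling and is usually a safer bet than a hand-rolled timing loop like the one above.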

  • Strange performance issue in SSRS SharePoint integrated / Kerberos

    Hi,
    current setup / situation:
    Four server farm for a BI Portal:
    SQL Server instances
    SSAS instances
    SharePoint apps (SSRS, Central Admin ...)
    SharePoint frontend
    Three different environments (Dev, Test, Prod) with the same configuration (servers, versions, CPU/RAM,...)
    Kerberos properly configured (as far as we can see) pushing the current user credentials forward from the SharePoint frontend to the SSRS integrated service app to the SSAS OLAP cube.
    Problem:
    When we connect the report on dev to the dev cube, report execution time is about 7 seconds. In the execution log we can see in the additional info field, that we have a ConnectionOpenTime of a couple of milliseconds (~ 20-40).
    When we connect the report on test to the test cube OR to the dev cube, OR the dev report to the test cube, we have runtimes of about 60 seconds with a ConnectionOpenTime of a little bit more than 2000 ms.
    When we change the data source connection to a fixed Windows account, the runtime drops down to about 7 seconds. When we switch back to "Use current user credentials", the runtime stays at 7 seconds for about 1-2 hours, then it drops back to 60 seconds.
    Question:
    What can cause this huge ConnectionOpenTime? Is it AD/DNS related? Claims To Windows Token Service? Any other caching/OS issue?
    Thanks for any input.
    KR
    Rainer

    Hi Rainer,
    The issue is a little complex. I suggest that you try to narrow it down further with the steps below:
    1. Enable the verbose log on the SharePoint side.
    2. Capture a SQL Server Profiler trace on both the test (slow) and dev (fast) environments.
    3. Do the same with the RS ExecutionLog3 table.
    Then reproduce the issue on both the fast and the slow environment; we need to compare both the SharePoint ULS log and the RS execution log to find out where the slowness is.
    Could you please follow the article below on how to narrow down the issue with all of these logs?
    http://blogs.msdn.com/b/psssql/archive/2013/07/29/tracking-down-power-view-performance-problems.aspx
    Since you have found out that the slowness is in ConnectionOpenTime, we should focus on the SharePoint frontend and the SQL Server Reporting Services side. The ConnectionOpenTime should not reach the SSAS instance.
    Regards,
    Doris Ji

  • SSAS Strange Performance Issues (Long running with NO read or write activity) - UAT Test Environment

    Hi All,
    I'm looking for some pointers; my team and I have drawn a blank as to what is going on here.
    Our UAT is a virtual machine.
    I have written a simple MDX query which on a normal, freshly processed cube executes in under 15 seconds, and I can keep running the query:
    Run 1. 12 secs
    Run 2. 8 Secs
    Run 3. 8 Secs
    Run 4. 7 Secs
    Run 5. 8 Secs
    Run 6. 28 MINUTES!!
    This is on our test environment; I am the only user connected and there is no processing active.
    Could anyone please offer some advice on where to look, or tips on what the issue may be.
    Regards,
    Andy

    Hi aown61,
    According to your description, you get a long execution time after running the query several times. Right?
    In this scenario, it's quite strange that one execution takes so long. I suggest using SQL Server Profiler to monitor the events during execution. It can track engine process events, such as the start of a batch or a transaction, and you can replay the events captured on the Analysis Services instance to see exactly what happened. For more information, please refer to the link below:
    Use SQL Server Profiler to Monitor Analysis Services
    If you have any question, please feel free to ask.
    Regards,
    Simon Hou
    TechNet Community Support

  • Performance issue with Jdeveloper

    Hi Guys,
    I am experiencing a strange performance issue with JDeveloper 10.1.3.3.0.4157. There are many other threads regarding performance issues in this forum, but the problem I have is a little bit different.
    I have two computers: one is Athlon 3200+ with Vista and another one is P4 dual core 6400 with XP (service pack 2). Both of them have 2GB memory.
    I am running the same simple project on both computers, but only the one with Vista has the problem. The problem is very similar to the problem mentioned in the thread:
    Re: IDE has become extremely slow?
    But it's much worse. It only happens on JSF pages. Basically, any operation on the JSF pages is very slow. Loading the page, changing the attributes of a button in the source editor, or even clicking items in the design view takes forever.
    The first weird thing is that it may use 100% CPU but it never recovers, which means the 100% CPU usage never stops, or when it stops, JDeveloper stops responding.
    The second weird thing is that the project is not big. Actually, it's very small. The problem started happening last week, and there were no big changes during that period. The only thing I can say is that we created two more JSF pages.
    The third weird thing is that the same problem never happened on the P4+XP box. When I open the project on the P4+XP box, it's always fast and there is no CPU spike.
    Any advice is welcome!
    Thanks,
    Steven

    Hi Guys,
    I re-made a simple test project for this problem and now I can always reproduce the problem in JDeveloper on both systems (XP & Vista). Every time I open this jspx file in the source editor and try to scroll up/down the source file, or manually delete an attribute, JDeveloper hangs and the CPU usage is 0%.
    Here is the content of the test file:
    <?xml version='1.0' encoding='windows-1252'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.0"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:af="http://xmlns.oracle.com/adf/faces"
    xmlns:afh="http://xmlns.oracle.com/adf/faces/html">
    <jsp:output omit-xml-declaration="true" doctype-root-element="HTML"
    doctype-system="http://www.w3.org/TR/html4/loose.dtd"
    doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN"/>
    <jsp:directive.page contentType="text/html;charset=windows-1252"/>
    <f:view>
    <afh:html binding="#{backing_streettypedetail.html1}" id="html1">
    <afh:head title="streettypedetail"
    binding="#{backing_streettypedetail.head1}" id="head1">
    <meta http-equiv="Content-Type"
    content="text/html; charset=windows-1252"/>
    </afh:head>
    <afh:body binding="#{backing_streettypedetail.body1}" id="body1">
    <af:messages binding="#{backing_streettypedetail.messages1}"
    id="messages1"/>
    <h:form binding="#{backing_streettypedetail.form1}" id="form1">
    <af:panelForm binding="#{backing_streettypedetail.panelForm1}"
    id="panelForm1">
    <af:inputText value="#{bindings.streetTypeID.inputValue}"
    label="#{bindings.streetTypeID.label}"
    required="#{bindings.streetTypeID.mandatory}"
    columns="#{bindings.streetTypeID.displayWidth}"
    binding="#{backing_streettypedetail.inputText1}"
    id="inputText1">
    <af:validator binding="#{bindings.streetTypeID.validator}"/>
    </af:inputText>
    <af:inputText value="#{bindings.description.inputValue}"
    label="#{bindings.description.label}"
    required="#{bindings.description.mandatory}"
    columns="#{bindings.description.displayWidth}"
    binding="#{backing_streettypedetail.inputText2}"
    id="inputText2">
    <af:validator binding="#{bindings.description.validator}"/>
    </af:inputText>
    <af:inputText value="#{bindings.abbr.inputValue}"
    label="#{bindings.abbr.label}"
    required="#{bindings.abbr.mandatory}"
    columns="#{bindings.abbr.displayWidth}"
    binding="#{backing_streettypedetail.inputText3}"
    id="inputText3">
    <af:validator binding="#{bindings.abbr.validator}"/>
    </af:inputText>
    <f:facet name="footer">
    <h:panelGroup binding="#{backing_streettypedetail.panelGroup1}"
    id="panelGroup1">
    <af:commandButton text="Save"
    binding="#{backing_streettypedetail.saveButton}"
    id="saveButton"
    actionListener="#{bindings.mergeEntity.execute}"
    action="#{userState.retrieveReturnNavigationRule}"
    disabled="#{!bindings.mergeEntity.enabled}"
    partialSubmit="false">
    <af:setActionListener from="#{true}"
    to="#{userState.refresh}"/>
    </af:commandButton>
    <af:commandButton text="Cancel"
    binding="#{backing_streettypedetail.cancelButton}"
    action="#{userState.retrieveReturnNavigationRule}"
    id="cancelButton">
    <af:setActionListener from="#{false}"
    to="#{userState.refresh}"/>
    </af:commandButton>
    </h:panelGroup>
    </f:facet>
    </af:panelForm>
    </h:form>
    </afh:body>
    </afh:html>
    </f:view>
    <!--oracle-jdev-comment:auto-binding-backing-bean-name:backing_streettypedetail-->
    </jsp:root>
    Can anybody take a look at the file and let me know what's wrong with it?
    Thanks in advance.
    Steven

  • Performance issue with Oracle Text index

    Hi Experts,
    We are on Oracle 11.2.0.3 on Solaris 10. I have implemented Oracle Text in our environment and I am facing a strange performance issue.
    One SQL statement with a CONTAINS clause is taking forever - more than 20 minutes and still does not complete. This SQL has a CONTAINS clause, an EXISTS clause, and a NOT EXISTS clause.
    If I remove the EXISTS clause and the NOT EXISTS clause, it completes fast, but with those two clauses it just takes forever. It is late at night so I am not able to post the table and SQL query details, and will do so tomorrow, but based on this general description, are there any pointers for me to review?
    SQL query doing fine:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    --sql query that hangs forever:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    and exists (--one clause here with a few table joins)
    and not exists (--one clause here with a few table joins);
    --Now another strange thing I found: if instead of 'TO%' in this SQL I use 'ZZ%' or 'L1%', it works fast, but for 'TO%' it goes slow with those two EXISTS / NOT EXISTS clauses!
    I will be most thankful for the inputs.
    OrauserN

    Hi Barbara,
    First of all, thanks a lot for reviewing the issue.
    Unfortunately, making the change to empty_stoplist did not work out. I am copying the entire SQL that has this issue here today and will be most thankful for more insights/pointers on what can be done.
    Here is the entire sql:
    SELECT U.CLNT_OID,
           U.USR_OID,
           S.EMAILADDRESS,
           U.FIRST_NAME,
           U.LAST_NAME,
           S.JOBCODE,
           S.LOCATION,
           S.DEPARTMENT,
           S.ASSOCIATEID,
           S.ENTERPRISECOMPANYCODE,
           S.EMPLOYEEID,
           S.PAYGROUP,
           S.PRODUCTLOCALE
      FROM    ACCESS_USR U
           INNER JOIN
              ACCESS_SIA S
           ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
    WHERE     U.CLNT_OID = 'G39NY3D25942TXDA'
           AND EXISTS
                  (SELECT 1
                     FROM ACCESS_USR_GROUP_XREF UGX
                          INNER JOIN ACCESS_GROUP RELG
                             ON     RELG.CLNT_OID = UGX.CLNT_OID
                                AND RELG.GROUP_OID = UGX.GROUP_OID
                          INNER JOIN ACCESS_GROUP G
                             ON     G.CLNT_OID = RELG.CLNT_OID
                                AND G.GROUP_TYPE_OID = RELG.GROUP_TYPE_OID
                    WHERE     UGX.CLNT_OID = U.CLNT_OID
                          AND UGX.USR_OID = U.USR_OID
                          AND G.GROUP_OID = 920512943
                          AND UGX.INCLUDED = 1)
           AND NOT EXISTS
                      (SELECT 1
                         FROM    ACCESS_USR_GROUP_XREF UGX
                              INNER JOIN
                                 ACCESS_GROUP G
                              ON     G.CLNT_OID = UGX.CLNT_OID
                                 AND G.GROUP_OID = UGX.GROUP_OID
                        WHERE     UGX.CLNT_OID = U.CLNT_OID
                              AND UGX.USR_OID = U.USR_OID
                              AND G.GROUP_OID = 920512943
                              AND UGX.INCLUDED = 1)
           AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
    Like I said before, if the EXISTS and NOT EXISTS clauses are removed it runs in under a second. But with those EXISTS and NOT EXISTS clauses it takes anywhere from 25 minutes to more than one hour.
    Note also that it was not 'TO%' but 'Bon%' in the CONTAINS clause that is giving the issue - sorry, that was wrong on my part.
    Also, please see below the Oracle Text index defined on the table ACCESS_USR:
    --definition of preferences used in the index:
    SET SERVEROUTPUT ON size unlimited
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
       ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_LEXER with BASIC LEXER is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
       ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'true'); -- to improve performance
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_WL with BASIC WORDLIST is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    --now below is the code of the index:
    CREATE INDEX ACCESS_USR_IDX3 ON ACCESS_USR
    (FIRST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    CREATE INDEX ACCESS_USR_IDX4 ON ACCESS_USR
    (LAST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    The strange thing is that, like I said, if I remove the EXISTS clause the query returns very fast. Also, if I modify the query to use only one NOT EXISTS clause and remove the other EXISTS clause, it returns in less than one second. And if I remove the EXISTS clause and use only the NOT EXISTS clause, it returns in less than 4 seconds. But with both clauses it runs forever!
    When I tried to use dbms_xplan.display_cursor to get the query plan (for the case with both the EXISTS and NOT EXISTS clauses in the query), it said that the previous statement's SQL id was 0 or something like that, so I was not able to see the query plan. I will keep trying to get this plan (it takes 25 minutes to one hour each time, but I will get this info soon). Again, any pointers are most helpful.
    Regards
    OrauserN

  • Cube Refresh Performance Issue

    We are facing a strange performance issue related to cube refresh. The cube, which used to take 1 hour to refresh, is now taking around 3.5 to 4 hours without any change in the environment. Also, the data that it processes is almost the same as before. Only this cube, out of all the cubes in the workspace, is suffering the performance issue over this period of time.
    Details of the cube:
    This cube has 7 dimensions and 11 measures (a mix of sum and avg as aggregation algorithms). No compression. The cube is partitioned (48 partitions). The main source of the data is a materialized view which is partitioned in the same way as the cube.
    Data volume: 2,480,261 records in the source to be processed daily (almost evenly distributed across the partitions).
    Cube is refreshed with the below script
    DBMS_CUBE.BUILD(<<cube_name>>,'SS',true,5,false,true,false);
    Has anyone faced a similar issue? Please advise on what might be the cause of the performance degradation.
    Environment - Oracle Database 11g Enterprise Edition Release 11.2.0.3.0
    AWM - awm11.2.0.2.0A

    Take a look at DBMS_CUBE.BUILD documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube.htm#ARPLS218 and DBMS_CUBE_LOG documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube_log.htm#ARPLS72789
    You can also search this forum for more questions/examples about DBMS_CUBE.BUILD
    David Greenfield has covered many Cube loading topics in the past on this forum.
    Mapping to Relational tables
    Re: Enabling materialized view for fast refresh method
    DBMS CUBE BUILD
    CUBE_DFLT_PARTITION_LEVEL in 11g?
    Reclaiming space in OLAP 11.1.0.7
    Re: During a cube build how do I use an IN list for dimension hierarchy?

  • Performance issue - application running on front

    Hi, I have a strange performance issue:
    - when I launch my app from flash builder without touching anything, it is slow,
    - when I launch it from flash builder and immediately open another window and keep it in front of it, it is really fast
    it is a windowed application, full screen, displaying multiple objects moving around
    Has anyone already had this issue?
    thanks,
    YAnn

    For monitoring Azure using SCOM 2012 R2, you can refer to the link below:
    http://blogs.technet.com/b/dcaro/archive/2012/05/02/how-to-monitor-your-windows-azure-application-with-system-center-2012.aspx

  • Can someone help me diagnose a strange stored procedure performance issue please?

    I have a stored procedure (posted below) that returns message recommendations based upon the Yammer Networks you have selected. If I choose one network this query takes less than one second. If I choose another this query takes 9 - 12 seconds.
    /****** Object: StoredProcedure [dbo].[MessageView_GetOutOfContextRecommendations_LargeSet] Script Date: 2/18/2015 3:10:35 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE PROCEDURE [dbo].[MessageView_GetOutOfContextRecommendations_LargeSet]
    -- Parameters
    @UserID int,
    @SourceMessageID int = 0
    AS
    BEGIN
    -- variable for @HomeNeworkUserID
    Declare @HomeNeworkUserID int
    -- Set the HomeNetworkID
    Set @HomeNeworkUserID = (Select HomeNetworkUserID From NetworkUser Where UserID = @UserID)
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON
    -- Begin Select Statement
    Select Top 40 [CreatedDate],[FileDownloadUrl],[HasLinkOrAttachment],[ImagePreviewUrl],[LikesCount],[LinkFileName],[LinkType],[MessageID],[MessageSource],[MessageText],[MessageWebUrl],[NetworkID],[NetworkName],[PosterEmailAddress],[PosterFirstName],[PosterImageUrl],[PosterName],[PosterUserName],[PosterWebUrl],[RepliesCount],[Score],[SmallIconUrl],[Subjects],[SubjectsCount],[UserID]
    -- From View
    From [MessageView]
    -- Do Not Return Any Messages That Have Been Recommended To This User Already
    Where [MessageID] Not In (Select MessageID From MessageRecommendationHistory Where UserID = @UserID)
    -- Do Not Return Any Messages Created By This User
    And [UserID] != @UserID
    -- Do Not Return The MessageID
    And [MessageID] != @SourceMessageID
    -- Only return messages for the Networks the user has selected
    And [NetworkID] In (Select NetworkID From NetworkUser Where [HomeNetworkUserID] = @HomeNeworkUserID And [AllowRecommendations] = 1)
    -- Order By [MessageScore] and [MessageCreatedDate] in reverse order
    Order By [Score] desc, [CreatedDate] desc
    END
    The actual execution plan shows up the same; there are more messages on the network that is slow, 2,800 versus 1,500, but the duration is ten times longer on the slow network. Is the fact that I am doing a Top 40 what makes it slow? My first guess was to take the Order By off, and that didn't seem to make any difference. The execution plan is below; it takes 62% of the query to look up IX_Message.Score, which is the clustered index, so I thought this would be fast. Also, the Clustered Index Seek for User.UserID takes 26%, which seems high for what it is doing.
    I have indexes on every field that is queried on so I am kind of at a loss as to where to go next.
    It just seems strange because it is the same view being queried in both cases.
    I tried to run the SQL Server Tuning Wizard but it doesn't run on Azure SQL, and my problem doesn't occur on the data in my local database.
    Thanks for any guidance. I know a lot of the slowness is due to the lower-tier Azure SQL we are using; many of the performance issues weren't noticed when we were on the full SQL Server, but the other networks work extremely fast, so it has to be something to do with having more rows.
    In case you need the SQL for the View that I am querying it is:
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE VIEW [dbo].[MessageView]
    AS
    SELECT M.UserID, M.MessageID, M.NetworkID, N.Name AS NetworkName, M.Subjects, M.SubjectsCount, M.RepliesCount, M.LikesCount, M.CreatedDate, M.MessageText, M.HasLinkOrAttachment, M.Score, M.WebUrl AS MessageWebUrl, U.UserName AS PosterUserName,
    U.Name AS PosterName, U.FirstName AS PosterFirstName, U.ImageUrl AS PosterImageUrl, U.EmailAddress AS PosterEmailAddress, U.WebUrl AS PosterWebUrl, M.MessageSource, M.ImagePreviewUrl, M.LinkFileName, M.FileDownloadUrl, M.LinkType, M.SmallIconUrl
    FROM dbo.Message AS M INNER JOIN
    dbo.Network AS N ON M.NetworkID = N.NetworkID INNER JOIN
    dbo.[User] AS U ON M.UserID = U.UserID
    GO
    The Network table has an index on NetworkID, but it is non-clustered; I don't think that is the culprit, though.
    Corby

    I marked your response as the answer because you gave me information I didn't have about the sort. I ended up rewriting the query to use joins instead of the INs and it improved dramatically: about one second on a very minimal Azure SQL database, where before it was 12 seconds on one network. We didn't notice the problem at all before we moved to Azure SQL; it was about one to three seconds at most.
    Here is the updated way that was much more efficient:
    CREATE PROCEDURE [dbo].[Procedure Name]
    -- Parameters
    @UserID int,
    @SourceMessageID int = 0
    AS
    BEGIN
    -- variable for @HomeNeworkUserID
    Declare @HomeNeworkUserID int
    -- Set the HomeNetworkID
    Set @HomeNeworkUserID = (Select HomeNetworkUserID From NetworkUser Where UserID = @UserID)
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON
    ;With cteMessages As (
    -- Begin Select Statement
    Select (Fields List)
    -- Join to Network Table
    From MessageView mv Inner Join NetworkUser nu on MV.NetworkID = nu.NetworKID -- Only Return Networks This User Has Selected
    Where nu.HomeNetworkUserID = @HomeNeworkUserID And AllowRecommendations = 1
    -- Do Not Return Any Messages Created By This User
    And mv.[UserID] != @UserID
    -- Do Not Return The MessageID
    And mv.[MessageID] != @SourceMessageID
    ), cteHistoryForThisUser As (
    Select MessageID From MessageRecommendationHistory Where UserID = @UserID
    )
    -- Begin Select Statement
    Select Top 40 (Fields List)
    -- Join to Network Table
    From cteMessages m Left Outer Join cteHistoryForThisUser h on m.MessageID = h.MessageID
    -- Do Not Return Any Items Where User Has Already been shown this Message
    Where h.MessageID Is Null
    -- An Order By Is Needed To Get The Best Content First
    Order By Score Desc
    END
    GO
    The Left Outer Join to test for null was the biggest improvement, but it also helped to join to the NetworkUser table instead of doing the IN subquery.

  • Date Performance issue

    Hi Gurus,
    I am using OBIEE 11.1.6.8. One of my reports has a performance issue. When I dug into it, I found that the date filter is not applied in the SQL generated and sent to the DB, and because of that it is doing a table scan. The strange thing is that it still displays data based on the date range filter. It is only happening with the date dimension; all other dimensions are working fine. I am not sure what is missing.
    Thanks In advance.
    regards
    Mohammed.

    Hi Saichand,
    Thanks for taking the time to look into this.
    The filter is applied in the logical query, but the physical query sent to the DB does not have the filter. Because of that, it is doing a full table scan of the fact table and taking almost 30 minutes to display data. I am not sure why the physical query does not have the date filter. When I add the location or another type of filter, it is added to the physical query sent to the DB.
    regards
    @li

  • Macbook screen cracked, HD and performance issues

    I am definitely disappointed with my computer.
    I have a Macbook 13.3" 2.4Ghz Intel Core 2 Duo 4Gb 160Gb HD, Serial#: W8**VM0P5, and within 1,5 year I have faced many issues, which are related here:
    September 24th, 2008
    O.S. CRASHES
    I bought it in late September 2008, in Cologne, Germany, at the reseller "Compustore PC Gmbh", and also asked 4Gb Ram, and a Wireless mighty mouse.
    After changing the RAM, I went home, and a few days later the problems came. Sometimes the system crashed suddenly, showing the grey screen: "You must restart your computer". I thought it was normal and continued to use it, until it became more frequent (Picture: http://picasaweb.google.com/gstorck/MacbookProblems#5476986272817173138). It was a little strange for a new laptop, so I took it back to the store to fix it. The seller just took a look, changed some configurations, and gave it back to me. I did this twice, and had to go back to the store a third time due to new crashes and some lack of performance. Then the seller replaced the RAM with new 4GB modules.
    After that, the system worked more stably; the problem happened two or three times over the following months, but it was OK.
    November 9th, 2009
    HD Problems
    One year later, in late 2009, I purchased a Mac OS update, Snow Leopard. I was back in Brazil, where I live to this day. During the OS installation, something went wrong and it couldn't be completed. The Mac asked me to restart the computer, but then the system didn't start anymore; some files were corrupted. I guess there was not enough space to install the update, so it was not completed; but that should be no reason for the OS to become corrupted.
    November 10th, 2009
    I took my computer to the local Apple technical support, "Omni Informatica", in Curitiba, PR, Brazil, to format the HD and reinstall Snow Leopard. As I needed more space, I also changed my original 160GB HD for a new 250GB Western Digital HD. The service was finished on Nov 17th.
    I also noticed little cracks in the laptop case, but I had no time to insist that the bad local technical support repair it (I know Apple has already acknowledged this problem). They said, at first, that they would have to ask Apple USA to send the body, or something like that.
    (Pictures:http://picasaweb.google.com/gstorck/MacbookProblems#5476986477312032226)
    January 12th, 2010.
    SCREEN CRACKED
    The reinstallation was OK, and the laptop was running well. Then suddenly, when I took the Macbook from its case and opened it on my desk, something pretty wrong appeared: a large crack on the lower right corner of the screen.
    It was not dropped, it was not crashed, and I did not "close it with a pen inside".
    The screen simply cracked. And given the many similar problems from other users I could easily find on the web, I really expect Apple to admit it as a manufacturer's fault. The local technical support said it could be nothing other than me hitting the laptop or dropping it, so it's all my fault. (See Picture 3)
    The only thing they can offer me is to repair the screen for R$1200. I am definitely not going to pay for this; a "normal" and good PC laptop costs this price (and a new basic Macbook costs R$2400 today).
    If this were the only problem, it would not be so bad.
    (Pictures: http://picasaweb.google.com/gstorck/MacbookProblems#5476986361400585122)
    February 11th, 2010
    HD FAILURE
    After 1 month, on Feb 11th, my new 250GB HD simply stopped working. Unlike the first time, when the HD was just corrupted, this time it really broke - it died. I had it all backed up, but I lost the HD.
    Back again at the technical assistance, they told me the problem was the same one that cracked my screen: my fault, for letting the laptop fall or hitting it with some kind of pressure. They couldn't do anything, and even the Western Digital 3-month warranty would not cover this type of problem ("user's misuse").
    May 28th, 2010
    DECREASE IN PERFORMANCE
    I'm working again with the original 160GB HD (formatted and reinstalled), and some other external drives to store my files.
    It's been quite difficult to deal with all these data and screen problems, and now the Macbook's performance is decreasing every day. It's been slower to process operations and to launch applications. Sometimes the system does not even sleep anymore when the laptop is closed, or it takes so long to sleep that it only sleeps when I open it again. Besides, sometimes the mouse pointer disappears and some bizarre graphics take its place.
    Don't forget: it's a 1.5-year-old Macbook, with 2.4GHz, 4GB RAM, and an OS that has been installed for no longer than 4 months. It is supposed to work better, I guess.
    I saved a lot of money to buy a good computer, and to get rid of the endless issues I had with PCs. Those issues were never as serious as the ones I face now with the Macbook.
    Even if I pay R$1200 to replace the screen, with money I don't have, I would still be stuck with these performance issues, which seem to have come with this problematic laptop. I hope to get some solution from Apple, if it wants to deliver what it promises and wants to keep a customer.
    Otherwise, I will have to change to another system platform and another computer manufacturer, to see if I can stop throwing money away on these devices. I think that maybe, at least, the cost-benefit ratio will be higher.
    Did anybody receive contact from Apple support about these screen problems?
    Guilherme R. Storck
    Apple user since 2008
    gstorck (at) gmail (dot) com

    Well, your first issue is pretty clearly just some bad RAM, and it sounds like even the replacement RAM was bad. Who knows if the people who put it in had any kind of clue what they were doing. I've seen repair shops where people are smoking in the same room they do repairs.
    Second problem sounds typical of a failing HDD. It's a fairly common problem with laptops... Apple, Dell, Lenovo, HP, Acer... probably the single most common problem with laptops no matter who makes them. People get this notion in their heads that the "portable" aspect of laptops means you can pick them up and carry them all over while turned on. Which you can... if you don't mind dramatically shortening the life of the HDD. What "portable" REALLY means with laptops is that they are easy to move if you put them into a powered-down state. They should NEVER be moved around when in normal operation. And since you seem to do a bit of traveling, if you're on a plane and there's turbulence, you should shut the laptop off until it smooths out.
    The screen cracking could have something to do with the rather sudden change in climate. Germany isn't exactly the frozen tundra, but it is a pretty different climate from Brazil, and changes in things like humidity could cause problems. The laws of thermodynamics tell us that things expand as they heat up, so if there's already a crack somewhere, it could easily get worse as a result of increased heat.
    And since your initial hard drive was already starting to fail, it hasn't magically stopped failing since being removed from the system, so obviously performance is going to get worse.
    At this point, it's pretty much impossible to tell where the damage was done, so you're probably just out of luck. The people in Germany could have screwed something up, or the people in Brazil, it could have been environmental damage, and it's possible the defect was always there, it just didn't manifest until recently. There's just no way to tell for sure. So Apple is unlikely to do anything for you.
