Oracle CPU Jan 2009 causing performance issues

I installed Oracle CPU Jan 2009 on an HP-UX machine. Once the installation completed, users complained that it now takes more than a minute to open a new ticket in that application, and as a result the backlog of processes has increased tremendously.
Could the CPU Jan 2009 patch cause this kind of network/performance issue? Environment:
Server: HP-UX Itanium 64 bit
Database: Oracle 10.2.0.3.0
Instances: 2 instances running on this server.
Edited by: user3858134 on Oct 26, 2009 9:30 PM

I believe the latest CPU patch for Oracle 10.2.0.3 on HP-UX is CPU Jan 2009. Don't you think your database should be on 10.2.0.4?
Anyway, do you have a baseline Statspack/AWR report? Can you compare it with one taken now? Do you see any difference? (A sketch for generating an AWR report follows this reply.)
Regards,
S.K.
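
For reference, a text AWR report for a snapshot interval can be generated with DBMS_WORKLOAD_REPOSITORY. This is only a minimal sketch; the DBID, instance number, and snapshot IDs below are placeholders to be taken from DBA_HIST_SNAPSHOT, not values from this thread:

-- List available snapshots so a 'good' and a 'bad' interval can be picked
SELECT snap_id, begin_interval_time FROM dba_hist_snapshot ORDER BY snap_id;

-- Generate the text report for one interval (dbid, instance, begin snap, end snap are placeholders)
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(1234567890, 1, 100, 110));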

Similar Messages

  • WILL A BIG INDEX CAUSE PERFORMANCE ISSUES?

    In an indexed table, if there are a lot of inserts the data will grow; if the index becomes huge, can it really cause a performance issue?
    Is there a document on Metalink that says an index must be rebuilt when it reaches 50% of the size of the data? What are the basis and the threshold for rebuilding an index?

    A big index by itself won't cause a performance issue, but there are other circumstances you should consider for the index.
    First of all, which kind of index are you talking about? There are several kinds of index in Oracle. Assuming you mean a regular B*Tree index, you should consider factors such as selectivity and cardinality. If the indexed column has evenly distributed values, the index will be highly selective. If the indexed column is highly skewed, gather histograms so that selectivity can be estimated at execution time; then, when a query retrieves a highly selective range, the index won't slow performance, and otherwise a full table scan will be chosen as the better access path (see the sketch below).
    Rebuilding an index is an operation performed when the index becomes invalid or when migrating it to a new tablespace, not when you suspect the index has become 'fragmented'; in that case use the COALESCE command. Oracle provides efficient algorithms to keep the index balanced.
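    A minimal sketch of the two operations mentioned above; the schema, table, column, and index names are hypothetical placeholders, not from this thread:
    -- Gather a histogram on a skewed, indexed column so the optimizer can estimate selectivity
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'SCOTT',
        tabname    => 'ORDERS',
        method_opt => 'FOR COLUMNS STATUS SIZE 254');
    END;
    /
    -- Defragment a sparse index in place instead of rebuilding it
    ALTER INDEX scott.orders_status_idx COALESCE;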
    ~ Madrid
    http://hrivera99.blogspot.com/

  • Oracle 10.2.0.3 performance issue

    Hi all,
    I have a performance issue in the database I currently maintain.
    Here's the specifications:
    - Windows 2003 Server 64bit
    - Oracle 10g 10.2.0.3 patch 31
    - Application Server 10gR2 OC4J (Forms and Report Services)
    The server was re-installed about 3 weeks ago after it got viruses.
    I believe my memory parameter settings were fine, because the system ran well for two weeks even as the end-of-day process gradually took longer.
    However, starting two days ago, out of nowhere, the end-of-day process became really slow (from 11 minutes to 1 hour).
    The IT standby at my client normally runs an analyze schema, and after that everything goes back to normal.
    I turned off the GATHER_STATS_JOB two weeks ago and replaced it with DBMS_STATS.GATHER_DATABASE_STATS, which I scheduled to run every Friday (a sketch of such a weekly job is shown after this post).
    This problem occurred even before the server was re-installed, and as usual an analyze schema (GATHER_SCHEMA_STATS) fixes it.
    I don't have many options currently; analyzing the schema seems to be the only solution, which we normally never do for other clients (at least not every day).
    I hope one of you can suggest how to trace the exact cause of this.
    Thank you,
    Adhika
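    For reference, the weekly statistics job described above could be scheduled along these lines. This is only a sketch; the job name, schedule window, and gather options are assumptions, not what the poster actually used:
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'WEEKLY_GATHER_STATS',   -- hypothetical job name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_STATS.GATHER_DATABASE_STATS(cascade => TRUE); END;',
        repeat_interval => 'FREQ=WEEKLY; BYDAY=FRI; BYHOUR=22',
        enabled         => TRUE);
    END;
    /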

    Hi Satish,
    This end-of-day process basically inserts values into certain tables and then generates reports.
    I managed to get the AWR report covering the window of the end-of-day process.
    SQL execute elapsed time is at the top of the Time Model Statistics.
    What I don't understand is why this issue appeared only after two weeks.
    I suspect Oracle is picking the wrong execution plan, and normally a wrong execution plan is caused by outdated statistics on the indexes and tables; but what I cannot understand is why this happens on Monday morning when GATHER_DATABASE_STATS ran successfully on Friday night.
    Yesterday the analyze schema was executed again, and this morning I got another email saying the performance issue had occurred again.
    The top wait events are: db file sequential read, db file scattered read, log file parallel write, LNS wait on SENDREQ, and log file sequential read. (A sketch for checking the slow statement's plan is shown after this post.)
    Thank you,
    Adhika
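    To confirm whether the plan for the slow statement changed, one option is to pull the cursor's actual plan from the cursor cache. This is only a sketch; the module filter and SQL_ID below are placeholders, not values from this thread:
    -- Find candidate statements from the end-of-day window (hypothetical module filter)
    SELECT sql_id, executions, ROUND(elapsed_time/1e6) AS elapsed_secs
    FROM   v$sql
    WHERE  module = 'END_OF_DAY'
    ORDER  BY elapsed_time DESC;
    -- Show the execution plan currently cached for one of them
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234xyz99', NULL, 'ALLSTATS LAST'));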

  • SetAttribute causing performance issue.

    Hi ,
    I am using 11.1.1.4.0
    Code:::
            DCIteratorBinding itr=ADFUtil.findIterator(iterator);
            RowSetIterator rsi=itr.getRowSetIterator();
            Row currRow=rsi.getCurrentRow();
            currRow.setAttribute(id,null);
    If I call setAttribute multiple times (say 10-20 times), it causes a severe performance issue.
    Is there any reason for this?
    Should we avoid using setAttribute()? If so, what should we use instead?
    Any help is appreciated .
    Thanks
    Sazz

    The use case is that a user would see an existing vacancy record and be able to update it.
    GEVacancyFromNotificationVO1() is a query-based VO and getGETranVacancyVO1() is an updatable VO. Using a view criteria I pull the record into the updatable VO; it holds only one record at a time.
    GEVacancyFromNotificationVO1() gets the details and sets them into attributes of the updatable VO, as this VO includes many transient attributes that are required in my jsff. Basically this data is not saved in the DB but needs to be shown in the UI.
    Anyway, the thing is that setAttribute is called 20-30 times, the performance is slow, and sometimes the data is not set at all.
    I used the AttributeListImpl class to create name/value pairs and create a new row for this VO using createAndInitRow(), and that works very fast. That is required for another use case and works perfectly. It is only when I want to update an existing record that I have to update the same row (I can't create another row), so I face this performance issue and sometimes the data doesn't get set properly; I get null in the DCIteratorBinding when I fetch the data in the bean class.
    So my question is: why is setAttribute of AttributeListImpl so much faster than setAttribute of the Row class?
    public void initializeFromNotification(String role, String emp) {
            ViewObjectImpl notifyVO = this.getGEVacancyFromNotificationVO1();
            ViewObjectImpl transVO = this.getGETranVacancyVO1();
            ViewObjectImpl geLoginPersonIdVO = this.getGELoginPersonIdVO1();
            ViewObjectImpl autoPopulatevo =
                this.getGEAutopopulateHireSysforCopyVacanciesVO1();
            ViewObjectImpl geNextApproverVO = this.getGENextApproverVO1();
            ViewObjectImpl transHireVo = getGEHireSystemReqTeamTransVO1();
            ViewObjectImpl gejobdesc = getGEJobDescTransVO1();
            Row row = notifyVO.first();
            if (row != null) {
                //query the trx table
                transVO.setApplyViewCriteriaName("VacancyNumberVC");
                transVO.setNamedWhereClauseParam("p_vac_num",
                                                 row.getAttribute("VacancyNumber"));
                transVO.executeQuery();
                if (transVO.first() == null) {
                    return;
                } else {
                    transVO.setCurrentRow(transVO.first());
                Row currentRow = transVO.getCurrentRow();
                List<String> transColumns =
                    Arrays.asList(currentRow.getAttributeNames());
                //setting values from notification vo to transvacancy VO
                String arr[] = row.getAttributeNames();
                if (null != transVO.getCurrentRow()) {
                    // AttributeListImpl attrList = new AttributeListImpl();
                    for (String attr : arr) {
                        if (row.getAttribute(attr) != null) {
                            if (attr.equalsIgnoreCase("VacTrxId")) {
                            } else if (transColumns.contains(attr)) {
                                if (currentRow.getAttribute(attr) == null) {
                                    currentRow.setAttribute(attr,
                                                            row.getAttribute(attr).toString());
                if (role != null && role.startsWith("ORG_MGR")) {
                    transVO.getCurrentRow().setAttribute("userRole",
                                                         "INITIATOR_HM");
                    transVO.getCurrentRow().setAttribute("userRoleDisplay",
                                                         "Hiring Manager");
                } else if (role != null && role.startsWith("HRM")) {
                    transVO.getCurrentRow().setAttribute("userRole",
                                                         "INITIATOR_HRM");
                    transVO.getCurrentRow().setAttribute("userRoleDisplay",
                                                         "HR Manager");
                } else {
                    transVO.getCurrentRow().setAttribute("userRole",
                                                         "INITIATOR_RFO");
                    transVO.getCurrentRow().setAttribute("userRoleDisplay", "RFO");
                transVO.getCurrentRow().setAttribute("EmpNumber", emp);
                geLoginPersonIdVO.setNamedWhereClauseParam("sso", emp);
                geLoginPersonIdVO.executeQuery();
                transVO.getCurrentRow().setAttribute("userPersonId",
                                                     geLoginPersonIdVO.first().getAttribute(0));

  • WEBUTIL - Does adding it to all forms cause performance issues?

    If I add the webutil library and object library to all forms in the system (as part of a standard template), despite the fact that most won't use it, will this cause any performance issues?
    Thanks in advance...

    The webutil user guide has a chapter on performance considerations. Have you looked at that?
    The number one point from that chapter is:
    1. Only WebUtil-enable forms that actually need the functionality. Each form that is WebUtil-enabled will generate a certain amount of network traffic and memory usage simply to instantiate the utility, even if you don't use any WebUtil functionality.

  • Oracle CPU Jan 2012 Patch

    Hi:
    EBS: 12.1.3
    DB: 11.1.0.7.0
    APPS: 10.1.2.3, 10.1.3.4
    I need to apply the CPU Jan 2012 patch. May I ask which Java SE version supports that?
    Thank you and regards.
    Edited by: 907485 on Apr 9, 2012 1:13 PM

    Hi;
    Please check the patch readme; all related information is covered in it.
    Regards
    Helios

  • Reporting Services Unicode Parameters Cause Performance Issues

    When I create a report using string parameters, Reporting Services sends the SQL to SQL Server with an N prefix on the string parameters. This is the behavior even when the underlying data table has no Unicode datatypes. This causes SQL Server to do a scan instead of a seek on these queries. Can this behavior be modified so the parameters are sent as non-Unicode text?

    Workaround to overcome the SSRS report performance problem caused by the Unicode conversion issue:
    I used a new parameter (of type Internal) which collects/duplicates the original parameter values as a comma-separated string.
    In the report dataset query, parse the comma-separated string into a list in a variable table using the XML trick.
    Use the variable table in the WHERE ... IN clause (an alternative sketch using an explicit CAST follows after the example code).
    Steps:
    Create a new Internal parameter (call it InternalParameter1).
    Under Default Values -> Specify values, add the expression: =Join(Parameters!OrigParameter1.Value, ",")
    Pass/use InternalParameter1 in your dataset query.
    Example code
    DECLARE @InternalParameter1 NVARCHAR(MAX)
    SET @InternalParameter1 = '100167600,
    100167601,
    4302853605,
    4030753556,
    4026938411'   -- closing quote added to terminate the comma-separated list
    --- Load comma separated string into a temp variable table ---
    SET ARITHABORT ON
    DECLARE @T1 AS TABLE (PARALIST VARCHAR(100))
    INSERT @T1 SELECT Split.a.value('.', 'VARCHAR(100)') AS CVS FROM
    ( SELECT CAST ('<M>' + REPLACE(@InternalParameter1, ',', '</M><M>') + '</M>' AS XML) AS CVS ) AS A CROSS APPLY A.CVS.nodes ('/M') AS Split(a)
    --- Report Dataset query ---
    SELECT CONTRACT_NO, report fields… FROM mytable
    WHERE CONTRACT_NO IN (SELECT PARALIST FROM @T1) -- Use the temp variable table in the WHERE clause
    Mahesh
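    An alternative sketch, if keeping the dataset query simple is preferred: cast the incoming Unicode parameter back to the column's non-Unicode type so SQL Server can still seek on the index. The parameter name, column name, and length here are assumptions, not from this thread:
    -- Hypothetical dataset query: the explicit CAST keeps the VARCHAR index seekable
    SELECT CONTRACT_NO
    FROM   mytable
    WHERE  CONTRACT_NO = CAST(@ContractNo AS VARCHAR(20))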

  • Latest CPU Jan 2009 Weblogic Server

    I am having problems finding the actual download for the latest Jan 2009 Critical Patch Update for WebLogic 9 and 10 server. I am also wondering how necessary the patch is if the app server is behind a firewall. Any feedback would be appreciated.
    Thanks

    Start here:
    http://www.oracle.com/technology/deploy/security/alerts.htm
    Click on 2009 - Jan:
    http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpujan2009.html
    Click on the version you are interested in, say WLS 10gR3 (10.3):
    http://www.oracle.com/technology/deploy/security/beaarchive.html
    For each update, read the details in the advisory; this one explains how to use Smart Update to get the patch:
    http://www.oracle.com/technology/deploy/security/wls-security/2808.html
    Smart Update has options to use a proxy or to install the patch from the local file system; see the help or the docs.
    Edited by: james.bayer on Mar 7, 2009 4:32 AM

  • 32 bit Agent causing performance issue on Win 64??

    My OS is Windows 2003 64 bit server.
    I am using Tomcat as my application server (version 6.0.20)
    Java home is set to jdk1.6.0_14 (32bit). My agent is built on a 32bit platform. [I use the agent to gather specific data for profiling on specific commands].
    Even if I simply enable the java_opts to load the agent, performance of my application slows down drastically [even if the agent does not do anything]. The same setup works fine when I run the agent on a 32-bit machine.
    Is this a known issue? Is there a workaround for it?
    Any suggestions, help will be greatly appreciated.

    Do not cross-post.
    Closed.

  • Does CAST (date AS TIMESTAMP) cause performance issues

    Hi
    Does casting from DATE to TIMESTAMP in a query or in a view cause a slowdown in performance?
    Please guide...
    Thanks
    Rimpi

    Everything you do affects performance; how significantly is a question answered by testing, not by asking questions on an OTN forum.
    Create a loop doing it 10,000 times and find out (see the sketch below).
    And, in the future, always include your version number.
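    A minimal timing sketch along those lines: it casts SYSDATE to TIMESTAMP 10,000 times and prints the elapsed time in centiseconds, so the result is only indicative:
    SET SERVEROUTPUT ON
    DECLARE
      v_ts TIMESTAMP;
      v_t0 PLS_INTEGER := DBMS_UTILITY.GET_TIME;
    BEGIN
      FOR i IN 1 .. 10000 LOOP
        SELECT CAST(SYSDATE AS TIMESTAMP) INTO v_ts FROM dual;  -- the operation under test
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('Elapsed (centiseconds): ' || (DBMS_UTILITY.GET_TIME - v_t0));
    END;
    /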

  • Changing the Database driver causing performance issue

    Hello Experts
    I am seeing a strange issue: if I change the database to Oracle 11g (you will find it if you click the database you created in the physical layer) under the Data Source definition, the report takes longer to complete.
    Actually, I upgraded the 10g RPD and catalog to 11g, but the database under the data source definition was still set to Oracle 10gR1 (I don't know whether this is a driver or not). Everything ran perfectly fine until I changed the database under the data source definition to Oracle 11g; now the report takes longer. I also found that the generated query changes when I change the database under the data source definition to 11g.
    Also, to inform you, the data warehouse where the data lies is an 11g database.
    What is the significance of the data source definition, given that changing it changes the whole query?
    Pls. help.

    Hello,
    Do you have the full Oracle 11g DB client installed on the same box where you are running your BI Server? Also, can you make sure you have a copy of tnsnames.ora in the following directories:
    1. C:\Middleware\Oracle_BI1\network\admin
    2. C:\Middleware\oracle_common\network\admin
    Check the SQL features the data source has enabled; sometimes when you disable SQL features the server can issue a less efficient query to the database.
    Thanks,
    -SVS

  • MDX calculated measure causing performance issue

    The calculated measure below, evaluated against all product members, is causing the Excel pivot table to hang indefinitely. Any help on how to optimize the query for better performance?
    SCOPE ([MEASURES].[DIDaysInMonth]);
    THIS = CASE WHEN [Measures].[MonthDifference] < 0 THEN 0
    WHEN [MEASURES].[MonthDifference] >= 0 AND ProjectedEnd > 0 THEN [MEASURES].[DaysRemainingInMonth]
    WHEN [MEASURES].[MonthDifference] = 0 AND ProjectedEnd < 0 THEN
    [Measures].[Ordered Cases] / (([Measures].[Forecasted Sales]-[Measures].[Cases])/[measures].[DaysRemainingInMonth])
    WHEN [MEASURES].[MonthDifference] >= 0 AND ([Time Monthly].[Time Monthly].CurrentMember.PrevMember,[MEASURES].[ProjectedEnd]) <= 0 THEN 0
    WHEN [MEASURES].[MonthDifference] > 0 AND ([Time Monthly].[Time Monthly].CurrentMember.PrevMember,[MEASURES].[ProjectedEnd]) > 0 THEN
    ([Time Monthly].[Time Monthly].CurrentMember.PrevMember,[MEASURES].[ProjectedEnd]) /
    ([Forecasted Sales] / [daysInMonth]) END;
    END SCOPE;
    BI Developer

    Hi Abioye,
    According to your description, you created a calculated measure against all products in your AS cube, and now performance is poor when using this calculated measure in an Excel pivot table, right? In this case, here are some links that describe tips about performance tuning in SSAS; please see:
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    http://sqlmag.com/t-sql/top-9-analysis-services-tips
    Hope this helps.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Sorting causing Performance issues

    Morning all,
    I have written a report based on a materialized view that returns 36,000 rows in about 10 seconds. As soon as I put a sort on just one of the fields, the performance degrades. I tried sorting after the results are returned, got fed up of waiting, and have now rewritten the report with the sort included from the beginning.
    It has now been running for 30 minutes. If the query itself is exactly the same, why is there such a performance hit?
    Any help would be greatly appreciated! I am using Disco 10G Desktop
    Cheers
    Col

    Col.
    I'd check a couple of things at this point.
    a. First of all, as you're checking a 'historic' view (I'm assuming this is an Oracle BIS view as I don't have access right now), you might want to first be sure that it's being 'limited' by conditions correctly. I just looked at a Noetix HR help file - if you know Noetix you'll know what I mean - and although it's not what you're using directly, it has pertinent info on that basic set of tables. I can't paste what it says due to copyrights, etc., but essentially it mentions that there is a primary_flag, an assignment_current_flag and a current_employee_flag that you should limit on if the BIS view you're using lets you. Additionally, it says the view returns records from multiple business groups, so again, if your view has that column you might want to limit on it as well. That would at least make sure you're not returning multiple records for the same employee (rather than, as many people do, just setting the tick mark in Disco to stop showing duplicate records!).
    b. The problem with having a calculation calling a function - I assume - is that it will be executed for every row that fits the condition (ie: again why you want to make sure in a. above that you're getting the least number of records). Obviously I can't see the code of the function you're referring to - I assume it's an in-house built function - but it could be written ... oh let's say ... not optimally!
    I know I recently read on this forum that someone - most likely Rod West, as he's good in this area - talked about loading a function first to minimize this, but you might want to search the forum.
    c. The only other idea I can think of that would definitely help - but may cross your DBA's happy state of mind - is to create a function that builds a table first, and then your report runs against that table. I did this at one client - and again, I know Rod's talked about this on the forum as well - where the first worksheet in Discoverer was called something like 'Create Information'. It had a simple condition and parameter that let the user either hit the OK button to start the process, or get out at that point. Once it was hit, it called a function that simply built an Oracle 'temp' table (ie: it was always there to report against, but if wiped out it would be re-created and filled) with the heavy processing data. The function could create the information you want to report against - and in your case in the sort order you prefer - in much quicker time than going through Disco a record at a time. Once the process was finished - about 10 minutes - it would advise you in the worksheet by finishing.
    Then the 2nd, 3rd, etc. worksheets were the actual, normal Disco reports retrieving and manipulating the data however you wanted. As every worksheet was accessing this specific table, they screamed, including the sorting, etc.
    So ... if you can't get the GET_SIT_DATA function to be quicker and your data never comes back, this is another idea - but you may incur the wrath of the DBA (they don't like tables being built in PROD - except their own, of course).
    d. Of course, to check all this, I'd first delete the calculation's call to the function GET_SIT_DATA, rerun the worksheet, sort, and make sure this is the culprit.
    Russ

  • Digital Signatures causing performance issue

    I have created a form for a two-step process in which the first user will open it, enter data, and submit the form for another user to approve. The problem is that during the approver phase, they will digitally sign the form (there are two digital signature fields for each section of the form, and the approver is required to sign at least once in order to submit) and then submit, but the process seems to lag for about 20-30 seconds after they click submit.
    I understand that the digital signatures may add to the overall size of the form, but are one or two signatures enough to cause a delay?
    I have some JavaScript that fires on the pre-submit event using AWS_Action to see what the user action is; if the user chooses to submit, it tests whether at least one signature field is signed and valid, and if so it changes the signature field type to disabled (I have the signature fields become required based on certain fields containing data) and submits the form. Is it possible that this script could cause such a delay after the submit? We aren't having any delays when the first user submits the form without digital signatures, so I assume this is what is causing the problem.

    I expect that the lag may be related to the script you are executing, rather than specifically to the digital signature itself. Are you able to test the form outside of the process to see if the lag remains? Could you change the submit button to a mailto: URI and see how long it takes for the "pre-submit" code to execute?
    Regards
    Steve

  • Connect by causing performance issue in table-based value set

    Hi,
    In the PO Requisition Distribution DFF we have added some segments.
    I have a value set for party details in the first segment (Attribute1), and it returns the party site id (due to some dependency I can't make it return party_id).
    Using the party_site_id I have to get all contact persons (the entire organization hierarchy needs to be scanned to find contacts) using organization relationships.
    My table-type value set is written as below.
    Table Name : HZ_PARTIES
    Value : party_name
    Id : party_id
    SELECT party_name, party_id
    FROM hz_parties
    WHERE party_id IN
    (SELECT
      object_id
    FROM hz_relationships hr
    WHERE 1                = 1
    AND relationship_code  = 'CONTACT'
    AND subject_type       = 'ORGANIZATION'
    AND subject_table_name = 'HZ_PARTIES'
    AND object_type        = 'PERSON'
    AND object_table_name  = 'HZ_PARTIES'
    AND status             = 'A'
      START WITH object_id =
      (SELECT party_id
      FROM hz_party_sites
      WHERE party_site_id = :$FLEX$.XX_PROJ_COUNTERPART_INST)  -- closing parenthesis added
      CONNECT BY NOCYCLE PRIOR object_id = subject_id)         -- closing parenthesis added
    This works as expected but performs poorly; it takes around 20 seconds to 1 minute depending on data volume. Can this be tuned?
    Any help will be appreciated.
    Best Regards,
    Ram

    Hi Syed,
    BP is right.
    Just a note: the phrase "i have passed most of the primary keys in the query..." does not mean the key is used for database access. Only key fields in sequence starting with the first one will result in the use of an index, i.e. if the table's index fields are A B C D E F G, then use of A, AB, ABC, ... will get the index used; CDE, BCD or EFG will not use the index at all.
    Regards,
    Clemens
