Performance Issues when running 1.5.0_9 with servers

Hi. We have a Java application. We run it under 1.5.0_9 on Win2003 and have no problems on single-CPU Intel PCs and servers. However, we are finding that some higher-spec servers run the application significantly slower, by around 30%. It seems to be a problem either with the Enterprise edition of Win2k3 or perhaps with AMD Opteron CPUs.
Does anyone know of any known issues with Win2k3 Enterprise, AMD Opteron CPUs, or dual-CPU systems in general?
Thanks

I'm able to recreate a problem where a section of the screen goes black using the following configuration.
JDK 1.7.0_06
2 graphics cards
3 monitors
Create a new JavaFX Application project (Hello World) in NetBeans 7.2. Once executed, position the window so it spans two monitors, ensure the monitors are on separate graphics cards, and then hover the mouse over the Hello World button: one half of the window goes black. Positioning the window so it doesn't span monitors draws the window correctly.
Has anyone experienced this issue?
Thanks.
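For what it's worth, one workaround that may be worth trying (an assumption on my part, not a confirmed fix): force JavaFX onto the software rendering pipeline so the two cards' hardware pipelines are never mixed, e.g.
java -Dprism.order=sw -jar YourApp.jar
(YourApp.jar is a placeholder; on some JavaFX 2.x builds the software pipeline is selected with -Dprism.order=j2d instead.)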

Similar Messages

  • Performance issue when opening DOCX Word Doc with Excel Links

Is this a bug, or is there a config setting that can prevent Word from needlessly opening Excel repeatedly (upon initial open of a DOCX Word doc) when pre-existing links should NOT be updated? The behavior is good using 2003 formats (DOC & XLS),
but varies significantly (~100:1 range in performance) depending on the test scenario (see below) using DOCX with XLSX.
We can't move our environment to the new XML formats until we can find a fix. We were led to believe Microsoft knew this to be a bug in Office 2007, but it has NOT been corrected in Office 2010. Omitting a detailed explanation for why we need to
do this, here are our test scenarios:
    -Currently testing Office 2010 Pro Trial version on WIN XP SP3. (We have found similar results using Office 2007 with Win7 or XP).
    -Word Option Deselected/DISABLED: "Update Automatic Links at Open"
    -The Word Doc contains 100 linked Excel tables and is 130K in size (DOCX).
    -Each link points to the same 10x10 cell range in one Excel Workbook located in same directory as the Word Doc, although other tests using separate directories produced similar results.
    -The Excel Links were inserted into Word via Paste Special/Paste Link/MS Excel Worksheet Object.
    -The workbook (XLSX) is 15K in size.
-All links are set for "Manual" update; our testing showed no difference if links are set to "Auto".
    -Again, Word is set to NOT update Automatic links when the document is opened.
    -Results shown below are for local hard drive test.
-Network response times are slightly greater on a 100 Mb LAN.
    -Word is completely restarted prior to each test to make sure there's no application caching involved.
    LOCAL DRIVE RESPONSE TIMES TO OPEN THE WORD DOC in Docx format:
    1. Open the Word Doc from Word with Excel closed - 175 seconds (Excel appears to be opened and closed once per link)
    2. Open the Word Doc from Word with Excel open & workbook closed - 33 seconds (Excel appears to be accessed once per link)
    3. Open the Word Doc from Word with the linked workbook already open - 7 seconds (Excel appears to be accessed once per link)
4. Rename the Excel XLSX workbook so that Word can't find it, then open the Doc from Word - 1-2 seconds (Excel does not appear to be invoked)
5. Repeat ALL of the above scenarios 1-4 using DOC and XLS files, created with "Save As" from the original test files and re-pasting the links - 1-2 seconds to open the Word Doc (Excel does not appear to be invoked)
    Summary
    Using DOCX/XLSX, if Word can't find the linked spreadsheet it gives up and performs nicely during file-open. Otherwise, Word performs needless, time-consuming access to Excel, and in the worst case, opens and closes Excel repeatedly in the background.
    Testing with DOC/XLS formats gives good performance across all scenarios.

It is a known bug and one for which, AFAIK, there has been no fix. The only way that I could get around it in a particular application was to do something like save the linked information as document variables and then, on opening the document, recreate all the information. My memory on exactly what I did is a bit hazy, but I can go back and check if you are interested.
    -- Hope this helps.
    Doug Robbins - Word MVP,
    dkr[atsymbol]mvps[dot]org

  • Very slow performance jclient when running with remote server

We have performance problems when running a JClient application if the Application Server is on a different machine in the same 100 Mbit network. In our application we open 6 panels, with about 15 TextFieldBindings each, on a tabbed pane. Each panel has its own view object on the server. It takes the panel almost two minutes to start up. Our own code seems to perform reasonably, but between the last line of code and the actual visibility of the panel there is a long period of low-intensity network traffic between the client and the server machine, while both machines have low CPU usage. We tried setting the sync mode of the ApplicationModule to SYNC_LAZY and SYNC_IMMEDIATE, but this does not seem to make any difference.
    It seems as if the server starts throwing a lot of events after our code is executed, which are caught by the BC4J controlbinding listeners. The performance is a lot better if we have the server and the client on the same machine, and the database on a different one.
    This kind of performance is not acceptable for this application. Are we doing something that should not be done with BC4J, or are we missing something?

You must be hitting a performance issue regarding the download of all property metadata for setting labels etc. on the UI (in case of remote-tier deployment).
This issue has been resolved for our next release of JDeveloper. Basically, a new API has been added that allows 3-tier apps to "download" the set of "used" VO definitions, attribute definitions, etc. on the client, so that the UI comes up quickly.
Also, the application/UI binding-load code generation has been modified to allow for "lazy" loading of controls and lazy binding, much like what's done in the JClient Control-bindings sample on OTN.
For 9.0.2, you may shorten the "load" time by loading only the UI that's first displayed and pre-loading the ViewObject definitions. However, it will still be slower than what the above-mentioned method would do in one roundtrip.

  • Issue when Run Report with  Hier selection   in the Portal

Hi Portal BI Experts,
We are finding a strange issue when running a report.
The following variables are on the report selection screen:
Company code [optional]
Prod. Variance Type [mandatory]
Hierarchy Node Variable [optional]
The query, which I run through BEx Analyzer with the hierarchy selection below, works fine. But
when I run it through the portal with the hierarchy selection value 00/50/G310/702258(0CUST_SALES),
it automatically displays as +00/50/G310/702258(0CUST_SALES), with a + symbol, and throws the error:
Input "\+00/50/G310/702258(0CUST_SALES);\+00/51/G410/703096(0CUST_SALES)" for Ship-To Party (Sales has invalid format
If I remove the plus symbol the report runs fine.
Your immediate help is highly appreciated.
    Thanks
    Hema

Hi Jaya,
This is the error message I am facing when I execute the query report on the Web [Portal]:
Input "\+00/50/G310/702258(0CUST_SALES);\+00/51/G410/703096(0CUST_SALES)" for Ship-To Party (Sales has invalid format
i.e. in the variable screen the selected hierarchy value is automatically displayed with a plus symbol.
Actual hierarchy value: 00/50/G310/702258(0CUST_SALES)
Once I close the hierarchy selection list window, the value turns into +00/50/G310/702258(0CUST_SALES).
If I click OK with this plus symbol it throws the above error. After removing the plus symbol manually it works fine.
I am unable to locate the setting.
Note that when I run the same report in BEx Analyzer it works fine; no issues with the hierarchy value.
Thanks,
    Hema

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
Hope you can help; this involves a performance issue when creating a report/query.
I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back only batches whose Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 mins. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
I've been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
I think the problem is with the column using DECODE. When querying the column in TOAD the value of 'P' is returned, but in Discoverer the condition is done on the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
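If exposing the underlying column is an option (e.g. via a custom folder or a calculated item on the raw code), conditioning on the code itself should let the optimizer use an index on STATUS instead of full-scanning GL_JE_BATCHES. A minimal sketch, assuming direct SQL access; the selected columns are illustrative:
-- 'P' is the raw code that the DECODE above maps to 'Posted'
SELECT b.je_batch_id, b.status
FROM gl.gl_je_batches b
WHERE b.status = 'P';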

  • Issue when running acutal Off-cycle payroll in India!

    Dear experts!
I have an urgent issue when running an ACTUAL off-cycle payroll in India, with the error: "Payroll already performed once in future".
In testing mode, everything is fine. I have checked the payroll results in pc_payresult and pu01, and Infotype 3. All of them show the last payroll on 31.07.2011. But now, when I run the off-cycle payroll on 20.08.2011, this error is displayed, so I cannot run the actual payroll.
    Please help me! I am really appreciated.
    Regards!
    Woody.

    Hello
Normally this error message reflects an inconsistency in the payroll directory cluster, which you can view via report RPUDIR00. To restore consistency, run RPUDIR00 for this pernr with the following parameters on the selection screen:
Payroll for country
Personnel number
Compare directory
Payroll results structure
Detailed log
Save new directory: X   <- this will write the new directory; if you want to test first, do not select this parameter.
Afterwards it should be possible to run payroll for this employee.
    Thanks and Kind Regards
    Ramana

  • Performance problem when running a personalization rule

    We have a serious performance problem when running a personalization rule.
    The rule is defined like this:
    Definition
    Rule Type: Content
    Content Type: LoadedData
    Name: allAnnouncements
    Description: all announcements of types: announcement, deal, new release,
    tip of the day
    If the user has the following characteristics:
    And when:
    Then display content based on:
    (CONTENT.RessourceType == announcement) or (CONTENT.RessourceType == deal)
    or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType == tip
    of the week)
    and CONTENT.endDate > now
    and CONTENT.startDate <= now
    END---------------------------------
    and is invoked in a JSP page like this:
    <%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl'
    || CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) &&
    (CONTENT.userType ='retailer')"%>
<pz:contentselector
id="cdocs"
ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitionHome/b2boost"
rule="allAnnouncements"
sortBy="startDate DESC"
query="<%=customQuery%>"
contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
The customQuery is constructed at runtime from user information, and cannot be constructed with the rules administration interface.
    When I turn on debugging mode, I can see that the rule is parsed and a SQL
    query is generated, with the correct parameters.
    This is the generated query (with the substitutions):
    select
    WLCS_DOCUMENT.ID,
    WLCS_DOCUMENT.DOCUMENT_SIZE,
    WLCS_DOCUMENT.VERSION,
    WLCS_DOCUMENT.AUTHOR,
    WLCS_DOCUMENT.CREATION_DATE,
    WLCS_DOCUMENT.LOCKED_BY,
    WLCS_DOCUMENT.MODIFIED_DATE,
    WLCS_DOCUMENT.MODIFIED_BY,
    WLCS_DOCUMENT.DESCRIPTION,
    WLCS_DOCUMENT.COMMENTS,
    WLCS_DOCUMENT.MIME_TYPE
    FROM
    WLCS_DOCUMENT
    WHERE
    ((((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = ''
    AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
    AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
    AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'language'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = '*'
    AND NOT (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
    AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
    At this moment, the server makes the user wait more than 10 min for the
    query to execute.
    This is what I found out about the problem:
1) When I run the query in an Oracle SQL client (we are using Oracle 8.1.7.0), it takes 5-10 seconds.
2) If I remove the second term of (CONTENT.Country='nl' || CONTENT.Country='*') in the custom query, thus restricting to CONTENT.Country='nl', the performance is OK.
3) There are currently more or less 130 records in the DB that have Country='*'.
4) When I run the page on our QA server (Solaris), which is at the same time our Oracle server, the response time is OK, but if I run it on our development server (W2K), the response time is ridiculously long.
5) The problem also happens if I add the term (CONTENT.Country='nl' || CONTENT.Country='*') to the rule definition and remove this part from the custom query.
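For comparison, the four OR'd per-type subqueries could in principle collapse into a single metadata lookup. This is a hand-written sketch, not anything the rules engine emits, but it is the shape I would expect to perform well:
SELECT WLCS_DOCUMENT.ID
FROM WLCS_DOCUMENT
WHERE WLCS_DOCUMENT.ID IN
(SELECT WLCS_DOCUMENT_METADATA.ID
FROM WLCS_DOCUMENT_METADATA
WHERE WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE IN
('announcement', 'deal', 'new release', 'tip of the week'))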
    Am I missing something? Am I using the personalization server correctly?
    Is this performance difference between QA and DEV due to differences in the
    OS?
    Thank you,
    Luis Muñiz

    Luis,
    I think you are working through Support on this one, so hopefully you are in good
    shape.
    For others who are seeing this same performance issue with the reference CM implementation,
    there is a patch available via Support for the 3.2 and 3.5 releases that solves
    this problem.
    This issue is being tracked internally as CR060645 for WLPS 3.2 and CR055594 for
    WLPS 3.5.
    Regards,
    PJL
    "Luis Muniz" <[email protected]> wrote:
    We have a serious performance problem when running a personalization
    rule.
    The rule is defined like this:
    Definition
    Rule Type: Content
    Content Type: LoadedData
    Name: allAnnouncements
    Description: all announcements of types: announcement, deal, new release,
    tip of the day
    If the user has the following characteristics:
    And when:
    Then display content based on:
    (CONTENT.RessourceType == announcement) or (CONTENT.RessourceType ==
    deal)
    or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType ==
    tip
    of the week)
    and CONTENT.endDate > now
    and CONTENT.startDate <= now
    END---------------------------------
    and is invoked in a JSP page like this:
    <%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl'
    || CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) &&
    (CONTENT.userType ='retailer')"%>
    <pz:contentselector
    id="cdocs"
    ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitio
    nHome/b2boost"
    rule="allAnnouncements"
    sortBy="startDate DESC"
    query="<%=customQuery%>"
    contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
    The customQuery is constructed at runtime from user information, and
    cannot
    be constructed with rules
    administration interface.
    When I turn on debugging mode, I can see that the rule is parsed and
    a SQL
    query is generated, with the correct parameters.
    This is the generated query (with the substitutions):
    select
    WLCS_DOCUMENT.ID,
    WLCS_DOCUMENT.DOCUMENT_SIZE,
    WLCS_DOCUMENT.VERSION,
    WLCS_DOCUMENT.AUTHOR,
    WLCS_DOCUMENT.CREATION_DATE,
    WLCS_DOCUMENT.LOCKED_BY,
    WLCS_DOCUMENT.MODIFIED_DATE,
    WLCS_DOCUMENT.MODIFIED_BY,
    WLCS_DOCUMENT.DESCRIPTION,
    WLCS_DOCUMENT.COMMENTS,
    WLCS_DOCUMENT.MIME_TYPE
    FROM
    WLCS_DOCUMENT
    WHERE
    ((((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = ''
    AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
    AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
    AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'language'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = '*'
    AND NOT (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
    AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
    At this moment, the server makes the user wait more than 10 min for the
    query to execute.
    This is what I found out about the problem:
    1)When I run the query on an Oracle SQL client (We are using Oracle 8.1.7.0)
    , it takes 5-10 seconds.
    2)If I remove the second term of (CONTENT.Country='nl' ||
    CONTENT.Country='*' ) in the custom query,
    thus retricting to CONTENT.Country='nl', the performance is OK.
    3)There are currently more or less 130 records in the DB that have
    Country='*'
    4)When I run the page on our QA server (solaris), which is at the same
    time
    our Oracle server,
    the response time is OK, but if I run it on our development server (W2K),
    response time is ridiculously long.
    5)The problem happens also if I add the term (CONTENT.Country='nl' ||
    CONTENT.Country='*' )
    to the rule definition, and I remove this part from the custom query.
    Am I missing something? Am I using the personalization server correctly?
    Is this performance difference between QA and DEV due to differences
    in the
    OS?
    Thank you,
    Luis Muñiz

  • Oracle Retail 13 - Performance issues when open, save, approving worksheets

    Hi Guys,
Recently we started facing performance issues when working with Oracle Retail 13 worksheets from within the Java GUI on client desktops.
We run Oracle Retail 13.1 powered by Oracle Database 11g R1 and AS 10g in the latest release.
    Issues:
- Opening, saving, or approving a worksheet with approx 9 thousand items takes up to 15 minutes.
- Time for smaller worksheets is also around 10 minutes just to open a worksheet.
- Just opening multiple worksheets also takes "ages", up to 10-15 minutes.
    Questions:
    - Is it expected performance for such worksheets?
    - What is your experience with Oracle Retail 13 in terms of performance while working with worksheets - how much time does it normally take to open edit save a worksheet?
    - What are the average expected times for such operations?
    Any feedback and hints would be much appreciated.
    Cheers!!

    Hi,
I guess you mean Order/Buyer worksheets?
This is not normal; it should be quicker, a matter of seconds to at most a minute.
Database-side tuning is where I would look for clues.
And the obvious question: do you remember any changes that may have caused the issue? Are the table and index statistics freshly gathered?
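If they might be stale, refreshing them is a cheap first test. A minimal sketch (the schema name is a placeholder for your RMS schema owner):
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'RMS_OWNER', cascade => TRUE);
END;
/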
    Best regards, Erik Ykema

  • Performance issue when a Direct I/O option is selected

    Hello Experts,
One of my customers has a performance issue when the Direct I/O option is selected. They report an increase in memory usage with the Direct I/O storage option compared to the Buffered I/O option.
There are two BSO applications on the server. When using Buffered I/O they experienced a high level of read and write I/Os. Using Direct I/O reduces the read and write I/Os, but dramatically increases memory usage.
    Other Information -
    a) Environment Details
    HSS - 9.3.1.0.45, AAS - 9.3.1.0.0.135, Essbase - 9.3.1.2.00 (64-bit)
    OS: Microsoft Windows x64 (64-bit) 2003 R2
    b) What is the memory usage when Buffered I/O and Direct I/O is used? How about running calculations, database restructures, and database queries? Do these processes take much time for execution?
    Application 1: Buffered 700MB, Direct 5GB
    Application 2: Buffered 600MB to 1.5GB, Direct 2GB
    Calculation times may increase from 15 minutes to 4 hours. Same with restructure.
    c) What is the current Database Data cache; Data file cache and Index cache values?
    Application 1: Buffered (Index 80MB, Data 400MB), Direct (Index 120MB; Data File 4GB, Data 480MB).
    Application 2: Buffered (Index 100MB, Data 300MB), Direct (Index 700MB, Data File 1.5GB, Data 300MB)
    d) What is the total size of the ess0000x.pag files and ess0000x.ind files?
    Application 1: Page File 20GB, Index 1.7GB.
    Application 2: Page 3GB, index 700MB.
    Any suggestions on how to improve the performance when Direct I/O is selected? Any performance documents relating to above scenario would be of great help.
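One thing I plan to compare while testing, from memory and an assumption on my part (please verify the setting name against the 9.3 documentation): the default I/O mode for newly created databases can be set server-wide in essbase.cfg, e.g.
DIRECTIO FALSE
Existing databases keep the storage type set in their database properties.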
    Thanks in advance.
    Regards,
    Sudhir

    Sudhir,
Do you work at a help desk or are you a consultant? You ask such a varied range of questions that I suspect the former. If you do work at a help desk, don't you have a next level of support that could help you? If you are a consultant, I suggest getting together with another consultant who knows more. You might also want to close some of your questions: you have 24 open, and perhaps give points to those who helped you.

  • Performance hit when running in ARCHIVELOG mode.

    What is the performance hit when running in ARCHIVELOG mode?
    Thank you,
    David

I am not one to disagree with Tom Kyte (unless I think he is wrong :) ), and I am not going to disagree here. I do caution against the simplistic answer that the hit is negligible. I commend the respondent who qualified that answer with a discussion of I/O.
I have come across more than one situation where archive logging was a performance hit because of the associated I/O. Many want to put archive logs on cheaper storage and do not recognize that not only can this slow a system, but it can become a major issue resulting in a system that hangs until the logs are written. A better solution for these folks is to write to fast storage and have a secondary process that offloads those logs to the slower storage.
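A minimal sketch of that layout (the paths and the second destination are hypothetical):
-- archive to fast local storage first; a secondary, OPTIONAL destination
-- (or a separate copy job) can move logs to slower storage afterwards
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/fast_arch MANDATORY' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/mnt/slow_arch OPTIONAL' SCOPE=BOTH;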
    Let us also not assume that the archive location is local disk. It might be that an archive location is remote, such as with log shipping or NFS. Network latency can become an issue.
    There are many things to consider as there always are. I suppose with any answer, even if simple, one could spin it with some obscure situation that makes the simple answer inappropriate. Having seen some burned by this issue, I chose to elaborate, and I appreciate your indulgence.
    Chris

  • Performance Issues when editing large PDFs

We are using Adobe 9 and X Professional and are experiencing performance issues when attempting to edit large PDF files (Windows 7 OS). When editing PDFs that are 200+ pages, we are seeing pregnant pauses (that feel like lockups), slow open times, and slow printing.
    Are there any tips or tricks with regard to working with these large documents that would improve performance?

    You said "edit." If you are talking about actual editing, that should be done in the original and a new PDF created. Acrobat is not a very good editing tool and should only be used for minor, critical edits.
If you are talking about simply using the PDF, a lot depends on the structure of the PDF. If it is full of graphics, it will be slow. You can improve performance by using the PDF Optimizer to reduce graphic resolution and such. You may very likely have a bloated PDF that is causing the problem, and optimizing the structure should help.
    Be sure to work on a copy.

  • Performance issue - application running on front

Hi, I have a strange performance issue:
- when I launch my app from Flash Builder without touching anything, it is slow,
- when I launch it from Flash Builder and immediately open another window and keep it in front, it is really fast.
It is a windowed application, full screen, displaying multiple objects moving around.
Has anyone already had this issue?
    thanks,
    YAnn

    For monitoring Azure using SCOM 2012R2, you can refer below link
    http://blogs.technet.com/b/dcaro/archive/2012/05/02/how-to-monitor-your-windows-azure-application-with-system-center-2012.aspx

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
This is an academic question really; I'm not sure what I'm going to do about my issue, but I have some options. I was wondering if anyone would like to throw in their two cents on what they would do.
I have a report: the users want to see all agreements and all conditions related to the updating of rebates, and the affected invoices. From a technical perspective ENT6038-KONV-KONP-KONA-KNA1. These are the tables I have to hit. The problem is that when they retroactively update rebate conditions they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
I've tried everything around the code. If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
1) Use shared memory to preload the data for the report. This would work, but I'm not going to know what data needs to be loaded until report run time. They put in a date; I simply can't preload everything. I don't like this option much.
2) Write a function module to do this work. When the user clicks the button to get this particular data, it will launch the FM in the background and e-mail them the results. As you know, a background job won't time out. So far this is my favored option.
    Any other ideas?
Oh... nope, BI is not an option, we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

My two cents - firstly, I totally agree with Derick that it's probably a good idea to go back to the business and have them justify their requirement in regards to reporting and "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements too much, as in my experience they neither understand (too much) technology nor want to hear about the technical limitations of a system etc. They want what they want, if possible yesterday!
So, about dealing with performance issues within ABAP: I'm sure you must already be using efficient programming techniques like hashed internal tables with unique keys, accessing rows of the table using field-symbols and all that, but what I was going to suggest is to look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to deal with this a couple of times in the past when handling massive amounts of data and I found them to be very efficient in regards to performance. A good point to remember when using Extracts, and I quote from SAP Help: "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.

  • Strange issue when running sql with false condition

    Hi,
    I'm running this query from the application:
    select 22
    from DUAL
    where ( to_number(0) <> 0 ) and exists (
    select 'X'
    from TABULA.macdent$FNCITEMS
    where ( macdent$FNCITEMS.QIV = to_number(0) ) );
And it takes a long time because it is checking the second condition, even though the first one is false.
When running it through SQL*Plus it takes no time, and of course no rows are returned.
The 0 in to_number is a bind variable;
it looks like this:
    select :intvar
    from DUAL
    where ( to_number(:QIV$) <> 0 ) and exists ( select 'X'
    from TABULA.macdent$FNCITEMS
    where ( macdent$FNCITEMS.QIV = to_number(:QIV$) ) )
    1 - filter predicate: ( IS NOT NULL AND 0<>TO_NUMBER(TO_CHAR(:QIV$)))
    But in sqlplus the filter predicates is different.
    1 - filter(NULL IS NOT NULL AND EXISTS (SELECT 0 FROM "TABULA"."MACDENT$FNCITEMS" "MACDENT$FNCITEMS" WHERE
    SQL> select 22
    from DUAL
    where ( to_number(0) <> 0 ) and exists (
    select 'X'
    from TABULA.macdent$FNCITEMS
    where ( macdent$FNCITEMS.QIV = to_number(0) ) );
    2 3 4 5 6
    no rows selected
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 883410849
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    Time | TQ |IN-OUT| PQ Distrib |
    | 0 | SELECT STATEMENT | | 1 | | 0 (0)| | | | |
    |* 1 | FILTER | | | | | | | | |
    | 2 | FAST DUAL | | 1 | | 2 (0)|00:00:01 | | | |
    | 3 | PX COORDINATOR | | | | | | | | |
    | 4 | PX SEND QC (RANDOM)| :TQ10000 | 63M| 782M| 2 (0)|00:00:01 | Q1,00 | P->S | QC (RAND) |
    | 5 | PX BLOCK ITERATOR | | 63M| 782M| 2 (0)|00:00:01 | Q1,00 | PCWC | |
    |* 6 | TABLE ACCESS FULL| MACDENT$FNCITEMS | 63M| 782M| 2 (0)|00:00:01 | Q1,00 | PCWP | |
    Predicate Information (identified by operation id):
    1 - filter(NULL IS NOT NULL AND EXISTS (SELECT 0 FROM "TABULA"."MACDENT$FNCITEMS" "MACDENT$FNCITEMS" WHERE
    "MACDENT$FNCITEMS"."QIV"=0))
    6 - filter("MACDENT$FNCITEMS"."QIV"=0)
    ANY SUGGESTIONS?
    Thanks,

    912294 wrote:
    Hi,
    I'm running this query from the application:
    select 22
    from DUAL
    where ( to_number(0) <> 0 ) and exists (
    select 'X'
    from TABULA.macdent$FNCITEMS
    where ( macdent$FNCITEMS.QIV = to_number(0) ) );
    Why are you using "to_number(0)"? the zero is already seen by oracle as a number.
    The to_number function expects to get a character string as the argument: to_number('0')
    See the difference?
    As you currently have it coded, you are forcing oracle to do an implicit conversion of the number zero to the character string '0' in order to pass it to to_number to convert it back to the number zero.
    And what is the point of "where to_number(0) <> 0"
    When would that EVER be true?
    And it takes a long time because it is checking the second condition.
    Even though the first one is False.
    When running it through sqlplus it takes nothing. and of course no rows returns.
    The 0 in to_number is a bind variable
    it looks like this:
    select :intvar
    from DUAL
    where ( to_number(:QIV$) <> 0 ) and exists ( select 'X'
    from TABULA.macdent$FNCITEMS
    where ( macdent$FNCITEMS.QIV = to_number(:QIV$) ) )
    So the query you showed at the top is not the query we are really dealing with. What is the data type of QIV$? At least now we know that comparison to zero is not a fixed value.
    1 - filter predicate: ( IS NOT NULL AND 0<>TO_NUMBER(TO_CHAR(:QIV$)))
    This suggests QIV$ is already a number, else why would you pass it to to_char? And as I said above, if it is already a number, why pass it to to_number? Here you appear to be explicitly doing what I described oracle as implicitly doing in your first query above.
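If QIV$ really is bound as a number, the predicate needs no conversion at all, which also gives the optimizer a clean short-circuit. A minimal rewrite sketch, under that assumption:
select :intvar
from DUAL
where :QIV$ <> 0
and exists (select 'X'
from TABULA.macdent$FNCITEMS
where macdent$FNCITEMS.QIV = :QIV$);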

  • Performance issues when using AQ notification with one consumer

We have developed a system to load data from a reservation database into a reporting database.
At a certain point in the process, a message with the identifier of the reservation is enqueued to a queue (multi-consumer) on the same DB and then propagated to a similar queue on the REP database.
    This queue (multi-consumer) has AQ notification enabled (with one consumer) which calls the queue_callback procedure which
    - dequeues the message
    - calls a procedure to load the Resv data into the Reporting schema (through DB link)
We need each message to be processed ONLY ONCE, thus the usage of one single subscriber (consumer).
But when load testing our application with multiple threads, the number of records created in the reservation database becomes quite large, meaning a large number of messages going through the first queue and propagating to the second queue (very quickly).
But messages are not processed fast enough by the 2nd queue (notification), which falls behind.
I would like to keep using notification, as processing is automatic (no need to set up dbms_jobs to dequeue etc.), or something similar.
So having read articles, I feel I need to use either:
- multiple subscribers to the 2nd queue, where each message is processed by only one subscriber (using a rule: say 10 subscribers S0 to S9, with Si processing messages where the last digit of the identifier is i; see the sketch below) -
the problem with this is that there is then an attempt to process the message for each subscriber, isn't there?
- or a different dequeuing method where many processes are used in parallel, with each message processed by only one of them.
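To make the first option concrete, here is a sketch of what I mean; the subscriber and queue names are made up, and I am assuming resv_code is a character attribute of RESV_DETAIL:
BEGIN
  DBMS_AQADM.ADD_SUBSCRIBER(
    queue_name => 'report_queue',
    subscriber => SYS.AQ$_AGENT('S0', NULL, NULL),
    rule       => 'substr(tab.user_data.resv_code, -1) = ''0''');
END;
/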
    Does anyone have experience and recommendations to make on how to improve throughput of messages?
    Rgds
    Philippe

    Hi, thanks for your interest
    I am working with 10.2.0.4
My objective is to load a subset of the reservation data from the tables in the first DB (Reservation - OLTP - 150 tables)
to the tables in the second DB (Reporting - about 15 tables at the moment), without affecting performance on the Reservation DB.
Thus the choice of Advanced Queuing (asynchronous).
- I have 2 similar queues in 2 separate databases (Reservation and Reporting)
    The message payload is the same on both (the identifier of the reservation)
    When a certain event happens on the RESERVATION database, I enqueue a message on the first database
    Propagation moves the same message data to the second queue.
    And there I have notification sending the message to a single consumer, which:
    - calls dequeue
- and the data load procedure, which loads this reservation
    My performance difficulties start at the notification but I will post all the relevant code before notification, in case it has an impact.
- The 2nd queue was created with a script containing the following (similar script for the first queue):
    dbms_aqadm.create_queue_table( queue_table => '&&CQT_QUEUE_TABLE_NAME',
    queue_payload_type => 'RESV_DETAIL',
    comment => 'Report queue table',
    multiple_consumers => TRUE,
    message_grouping => DBMS_AQADM.NONE,
    compatible => '10.0.0',
    sort_list => 'ENQ_TIME',
    primary_instance => '0',
    secondary_instance => '0');
    dbms_aqadm.create_queue (
    queue_name => '&&CRQ_QUEUE_NAME',
    queue_table => '&&CRQ_QUEUE_TABLE_NAME',
    max_retries => 5);
    - ENQUEUING on the first queue (snippet of code)
    o_resv_detail DLEX_AQ_ADMIN.RESV_DETAIL;
    o_resv_detail:= DLEX_AQ_ADMIN.RESV_DETAIL(resvcode, resvhistorysequence);
    DLEX_RESVEVENT_AQ.enqueue_one_message (o_resv_detail);
    where DLEX_RESVEVENT_AQ.enqueue_one_message is :
PROCEDURE enqueue_one_message (msg IN RESV_DETAIL)
IS
enqopt DBMS_AQ.enqueue_options_t;
mprop DBMS_AQ.message_properties_t;
enq_msgid dlex_resvevent_aq_admin.msgid_t;
BEGIN
DBMS_AQ.enqueue (queue_name => dlex_resvevent_aq_admin.c_resvevent_queue,
enqueue_options => enqopt,
message_properties => mprop,
payload => msg,
msgid => enq_msgid);
END;
    - PROPAGATION: The message is dequeued from 1st queue and enqueued automatically by AQ propagation into this 2nd queue.
    (using a call to the following 'wrapper' procedure)
PROCEDURE schedule_propagate (
src_queue_name IN VARCHAR2,
destination IN VARCHAR2 DEFAULT NULL)
IS
sprocname dlex_types.procname_t := 'dlex_resvevent_aq_admin.schedule_propagate';
BEGIN
DBMS_AQADM.SCHEDULE_PROPAGATION(queue_name => src_queue_name,
destination => destination,
latency => 10);
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line (SQLERRM || ' occurred in ' || sprocname);
END schedule_propagate;
    - For 'NOTIFICATION': ONE subscriber was created using:
    EXECUTE DLEX_REPORT_AQ_ADMIN.add_subscriber('&&STQ_QUEUE_NAME','&&STQ_SUBSCRIBER',NULL,NULL, NULL);
    this is a wrapper procedure that uses:
    DBMS_AQADM.add_subscriber (queue_name => p_queue_name, subscriber => subscriber_agent );
    Then notification is registered with:
    EXECUTE dlex_report_aq_admin.register_notification_action ('&&AQ_SCHEMA','&&REPORT_QUEUE_NAME','&&REPORT_QUEUE_SUBSCRIBER');
    - job_queue_processes is set to 10
    - The callback procedure is as follows
CREATE OR REPLACE PROCEDURE DLEX_AQ_ADMIN.queue_callback (
context RAW,
reginfo SYS.AQ$_REG_INFO,
descr SYS.AQ$_DESCRIPTOR,
payload RAW,
payloadl NUMBER)
IS
s_procname CONSTANT VARCHAR2 (40) := UPPER ('queue_callback');
r_dequeue_options DBMS_AQ.DEQUEUE_OPTIONS_T;
r_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
v_message_handle RAW(16);
o_payload RESV_DETAIL;
BEGIN
-- dequeue exactly the message that triggered this notification
r_dequeue_options.msgid := descr.msg_id;
r_dequeue_options.consumer_name := descr.consumer_name;
DBMS_AQ.DEQUEUE(
queue_name => descr.queue_name,
dequeue_options => r_dequeue_options,
message_properties => r_message_properties,
payload => o_payload,
msgid => v_message_handle);
-- Call procedure to load data from the reservation database to the Reporting DB through the DB link
dlex_report.dlex_data_load.load_reservation
( in_resvcode => o_payload.resv_code,
in_resvHistorySequence => o_payload.resv_history_sequence );
COMMIT;
END queue_callback;
- I noticed that messages are not taken out of the 2nd queue.
I guess I would need to use the REMOVE dequeue option to delete messages from the queue?
Would this be a large source of performance degradation after just a few thousand messages?
- The data load through the DB link may be a little intensive, but I feel that doing things in parallel would help.
I would like to understand whether Oracle has a way of dequeuing in parallel (with or without the use of notification).
In the case of multiple subscribers with notification, does the 'job_queue_processes' value have an impact on the degree of parallelism? If not, what setting does?
And is there a way supplied by Oracle to set the queue to notify only one subscriber per message?
    Your advice would be very much appreciated
    Philippe
