SQL Dev 2.1 - Performance Problem Filtering Users

Hi, we commonly want to apply a filter to our "Other Users" section in the connection window. We usually filter on object count > 0 or >= 1. This appears to result in a query that looks like this:
<pre>
SELECT *
FROM (SELECT username
FROM sys.all_users au
WHERE au.username != USER)
WHERE (SELECT COUNT (1)
FROM all_objects
WHERE username = owner) >=1
</pre>
On our system, it runs for well over 3 minutes (sometimes 20 minutes). We have 2,400 rows in ALL_USERS, and ALL_OBJECTS has 226,271 rows. It is roughly the same for the DBA_* views as well.
Could the query that is generated be rewritten to be more like this?
<pre>
select *
from
(select username
from sys.all_users au
where au.username != USER)
where
username in (select owner
from all_objects
group by owner
having count(*) >= 1)
</pre>
That query runs in less than 10 seconds on the same system and appears to be safe for any older versions of Oracle as well.
I realize that many systems don't have that many objects so maybe this is helpful feedback...
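For comparison, a correlated EXISTS is another common shape for this filter - a minimal sketch against the same views (not tested on our system):
<pre>
select username
from sys.all_users au
where au.username != USER
and exists (select 1
            from all_objects ao
            where ao.owner = au.username)
</pre>
Like the GROUP BY version, this lets the optimizer drive a semi-join and stop probing ALL_OBJECTS at the first match per user, instead of running a full COUNT once per user.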

Yes - the SQL for these internal queries is stored in XML files, which you can modify - although this means you are accepting the risks and consequences of getting it wrong. The XML file you want is oracle\dbtools\raptor\navigator\oracle\schema.xml, which lives in <SQL Developer Home>\sqldeveloper\extensions\oracle.sqldeveloper.jar (you need to extract it from the jar, modify it, and then put it back in the jar). Unfortunately, I don't know enough about the structure of the filter components of the XML to know whether you can get a filter that does the query you want.
theFurryOne

Similar Messages

  • ADDM Report is not produced by SQL DEV 4.0 if logged in user does not have SYSDBA privilege

    Hello -
In SQL Dev 4.0 (Early Adopter), I logged in as a user with no SYSDBA privilege granted, but with ADVISOR granted, EXECUTE on DBMS_ADDM granted, SELECT ANY DICTIONARY granted, etc.
Then I go to the menu View --> DBA --> Performance --> Automatic Database Diagnostic Monitor
    Select ADDM report from the available choices.
    Out of 3 menu options: Summary, Findings and ADDM Report, I can see the first two, but I am getting a grey screen for ADDM Report.
If I do exactly the same but log in as SYSDBA, then the ADDM Report can be produced and seen. So the issue is in the missing grants and privileges.
    The goal that we are pursuing in our shop is to empower developers to work with ADDM reports by having them login as non-SYSDBA user.
Please advise what privilege should be granted to get this working for a non-SYSDBA login.
The same user can successfully execute $ORACLE_HOME/rdbms/admin/addmrpt.sql and produce a file with the complete ADDM report, but is not able to get the ADDM report from SQL Dev 4.0.
    Thank you,
    vr

    Thank you, Phil.
    This worked for me in my quick test case.
    Great help, as DBA group wants to allow NON-privileged users (like Developers and QA) to execute ADDM on demand.
    Here is my observation for RAC database with two instances.
    I have followed your instructions:
    "Performance / Automatic Database Diagnostic Monitor -> Run ADDM"
    Then
    - Uncheck box "By Creating New Snapshot" (this box was checked by default)
    - Provide new "Task Name" (let's say "vr1")
    - Select "Start Snapshot" (this is where my first confusion is: each screen shot # is shown twice, may be because i have two instances of RAC)
    - Select "End Snapshot" (the same way: each SNAP_ID is shown twice)
    - and finally APPLY button is pressed
    This creates 3 reports:
    vr1
    vr1$1 (for instance_id=1)
vr1$2 (for instance_id=2)
So, my question is: is this expected and normal behavior of the tool?
    What if my RAC database has 8 instances? Then I would generate 9 separate ADDM reports?
    Is this correct observation?
    Thank you,
    vr
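For reference, here is a minimal sketch of what running database-mode ADDM by hand looks like with DBMS_ADDM (the task name and snapshot IDs are illustrative, and this assumes the ADVISOR grant mentioned above). On RAC, database-mode analysis covers every instance for the snapshot range, which looks consistent with the per-instance vr1$1/vr1$2 reports:
<pre>
DECLARE
  l_task VARCHAR2(100) := 'vr1_manual';  -- illustrative task name
BEGIN
  -- database-mode ADDM over an AWR snapshot range (snapshot IDs illustrative)
  DBMS_ADDM.ANALYZE_DB(l_task, 100, 101);
END;
/
-- fetch the report text afterwards
SELECT DBMS_ADDM.GET_REPORT('vr1_manual') FROM dual;
</pre>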

  • SQL Dev result fetch performance

    I am using SQL Developer 1.2.0-Main 29.98
    My dept. is looking to save the cost of TOAD renewal, but evaluators have raised the following issue:
Using a test query in both TOAD and SQL Developer, it takes 2x longer in SQL Dev to retrieve all rows (7864). Both tools return the first n rows quickly, but using CTRL-A in SQL Dev to force the worksheet to display all the rows takes 88 sec. with Sql Array Fetch Size = 50, improving incrementally to a fastest 64 sec. with the Size = 5000. (It also lets me set the size even larger, even though the documentation says 5000 is the max; there is no improvement in performance, though.) TOAD, using its minimum memory threshold method, returns the data in 32 sec.
    Are there any additional configuration changes I can make? I am a DBA, not a Java developer, so I have no knowledge of memory settings, etc. that can be defined for the Java environment.
    I have seen in other threads that 2.0 is under development, is this performance issue already being addressed?
    Any and all advice is appreciated, otherwise it's back to evaluating PL/SQL Developer.

If you want performance and no hassle, stick with the proven, robust TOAD. Period. The rest can be called wannabes.
Sqldev is still too young; it has performance issues, crashes, and bugs, and uses a lot of memory, ... but it promises to be really competitive with TOAD in the future.
I do prefer sqldev's code editor and debugger over TOAD's, and it has a modern Java interface, easily customizable through user extensions in XML, and most important: it's Oracle and free. But you do need to put in a lot of patience and endure some frustration...
    Regards,
    K.

  • SQL report performance problem

    I have a SQL classic report in Apex 4.0.2 and database 11.2.0.2.0 with a performance problem.
    The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
    The generated query runs in 1-2 sec in sqlplus (logged in as the Apex parsing schema user), but takes many minutes in Apex. I have found, by monitoring the database sessions via TOAD, that the explain plan in the Apex and sqlplus sessions are very different.
    The summary:
In sqlplus: SELECT STATEMENT ALL_ROWS Cost: 3,695
In Apex:    SELECT STATEMENT ALL_ROWS Cost: 3,108,551
    What could be the cause of this?
    I found a blog and Metalink note about different explain plans for different users. They suggested to set optimizer_secure_view_merging='FALSE', but that didn't help.

    Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
    Only the explain plan doesn't show anything.
    To add: I changed the report source to the query the pl/sql function would generate, so the selects are the same in SQL Workshop and in the application. Still in the application it's horribly slow.
    So, Apex does do something different in the application compared to SQL Workshop.
    Edited by: InoL on Aug 5, 2011 4:50 PM
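One way to compare what each environment actually executed (rather than what EXPLAIN PLAN predicts) is to pull the real cursor plans from the shared pool. A sketch, with the sql_id to be looked up in V$SQL by query text first:
<pre>
-- find the report query's sql_id (the filter text is illustrative)
SELECT sql_id, child_number, plan_hash_value
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* report */%';

-- show the plan each session really used
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));
</pre>
Different plan_hash_value values for the Apex and sqlplus children would confirm that the two environments optimize the statement differently.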

  • PL/SQL Performance problem

    I am facing a performance problem with my current application (PL/SQL packaged procedure)
My application takes data from 4 temporary tables, does a lot of validation, and
puts them into permanent tables (updates if present, else inserts).
    One of the temporary tables is parent table and can have 0 or more rows in
    the other tables.
    I have analyzed all my tables and indexes and checked all my SQLs
    They all seem to be using the indexes correctly.
    There are 1.6 million records combined in all 4 tables.
    I am using Oracle 8i.
    How do I determine what is causing the problem and which part is taking time.
    Please help.
    The skeleton of the code which we have written looks like this
MAIN LOOP (255308 records) -- parent temporary table
  ----- lots of validation -----
  update permanent_table1
  if sql%rowcount = 0 then
    insert into permanent_table1
  Loop2 (0-5 records) -- child temporary table1
    ----- lots of validation -----
    update permanent_table2
    if sql%rowcount = 0 then
      insert into permanent_table2
  end loop2
  Loop3 (0-5 records) -- child temporary table2
    ----- lots of validation -----
    update permanent_table3
    if sql%rowcount = 0 then
      insert into permanent_table3
  end loop3
  Loop4 (0-5 records) -- child temporary table3
    ----- lots of validation -----
    update permanent_table4
    if sql%rowcount = 0 then
      insert into permanent_table4
  end loop4
  -- COMMIT after every 3000 records
END MAIN LOOP
    Thanks
    Ashwin N.

Do this instead of ditching the PL/SQL.
DECLARE
  TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
  TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
  pnums  NumTab;
  pnames NameTab;
  t1 NUMBER(5);
  t2 NUMBER(5);
  t3 NUMBER(5);
BEGIN
  FOR j IN 1..5000 LOOP  -- load index-by tables
    pnums(j)  := j;
    pnames(j) := 'Part No. ' || TO_CHAR(j);
  END LOOP;
  t1 := dbms_utility.get_time;
  FOR i IN 1..5000 LOOP  -- use FOR loop
    INSERT INTO parts VALUES (pnums(i), pnames(i));
  END LOOP;
  t2 := dbms_utility.get_time;
  FORALL i IN 1..5000    -- use FORALL statement
    INSERT INTO parts VALUES (pnums(i), pnames(i));
  t3 := dbms_utility.get_time;
  -- get_time returns hundredths of a second, hence the /100
  dbms_output.put_line('Execution Time (secs)');
  dbms_output.put_line('---------------------');
  dbms_output.put_line('FOR loop: ' || TO_CHAR((t2 - t1)/100));
  dbms_output.put_line('FORALL:   ' || TO_CHAR((t3 - t2)/100));
END;
    Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723
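As an aside, on releases that support it (9i onward, so not on the poster's 8i), each update-else-insert pair above collapses into a single MERGE, which removes the row-by-row logic entirely. A sketch with illustrative table and column names:
<pre>
MERGE INTO permanent_table1 p
USING temp_table1 t                -- table and column names are illustrative
ON (p.id = t.id)
WHEN MATCHED THEN
  UPDATE SET p.col1 = t.col1
WHEN NOT MATCHED THEN
  INSERT (id, col1) VALUES (t.id, t.col1);
</pre>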

  • Performance problems with abap report SAPLPRGN_STRUCTURE during users login

    Hello,
after a patch import of SAP BASIS level 21 and SAP ABAP level 21, our SAP production system (ECC6 - UNICODE KERNEL 240) is showing performance problems. A lot of our users must wait 10-12 minutes during the logon phase before the navigation menu is displayed. The problem is caused by the slow performance of the standard SAP report SAPLPRGN_STRUCTURE, which requires a lot of time. Any idea?
    Thanks

    Hi Pat,
I would stop SAP and use DLTR3PKG. If this doesn't help, you need to investigate whether it is a CPU or DB-time issue in ST03. Then you have to handle it accordingly ...
    Regards
    Volker Gueldenpfennig, consolut international ag
    http://www.consolut.net - http://www.4soi.de - http://www.easymarketplace.de

  • (new?) performance problem using jDriver after a Sql Server 6.5 to 2000 conversion

    Hi,
    This is similar - yet different - to a few of the old postings about performance
    problems with using jdbc drivers against Sql Server 7 & 2000.
    Here's the situation:
    I am running a standalone java application on a Solaris box using BEA's jdbc driver
    to connect to a Sql Server database on another network. The application retrieves
    data from the database through joins on several tables for approximately 40,000
    unique ids. It then processes all of this data and produces a file. We tuned
    the app so that the execution time for a single run through the application was
    24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
    a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
    Sql Server 2000 version. I ran the app and got an alarming execution time of
    5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
    and set the "useVarChars" property to "true" on the driver. The execution time
    for a single run through the application is now 56 minutes.
    56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
    over twice the execution time that I was seeing against the 6.5 database. Theoretically,
    I should be able to switch out my jdbc driver and the DBMS conversion should be
    invisible to my application. That would also mean that I should be seeing the
    same execution times with both versions of the DBMS. Has anybody else seen a
    simlar situation? Are there any other settings or fixes that I can put into place
    to get my performance back down to what I was seeing with 6.5? I would rather
    not have to go through and perform another round of performance tuning after having
    already done this when the app was originally built.
    thanks,
    mike

    Mike wrote:
    Joe,
    This was actually my next step. I replaced the BEA driver with
    the MS driver and let it run through with out making any
    configuration changes, just to see what happened. I got an
    execution time of about 7 1/2 hrs (which was shocking). So,
    (comparing apples to apples) while leaving the default unicode
    property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
    I then set the 'SendStringParametersAsUnicode' to 'false' on the
    MS driver and ran another test. This time the application
    executed in just over 24 minutes. The actual runtime was 24 min
    16 sec, which is still ever so slightly above the actual runtime
    against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
    56 minutes that BEA's driver was giving me.
    I think that this is very interesting. I checked to make sure that
    there were no outside factors that may have been influencing the
    runtimes in either case, and there were none. Just to make sure,
    I ran each driver again and got the same results. It sounds like
    there are no known issues regarding this?
    We have people looking into things on the DBMS side and I'm still
    looking into things on my end, but so far none of us have found
    anything. We'd like to continue using BEA's driver for the
    support and the fact that we use Weblogic Server for all of our
    online applications, but this new data might mean that I have to
switch drivers for this particular application. Thanks.

No, there is no known issue, and if you put a packet sniffer between the client and DBMS, you will probably not see any appreciable difference in the content of the SQL sent by either driver. My suspicion is
    that it involves the historical backward compatibility built in to the DBMS.
    It must still handle several iterations of older applications, speaking obsolete
    versions of the DBMS protocol, and expecting different DBMS behavior!
    Our driver presents itself as a SQL7-level application, and may well be treated
    differently than a newer one. This may include different query processing.
    Because our driver is deprecated, it is unlikely that it will be changed in
    future. We will certainly support you using the MS driver, and if you look
    in the MS JDBC newsgroup, you'll see more answers from BEA folks than
    from MS people!
    Joe
    Mike
    The next test you should do, to isolate the issue, is to try another
    JDBC driver.
    MS provides a type-4 driver now, for free. If it is significantly faster,
    it would be
    interesting. However, it would still not isolate the problem, because
    we still would
    need to know what query plan is created by the DBMS, and why.
    Joe Weinstein at BEA
PS: I can only tell you that our driver has not changed in its semantic function.
It essentially sends SQL to the DBMS. It doesn't alter it.

  • How to avoid performance problems in PL/SQL?

How to avoid performance problems in PL/SQL?
As per my knowledge, below are some points to avoid performance problems in PL/SQL.
Are there other points to avoid performance problems?
    1. Use FORALL instead of FOR, and use BULK COLLECT to avoid looping many times.
    2. EXECUTE IMMEDIATE is faster than DBMS_SQL
    3. Use NOCOPY for OUT and IN OUT if the original value need not be retained. Overhead of keeping a copy of OUT is avoided.
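To illustrate point 1, here is a minimal sketch combining BULK COLLECT and FORALL (the table and column names are illustrative):
<pre>
DECLARE
  TYPE t_id_tab IS TABLE OF emp.empno%TYPE;
  l_ids t_id_tab;
BEGIN
  SELECT empno BULK COLLECT INTO l_ids
  FROM   emp
  WHERE  deptno = 10;            -- one context switch to fetch

  FORALL i IN 1 .. l_ids.COUNT   -- one context switch to update
    UPDATE emp
    SET    sal = sal * 1.1
    WHERE  empno = l_ids(i);
END;
/
</pre>
(As the reply below notes, a single SQL UPDATE is more efficient still when the logic allows it.)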

Susil Kumar Nagarajan wrote:
> 1. Group no of functions or procedures into a PACKAGE
Putting related functions and procedures into packages is useful from a code organization standpoint. It has nothing whatsoever to do with performance.
> 2. Good to use collections in place of cursors that do DML operations on large set of records
But using SQL is more efficient than using PL/SQL with bulk collects.
> 4. Optimize SQL statements if they need to
> -> Avoid using IN, NOT IN conditions or those cause full table scans in queries
That is not true.
> -> See to queries they use Indexes properly, sometimes Leading index column is missed out that cause performance overhead
Assuming "properly" implies that it is entirely possible that a table scan is more efficient than using an index.
> 5. use Oracle HINTS if query can't be further tuned and hints can considerably help you
Hints should be used only as a last resort. It is almost certainly the case that if you can use a hint that forces a particular plan to improve performance, then there is some problem in the underlying statistics that should be fixed in order to resolve issues with many queries rather than just the one you're looking at.
Justin

  • Performance Problem - MS SQL 2K and PreparedStatement

    Hi all
I am using MS SQL 2k and use PreparedStatement to retrieve data. There is a strange and serious performance problem when the PreparedStatement contains "?" and uses the PreparedStatement.setX() functions to set its values. I have performed a test with the following code.
for (int i = 0; i < 10; i++) {
    try {
        con = DBConnection.getInstance();
        statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
        // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
        // statement.setString(1, cardNo);
        rs = statement.executeQuery();
        if (rs.next()) {
            // process the row
        }
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        try {
            rs.close();
            statement.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
Iterations   Time (ms)
1            961
10           1061
200          1803
for (int i = 0; i < 10; i++) {
    try {
        con = DBConnection.getInstance();
        // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
        statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
        statement.setString(1, cardNo);
        rs = statement.executeQuery();
        if (rs.next()) {
            // process the row
        }
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        try {
            rs.close();
            statement.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
Iterations   Time (ms)
1            1171
10           2754
100          18817
200          36443
The above tests were performed with the DataDirect JDBC 3.0 driver. The one that uses ? and setString takes much longer to execute, even though it is supposed to be faster because of precompilation of the statement.
I have tried different drivers - the one provided by MS, DataDirect, and the Sprinta JDBC driver - but all suffer from the same problem to different extents. So I am wondering if MS SQL doesn't support precompiled statements, and no matter what JDBC driver I use I will still have the performance problem. If so, many O/R mappings cannot be used, because I believe most of them, if not all, use precompiled statements.
    Best regards
    Edmond

    Edmond,
    Most JDBC drivers for MS SQL (and I think this includes all the drivers you tested) use sp_executesql to execute PreparedStatements. This is a pretty good solution as the driver doesn't have to keep any information about the PreparedStatement locally, the server takes care of all the precompiling and caching. And if the statement isn't already precompiled, this is also taken care of transparently by SQL Server.
The problem with this approach is that all names in the query must be fully qualified. This means that the driver has to parse the query you are submitting and make all names fully qualified (by prepending a db name and schema). This is why creating a PreparedStatement takes so long with these drivers (and why it does so every time you create it, even though it's the same PreparedStatement).
    However, the speed advantage of PreparedStatements only becomes visible if you reuse the statement a lot of times.
As for why the PreparedStatement with no placeholder is much faster, I think it is because of internal optimisations (maybe the statement is run as a plain statement (?) ).
As a conclusion, if you can reuse the same PreparedStatement, then the performance hit is not so high. Just ignore it. However, if the PreparedStatement is created each time and only used a few times, then you might have a performance issue. In this case I would recommend you try out the jTDS driver ( http://jtds.sourceforge.net ), which uses a completely different approach: temporary stored procedures are created for PreparedStatements. This means that no parsing is done by the driver and PreparedStatement caching is possible (i.e. preparing the same statement the next time will take much less time, as the previously submitted procedure will be reused).
    Alin.
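To make the sp_executesql point concrete, this is roughly the shape of what such drivers send for the parameterized version (a sketch; the qualified table name and parameter are illustrative):
<pre>
EXEC sp_executesql
  N'SELECT * FROM dbo.cardno WHERE car_no = @p1',
  N'@p1 varchar(20)',
  @p1 = '12345'
</pre>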

  • MS SQL Server 2008 performance problem

    We use TopLink 10.1.3.5 to connect to MS SQL Server 2008.
    What we are seeing is that when a query is being run by TopLink a lot of cursors open up and remain open. Our database CPU usage goes up and it affects the whole application.
    Our DBA took a look at it and said the database shows FETCH_APICURSOR* being used for select statements.
    Is there a way to tell TopLink not to use cursors for queries?
    Thanks.

    Can you pin point a particular TopLink query tied to the " FETCH_APICURSOR* " call in the app and post how it is being created?
    My guess is that the application is specifying the TopLink query object to return a cursor or stream and not closing it in all cases, or keeping them open for a long period - did you say they were leaking, or is it just that a large number are open at a time leading to performance problems?
These streams and cursors are described in the TopLink docs here:
    http://docs.oracle.com/cd/E21764_01/web.1111/b32441/qryadv.htm#CJGJBHGJ
    or the 10g docs here:
    http://sqltech.cl/doc/oas10gR3/web.1013/b13593/qryadv010.htm
    If this is the case, you might want to use a different strategy such as pagination instead of cursors, described here:
    http://docs.oracle.com/cd/E17904_01/web.1111/b32441/optimiz.htm#CHDIBGFE
    Best Regards,
    Chris

  • CONNECT BY, Performance problems with hierarchical SQL

    Hello,
    I have a performance problem with the following SQL:
    table name: testtable
    columns: colA, colB, colC, colD, colE, colF
    Following hierarchical SQL:
    SELECT colA||colB||colC AS A1, colD||colE||colF AS B1, colA, colB, colC, level
FROM testtable
    CONNECT BY PRIOR A1 = B1
    START WITH A1 = 'aaa||bbbb||cccc'
With big tables the performance of this construct is very bad, and I can't use function-based indexes to create an index on "colA||colB||colC" and "colD||colE||colF".
Has anyone an idea how I can get really better performance for this hierarchical construct if I have to combine multiple columns? Or is there a better way (with PL/SQL, a view, or a trigger) to solve something like this?
    Thanks in advance for your investigation :)
    Carmen

    Why not
    CONNECT BY PRIOR colA = colD
    and PRIOR colB = colE
    and ...
    ? It is not the same thing, but I suspect my version is correct:-)
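Spelled out, the suggestion looks like this (a sketch; the START WITH literals stand for the components of the original concatenated value). Because each predicate now compares plain columns, ordinary indexes on colD, colE, and colF can be used:
<pre>
SELECT colA||colB||colC AS A1,
       colD||colE||colF AS B1,
       colA, colB, colC, level
FROM   testtable
CONNECT BY PRIOR colA = colD
       AND PRIOR colB = colE
       AND PRIOR colC = colF
START WITH colA = 'aaa'
       AND colB = 'bbbb'
       AND colC = 'cccc';
</pre>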

  • Performance Problem with SQL Query

    Hi
    I have a SQL Query (say, SSS) which runs quite fast when I run it from TOAD.
    But when the same query is being used as
    INSERT INTO <table> VALUES <SSS>;
    It is taking hours to complete.
    The Original Query(OSQ) was changed to this SQL query(SSS) for Performance improvement.
    The Cost has also improved quite a lot from OSQ to SSS. But when SSS runs from inside a procedure it is taking the same long hours as OSQ used to take.
    Please help ASAP as it needs to get fixed.
    Thanks
    Arnab

    SELECT SRM.ID PROJECT_ID,
    SRM.UNIQUE_NAME PROJECT_CODE,
    ODFP.GS_FINANCE_PROJ_CODE,
    ODFP.GS_PRODUCT_CODE,
    ODFP.GS_CUSTOMER,
    ODFP.GS_LEGAL_ENT_REF,
    ODF_PRJ_TYPE.NAME GS_PROJ_TYPE,
    ODF_SRC_SYS.NAME GS_SOURCE_SYSTEM,
    RESM.FIRST_NAME||' '||RESM.LAST_NAME GS_PROJECT_MANAGER,
    ODFP.GS_LEG_PROJ_NUM,
    ODFP.GS_SUPP_DOC ,
    ODFP.GS_REQUEST_ID ,
    ODFP.GS_CONTRACT ,
    ODFP.GS_SIEBEL_REF,
    DECODE(budg.ODF_PARENT_ID,NULL,PRJP.bdgt_cst_total, budg.lab_budg)LAB_BDGT,
    DECODE(budg.ODF_PARENT_ID,NULL,0,budg.nli_budg)NLI_BDGT,
    (DECODE(budg.ODF_PARENT_ID,NULL,PRJP.bdgt_cst_total, budg.lab_budg))+(DECODE(budg.ODF_PARENT_ID,NULL,0,budg.nli_budg)) sum_lab_nonlab_budg,
    frct.LABOUR,
    frct.NON_LABOUR,
    frct.LABOUR+frct.NON_LABOUR FORCAST_TOT_SUM,
    gfr.code GFR_CODE,
    odfp.gs_rev_recognition,
    rev_rec.NAME gs_rev_recognition,
    DECODE(odfp.gs_prevent_reqtxn,1,'YES','0','NO','NO'),
    GS_PROGTYPE.NAME,
    COMM_APPR.UNIQUE_NAME COMM_APPROVER_EIN,
    COMM_APPR.FIRST_NAME||' '||COMM_APPR.LAST_NAME COMM_APPROVER_NAME,
    PR_APPR.UNIQUE_NAME PR_APPROVER_EIN,
    PR_APPR.FIRST_NAME||' '||PR_APPR.LAST_NAME PR_APPROVER_NAME,
    ODFP.GS_BILL_BUD,
    frct.FRCST_REVENUE TOTAL_FRC_REVENUE,
    actuals.LABOUR_ACTUALS,
    actuals.NL_ACTUALS,
    -- get_wip_totalcost(prjp.prid,'L') LABOUR_ACTUALS,
    -- get_wip_totalcost(prjp.prid,'M') NL_ACTUALS,
    ODFP.GS_ASS_PRODUCT_CODE,
    PROJ_TEMPLATE.ID GS_PROJ_TEMPLATE_ID,
    PROJ_TEMPLATE.UNIQUE_NAME GS_PROJ_TEMPLATE_CODE,
    PROJ_TEMPLATE.name GS_PROJ_TEMPLATE_NAME,
    DECODE( NVL(ODFP.GS_FIN_TEMPLATE,0) ,0,'NO',1,'YES') GS_FIN_TEMPLATE,
    ODF_PRJ_TYPE.LOOKUP_CODE GS_PROJ_TYPE
    FROM PROJECTS SRM,
    J_PROJECTS PRJP,
    A_PROJECT ODFP,
    RESOURCES RESM,
    s_gfr gfr,
    RESOURCES PROJ_ACCT,
    RESOURCES COMM_APPR,
    RESOURCES PR_APPR,
    (SELECT CAP.NAME
    ,LOOKUP.LOOKUP_CODE
    FROM S_NLS CAP,
    N_LOOKUPS LOOKUP
    WHERE LOOKUP_TYPE ='GS_PROJECT_TYPE'
    AND LOOKUP.ID =CAP.PK_ID
    AND CAP.LANGUAGE_CODE='en'
    AND CAP.TABLE_NAME = 'CMN_LOOKUPS') ODF_PRJ_TYPE,
    (SELECT CAP.NAME
    ,LOOKUP.LOOKUP_CODE
    FROM S_NLS CAP,
    N_LOOKUPS LOOKUP
    WHERE LOOKUP_TYPE ='GS_SOURCE_SYSTEM'
    AND LOOKUP.ID =CAP.PK_ID
    AND CAP.LANGUAGE_CODE='en'
    AND CAP.TABLE_NAME = 'CMN_LOOKUPS') ODF_SRC_SYS,
    (SELECT CAP.NAME
    ,LOOKUP.LOOKUP_CODE
    FROM S_NLS CAP,
    N_LOOKUPS LOOKUP
    WHERE LOOKUP_TYPE ='GS_PROG_TYPE'
    AND LOOKUP.ID =CAP.PK_ID
    AND CAP.LANGUAGE_CODE='en'
    AND CAP.TABLE_NAME = 'CMN_LOOKUPS')GS_PROGTYPE,
    (SELECT odf_parent_id
    ,SUM(gs_lab_budg) lab_budg
    ,SUM(gs_nl_budg) nli_budg
    FROM T_CHANGES
    WHERE gs_status='APPR'
    GROUP BY odf_parent_id) budg,
    (SELECT CAP.NAME
    ,LOOKUP.LOOKUP_CODE
    FROM S_NLS CAP,
    N_LOOKUPS LOOKUP
    WHERE LOOKUP_TYPE ='GS_REV_RECOGNITION'
    AND LOOKUP.ID = CAP.PK_ID
    AND CAP.LANGUAGE_CODE='en'
    AND CAP.TABLE_NAME = 'CMN_LOOKUPS') rev_rec,
    (SELECT FRCP1.project_id,
    NVL(SUM(CASE WHEN srmr.resource_type=0 THEN for_val.cost ELSE NULL END),0) AS LABOUR,
    NVL(SUM(CASE WHEN srmr.resource_type <> 0 THEN for_val.cost ELSE NULL END),0) AS NON_LABOUR,
    NVL(SUM(for_val.revenue),0) AS FRCST_REVENUE
    FROM T_PROPERTIES FRCP1,
    T_DETAILS FOR_DET,
    RESOURCES SRMR,
    T_VALUES FOR_VAL
    WHERE (FRCP1.project_id, revision) IN (SELECT project_id, MAX(revision)
    FROM T_PROPERTIES FRCP
    WHERE FRCP.status=2
    AND FRCP.PERIOD_TYPE='SEMI_MONTHLY'
    GROUP BY project_id)
    AND FRCP1.PERIOD_TYPE='SEMI_MONTHLY'
    AND FOR_DET.forecast_id=frcp1.id
    AND SRMR.id(+)=FOR_DET.detail_id
    AND FOR_VAL.currency_type='BILLING'
    AND FOR_VAL.forecast_details_id=FOR_DET.ID
    GROUP BY FRCP1.project_id) frct,
    (SELECT srmp.id
    ,NVL(SUM(CASE WHEN wip.transtype='L' THEN WIPVAL.TOTALCOST ELSE NULL END),0) AS LABOUR_ACTUALS
    ,NVL(SUM(CASE WHEN wip.transtype='M' THEN WIPVAL.TOTALCOST ELSE NULL END),0) AS NL_ACTUALS
    FROM A_WIP WIP
    ,om_periods biz
    ,P_VALUES WIPVAL
    ,PROJECTS SRMP
    ,A_PROJECT ODF
    WHERE biz.period_type ='SEMI_MONTHLY'
    AND TRUNC(wip.TRANSDATE) BETWEEN biz.start_date AND biz.end_date
    AND WIP.TRANSNO=WIPVAL.TRANSNO
    AND WIPVAL.CURRENCY_TYPE='BILLING'
    AND SRMP.UNIQUE_NAME=WIP.PROJECT_CODE
    AND SRMP.ID=ODF.ID
    AND UPPER(ODF.partition_code)='GLOBAL'
    AND UPPER(WIP.external_id) <> 'SPREADSHEET'
    GROUP BY srmp.id) actuals,
    RESOURCES BUD_HOLDER,
    PROJECTS PROJ_TEMPLATE
    WHERE SRM.ID = PRJP.PRID
    AND ODFP.ID=PRJP.PRID
    AND ODFP.PARTITION_CODE='GLOBAL'
    AND RESM.id(+)=prjp.manager_id
    AND ODFP.GS_PROJECT_TYPE = ODF_PRJ_TYPE.LOOKUP_CODE(+)
    AND ODFP.GS_SOURCE_SYSTEM = ODF_SRC_SYS.LOOKUP_CODE(+)
    AND ODFP.GS_prog_type = GS_PROGTYPE.LOOKUP_CODE(+)
    AND budg.odf_parent_id(+)=SRM.ID
    AND frct.project_id(+) = prjp.prid
    AND ODFP.gs_risk_gfr=gfr.code(+)
    AND ODFP.GS_REV_RECOGNITION=rev_rec.LOOKUP_CODE(+)
    AND ODFP.GS_PROJ_ACCT = PROJ_ACCT.ID(+)
    AND ODFP.GS_COMM_APPR = COMM_APPR.ID(+)
    AND ODFP.GS_PR_APPR = PR_APPR.ID(+)
    AND actuals.id(+) = prjp.prid -- Arnab
    AND ODFP.GS_BUDGET_HOLDER=BUD_HOLDER.ID(+)
    AND ODFP.GS_TEMPLATE_USED=PROJ_TEMPLATE.ID(+)
The above is OSQ, with cost 45,000. It takes 1-1.5 hours through Toad.
    SELECT SRM.ID PROJECT_ID,
    SRM.UNIQUE_NAME PROJECT_CODE,
    ODFP.GS_FINANCE_PROJ_CODE,
    ODFP.GS_PRODUCT_CODE,
    ODFP.GS_CUSTOMER,
    ODFP.GS_LEGAL_ENT_REF,
    ODF_PRJ_TYPE.NAME GS_PROJ_TYPE,
    ODF_SRC_SYS.NAME GS_SOURCE_SYSTEM,
    RESM.FIRST_NAME||' '||RESM.LAST_NAME GS_PROJECT_MANAGER,
    ODFP.GS_LEG_PROJ_NUM,
    ODFP.GS_SUPP_DOC ,
    ODFP.GS_REQUEST_ID ,
    ODFP.GS_CONTRACT ,
    ODFP.GS_SIEBEL_REF,
    DECODE(budg.ODF_PARENT_ID,NULL,PRJP.bdgt_cst_total, budg.lab_budg)LAB_BDGT,
    DECODE(budg.ODF_PARENT_ID,NULL,0,budg.nli_budg)NLI_BDGT,
    (DECODE(budg.ODF_PARENT_ID,NULL,PRJP.bdgt_cst_total, budg.lab_budg))+(DECODE(budg.ODF_PARENT_ID,NULL,0,budg.nli_budg)) sum_lab_nonlab_budg,
    frct.LABOUR,
    frct.NON_LABOUR,
    frct.LABOUR+frct.NON_LABOUR FORCAST_TOT_SUM,
    gfr.code GFR_CODE,
    odfp.gs_rev_recognition,
    rev_rec.NAME gs_rev_recognition,
    DECODE(odfp.gs_prevent_reqtxn,1,'YES','0','NO','NO'),
    GS_PROGTYPE.NAME,
    COMM_APPR.UNIQUE_NAME COMM_APPROVER_EIN,
    COMM_APPR.FIRST_NAME||' '||COMM_APPR.LAST_NAME COMM_APPROVER_NAME,
    PR_APPR.UNIQUE_NAME PR_APPROVER_EIN,
    PR_APPR.FIRST_NAME||' '||PR_APPR.LAST_NAME PR_APPROVER_NAME,
    ODFP.GS_BILL_BUD,
    frct.FRCST_REVENUE TOTAL_FRC_REVENUE,
    -- actuals.LABOUR_ACTUALS, -- Arnab
    -- actuals.NL_ACTUALS, -- Arnab
    get_wip_totalcost(prjp.prid,'L') LABOUR_ACTUALS, -- Arnab
    get_wip_totalcost(prjp.prid,'M') NL_ACTUALS, -- Arnab
    ODFP.GS_ASS_PRODUCT_CODE,
    PROJ_TEMPLATE.ID GS_PROJ_TEMPLATE_ID,
    PROJ_TEMPLATE.UNIQUE_NAME GS_PROJ_TEMPLATE_CODE,
    PROJ_TEMPLATE.name GS_PROJ_TEMPLATE_NAME,
    DECODE( NVL(ODFP.GS_FIN_TEMPLATE,0) ,0,'NO',1,'YES') GS_FIN_TEMPLATE,
    ODF_PRJ_TYPE.LOOKUP_CODE GS_PROJ_TYPE
    FROM PROJECTS SRM,
    J_PROJECTS PRJP,
    A_PROJECT ODFP,
    RESOURCES RESM,
    s_gfr gfr,
    RESOURCES PROJ_ACCT,
    RESOURCES COMM_APPR,
    RESOURCES PR_APPR,
    (SELECT CAP.NAME
    ,LOOKUP.LOOKUP_CODE
    FROM S_NLS CAP,
    N_LOOKUPS LOOKUP
    WHERE LOOKUP_TYPE ='GS_PROJECT_TYPE'
    AND LOOKUP.ID =CAP.PK_ID
    AND CAP.LANGUAGE_CODE='en'
    AND CAP.TABLE_NAME = 'CMN_LOOKUPS') ODF_PRJ_TYPE,
    (SELECT CAP.NAME
    ,LOOKUP.LOOKUP_CODE
    FROM S_NLS CAP,
    N_LOOKUPS LOOKUP
    WHERE LOOKUP_TYPE ='GS_SOURCE_SYSTEM'
    AND LOOKUP.ID =CAP.PK_ID
    AND CAP.LANGUAGE_CODE='en'
    AND CAP.TABLE_NAME = 'CMN_LOOKUPS') ODF_SRC_SYS,
    (SELECT CAP.NAME
    ,LOOKUP.LOOKUP_CODE
    FROM S_NLS CAP,
    N_LOOKUPS LOOKUP
    WHERE LOOKUP_TYPE ='GS_PROG_TYPE'
    AND LOOKUP.ID =CAP.PK_ID
    AND CAP.LANGUAGE_CODE='en'
    AND CAP.TABLE_NAME = 'CMN_LOOKUPS')GS_PROGTYPE,
    (SELECT odf_parent_id
    ,SUM(gs_lab_budg) lab_budg
    ,SUM(gs_nl_budg) nli_budg
    FROM T_CHANGES
    WHERE gs_status='APPR'
    GROUP BY odf_parent_id) budg,
    (SELECT CAP.NAME
    ,LOOKUP.LOOKUP_CODE
    FROM S_NLS CAP,
    N_LOOKUPS LOOKUP
    WHERE LOOKUP_TYPE ='GS_REV_RECOGNITION'
    AND LOOKUP.ID = CAP.PK_ID
    AND CAP.LANGUAGE_CODE='en'
    AND CAP.TABLE_NAME = 'CMN_LOOKUPS') rev_rec,
    (SELECT FRCP1.project_id,
    NVL(SUM(CASE WHEN srmr.resource_type=0 THEN for_val.cost ELSE NULL END),0) AS LABOUR,
    NVL(SUM(CASE WHEN srmr.resource_type <> 0 THEN for_val.cost ELSE NULL END),0) AS NON_LABOUR,
    NVL(SUM(for_val.revenue),0) AS FRCST_REVENUE
    FROM T_PROPERTIES FRCP1,
    T_DETAILS FOR_DET,
    RESOURCES SRMR,
    T_VALUES FOR_VAL
    WHERE (FRCP1.project_id, revision) IN (SELECT project_id, MAX(revision)
    FROM T_PROPERTIES FRCP
    WHERE FRCP.status=2
    AND FRCP.PERIOD_TYPE='SEMI_MONTHLY'
    GROUP BY project_id)
    AND FRCP1.PERIOD_TYPE='SEMI_MONTHLY'
    AND FOR_DET.forecast_id=frcp1.id
    AND SRMR.id(+)=FOR_DET.detail_id
    AND FOR_VAL.currency_type='BILLING'
    AND FOR_VAL.forecast_details_id=FOR_DET.ID
    GROUP BY FRCP1.project_id) frct,
    /* (SELECT srmp.id
    ,NVL(SUM(CASE WHEN wip.transtype='L' THEN WIPVAL.TOTALCOST ELSE NULL END),0) AS LABOUR_ACTUALS
    ,NVL(SUM(CASE WHEN wip.transtype='M' THEN WIPVAL.TOTALCOST ELSE NULL END),0) AS NL_ACTUALS
    FROM A_WIP WIP
    ,om_periods biz
    ,P_VALUES WIPVAL
    ,PROJECTS SRMP
    ,A_PROJECT ODF
    WHERE biz.period_type ='SEMI_MONTHLY'
    AND TRUNC(wip.TRANSDATE) BETWEEN biz.start_date AND biz.end_date
    AND WIP.TRANSNO=WIPVAL.TRANSNO
    AND WIPVAL.CURRENCY_TYPE='BILLING'
    AND SRMP.UNIQUE_NAME=WIP.PROJECT_CODE
    AND SRMP.ID=ODF.ID
    AND UPPER(ODF.partition_code)='GLOBAL'
    AND UPPER(WIP.external_id) <> 'SPREADSHEET'
    GROUP BY srmp.id) actuals, */ -- Arnab
    RESOURCES BUD_HOLDER,
    PROJECTS PROJ_TEMPLATE
    WHERE SRM.ID = PRJP.PRID
    AND ODFP.ID=PRJP.PRID
    AND ODFP.PARTITION_CODE='GLOBAL'
    AND RESM.id(+)=prjp.manager_id
    AND ODFP.GS_PROJECT_TYPE = ODF_PRJ_TYPE.LOOKUP_CODE(+)
    AND ODFP.GS_SOURCE_SYSTEM = ODF_SRC_SYS.LOOKUP_CODE(+)
    AND ODFP.GS_prog_type = GS_PROGTYPE.LOOKUP_CODE(+)
    AND budg.odf_parent_id(+)=SRM.ID
    AND frct.project_id(+) = prjp.prid
    AND ODFP.gs_risk_gfr=gfr.code(+)
    AND ODFP.GS_REV_RECOGNITION=rev_rec.LOOKUP_CODE(+)
    AND ODFP.GS_PROJ_ACCT = PROJ_ACCT.ID(+)
    AND ODFP.GS_COMM_APPR = COMM_APPR.ID(+)
    AND ODFP.GS_PR_APPR = PR_APPR.ID(+)
-- AND actuals.id(+) = prjp.prid -- Arnab
    AND ODFP.GS_BUDGET_HOLDER=BUD_HOLDER.ID(+)
    AND ODFP.GS_TEMPLATE_USED=PROJ_TEMPLATE.ID(+)
This one is SSS; the lines marked "-- Arnab" are the only changes.
The cost of this one is 7,000, and it finishes in 1-1.5 minutes in Toad.
It is used as INSERT INTO <table> <SSS>, and that was taking 1.5 hours to run.
It was changed to use BULK COLLECT - it is still taking around 1.5 hours to run...
    Thanks
    Arnab
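A classic trap with this symptom: GUI tools like Toad fetch only the first screenful of rows, while the INSERT must materialize the entire result set. A quick sanity check (a sketch; paste the actual SSS text into the inline view) is to time the full result without the INSERT:
<pre>
SELECT COUNT(*)
FROM  ( /* paste the full SSS query here */ );
</pre>
If the COUNT(*) also takes about 1.5 hours, the 1-1.5 minutes seen in Toad reflects fetch behavior, not the true cost of running SSS to completion.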

  • Messaging performance problem on RAID disk with many IMAP users

The performance problem can be solved by tuning the disk RAID system.


  • Performance problem when running a personalization rule

    We have a serious performance problem when running a personalization rule.
    The rule is defined like this:
    Definition
    Rule Type: Content
    Content Type: LoadedData
    Name: allAnnouncements
    Description: all announcements of types: announcement, deal, new release,
    tip of the day
    If the user has the following characteristics:
    And when:
    Then display content based on:
    (CONTENT.RessourceType == announcement) or (CONTENT.RessourceType == deal)
    or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType == tip
    of the week)
    and CONTENT.endDate > now
    and CONTENT.startDate <= now
    END---------------------------------
    and is invoked in a JSP page like this:
    <%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl'
    || CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) &&
    (CONTENT.userType ='retailer')"%>
    <pz:contentselector
    id="cdocs"
    ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitio
    nHome/b2boost"
    rule="allAnnouncements"
    sortBy="startDate DESC"
    query="<%=customQuery%>"
    contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
    The customQuery is constructed at runtime from user information, and cannot
be constructed with the rules
    administration interface.
    When I turn on debugging mode, I can see that the rule is parsed and a SQL
    query is generated, with the correct parameters.
    This is the generated query (with the substitutions):
    select
    WLCS_DOCUMENT.ID,
    WLCS_DOCUMENT.DOCUMENT_SIZE,
    WLCS_DOCUMENT.VERSION,
    WLCS_DOCUMENT.AUTHOR,
    WLCS_DOCUMENT.CREATION_DATE,
    WLCS_DOCUMENT.LOCKED_BY,
    WLCS_DOCUMENT.MODIFIED_DATE,
    WLCS_DOCUMENT.MODIFIED_BY,
    WLCS_DOCUMENT.DESCRIPTION,
    WLCS_DOCUMENT.COMMENTS,
    WLCS_DOCUMENT.MIME_TYPE
    FROM
    WLCS_DOCUMENT
    WHERE
    ((((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = ''
    AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
    AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
    AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'language'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = '*'
    AND NOT (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
    AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
    At this moment, the server makes the user wait more than 10 min for the
    query to execute.
    This is what I found out about the problem:
    1)When I run the query on an Oracle SQL client (We are using Oracle 8.1.7.0)
    , it takes 5-10 seconds.
    2)If I remove the second term of (CONTENT.Country='nl' ||
    CONTENT.Country='*' ) in the custom query,
thus restricting to CONTENT.Country='nl', the performance is OK.
    3)There are currently more or less 130 records in the DB that have
    Country='*'
    4)When I run the page on our QA server (solaris), which is at the same time
    our Oracle server,
    the response time is OK, but if I run it on our development server (W2K),
    response time is ridiculously long.
    5)The problem happens also if I add the term (CONTENT.Country='nl' ||
    CONTENT.Country='*' )
    to the rule definition, and I remove this part from the custom query.
    Am I missing something? Am I using the personalization server correctly?
    Is this performance difference between QA and DEV due to differences in the
    OS?
    Thank you,
    Luis Muñiz

    Luis,
    I think you are working through Support on this one, so hopefully you are in good
    shape.
    For others who are seeing this same performance issue with the reference CM implementation,
    there is a patch available via Support for the 3.2 and 3.5 releases that solves
    this problem.
    This issue is being tracked internally as CR060645 for WLPS 3.2 and CR055594 for
    WLPS 3.5.
    Regards,
    PJL
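For readers who cannot apply the patch: each metadata predicate in the generated query is an independent IN subquery against WLCS_DOCUMENT_METADATA, and the OR'd RessourceType branches can often be collapsed into one subquery (a sketch of the idea, not the patch's actual fix):
<pre>
WLCS_DOCUMENT.ID IN (
  SELECT WLCS_DOCUMENT_METADATA.ID
  FROM   WLCS_DOCUMENT_METADATA
  WHERE  WLCS_DOCUMENT_METADATA.NAME  = 'RessourceType'
  AND    WLCS_DOCUMENT_METADATA.VALUE IN
         ('announcement', 'deal', 'new release', 'tip of the week'))
</pre>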
    "Luis Muniz" <[email protected]> wrote:
    We have a serious performance problem when running a personalization
    rule.
    The rule is defined like this:
    Definition
    Rule Type: Content
    Content Type: LoadedData
    Name: allAnnouncements
    Description: all announcements of types: announcement, deal, new release,
    tip of the day
    If the user has the following characteristics:
    And when:
    Then display content based on:
    (CONTENT.RessourceType == announcement) or (CONTENT.RessourceType ==
    deal)
    or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType ==
    tip
    of the week)
    and CONTENT.endDate > now
    and CONTENT.startDate <= now
    END---------------------------------
    and is invoked in a JSP page like this:
    <%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl'
    || CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) &&
    (CONTENT.userType ='retailer')"%>
    <pz:contentselector
    id="cdocs"
    ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitio
    nHome/b2boost"
    rule="allAnnouncements"
    sortBy="startDate DESC"
    query="<%=customQuery%>"
    contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
    The customQuery is constructed at runtime from user information, and
    cannot
    be constructed with rules
    administration interface.
    When I turn on debugging mode, I can see that the rule is parsed and
    a SQL
    query is generated, with the correct parameters.
    This is the generated query (with the substitutions):
    select
    WLCS_DOCUMENT.ID,
    WLCS_DOCUMENT.DOCUMENT_SIZE,
    WLCS_DOCUMENT.VERSION,
    WLCS_DOCUMENT.AUTHOR,
    WLCS_DOCUMENT.CREATION_DATE,
    WLCS_DOCUMENT.LOCKED_BY,
    WLCS_DOCUMENT.MODIFIED_DATE,
    WLCS_DOCUMENT.MODIFIED_BY,
    WLCS_DOCUMENT.DESCRIPTION,
    WLCS_DOCUMENT.COMMENTS,
    WLCS_DOCUMENT.MIME_TYPE
    FROM
    WLCS_DOCUMENT
    WHERE
    ((((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = ''
    AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
    AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
    AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'language'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = '*'
    AND NOT (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
    AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
    At this moment, the server makes the user wait more than 10 min for the
    query to execute.
    This is what I found out about the problem:
    1)When I run the query on an Oracle SQL client (We are using Oracle 8.1.7.0)
    , it takes 5-10 seconds.
    2)If I remove the second term of (CONTENT.Country='nl' ||
    CONTENT.Country='*' ) in the custom query,
    thus retricting to CONTENT.Country='nl', the performance is OK.
    3)There are currently more or less 130 records in the DB that have
    Country='*'
    4)When I run the page on our QA server (solaris), which is at the same
    time
    our Oracle server,
    the response time is OK, but if I run it on our development server (W2K),
    response time is ridiculously long.
    5)The problem happens also if I add the term (CONTENT.Country='nl' ||
    CONTENT.Country='*' )
    to the rule definition, and I remove this part from the custom query.
    Am I missing something? Am I using the personalization server correctly?
    Is this performance difference between QA and DEV due to differences
    in the
    OS?
    Thank you,
    Luis Muñiz

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems tyring to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
* Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    Why we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
create table RECORDS (
  SSN varchar2(20),
  XMLREC sys.xmltype
) xmltype column XMLREC store as binary xml;
create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
create table CODES (
  CODE varchar2(4),
  DESCRIPTION varchar2(500)
);
create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
DECLARE
  ssn varchar2(20);
  xmlrec xmltype;
  i integer;
BEGIN
  xmlrec := xmltype('<?xml version="1.0"?>
<Root>
  <Id>123456789</Id>
  <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
  </Element>
  <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
  </Element>
  <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
  </Element>
</Root>');
  for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
  end loop;
  commit;
END;
/
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
      description varchar2(100);
    BEGIN
      description := 'This is the code description ';
      for i IN 1..3000 loop
        insert into codes(code, description) values (to_char(i), description);
      end loop;
      commit;
    END;
    /
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>}
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record (see the sketch after the plan below). There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization when joining the code tables. I’m not sure if registering a schema would help; using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
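    For comparison, here is a rough sketch of the relational shape I have in mind. record_elements(ssn, element_no, code1..code3) is a hypothetical table, as if each Element were shredded to a row; against that layout the optimizer is free to hash-join CODES:
<pre>
-- Hypothetical relational equivalent: if each Element were a row, the code
-- lookups become ordinary joins the optimizer can execute as hash joins,
-- instead of per-subelement probes. Table/column names are illustrative only.
select re.ssn, re.element_no,
       re.code1, c1.description as description1,
       re.code2, c2.description as description2,
       re.code3, c3.description as description3
from   record_elements re
join   codes c1 on c1.code = re.code1
join   codes c2 on c2.code = re.code2
join   codes c3 on c3.code = re.code3
where  re.ssn = '10000';
</pre>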
    With an xmlindex, the same query above runs in about 1 second, so it is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to cover multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
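    If the xmlindex does turn out to be the answer, one variation I am considering is restricting the index to the paths we actually look up, so the path table stays small. This is a sketch only; it assumes /Root/Element//Code covers every path we query, which may not hold for the real documents:
<pre>
-- Sketch: path-subsetted XMLIndex limited to the Code nodes used in lookups.
-- Assumes /Root/Element//Code covers every queried path; adjust as needed.
drop index records_record_xml;
create index records_record_xml ON records(xmlrec)
  indextype IS xdb.xmlindex
  parameters ('PATHS (INCLUDE (/Root/Element//Code))');
</pre>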
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to XMLQUERY. Given Mark’s post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too? A sketch of what I think that rewrite would look like follows. It is going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
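    Here is my untested sketch of that rewrite against the test schema above: shred the repeating Elements with XMLTABLE, then join CODES in plain SQL so the optimizer can consider a hash join (the final XML reassembly is omitted for brevity):
<pre>
-- Untested sketch: shred Elements with XMLTABLE and join CODES relationally,
-- so the code lookups become ordinary SQL joins instead of per-node probes.
select r.ssn,
       x.code1, c1.description as desc1,
       x.code2, c2.description as desc2,
       x.code3, c3.description as desc3
from   records r,
       xmltable('/Root/Element' passing r.xmlrec
                columns
                  code1 varchar2(4) path 'Subelement1/Code',
                  code2 varchar2(4) path 'Subelement2/Code',
                  code3 varchar2(4) path 'Subelement3/Code') x,
       codes c1, codes c2, codes c3
where  r.ssn = '10000'
and    c1.code = x.code1
and    c2.code = x.code2
and    c3.code = x.code3;
</pre>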
