[9i] poor performance with XMLType.transform

Hello,
I've got a problem with the Oracle function XMLType.transform.
When I try to apply an XSL stylesheet to a big XML document, it is very, very slow, and it even consumes all the CPU; other users are not able to work until the processing is complete...
So I wondered whether my XSL was corrupted, but it does not seem so, because when I apply it with Internet Explorer (by just double-clicking the .xml), it is applied immediately. I've also tried with oraxsl, and the processing is quick and correct.
So I tried to use XML DB, but it does not work; maybe I should upgrade to a newer version of XML DB?
Please find the ZIP file here:
http://perso.modulonet.fr/~tleoutre/Oracle/samples.zip
Please find in this file:
1) The XSL file k_xsl.xsl
2) The "big" XML file big_xml.xml
Here you can try to apply the XSL to the XML with Internet Explorer: processing is very quick...
3) The batch file transform.bat
Here you can launch it, it calls oraxsl, and produces a result very quickly...
4) The SQL file test_xsl_with_xmltype_transform.sql.
You can try to launch it... First, it applies the same XSL to a little XML, and it's OK... And then it applies the XSL to the same big XML as in point 2), and then it takes a lot of time and CPU... (a sketch of the shape of this call appears after this list)
5) The SQL file test_xsl_with_xdb_1.sql ...
On my server, it fails... So I tried to change the XSL in the next point:
6) The SQL file test_xsl_with_xdb_2.sql with a "cleaned" XSL...
And then, it fails with exactly the same problem as in point 5):
TransformerConfigurationException  (Unknown expression at EOF: *|/.)
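For readers without the ZIP: the call in test_xsl_with_xmltype_transform.sql presumably has roughly this shape (the table, column and directory names below are placeholders; only the stylesheet name and character set are from this thread):
SELECT x.xml_doc.transform(
         XMLType(bfilename('XMLDIR', 'k_xsl.xsl'),
                 nls_charset_id('WE8ISO8859P1'))).getClobVal()
  FROM big_xml_table x;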
Any help would be greatly appreciated!
Thank you!
P.S.: Sorry for my bad English, I'm French :-)

This is what I see...
Your tests are measuring the wrong thing. You are measuring the time to create the sample documents, which is being done very inefficiently, as well as the time taken to do the transform.
Below is the correct way to get measurements for each task.
Here's what I see on a Pentium IV 2.4GHz with 10.2.0.2.0 and 2GB of RAM:
Fragments SourceSize  TargetSize createSource       Parse     Transform
        50      28014      104550  00:00:00.04 00:00:00.04   00:00:00.12
       100      55964      209100  00:00:00.03 00:00:00.05   00:00:00.23
       500     279564     1045500  00:00:00.16 00:00:00.23   00:00:01.76
      1000     559064     2091000  00:00:00.28 00:00:00.28   00:00:06.04
      2000    1118064     4182000  00:00:00.34 00:00:00.42   00:00:24.43
      5000    2795064    10455000  00:00:00.87 00:00:02.02   00:03:19.02
I think this clearly shows the pattern.
Of course what this testing really shows is that you've clearly missed the point of performing XSLT transformation inside the database.
The idea behind database-based transformation is to optimize XSLT processing by
(1) not having to parse the XML and build a DOM tree before commencing the XSLT processing. In this example this is not possible since the XML is being created from a CLOB-based XMLType, not a schema-based XMLType.
(2) leveraging the Lazily Loaded Virtual DOM when doing a sparse transformation. (A sparse transformation is one where there are large parts of the source document that are not required to create the target document.) Again, in this case the XSL requires you to walk all the nodes to generate the required output.
If it is necessary to process all of the nodes in the source document to generate the entire output, it probably makes more sense to use a mid-tier XSL engine.
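For reference, a minimal sketch of what schema-based storage involves (the schema URL, directory and file names are placeholders, assuming an XSD describing the <PRE> document has been written):
BEGIN
  -- register the XSD; Oracle generates object-relational storage for it
  DBMS_XMLSCHEMA.registerSchema(
    'http://xmlns.example.com/pre.xsd',
    bfilename('XMLDIR', 'pre.xsd'));
END;
/
-- transform() against rows of this table can use the lazily loaded
-- virtual DOM instead of first parsing a CLOB into an in-memory DOM
CREATE TABLE pre_xml OF XMLType
  XMLSCHEMA "http://xmlns.example.com/pre.xsd" ELEMENT "PRE";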
Here's the code I used to generate the numbers in the above example.
BTW, in terms of BIG XML we've successfully processed 12GB documents with schema-based storage... so nothing you have here comes anywhere near our definition of big.
Also, please remember that 9.2.0.1.0 is not a supported release for any XML DB related features.
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:44:59 2006
Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
SQL> spool createDocument.log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> create or replace directory &1 as '&3'
  2  /
old   1: create or replace directory &1 as '&3'
new   1: create or replace directory SCOTT as '/home/mdrake/bugs/xslTest'
Directory created.
SQL> drop table source_data_table
  2  /
Table dropped.
SQL> create table source_data_table
  2  (
  3    fragCount number,
  4    xml_text  clob,
  5    xml       xmlType,
  6    result    clob
  7  )
  8  /
Table created.
SQL> create or replace procedure createDocument(fragmentCount number)
  2  as
  3    fragmentText clob :=
  4  '<AFL LIGNUM="1999">
  5    <mat>20000001683</mat>
  6    <name>DOE</name>
  7    <firstname>JOHN</firstname>
  8    <name2>JACK</name2>
  9    <SEX>MALE</SEX>
10    <birthday>1970-05-06</birthday>
11    <salary>5236</salary>
12    <code1>5</code1>
13    <code2>6</code2>
14    <code3>7</code3>
15    <date>2006-05-06</date>
16    <dsp>8.665</dsp>
17    <dsp_2000>455.45</dsp_2000>
18    <darr04>5.3</darr04>
19    <darvap04>6</darvap04>
20    <rcrr>8</rcrr>
21    <rcrvap>9</rcrvap>
22    <rcrvav>10</rcrvav>
23    <rinet>11.231</rinet>
24    <rmrr>12</rmrr>
25    <rmrvap>14</rmrvap>
26    <ro>15</ro>
27    <rr>189</rr>
28    <date2>2004-05-09</date2>
29  </AFL>';
30
31    xmlText CLOB;
32
33  begin
34    dbms_lob.createTemporary(xmlText,true,DBMS_LOB.CALL);
35    dbms_lob.write(xmlText,5,1,'<PRE>');
36    for i in 1..fragmentCount loop
37       dbms_lob.append(xmlText,fragmentText);
38    end loop;
  39    dbms_lob.append(xmlText,xmlType('<STA><COD>TER</COD><MSG>Opération Réussie</MSG></STA>').getClobVal());
40    dbms_lob.append(xmlText,'</PRE>');
41    insert into source_data_table (fragCount,xml_text) values (fragmentCount, xmlText);
42    commit;
43    dbms_lob.freeTemporary(xmlText);
44  end;
45  /
Procedure created.
SQL> show errors
No errors.
SQL> --
SQL> set timing on
SQL> --
SQL> call createDocument(50)
  2  /
Call completed.
Elapsed: 00:00:00.04
SQL> call createDocument(100)
  2  /
Call completed.
Elapsed: 00:00:00.03
SQL> call createDocument(500)
  2  /
Call completed.
Elapsed: 00:00:00.16
SQL> call createDocument(1000)
  2  /
Call completed.
Elapsed: 00:00:00.28
SQL> call createDocument(2000)
  2  /
Call completed.
Elapsed: 00:00:00.34
SQL> call createDocument(5000)
  2  /
Call completed.
Elapsed: 00:00:00.87
SQL> select fragCount dbms_lob.getLength(xmlText)
  2    from sample_data_table
  3  /
select fragCount dbms_lob.getLength(xmlText)
ERROR at line 1:
ORA-00923: FROM keyword not found where expected
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:01 2006
Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
  2     set xml = xmltype(xml_text)
  3   where fragCount = &3
  4  /
old   3:  where fragCount = &3
new   3:  where fragCount = 50
1 row updated.
Elapsed: 00:00:00.04
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.01
SQL> update source_data_table
  2     set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
  3   where fragCount = &3
  4  /
old   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old   3:  where fragCount = &3
new   3:  where fragCount = 50
1 row updated.
Elapsed: 00:00:00.12
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.01
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
  2    from source_data_table
  3  /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
        50                        28014                     104550
       100                        55964
       500                       279564
      1000                       559064
      2000                      1118064
      5000                      2795064
6 rows selected.
Elapsed: 00:00:00.01
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:02 2006
Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
  2     set xml = xmltype(xml_text)
  3   where fragCount = &3
  4  /
old   3:  where fragCount = &3
new   3:  where fragCount = 100
1 row updated.
Elapsed: 00:00:00.05
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.01
SQL> update source_data_table
  2     set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
  3   where fragCount = &3
  4  /
old   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old   3:  where fragCount = &3
new   3:  where fragCount = 100
1 row updated.
Elapsed: 00:00:00.23
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.03
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
  2    from source_data_table
  3  /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
        50                        28014                     104550
       100                        55964                     209100
       500                       279564
      1000                       559064
      2000                      1118064
      5000                      2795064
6 rows selected.
Elapsed: 00:00:00.01
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:02 2006
Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
  2     set xml = xmltype(xml_text)
  3   where fragCount = &3
  4  /
old   3:  where fragCount = &3
new   3:  where fragCount = 500
1 row updated.
Elapsed: 00:00:00.12
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.03
SQL> update source_data_table
  2     set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
  3   where fragCount = &3
  4  /
old   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old   3:  where fragCount = &3
new   3:  where fragCount = 500
1 row updated.
Elapsed: 00:00:01.76
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.00
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
  2    from source_data_table
  3  /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
        50                        28014                     104550
       100                        55964                     209100
       500                       279564                    1045500
      1000                       559064
      2000                      1118064
      5000                      2795064
6 rows selected.
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:04 2006
Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
  2     set xml = xmltype(xml_text)
  3   where fragCount = &3
  4  /
old   3:  where fragCount = &3
new   3:  where fragCount = 1000
1 row updated.
Elapsed: 00:00:00.28
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.01
SQL> update source_data_table
  2     set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
  3   where fragCount = &3
  4  /
old   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old   3:  where fragCount = &3
new   3:  where fragCount = 1000
1 row updated.
Elapsed: 00:00:06.04
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.00
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
  2    from source_data_table
  3  /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
        50                        28014                     104550
       100                        55964                     209100
       500                       279564                    1045500
      1000                       559064                    2091000
      2000                      1118064
      5000                      2795064
6 rows selected.
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:11 2006
Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
  2     set xml = xmltype(xml_text)
  3   where fragCount = &3
  4  /
old   3:  where fragCount = &3
new   3:  where fragCount = 2000
1 row updated.
Elapsed: 00:00:00.42
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.02
SQL> update source_data_table
  2     set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
  3   where fragCount = &3
  4  /
old   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old   3:  where fragCount = &3
new   3:  where fragCount = 2000
1 row updated.
Elapsed: 00:00:24.43
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.03
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
  2    from source_data_table
  3  /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
        50                        28014                     104550
       100                        55964                     209100
       500                       279564                    1045500
      1000                       559064                    2091000
      2000                      1118064                    4182000
      5000                      2795064
6 rows selected.
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:36 2006
Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
  2     set xml = xmltype(xml_text)
  3   where fragCount = &3
  4  /
old   3:  where fragCount = &3
new   3:  where fragCount = 5000
1 row updated.
Elapsed: 00:00:02.02
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.05
SQL> update source_data_table
  2     set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
  3   where fragCount = &3
  4  /
old   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new   2:    set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old   3:  where fragCount = &3
new   3:  where fragCount = 5000
1 row updated.
Elapsed: 00:03:19.02
SQL> commit
  2  /
Commit complete.
Elapsed: 00:00:00.01
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
  2    from source_data_table
  3  /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
        50                        28014                     104550
       100                        55964                     209100
       500                       279564                    1045500
      1000                       559064                    2091000
      2000                      1118064                    4182000
      5000                      2795064                   10455000
6 rows selected.
Elapsed: 00:00:00.04
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
bash-2.05b$

Similar Messages

  • Non jdriver poor performance with oracle cluster

    Hi,
    we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
Our system is a WebLogic 6.1 cluster and an Oracle 8.1.7 cluster.
The problem is .. with the new Oracle drivers our actions on the webapp take twice as long as with the Jdriver. We also tried OCI .. same problem. We switched to a single Oracle 8.1.7 database .. and it worked again with all thick or thin drivers.
So .. new Oracle drivers with an Oracle cluster result in bad performance, but with the Jdriver it works perfectly. Does anybody see a connection?
I mean .. it works with the Jdriver .. so it can't be the database, huh? But we really tried every JDBC possibility! In fact .. we need batch input. Advice is very much appreciated =].
    Thanx for help!!

Thanks for the quick replies. I forgot to mention .. we also tried 10g v10.1.0.3 from Instant Client yesterday.
I have to agree with Joe. It was really fast on the single-machine database .. but we had the same poor performance with the cluster DB. It is frustrating. Especially if you consider that the Jdriver (which works perfectly in every combination) is 4 years old!
Ok .. we got this scenario with our app page CustomerOverview (intensive db-loading) (sorry .. no real profiling, time is taken with a PC watch) (Oracle is 8.1.7 OPS patch level 1) ...
    WL6.1_Cluster + Jdriver6.1 + DB_cluster => 4sec
    WL6.1_Cluster + Jdriver6.1 + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_cluster => 8-10sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_single => 4sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_cluster => 8sec
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_single => 2-4sec (awesome fast!!)
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_cluster => 6-8sec
Customers are giving us a rough time because they cannot mass-order via batch input. Any suggestion on how to solve this issue is very much appreciated.
    TIA
Markus Schaeffer wrote:
Hi,
we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
Our system is a WebLogic 6.1 cluster and an Oracle 8.1.7 cluster.
The problem is .. with the new Oracle drivers our actions on the webapp take twice as long as with the Jdriver. We also tried OCI .. same problem. We switched to a single Oracle 8.1.7 database .. and it worked again with all thick or thin drivers.
So .. new Oracle drivers with an Oracle cluster result in bad performance, but with the Jdriver it works perfectly. Does anybody see a connection?
Odd. The jDriver is OCI-based, so it's something else. I would try the latest 10g driver if it will work with your DBMS version. It's much faster than any 9.X thin driver.
Joe
I mean .. it works with the Jdriver .. so it can't be the database, huh? But we really tried every JDBC possibility!
Thanks for the help!!

  • Poor Performance with Fairpoint DSL

I started using Verizon DSL for my internet connection and had no problems. When Fairpoint Communications purchased Verizon (this is in Vermont), they took over the DSL (about May 2009). Since then, I have had very poor performance with all applications as soon as I start a browser. The performance problems occur regardless of the browser - I've tried Firefox (3.5.4), Safari (4.0.3) and Opera (10.0). I've been around and around with Fairpoint for 6 months with no resolution. I have not changed any software or hardware on my Mac during that time, except for updating the browsers and Apple updates to the OS, iTunes, etc. The performance problems continued right through these updates. I've run tests to check my internet speed and get speeds of 2.76Mbps (download) and 0.58Mbps (upload), which are within the specified limits for the DSL service. My Mac is a 2GHz PowerPC G5 running OS X 10.4.11. It has 512MB DDR SDRAM. I use a Westell Model 6100 modem for the DSL, provided by Verizon.
    Some of the specific problems I see are:
    1. very long waits of more than a minute after a click on an item in the menu bar
    2. very long waits of more than two minutes after a click on an item on a browser page
    3. frequent pinwheels in response to a click on a menu item/browser page item
    4. frequent pinwheels if I just move the mouse without a click
    5. frequent messages for stopped/unresponsive scripts
    6. videos (like YouTube) stop frequently for no reason; after several minutes, I'll get a little audio but no new video; eventually after several more minutes it will get going again (both video and audio)
    7. response in non-browser applications is also very slow
    8. sometimes will get no response at all to a mouse click
    9. trying to run more than one browser at a time will bring the Mac to its knees
    10. browser pages frequently take several minutes to load
    These are just some of the problems I have.
These problems all go away and everything runs fine as soon as I stop the browser. If I start the browser, they immediately surface again. I've tried clearing the cache, etc., with no improvement.
    What I would like to do is find a way to determine if the problem is in my Mac or with the Fairpoint service. Since I had no problems with Verizon and have made no changes to my Mac, I really suspect the problem lies with Fairpoint. Can anyone help me out? Thanks.

1) Another thing that you could try is deleting the preference files for networking. Mac OS will regenerate these files. You would then need to reconfigure your network settings.
    The list of files comes from Mac OS X 10.4.
    http://discussions.apple.com/message.jspa?messageID=8185915#8185915
    http://discussions.apple.com/message.jspa?messageID=10718694#10718694
    2) I think it is time to do a clean install of your system.
    3) It's either the software or an intermittent hardware problem.
    If money isn't an issue, I suggest an external harddrive for re-installing Mac OS.
    You need an external Firewire drive to boot a PowerPC Mac computer.
    I recommend you do a google search on any external harddrive you are looking at.
I bought a low-cost external drive enclosure. When I started having trouble with it, I did a Google search and found a lot of complaints about that enclosure. I ended up buying a new drive enclosure. On my second go-around, I decided to buy a drive enclosure with a good history of working with Macs. The chipset seems to be the key ingredient. The Oxford line of chips seems to be good. I got the Oxford 911.
The latest hard drive enclosures support the newer Serial ATA drives. The drive enclosure that I list supports only the older Parallel ATA.
"Has everything" interface:
    FireWire 800/400 + USB2, + eSATA 'Quad Interface'
"Save a little money" interface:
    FireWire 400 + USB 2.0
    This web page lists both external harddrive types. You may need to scroll to the right to see both.
    http://eshop.macsales.com/shop/firewire/1394/USB/EliteAL/eSATAFW800_FW400USB
    Here is an external hd enclosure.
    http://eshop.macsales.com/item/Other%20World%20Computing/MEFW91UAL1K/
    Here is what one contributor recommended:
    http://discussions.apple.com/message.jspa?messageID=10452917#10452917
    Folks in these Mac forums recommend LaCie, OWC or G-Tech.
    Here is a list of recommended drives:
    http://discussions.apple.com/thread.jspa?messageID=5564509#5564509
FireWire compared to USB: you will find that FireWire 400 is faster than USB 2.0 when used for an external hard drive connection.
http://en.wikipedia.org/wiki/Universal_Serial_Bus#USB_compared_to_FireWire
    http://www23.tomshardware.com/storageexternal.html

  • Poor performance with WebI and BW hierarchy drill-down...

    Hi
    We are currently implementing a large HR solution with BW as backend
and WebI and Xcelsius as frontend. As part of this we are experiencing
    very poor performance when doing drill-down in WebI on a BW hierarchy.
    In general we are experiencing ok performance during selection of data
    and traditional WebI filtering - however when using the BW hierarchy
    for navigation within WebI, response times are significantly increasing.
    The general solution setup are as follows:
    1) Business Content version of the personnel administration
infoprovider - 0PA_C01. The InfoProvider contains 30,000 records
    2) Multiprovider to act as semantic Data Mart layer in BW.
    3) Bex Query to act as Data Mart Query and metadata exchange for BOE.
    All key figure restrictions and calculations are done in this Data Mart
    Query.
4) Traditional BO OLAP universe 1:1 mapped to the Bex Data Mart query. No
    calculations etc. are done in the universe.
    5) WebI report with limited objects included in the WebI query.
    As we are aware that performance is an very subjective issues we have
    created several case scenarios with different dataset sizes, various
filter criteria and modeling techniques in BW.
    Furthermore we have tried to apply various traditional BW performance
    tuning techniques including aggregates, physical partitioning and pre-
    calculation - all without any luck (pre-calculation doesn't seem to
    work at all as WebI apparently isn't using the BW OLAP cache).
In general the best result we can get is with a completely stripped WebI report without any variables etc. and a total dataset of 1,000 records transferred to WebI. Even in this scenario we can't get each navigational step (when using drill-down on the Organizational Unit hierarchy - 0ORGUNIT) to perform faster than a minimum of 15-20 seconds per navigational step.
That is, each navigational step takes 15-20 seconds with only 1,000 records in the WebI cache when using drill-down on the org. unit hierarchy!!
Running the same Bex query from Bex Analyzer with a full dataset of 30,000 records at the lowest level of detail gives a response time of 1-2 seconds per navigational step, thus ruling out that this is a BW modeling issue.
As our productive scenario obviously involves a far larger dataset, as well as separate data from CATS and PT infoproviders, we are very worried whether we will ever be able to use hierarchy drill-down from WebI.
The question as such is whether there are any known performance issues related to the use of BW hierarchy drill-down from WebI, and if so, whether there are any ways to get around them.
    As an alternative we are currently considering changing our reporting
    strategy by creating several higher aggregated reports to avoid
    hierarchy navigation at all. However we still need to support specific
    division and their need to navigate the WebI dataset without
    limitations which makes this issue critical.
    Hope that you are able to help.
    Thanks in advance
    /Frank

Hi Henry, thank you for your suggestions, although I don't agree with you that 20 seconds is pretty good for that navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
    Actions
    suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
    tick use structure elements in RSRT: Done it.
    enable query stripping in WebI: Done it.
upgrade your BW to SP09: Does SP09 have some improvements in relation to this point?
    use more runtime query filters. : Not possible. Very simple query.
    Others:
    RSRT combination H-1-3-3-1 (Expand nodes/Permanent Cache BLOB)
Uncheck preliminary hierarchy presentation in the Query; only selected.
    Check "Use query drill" in webi properties.
Sorry for this mixed message, but while I was answering I tried what you suggested regarding suppressing unassigned nodes, and it works perfectly. This is what was causing the bottleneck!! Incredible...
    Thanks a lot
    J.Casas

  • Poor performance with Oracle Spatial when spatial query invoked remotely

    Is anyone aware of any problems with Oracle Spatial (10.2.0.4 with patches 6989483 and 7003151 on Red Hat Linux 4) which might explain why a spatial query (SDO_WITHIN_DISTANCE) would perform 20 times worse when it was invoked remotely from another computer (using SQLplus) vs. invoking the very same query from the database server itself (also using SQLplus)?
    Does Oracle Spatial have any known problems with servers which use SAN disk storage? That is the primary difference between a server in which I see this poor performance and another server where the performance is fine.
    Thank you in advance for any thoughts you might share.

    OK, that's clearer.
    Are you sure it is the SQL inside the procedure that is causing the problem? To check, try extracting the SQL from inside the procedure and run it in SQLPLUS with
    set autotrace on
    set timing on
SELECT ...
If the plans and performance are the same then it may be something inside the procedure itself.
    Have you profiled the procedure? Here is an example of how to do it:
    Prompt Firstly, create PL/SQL profiler table
    @$ORACLE_HOME/rdbms/admin/proftab.sql
    Prompt Secondly, use the profiler to gather stats on execution characteristics
    DECLARE
      l_run_num PLS_INTEGER := 1;
      l_max_num PLS_INTEGER := 1;
      v_geom    mdsys.sdo_geometry := mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(0,0,45,45,90,0,135,45,180,0,180,-45,45,-45,0,0));
    BEGIN
      dbms_output.put_line('Start Profiler Result = ' || DBMS_PROFILER.START_PROFILER(run_comment => 'PARALLEL PROFILE'));  -- The comment name can be anything: here it is related to the Parallel procedure I am testing.
      v_geom := Parallel(v_geom,10,0.05,1);  -- Put your procedure call here
      dbms_output.put_line('Stop Profiler Result = ' || DBMS_PROFILER.STOP_PROFILER );
END;
/
    SHOW ERRORS
    Prompt Finally, report activity
    COLUMN runid FORMAT 99999
    COLUMN run_comment FORMAT A40
    SELECT runid || ',' || run_date || ',' || run_comment || ',' || run_total_time
      FROM plsql_profiler_runs
      ORDER BY runid;
    COLUMN runid       FORMAT 99999
    COLUMN unit_number FORMAT 99999
    COLUMN unit_type   FORMAT A20
    COLUMN unit_owner  FORMAT A20
    COLUMN text        FORMAT A100
    compute sum label 'Total_Time' of total_time on runid
    break on runid skip 1
    set linesize 200
    SELECT u.runid || ',' ||
           u.unit_name,
           d.line#,
           d.total_occur,
           d.total_time,
           text
    FROM   plsql_profiler_units u
           JOIN plsql_profiler_data d ON u.runid = d.runid
                                         AND
                                         u.unit_number = d.unit_number
           JOIN all_source als ON ( als.owner = 'CODESYS'
                                   AND als.type = u.unit_type
                                   AND als.name = u.unit_name
                                AND als.line = d.line# )
    WHERE  u.runid = (SELECT max(runid) FROM plsql_profiler_runs)
ORDER BY d.total_time desc;
Run the profiler in both environments and see if you can see where the slowdown exists.
    regards
    Simon

  • Poor Performance with Converged Fabrics

    Hi Guys,
I'm having some serious performance issues with Converged Fabrics in my Windows Server 2012 R2 lab. I'm planning on creating a Hyper-V cluster with 3 nodes. I've built the first node; building and installing/configuring the OS and Hyper-V was pretty straightforward.
My issue is with Converged Fabrics: I'm getting very slow performance in the sense of managing the OS, and Remote Desktop connections take very long and eventually time out. The server is unable to find a writable domain controller due to the slow performance.
If I remove the converged fabric everything is awesome and works as expected. Please note that the cluster hasn't even been built yet and I'm already experiencing this poor performance.
    Here is my server configuration:
    OS: Windows Server 2012 R2
    RAM: 64GB
    Processor: Intel I7 Gen 3
    NICS: 2 X Intel I350-T2 Adapters, supporting SRIOV/VMQ
    Updates: All the latest updates applied
    Storage:
    Windows Server 2012 R2 Storage Spaces
    Synology DS1813+
    Updates: All the latest updates applied
    Below is the script I've written to automate the entire process.
    # Script: Configure Hyper-V
    # Version: 1.0.2
    # Description: Configures the Hyper-V Virtual Switch and
    #              Creates a Converged Fabric
    # Version 1.0.0: Initial Script
    # Version 1.0.1: Added the creation of SrIOV based VM Switches
    # Version 1.0.2: Added parameters to give the NLB a name, as well as the Hyper-V Switch
param(
        [Parameter(Mandatory=$true)]
        [string]$TeamAdapterName="",
        [Parameter(Mandatory=$true)]
        [string]$SwitchName="",
        [Parameter(Mandatory=$true)]
    [bool]$SrIOV=$false
)
    #Variables
    $CurrentDate = Get-Date -Format d
    $LogPath = "C:\CreateConvergedNetworkLog.txt"
    $ManagmentOSIPv4="10.150.250.5"
    $ManagmentOS2IPv4="10.250.251.5"
    #$CommanGatewayIPv4="10.10.11.254"
    $ManagmentDNS1="10.150.250.1"
    $ManagmentDNS2="10.150.250.3"
    $ManagmentDNS3="10.250.251.1"
    $ManagmentDNS4="10.250.251.3"
    $ClusterIPv4="10.253.251.1"
    $LiveMigrationIPv4="10.253.250.1"
    $CSVIPv4="10.100.250.1"
    $CSV2IPv4="10.250.100.1"
    #Set Excution Policy
    Write-Host "Setting policy settings..."
    Set-ExecutionPolicy UnRestricted
try {
        # Get existing network adapters that are online
    if ($SrIOV) {
            #$sriov_adapters = Get-NetAdapterSriov | ? Status -eq Up | % Name # Get SRIOV Adapters
            $adapters = Get-NetAdapterSriov | ? Status -eq Up | % Name # Get SRIOV Adapters
            Enable-NetAdapterSriov $adapters # Enable SRIOV on the adapters
    } else {
            $adapters = Get-NetAdapterSriov | % Name
            #$adapters = Get-NetAdapter | ? Status -eq Up | % Name
    }
    # Create NIC team
    if ($adapters.length -gt 1) {
            Write-Host "$CurrentDate --> Creating NIC team $TeamAdapterName..."
            Write-Output "$CurrentDate --> Creating NIC team $TeamAdapterName..." | Add-Content $LogPath
            #New-NetLbfoTeam -Name "ConvergedNetTeam" -TeamMembers $adapters -Confirm:$false | Add-Content $LogPath
            New-NetLbfoTeam -Name $TeamAdapterName -TeamMembers $adapters -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false | Add-Content $LogPath
    } else {
            Write-Host "$CurrentDate --> Check to ensure that at least 2 NICs are available for teaming"
            throw "$CurrentDate --> Check to ensure that at least 2 NICs are available for teaming" | Add-Content $LogPath
    }
    # Wait for team to come online for 60 seconds
    Start-Sleep -s 60
    if ((Get-NetLbfoTeam).Status -ne "Up") {
            Write-Host "$CurrentDate --> The ConvergedNetTeam NIC team is not online. Troubleshooting required"
            throw "$CurrentDate --> The ConvergedNetTeam NIC team is not online. Troubleshooting required" | Add-Content $LogPath
    }
    # Create a new Virtual Switch
    if ($SrIOV) { #SRIOV based VM Switch
            Write-Host "$CurrentDate --> Configuring converged fabric $SwitchName with SRIOV..."
            Write-Output "$CurrentDate --> Configuring converged fabric $SwitchName with SRIOV..." | Add-Content $LogPath
            #New-VMSwitch "ConvergedNetSwitch" -MinimumBandwidthMode Weight -NetAdapterName "ConvergedNetTeam" -EnableIov $true -AllowManagementOS 0
            New-VMSwitch $SwitchName -MinimumBandwidthMode Weight -NetAdapterName $TeamAdapterName -EnableIov $true -AllowManagementOS 0
            $CreatedSwitch = $true
    } else { #Standard VM Switch
            Write-Host "$CurrentDate --> Configuring converged fabric $SwitchName..."
            Write-Output "$CurrentDate --> Configuring converged fabric $SwitchName..." | Add-Content $LogPath
            #New-VMSwitch "ConvergedNetSwitch"-MinimumBandwidthMode Weight -NetAdapterName "ConvergedNetTeam" -AllowManagementOS 0
            New-VMSwitch $SwitchName -MinimumBandwidthMode Weight -NetAdapterName $TeamAdapterName -AllowManagementOS $false
            $CreatedSwitch = $true
    }
    if ($CreatedSwitch) {
            #Set Default QoS
            Write-Host "$CurrentDate --> Setting default QoS policy on $SwitchName..."
            Write-Output "$CurrentDate --> Setting default QoS policy $SwitchName..." | Add-Content $LogPath
            #Set-VMSwitch "ConvergedNetSwitch"-DefaultFlowMinimumBandwidthWeight 30
            Set-VMSwitch $SwitchName -DefaultFlowMinimumBandwidthWeight 20
            #Creating Management OS Adapters (SYD-MGMT)
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for Management OS"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for Management OS" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "SYD-MGMT" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "SYD-MGMT" -MinimumBandwidthWeight 30 -VmqWeight 80
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SYD-MGMT" -Access -VlanId 0
            #Creating Management OS Adapters (MEL-MGMT)
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for Management OS"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for Management OS" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "MEL-MGMT" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "MEL-MGMT" -MinimumBandwidthWeight 30 -VmqWeight 80
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MEL-MGMT" -Access -VlanId 0
            #Creating Cluster Adapters
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for Cluster"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for Cluster" | Add-Content $LogPath
            #Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedNetSwitch"
            Add-VMNetworkAdapter -ManagementOS -Name "HV-Cluster" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "HV-Cluster" -MinimumBandwidthWeight 20 -VmqWeight 80
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HV-Cluster" -Access -VlanId 0
            #Creating LiveMigration Adapters
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for LiveMigration"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for LiveMigration" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "HV-MIG" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "HV-MIG" -MinimumBandwidthWeight 40 -VmqWeight 90
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HV-MIG" -Access -VlanId 0
            #Creating iSCSI-A Adapters
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-A"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-A" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -MinimumBandwidthWeight 40 -VmqWeight 100
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-A" -Access -VlanId 0
            #Creating iSCSI-B Adapters
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-B"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-B" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -MinimumBandwidthWeight 40 -VmqWeight 100
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-B" -Access -VlanId 0
            Write-Host "Waiting 40 seconds for virtual devices to initialise"
            Start-Sleep -Seconds 40
            #Configure the IP's for the Virtual Adapters
            Write-Host "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC" | Add-Content $LogPath
            #New-NetIPAddress -InterfaceAlias "vEthernet (SYD-MGMT)" -IPAddress $ManagmentOSIPv4 -PrefixLength 24 -DefaultGateway $CommanGatewayIPv4
            New-NetIPAddress -InterfaceAlias "vEthernet (SYD-MGMT)" -IPAddress $ManagmentOSIPv4 -PrefixLength 24
            Set-DnsClientServerAddress -InterfaceAlias "vEthernet (SYD-MGMT)" -ServerAddresses ($ManagmentDNS1, $ManagmentDNS2)
            Write-Host "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (MEL-MGMT)" -IPAddress $ManagmentOS2IPv4 -PrefixLength 24
            Set-DnsClientServerAddress -InterfaceAlias "vEthernet (MEL-MGMT)" -ServerAddresses ($ManagmentDNS3, $ManagmentDNS4)
            Write-Host "$CurrentDate --> Configuring IPv4 address for the Cluster virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the Cluster virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (HV-Cluster)" -IPAddress $ClusterIPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (HV-Cluster)" -ServerAddresses $ManagmentDNS1
            Write-Host "$CurrentDate --> Configuring IPv4 address for the LiveMigration virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the LiveMigration virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (HV-MIG)" -IPAddress $LiveMigrationIPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (LiveMigration)" -ServerAddresses $ManagmentDNS1
            Write-Host "$CurrentDate --> Configuring IPv4 address for the iSCSI-A virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the iSCSI-A virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-A)" -IPAddress $CSVIPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (iSCSI-A)" -ServerAddresses $ManagmentDNS1
            Write-Host "$CurrentDate --> Configuring IPv4 address for the iSCSI-B virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the iSCSI-B virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-B)" -IPAddress $CSV2IPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (CSV2)" -ServerAddresses $ManagmentDNS1
            #Write-Host "$CurrentDate --> Configuring IPv4 address for the VMNet virtual NIC"
            #Write-Output "$CurrentDate --> Configuring IPv4 address for the VMNet virtual NIC" | Add-Content $LogPath
            #New-NetIPAddress -InterfaceAlias "vEthernet (VMNet)" -IPAddress $VMNetIPv4 -PrefixLength 24
            #Set-DnsClientServerAddress -InterfaceAlias "vEthernet (VMNet)" -ServerAddresses $ManagmentDNS1
            Write-Host "$CurrentDate --> Hyper-V Configuration is Complete"
            Write-Output "$CurrentDate --> Hyper-V Configuration is Complete" | Add-Content $LogPath
    }
}
catch [Exception] {
    throw "$_" | Add-Content $LogPath
}
    I would really like to know why I'm getting absolutely poor performance. Any help on this would be most appreciated.

    I didn't parse the entire script, but a few things stand out.
    SR-IOV and teaming don't mix. The purpose of SR-IOV is to go straight from the virtual machine into the physical adapter and back, completely bypassing the entire Hyper-V virtual switch and everything that goes with it. Team or SR-IOV.
You're adding DNS servers to adapters that don't need them. Inbound traffic is going to be confused, to say the least. The only adapter that should have DNS addresses is the management adapter. For all others, you should run Set-DnsClient -RegisterThisConnectionsAddress $false.
    I don't know that I'm reading your script correctly, but it appears you have multiple adapters set up for management. That won't end well.
    It also looks like you have QoS weights that total over 100. That also won't end well.
I don't know that these explain poor performance like you're describing, though. It could just be that you're a victim of network adapters/drivers that have poor support for VMQ. Bad VMQ is worse than no VMQ. But VMQ + teaming + SR-IOV sounds like a recipe for heartache to me, so I'd start with that.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • Poor Performance with 10.1

I'm on a Windows XP SP3 machine with an X800 XT PE graphics card using the latest Catalyst drivers from ATI.  Ever since I updated Flash, I'm getting very poor performance during 720p playback with acceleration enabled.  I'm also getting bad performance when it's disabled.  Is there an issue with the 10.2 legacy (ATI) drivers not enabling hardware acceleration with Flash 10.1?
    BTW I went back to using 10.0.42.34 flash and everything is running fine again.

    I've also noticed poor performance since installing the 10.1 plugin.  Not just in HD video etc, but in general use.  So much for "improved performance"...
No idea on solutions at this stage - in the process of downgrading to the previous version to see if that fixes my problems.

  • Poor performance with SVM mirrored stripes

    Problem: poor write performance with configuration included below. Before attaching striped subdevices to mirror, they write/read very fast (>20,000 kw/s according to iostat), but after mirror attachment and sync performance is <2,000 kw/s. I've got standard SVM-mirrored root disks that perform at > 20,000 kw/s, so something seems odd. Any help would be greatly appreciated.
    Config follows:
    Running 5.9 Generic_117171-17 on sun4u sparc SUNW,Sun-Fire-V890. Configuration is 4 72GB 10K disks with the following layout:
    d0: Mirror
    Submirror 0: d3
    State: Okay
    Submirror 1: d2
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 286607040 blocks (136 GB)
    d3: Submirror of d0
    State: Okay
    Size: 286607040 blocks (136 GB)
    Stripe 0: (interlace: 4096 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c1t4d0s0 0 No Okay Yes
    c1t5d0s0 10176 No Okay Yes
    d2: Submirror of d0
    State: Okay
    Size: 286607040 blocks (136 GB)
    Stripe 0: (interlace: 4096 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c1t1d0s0 0 No Okay Yes
    c1t2d0s0 10176 No Okay Yes

    I have the same issue and it is killing our iCal Server performance.
    My question would be, can you edit an XSAN Volume without destroying the data?
    There is one setting that I think might ease the pressure... in the Cache Settings (Volumes -> <Right-Click Volume> -> Edit Volume Settings)
    Would increasing the amount of Cache help? Would it destroy the data?!?!?

  • PCMCIA gigabit cards are poor performance with lenovo n100

    Hello,
I have a Lenovo N100 and I tried two PCMCIA gigabit cards (D-Link DGE-660TD and TRENDnet TEG-PCBUSR) with very poor performance. I tested with iperf (Cat6 cable to a Siemens Sinergie S300 server) on Windows and Linux, with the normal config and with tuned TCP parameters; the outcome is always the same: 200Mbit/s upload and 76Mbit/s download.
    Any sugestion?
    Kind regards,
    André P. Muga

    lead_org wrote:
There are two types of PCMCIA: one runs at 16 bits, the other at 32 bits.
You can achieve the gigabit requirements quite comfortably with the 32-bit version, but I think the 16-bit version is not going to be enough for gigabit ethernet.
Running at MHz??? I don't understand how that has anything to do with data bandwidth.
    Both are pcmcia2, 32 bits, gigabit ethernet, full duplex, cards.
    The peak transfer rate is calculated by <bus speed MHz> * <bus width> = <Mbps>
    Example:
    Bus speed 33MHz, Bus width:  32bits
    33 * 32  = 1056 Mbps or 132MBps
    Bus speed 667MHz, Bus width:  32bits
    667 * 32  = 21344Mbps or 2668MBps
    The lenovo N100 has a 32 bit bus with 667MHz.... no limitation here.

  • Are there Issues with poor performance with Maverick OS,

Are there issues with slow and reduced performance with the Mavericks OS?

    check this
    http://apple.stackexchange.com/questions/126081/10-9-2-slows-down-processes
    or
    this:
    https://discussions.apple.com/message/25341556#25341556
    I am doing a lot of analyses with 10.9.2 on a late 2013 MBP, and these analyses generally last hours or days. I observed that Maverick is slowing them down considerably for some reasons after few hours of computations, making it impossible for me to work with this computer...

  • Photoshop laggy, poor performance with GPU enabled

I've been using Photoshop CS6 for some time now. I had enabled GPU usage earlier and I remember that Photoshop ran absolutely smoothly without any lag whatsoever. This morning, all of a sudden, it has become extremely laggy. I don't understand the reason behind this. This had happened to me before as well, but it became alright after a few days. This also happened to me once while using Photoshop CS5. I didn't install any plugin and didn't change any software. I also want to point out that I had driver version 13.4 when Photoshop was working fine, and now too I have the same driver version.
    CPU: AMD Phenom II X4 955
    Memory: 6 GB
    Free storage: 8 GB
    GPU: ATI Radeon HD 5850


  • Poor performance of XSLT-transformation

    Hi, everyone
I've got a report in BI Publisher based on an XSL template.
This template transforms a flat XML file into an Excel file and displays the data like a cross-tab.
Everything looks fine, but when the input file is larger than 1 MB, the output file takes about 10-15 minutes to produce.
However, when the input file is less than 1 MB, I get the output file in less than 1 minute.
How can I see where the "bottleneck" is?
What can I do to improve the performance of the XSLT transformation?
    Thanks

    Hi,
Sorry I did not respond sooner.
We have given the process 1024 MB of memory.
There are about 4100+ records in the result XML.
The BIP version is 10.1.3.4 and Java is 1.6.
The XSL template and XML can be downloaded here
    Thank you for reply =)

  • Poor performance with a PL/SQL block

I have a problem with a PL/SQL block which performs very poorly when the criteria of a sub-query are compared with a local variable rather than a hard-coded value.
I've put the 2 blocks below in for discussion. Block 1 takes only 56 seconds to insert 16815 records while Block 2 takes ages to run.
    Can someone tell me why?
    Block 1:
    ========
    declare
    lv_sih_comp_cd varchar2(10):= '304400';
    lv_vdr_code varchar2(50):= '912732';
    begin
    insert into taba
    select * from taba@dlphtest h
    where h.HIST_DT >= to_date('070903','ddmmyy') AND h.HIST_DT < TRUNC(SYSDATE)
    and exists (select null
    from dlph.model@dlphtest model
    where model.model_code = h.model_code
    and model.model_vdr_code = ls_vdr_code
    and model.model_fact_code=ls_sih_comp_cd);
    end;
    Block 2:
    ========
    declare
    lv_sih_comp_cd varchar2(10):= '304400';
    lv_vdr_code varchar2(50):= '912732';
    begin
    insert into taba
    select * from taba@dlphtest h
    where h.HIST_DT >= to_date('070903','ddmmyy') AND h.HIST_DT < TRUNC(SYSDATE)
    and exists (select null
    from dlph.model@dlphtest model
    where model.model_code = h.model_code
    and model.model_vdr_code = '912732'
    and model.model_fact_code='304400');
    end;
    Regards
    Louis

    Correction:
    Block 1 should be:
    declare
    lv_sih_comp_cd varchar2(10):= '304400';
    lv_vdr_code varchar2(50):= '912732';
    begin
    insert into taba
    select * from taba@dlphtest h
    where h.HIST_DT >= to_date('070903','ddmmyy') AND h.HIST_DT < TRUNC(SYSDATE)
    and exists (select null
    from dlph.model@dlphtest model
    where model.model_code = h.model_code
    and model.model_vdr_code = lv_vdr_code
    and model.model_fact_code=lv_sih_comp_cd);
    end;
    Sorry for the typo error.
    Regards
    Louis
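One thing worth checking (not from the original posts): inside PL/SQL, the local variables are sent to the remote site as bind variables, whereas the literals in Block 2 are visible to the remote optimizer when it chooses a plan, so the two blocks can get different remote plans. You can reproduce the slow case outside PL/SQL with standard SQL*Plus bind variables and compare the plans:
variable lv_vdr_code varchar2(50)
variable lv_sih_comp_cd varchar2(10)
exec :lv_vdr_code := '912732'
exec :lv_sih_comp_cd := '304400'
set autotrace traceonly explain
select * from taba@dlphtest h
 where h.HIST_DT >= to_date('070903','ddmmyy') AND h.HIST_DT < TRUNC(SYSDATE)
   and exists (select null
                 from dlph.model@dlphtest model
                where model.model_code = h.model_code
                  and model.model_vdr_code = :lv_vdr_code
                  and model.model_fact_code = :lv_sih_comp_cd);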

  • Dblink poor performance with varchar(4000) after upgrade 11g

    Hi
For a long time we have connected from a 10g database via dblink to another 10g database to copy a simple table with four columns. The table size is about 500MB:
column1 (varchar(100))
column2/3/4 (varchar(4000)).
After the upgrade of the source database to 11g, the dblink performance is poor with the big varchar columns. If I copy only column1 (select column1 from ) then I get the data within minutes. If I want to copy the whole table (select column1, column2, column3, column4 from ...) then the performance is poor; it didn't finish within days.
    Does anyone know about dblink issues with 11g and big varchar columns?
    Thank you very much
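A quick isolation test (not from the thread; the table and link names below are placeholders, since the post does not give them) is to fetch the columns separately and compare the timings:
set timing on
set autotrace traceonly statistics
select column1 from big_table@source_db_link;
select column1, column2 from big_table@source_db_link;
If only the queries touching the varchar(4000) columns are slow, the problem is in how the 11g source ships large varchars across the link rather than in the network itself.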

    Use DBlink to pull data from table(s) using IMPDP:
    #1 Create DBlink in the Target database
    create database link sourceDB1_DBLINK connect to system identified by password10 using 'SourceDB1';
    [update tnsnames.ora appropriately]
    #2
select DIRECTORY_PATH from dba_directories where DIRECTORY_NAME='DATA_PUMP_DIR';
If DATA_PUMP_DIR does not exist, create it with CREATE DIRECTORY.
    #3
    impdp system/pwdxxx SCHEMAS=SCOTT NETWORK_LINK=ORCL10R2 JOB_NAME=scott_import LOGFILE=data_pump_dir:network_imp_scott.log
    That's all

  • Slow Performance with XMLType table

    Hi Mark,
    We are trying to implement a new interface program based on XML DB.
    We have a table named BO_XML.
    Create table BO_XML (Circuit_ID Varchar2(12), BO_XMLDOC XMLTYPE);
    Then, we inserted an xml document with an XSD into this BO_XML table.
    The xml document has 15 elements and it has around 6000 records.
Once we insert the XML documents, the next step is to extract the contents of the XML documents and insert the records into the STG_TEMP table.
But the following query takes more than 5 minutes, and since we have many jobs like this to run, this query will need to be optimized. (Please note that the same query runs much faster if the XML document has only a few records, like 5 to 10.)
We are using XPath with XMLSequence to parse the XML document.
    Please help.
    Thanks,
    Ron
    INSERT INTO STG_TEMP
    (Circuit_Number,
    Unit_Number,
    Unit_Name,
    Release,
    Release_Name,
    PlayDate,
    Show_Category,
    Admission_Type_ID,
    Ticket_Price,
    Tickets_Sold,
    Tickets_Refunded,
    Ticket_Tax,
    Total_Tax,
    Ticket_Open,
    Ticket_Close,
    FROM_EDI_INDICATOR,
    Updater,
Time_Stamp
)
    SELECT
    x.Circuit_Id,
    TRIM(SUBSTR(extractValue(VALUE(t1),'//TheatreNumber'), 1, 12)) ,
    TRIM(SUBSTR(extractValue(VALUE(t1),'//TheatreName'), 1, 25) ) ,
    TRIM(SUBSTR(extractValue(VALUE(t1),'//MovieNumber'), 1, 12 ) ) ,
    TRIM(SUBSTR(extractValue(VALUE(t1),'//MovieTitle'), 1, 30) ),
    TRIM(SUBSTR(extractValue(VALUE(t1),'//DatePlayed'),1, 10) ),
    TRIM(SUBSTR(extractValue(VALUE(t1),'//ShowCategory'),1, 12)) ,
    TRIM(SUBSTR(extractValue(VALUE(t1),'//AdmissionType'),1, 12)) ,
    TRIM(SUBSTR(extractValue(VALUE(t1),'//TicketPrice'), 1, 10)),
    TRIM(SUBSTR(extractValue(VALUE(t1),'//TicketsSold'),1, 9)),
    TRIM(SUBSTR(extractValue(VALUE(t1),'//TicketRefund'),1, 5)),
    TRIM(SUBSTR(extractValue(VALUE(t1),'//TaxAmount'),1, 10)),
    TRIM(SUBSTR(extractValue(VALUE(t1),'//TaxAmount'),1, 10)),
    TRIM(SUBSTR(extractValue(VALUE(t1),'//OpeningTicket'),1, 10)),
    TRIM(SUBSTR(extractValue(VALUE(t1),'//ClosingTicket') , 1, 10)),
    'N',
    USER,
    SYSDATE
    FROM BO_XML x,
    TABLE ( xmlsequence ( EXTRACT( x.BO_XMLDOC,'BO/BO_RECORD'))) t1 ;
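For comparison (not from the original post, and assuming 10gR2 or later): the same shredding is usually much faster when written with XMLTable instead of TABLE(XMLSequence(EXTRACT(...))), especially once BO_XMLDOC is schema-based. A shortened sketch with just a few of the 15 columns:
SELECT x.Circuit_ID,
       t.TheatreNumber,
       t.MovieTitle,
       t.TicketPrice
  FROM BO_XML x,
       XMLTable('/BO/BO_RECORD'
                PASSING x.BO_XMLDOC
                COLUMNS TheatreNumber VARCHAR2(12) PATH 'TheatreNumber',
                        MovieTitle    VARCHAR2(30) PATH 'MovieTitle',
                        TicketPrice   VARCHAR2(10) PATH 'TicketPrice') t;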

I also have a performance issue with an XMLType column without an associated schema. If you're recommending associating a schema, I would like to understand why.
    Can you please explain this a bit more?
    Tnx,
    Jeroen
