Poor performance of XSLT transformation
Hi, everyone
I've got a report in BI Publisher based on an XSL template.
This template transforms a flat XML file into an Excel file and displays the data as a cross-tab.
Everything looks fine, but when the input file is larger than 1 MB, the output file takes about 10-15 minutes to produce.
However, when the input file is less than 1 MB, I get the output file in less than 1 minute.
How can I see where the "bottleneck" is?
What can I do to improve the performance of the XSLT transformation?
Thanks
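Since the XSLT step here is standard JAXP underneath, one way to locate the bottleneck is to time stylesheet compilation and the transformation separately, outside BI Publisher. A minimal sketch (the inline XML and XSL are placeholders; point the StreamSources at the real report data and template instead):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class TimeTransform {
    public static void main(String[] args) throws Exception {
        // Placeholder input; in practice read the real report XML
        // and the BIP XSL template from files.
        String xml = "<rows><row>1</row><row>2</row></rows>";
        String xsl =
              "<xsl:stylesheet version='1.0' "
            + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:template match='/'><out><xsl:for-each select='rows/row'>"
            + "<n><xsl:value-of select='.'/></n></xsl:for-each></out>"
            + "</xsl:template></xsl:stylesheet>";

        TransformerFactory factory = TransformerFactory.newInstance();

        // Stage 1: stylesheet compilation (should be paid only once).
        long t0 = System.nanoTime();
        Transformer trans =
            factory.newTransformer(new StreamSource(new StringReader(xsl)));
        long compileMs = (System.nanoTime() - t0) / 1_000_000;

        // Stage 2: parsing the input plus running the templates.
        StringWriter out = new StringWriter();
        t0 = System.nanoTime();
        trans.transform(new StreamSource(new StringReader(xml)),
                        new StreamResult(out));
        long transformMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.println("compile=" + compileMs + "ms transform=" + transformMs + "ms");
        System.out.println(out);
    }
}
```

If the transform stage dominates and grows much faster than the input size, the usual suspects in a cross-tab template are grouping expressions that rescan the node set for every output row; xsl:key lookups typically help there.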
Hi,
Sorry I did not respond sooner.
We have given the process 1024 MB of memory.
There are about 4,100+ records in the result XML.
The BIP version is 10.1.3.4 and Java is 1.6.
You can download the XSL template and XML here
Thank you for the reply =)
Similar Messages
-
Performance of XSLT transformation
Hello everybody,
I'm trying to do some in-database XML transformations from one XML format to another with a rather complex XSL stylesheet of about 9,000 lines of code. At first I tested the XDK XSLT functionality by simply doing a transformation over the JAXP API outside the database. It did not perform badly, although the memory consumption was a little higher than Saxon or Xalan. After that I read about the possibilities of in-database transformation, which should consume less memory because of Oracle's XML features like scalable DOM, XML indexing, and query rewriting when using object-relational or binary storage. So we loaded our test data into the DB (we are using 11gR2 on a test system with an Intel Core2Duo @ 3 GHz and 3 GB RAM) in different XML storage options (XMLType CLOB, binary, OR) and tried to do an XSLT transformation on the columns. But it isn't working.
The DB seems unable to cope with the transformation, although the parser and processor should be the same as in the off-database transformation that was working well (or am I wrong?). After half an hour we aborted because the DB hadn't finished the transformation yet (the off-DB transformation of this 100 KB XML took less than a second). We tried indexing the columns with XMLIndex, which didn't work out. We tried both XMLType's XMLtransform() and dbms_xslprocessor.processxsl(), same problem: the DB keeps working with no result, and after some time we have to terminate. With a simple test stylesheet the transformation does fine, so our complex stylesheet seems to be the problem.
My question is: why is the XSLT performance so poor with the complex stylesheet, and why can I use the XDK parser and processor very efficiently in an off-DB transformation, but when doing the same transformation in the DB it's not working?
Hope you can help me out a little bit; I'm really running out of ideas.
Greetz, dirac
Hi,
"The db seems not to be able to cope with the transformation although the parser and processor should be the same as the off-database transformation that was working well (or am I wrong?)"
No, the database built-in XSLT processor (C-based) is not the same as the XDK's Java implementation.
There's also a C XSLT processor available for external transformation that works with compiled stylesheets; it should outperform the Java implementation.
See : http://docs.oracle.com/cd/E11882_01/appdev.112/e23582/adx_c_xslt.htm#i1020191
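The same "compiled stylesheet" idea is available from Java through JAXP's Templates: compile the XSL once, then reuse it across documents, so repeated transformations don't pay the compilation cost each time. A sketch with made-up sample documents:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class CompiledStylesheet {
    public static void main(String[] args) throws Exception {
        String xsl =
              "<xsl:stylesheet version='1.0' "
            + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:template match='/doc'>"
            + "<copy><xsl:value-of select='.'/></copy>"
            + "</xsl:template></xsl:stylesheet>";

        // Compile once: a Templates object is immutable and thread-safe.
        Templates compiled = TransformerFactory.newInstance()
                .newTemplates(new StreamSource(new StringReader(xsl)));

        // Reuse the compiled form for many inputs; only a lightweight
        // Transformer is created per run.
        StringWriter out = new StringWriter();
        for (String doc : new String[] {"<doc>a</doc>", "<doc>b</doc>"}) {
            compiled.newTransformer().transform(
                    new StreamSource(new StringReader(doc)),
                    new StreamResult(out));
        }
        System.out.println(out);
    }
}
```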
"We tried indexing the columns with XMLIndex, didn't work out; we tried both XMLType's XMLtransform() and dbms_xslprocessor.processxsl(), same problem: the DB keeps working with no result, and after some time we have to terminate. With a simple test stylesheet the transformation does fine, but our complex stylesheet seems to be the problem."
XMLIndexes won't help at all in this situation.
In my experience so far with the built-in engine, I've seen the best results when the XML document we want to transform is stored as binary XML.
I don't know about the scalable DOM (or "lazy" DOM loading) feature described in the documentation; it seems it doesn't support complex transformations, much like functional vs. streaming evaluation for XQuery.
"Hope you can help me out a little bit; I'm really running out of ideas."
I'd be interested in a test case if you have the possibility (stylesheet + sample input XML). You can mail me at this address: mb[dot]perso[at]wanadoo[dot]fr
If you can't, could you describe what kind of transformation you're performing and how I can reproduce that behaviour?
Thanks.
Marc. -
[9i] poor performance with XMLType.transform
Hello,
I've got a problem with the Oracle function XMLType.transform.
When I try to apply an XSL to a big XML, it is very, very slow, and it even consumes all the CPU; other users are unable to work until the processing is complete...
So I wondered if my XSL was corrupted, but it does not seem so, because when I apply it with Internet Explorer (by just double-clicking the .xml), it is applied immediately. I've also tried with oraxsl, and the processing is quick and good.
So I tried to use XDB, but it does not work; maybe I should upgrade to a newer version of XDB?
Please find the ZIP file here :
http://perso.modulonet.fr/~tleoutre/Oracle/samples.zip
Please find in this file :
1) The XSL file k_xsl.xsl
2) The "big" XML file big_xml.xml
Here you can try to apply the XSL to the XML with Internet Explorer: processing is very quick...
3) The batch file transform.bat
Here you can launch it, it calls oraxsl, and produces a result very quickly...
4) The SQL file test_xsl_with_xmltype_transform.sql.
You can try launching it... First, it applies the same XSL to a little XML, and it's OK... Then it applies the XSL to the same big XML as in point 2), and it takes a lot of time and CPU...
5) The SQL file test_xsl_with_xdb_1.sql ...
On my server, it fails... So I tried to change the XSL as in the next point:
6) The SQL file test_xsl_with_xdb_2.sql with a "cleaned" XSL...
And then it fails with exactly the same problem:
TransformerConfigurationException (Unknown expression at EOF: *|/.)
Any help would be greatly appreciated!
Thank you!
P.S.: Sorry for my bad English, I'm a Frenchman :-)
This is what I see...
Your tests are measuring the wrong thing. You are measuring the time to create the sample documents, which is being done very inefficiently, as well as the time taken to do the transform.
Below is the correct way to get measurements for each task.
Here's what I see on a PIV 2.4Ghz with 10.2.0.2.0 and 2GB of RAM
Fragments SourceSize TargetSize createSource Parse Transform
50 28014 104550 00:00:00.04 00:00:00.04 00:00:00.12
100 55964 209100 00:00:00.03 00:00:00.05 00:00:00.23
500 279564 1045500 00:00:00.16 00:00:00.23 00:00:01.76
1000 559064 2091000 00:00:00.28 00:00:00.28 00:00:06.04
2000 1118064 4182000 00:00:00.34 00:00:00.42 00:00:24.43
5000 2795064 10455000 00:00:00.87 00:00:02.02 00:03:19.02
I think this clearly shows the pattern.
Of course what this testing really shows is that you've clearly missed the point of performing XSLT transformation inside the database.
The idea behind database-based transformation is to optimize XSLT processing by:
(1) not having to parse the XML and build a DOM tree before commencing the XSLT processing. In this example this is not possible, since the XML is being created from a CLOB-based XMLType, not a schema-based XMLType.
(2) leveraging the lazily loaded virtual DOM when doing a sparse transformation. (A sparse transformation is one where there are large parts of the source document that are not required to create the target document.) Again, in this case the XSL requires you to walk all the nodes to generate the required output.
If it is necessary to process all of the nodes in the source document to generate the entire output, it probably makes more sense to use a mid-tier XSL engine.
Here's the code I used to generate the numbers in the above example
BTW, in terms of BIG XML: we've successfully processed 12 GB documents with schema-based storage... so nothing you have here comes anywhere near our definition of big.
Also, please remember that 9.2.0.1.0 is not a supported release for any XML DB related features.
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:44:59 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool createDocument.log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> create or replace directory &1 as '&3'
2 /
old 1: create or replace directory &1 as '&3'
new 1: create or replace directory SCOTT as '/home/mdrake/bugs/xslTest'
Directory created.
SQL> drop table source_data_table
2 /
Table dropped.
SQL> create table source_data_table
2 (
3 fragCount number,
4 xml_text clob,
5 xml xmlType,
6 result clob
7 )
8 /
Table created.
SQL> create or replace procedure createDocument(fragmentCount number)
2 as
3 fragmentText clob :=
4 '<AFL LIGNUM="1999">
5 <mat>20000001683</mat>
6 <name>DOE</name>
7 <firstname>JOHN</firstname>
8 <name2>JACK</name2>
9 <SEX>MALE</SEX>
10 <birthday>1970-05-06</birthday>
11 <salary>5236</salary>
12 <code1>5</code1>
13 <code2>6</code2>
14 <code3>7</code3>
15 <date>2006-05-06</date>
16 <dsp>8.665</dsp>
17 <dsp_2000>455.45</dsp_2000>
18 <darr04>5.3</darr04>
19 <darvap04>6</darvap04>
20 <rcrr>8</rcrr>
21 <rcrvap>9</rcrvap>
22 <rcrvav>10</rcrvav>
23 <rinet>11.231</rinet>
24 <rmrr>12</rmrr>
25 <rmrvap>14</rmrvap>
26 <ro>15</ro>
27 <rr>189</rr>
28 <date2>2004-05-09</date2>
29 </AFL>';
30
31 xmlText CLOB;
32
33 begin
34 dbms_lob.createTemporary(xmlText,true,DBMS_LOB.CALL);
35 dbms_lob.write(xmlText,5,1,'<PRE>');
36 for i in 1..fragmentCount loop
37 dbms_lob.append(xmlText,fragmentText);
38 end loop;
39 dbms_lob.append(xmlText,xmlType('<STA><COD>TER</COD><MSG>Opération Réussie</MSG></STA>').getClobVal());
40 dbms_lob.append(xmlText,'</PRE>');
41 insert into source_data_table (fragCount,xml_text) values (fragmentCount, xmlText);
42 commit;
43 dbms_lob.freeTemporary(xmlText);
44 end;
45 /
Procedure created.
SQL> show errors
No errors.
SQL> --
SQL> set timing on
SQL> --
SQL> call createDocument(50)
2 /
Call completed.
Elapsed: 00:00:00.04
SQL> call createDocument(100)
2 /
Call completed.
Elapsed: 00:00:00.03
SQL> call createDocument(500)
2 /
Call completed.
Elapsed: 00:00:00.16
SQL> call createDocument(1000)
2 /
Call completed.
Elapsed: 00:00:00.28
SQL> call createDocument(2000)
2 /
Call completed.
Elapsed: 00:00:00.34
SQL> call createDocument(5000)
2 /
Call completed.
Elapsed: 00:00:00.87
SQL> select fragCount dbms_lob.getLength(xmlText)
2 from sample_data_table
3 /
select fragCount dbms_lob.getLength(xmlText)
ERROR at line 1:
ORA-00923: FROM keyword not found where expected
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:01 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 50
1 row updated.
Elapsed: 00:00:00.04
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 50
1 row updated.
Elapsed: 00:00:00.12
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964
500 279564
1000 559064
2000 1118064
5000 2795064
6 rows selected.
Elapsed: 00:00:00.01
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:02 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 100
1 row updated.
Elapsed: 00:00:00.05
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 100
1 row updated.
Elapsed: 00:00:00.23
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.03
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564
1000 559064
2000 1118064
5000 2795064
6 rows selected.
Elapsed: 00:00:00.01
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:02 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 500
1 row updated.
Elapsed: 00:00:00.12
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.03
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 500
1 row updated.
Elapsed: 00:00:01.76
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.00
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564 1045500
1000 559064
2000 1118064
5000 2795064
6 rows selected.
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:04 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 1000
1 row updated.
Elapsed: 00:00:00.28
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 1000
1 row updated.
Elapsed: 00:00:06.04
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.00
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564 1045500
1000 559064 2091000
2000 1118064
5000 2795064
6 rows selected.
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:11 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 2000
1 row updated.
Elapsed: 00:00:00.42
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.02
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 2000
1 row updated.
Elapsed: 00:00:24.43
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.03
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564 1045500
1000 559064 2091000
2000 1118064 4182000
5000 2795064
6 rows selected.
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:36 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 5000
1 row updated.
Elapsed: 00:00:02.02
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.05
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 5000
1 row updated.
Elapsed: 00:03:19.02
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564 1045500
1000 559064 2091000
2000 1118064 4182000
5000 2795064 10455000
6 rows selected.
Elapsed: 00:00:00.04
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
bash-2.05b$ -
How to perform an xslt transformation on an xml document dynamically ?
Hi all,
I'd like to perform an XSLT transformation on dynamically generated XML code. I'm looking for a tag which can be useful for this. The tag should take both the XML code and the XSLT code, passed as String arguments (not as names of XML and XSLT files). I tried to use the xtags library and the jstl:transform tag, but it did not work. It worked only with plain XML code passed to the x-tag. When I try to perform a transformation on a tag body containing an output text tag, the transformation formats only this tag, not its content.
For example:
<xtags:style xslt="xslName.xsl">
<h:outputText value="#{beanName.stringPropertyName}"/>
</xtags:style>
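If the tag library can't be made to work, plain JAXP can do what the tag above was expected to do: apply a stylesheet to XML where both are held as Strings rather than files. A minimal sketch (class name, sample XML, and stylesheet are invented for illustration):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class StringTransform {
    /** Applies an XSLT stylesheet to XML, both supplied as plain Strings. */
    static String transform(String xml, String xsl) throws Exception {
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)))
                .transform(new StreamSource(new StringReader(xml)),
                           new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<greeting>hello</greeting>";
        String xsl =
              "<xsl:stylesheet version='1.0' "
            + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:output method='text'/>"
            + "<xsl:template match='/greeting'>Got: <xsl:value-of select='.'/></xsl:template>"
            + "</xsl:stylesheet>";
        System.out.println(transform(xml, xsl));
    }
}
```

In a JSP you would render the bean property to a String first and then call a helper like this, instead of placing the output tag inside the transforming tag's body.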
Thanks in advance
Message was edited by:
opad
The issue is in the logic of your XSL mapping.
Use the same source XML and use it with XML Spy to debug why it is not working as you want it to.
Regards
Bhavesh -
PL/SQL XSLT transformation problem
I am trying to perform an XSLT transformation inside a PL/SQL procedure using dbms_xslprocessor.processxsl(), but I am getting the following error: 'LPX-00411: unknown function name encountered'.
The transformation XSL contains a Java extension to perform some encoding of data, and with the template containing the reference to the extension removed, the transformation works fine. Does this mean that DBMS_XMLPARSER does not support Java extensions in XSL, or is it possible I'm just calling the function incorrectly?
I have been testing up to now using oraxsl on the command line, which seemed to work fine.
Thanks,
Alan
Alan,
Are you using the right namespace for the class you want to use? -
How to run the ASDoc XSLT transformation manually?
Hello there,
I am combining the XML (toplevel.xml and toplevel_classes.xml) output of several ActionScript projects into one central XML file. I do this by first running the asdoc tool on the separate projects with the keep-xml and skip-xsl switches set to true. I then run my own XSLT script that combines the toplevel XML files and toplevel_classes XML files into two main files.
After that I'd like to run the actual HTML generation. As far as I know, it's not possible to run asdoc with switches that tell it to skip the XML generation and just perform the XSLT transform immediately for a specified toplevel.xml and toplevel_classes.xml.
Therefore I guess I have to run the XSLT transforms from the 'templates' directory myself.
So, thus my question is, what is the exact order of XSLT transforms and for which files?
i.e. do I call class-list.xsl with toplevel_classes.xml, etc?
thanx a lot in advance,
Roland
Hey there,
yes, I understood what you meant, but even just the physical copying of the files will take a fair amount of time that I prefer to avoid.
I'm getting down with ANT as we speak. It's a load of trial and error, but I'll report back if I have things working.
Cheers,
Roland -
Configuring the XSLT transformation service
I need to use XSL-T 2.0 in a service. The documentation for XSLT Transformation says that an alternate service provider can be used. It says I need to supply a string value that represents the fully qualified name of the Java class to use for performing the XSLT transformation, such as org.apache.xalan.xslt.XSLTProcessor. The Java class used must extend javax.xml.transform.TransformerFactory and must be available to the class loader of the LiveCycle ES server.
The Saxon XSL-T engine supports the required interface, but I have not been able to install it successfully. In theory, it is a component that can be installed from the Components panel, but when I try that I receive the message "An error occurred. Please check the Eclipse error log for details". I have not been able to locate the error log.
Any assistance greatly appreciated.
--RKW
Thanks for the assistance. That turns out to be the right advice. If you want to use the Saxon processor, you need to put the following jars in the server\all\lib folder:
saxon8.jar
saxon8-dom.jar
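For reference, JAXP resolves the TransformerFactory implementation at runtime, which is why dropping the Saxon jars on the server classpath is enough. A small sketch that shows which engine was resolved (it prints the JDK default unless another provider is installed):

```java
import javax.xml.transform.TransformerFactory;

public class WhichFactory {
    public static void main(String[] args) {
        // JAXP resolves the factory in this order:
        //   1. the javax.xml.transform.TransformerFactory system property,
        //   2. service-provider entries (META-INF/services) on the classpath,
        //   3. the JDK's built-in default.
        // Putting the Saxon jars on the classpath satisfies step 2;
        // alternatively the property can name Saxon's factory class
        // (net.sf.saxon.TransformerFactoryImpl for Saxon 8).
        TransformerFactory factory = TransformerFactory.newInstance();
        System.out.println("XSLT engine: " + factory.getClass().getName());
    }
}
```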
--RKW -
Automatic insert of xmlns:xsl during XSLT transformation
I am using the xmlparserv2.jar dated 12/10/00 to perform an XSLT transformation. I need to insert an xml:space="preserve" attribute on a generated tag. However, there is an additional attribute added: xmlns:xml="http://www.w3.org/XML/1998/namespace". How can I suppress the generation of this attribute?
Here is my XML :
<?xml version = '1.0' standalone = 'no'?>
<COLDdoc>
<Page template="bkgnd1" num="1">
<Line num="1"> CN011A021C 1A021</Line>
<Line num="2"> 1954.90 7713.36</Line>
</Page>
</COLDdoc>
My XSL is :
<?xml version = '1.0' standalone = 'yes'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="xml" indent="no" doctype-system="svg-20000303-stylable.dtd"/>
<xsl:template match="Line">
<text>
<xsl:attribute name="x">0</xsl:attribute>
<xsl:attribute name="y">
<xsl:value-of select="@num*10"/>
</xsl:attribute>
<xsl:attribute name="xml:space">preserve</xsl:attribute>
<xsl:value-of select="."/>
</text>
</xsl:template>
</xsl:stylesheet>
The result XSLT Transformation :
<?xml version = '1.0' encoding = 'UTF-8'?>
<!DOCTYPE text SYSTEM "svg-20000303-stylable.dtd">
<text x="0" y="10" xmlns:xml="http://www.w3.org/XML/1998/namespace" xml:space="preserve"> CN011A021C 1A021</text>
<text x="0" y="20" xmlns:xml="http://www.w3.org/XML/1998/namespace" xml:space="preserve"> 1954.90 7713.36</text>
Regards.
Jeffrey
Hi Greg,
please try the following (just slightly) modified transformation (it works fine for me):
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:asx="http://www.sap.com/abapxml"
xmlns:sap="http://www.sap.com/sapxsl"
version="1.0">
<xsl:strip-space elements="*"/>
<xsl:template match="/PEXR2002/IDOC">
<asx:abap version="1.0">
<asx:values>
<HEADER_DATA>
<SNDPRN>
<xsl:value-of select="EDI_DC40/SNDPRN"/>
</SNDPRN>
<BGMREF>
<xsl:value-of select="E1IDKU1/BGMREF"/>
</BGMREF>
<MOABETR>
<xsl:value-of select="E1IDKU5/MOABETR"/>
</MOABETR>
<CREDAT>
<xsl:value-of select="EDI_DC40/CREDAT"/>
</CREDAT>
<DATUM>
<xsl:value-of select="E1EDK03/DATUM"/>
</DATUM>
</HEADER_DATA>
</asx:values>
</asx:abap>
</xsl:template>
</xsl:transform>
I recommend testing all transformations that you define on a sample source and checking the output. If you applied your original transformation, you would see that it basically doesn't select anything, so you just get an XML document with the field names but no values.
Cheers, harald -
Updating XML prior to XSLT transformation
I am attempting to convert an XML document using XSLT, but I need to transform it a bit beforehand. Does anyone have a suggestion as to how to do this?
Below is the logic that transforms the XML directly from the file, but I need to manipulate it first.
Thanks in advance.
Patti
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;
import javax.xml.transform.stream.StreamResult;
import java.io.*;
public class Transform {
    /**
     * Performs an XSLT transformation, sending the results
     * to System.out.
     */
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println(
                "Usage: java Transform [xmlfile] [xsltfile]");
            System.exit(1);
        }
        File xmlFile = new File(args[0]);
        File xsltFile = new File(args[1]);
        // JAXP reads data using the Source interface
        Source xmlSource = new StreamSource(xmlFile);
        Source xsltSource = new StreamSource(xsltFile);
        // the factory pattern supports different XSLT processors
        TransformerFactory transFact =
                TransformerFactory.newInstance();
        Transformer trans = transFact.newTransformer(xsltSource);
        trans.transform(xmlSource, new StreamResult(System.out));
    }
}
The first scenario is what I intended. Since I posted my question I realized that I can manipulate the string and then create a StreamSource passing a StringReader, like so:
StringReader src = new StringReader(xmlStr);
StringWriter target = new StringWriter();
currentTransformer.transform(new StreamSource(src),
    new StreamResult(target));
Thanks.
Tags not printed when outputting XML after XSLT transformation
Hi!
I want to perform an XSLT transformation on requests to my web service.
This is the style of a typical request:
<app:aRequest xmlns:app="http://myNamespace">
<app:firstTag>value1</app:firstTag>
<app:secondTag>value2</app:secondTag>
<app:thirdTag>value3</app:thirdTag>
</app:aRequest>
I want to transform it into something like this:
<app:aRequest xmlns:app="http://myNamespace">
<app:firstTag>A hard coded, completely different value!</app:firstTag>
<app:secondTag>value2</app:secondTag>
<app:thirdTag>value3</app:thirdTag>
</app:aRequest>
To do that, I use the following XSLT:
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:app="http://myNamespace">
<xsl:template match="/">
<app:aRequest>
<app:firstTag>A hard coded, completely different value!</app:firstTag>
<app:secondTag><xsl:value-of select="app:aRequest/app:secondTag" /></app:secondTag>
<app:thirdTag><xsl:value-of select="app:aRequest/app:thirdTag" /></app:thirdTag>
</app:aRequest>
</xsl:template>
</xsl:stylesheet>
And I use the following code to execute the transformation:
public Object unmarshal(Source source) throws XmlMappingException, IOException {
TransformerFactory factory = TransformerFactory.newInstance();
// Does this factory support SAX features?
if(factory.getFeature(SAXSource.FEATURE)) {
SAXTransformerFactory saxTransformerFactory = (SAXTransformerFactory)factory;
// Read xslt file.
InputStream inputStream = getClass().getResourceAsStream("/META-INF/xsltTransformation.xsl");
if(inputStream == null) {
    throw new RuntimeException("No XSLT transformation file present at /META-INF/xsltTransformation.xsl");
}
Templates xsltTemplate = saxTransformerFactory.newTemplates(new StreamSource(inputStream));
Transformer transformer = xsltTemplate.newTransformer();
// Perform transformation.
StringWriter stringWriter = new StringWriter();
StreamResult result = new StreamResult(stringWriter);
transformer.transform(source, result);
// Print result.
String xmlString = stringWriter.toString();
System.out.println(xmlString);
// Proceed with unmarshalling.
StringReader stringReader = new StringReader(xmlString);
StreamSource transformedSource = new StreamSource(stringReader);
return oxMapper.getUnmarshaller().unmarshal(transformedSource);
}
However, the System.out.println(xmlString); call produces the following result:
<?xml version="1.0" encoding="UTF-8"?>
value1
value2
value3
The tags are not printed and the hard coded value is not present.
What am I doing wrong? Please help!
Following up on what the good Dr said,
Are you sure that you are using the stylesheet that you think you are?
I tried your stylesheet and input and got more or less what you wanted and not what you actually got.
Note that I did not use your code. -
We have an HTML page which user will use to get information. At the click of submit information is queried from 2 systems which is in XML format. This information is then merged into one before sending it back to the client.
Couple of questions:
1. My question is: is it advisable to use XSLT in such a real-time environment? I am not sure of the performance, whether it's going to overload the CPU, memory, etc.
2. What are the basic guidelines for writing efficient XSLT transformation.
3. Is merging using XSLT slower than doing it directly in JAXB?
In my experience, the decision to use Java code vs. XSLT is based on a couple of factors:
1) Is what you want to do something that can be done in XSLT? XSLT is a very powerful tool but it is very quirky. It is very different from any other programming language I've used (and I have used a lot of them). So, the first test is "Can what I want to do be done easily in XSLT?"
2) Next, who will make changes? If this is code to be shipped to a client and they will need to make minor changes over a period of time, and you don't want them to change the Java code, or the person responsible for making those changes is not a Java programmer, then XSLT may be a better way than having the transformative code in Java. Also, changes might be done more easily by an administrator with some training in XSLT than by teaching them Java + XML + WebServices + database + + + .
3) I've never heard of situations where XSLT caused stack overflows or lengthy delays. That is not to say that you can be sloppy and XSLT will save you. XSLT does not have some "simple" constructs like mutable variables or conventional for/while loops; one fallback is to use recursion. This can eat memory if not controlled and tested.
4) Any work to fine tune the speed of this operation will probably be overshadowed by bad code in other parts of the application.
I cannot think of anything that can be done in XSLT that cannot be done in Java code. There are many things that cannot be done in XSLT, however, that can be done in Java. Writing and debugging XSLT, especially the first 100,000 or so times, is not simple for anything more complex than a textbook or tutorial exercise.
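As a practical aside on the performance side of the question: when driving XSLT from Java via JAXP, one easy win is to compile the stylesheet once into a Templates object and time each transformation to locate the bottleneck. A minimal sketch (the stylesheet, input, and class name here are invented for illustration, not taken from the thread):

```java
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

public class XsltTiming {
    // Runs a tiny identity transform and returns the result document as a string.
    static String demo() throws Exception {
        // Hypothetical inline stylesheet: copies the input unchanged.
        String xsl =
            "<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
          + "<xsl:template match=\"@*|node()\">"
          + "<xsl:copy><xsl:apply-templates select=\"@*|node()\"/></xsl:copy>"
          + "</xsl:template></xsl:stylesheet>";
        String xml = "<root><item>1</item></root>";

        // Compile the stylesheet once; a Templates object is reusable across runs,
        // and recompiling per transformation is a common hidden cost.
        Templates compiled = TransformerFactory.newInstance()
                .newTemplates(new StreamSource(new StringReader(xsl)));

        long start = System.nanoTime();
        StringWriter out = new StringWriter();
        compiled.newTransformer()
                .transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        System.out.println("transform took "
                + (System.nanoTime() - start) / 1_000_000 + " ms");
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

Timing the transform separately from stylesheet compilation shows quickly whether a slow run is the processor's fault or the stylesheet's.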
So, there are no hard and fast rules on which is better. Each is the right choice for a set of problems. You have not given us enough information to make that decision. I've tried to help with your decision. -
Error in ABAP XSLT transformation
Hi,
I'm trying to upload some data from XML to ABAP, but I'm getting an error while transforming the XML data to an internal table.
Here are the details.
XML:
<?xml version="1.0" encoding="ISO-8859-1" ?>
<!-- Edited by XMLSpy® -->
<?xml-stylesheet type="text/xsl" href="ABAP1.xsl"?>
<conceptRevDecisionXml>
<projectInfo>
<projectId>P000755</projectId>
<stage>CON</stage>
<country>Ethiopia</country>
<region>AFRICA</region>
<teamleader>Priya Agarwal</teamleader>
<teamleaderfirstname>Priya</teamleaderfirstname>
<teamleaderlastname>Agarwal</teamleaderlastname>
<actionType>X</actionType>
</projectInfo>
</conceptRevDecisionXml>
XSLT: Transformation
<xsl:transform version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:sapxsl="http://www.sap.com/sapxsl"
>
<xsl:strip-space elements="*"></xsl:strip-space>
<xsl:template match="/">
<asx:abap xmlns:asx="http://www.sap.com/abapxml" version="1.0">
<asx:values>
<PROJID>
<xsl:apply-templates select="//conceptRevDecisionXml"></xsl:apply-templates>
</PROJID>
</asx:values>
</asx:abap>
</xsl:template>
<xsl:template match="conceptRevDecisionXml">
<xsl:for-each select="projectInfo">
<xsl:value-of select="projectId"></xsl:value-of>
<xsl:value-of select="stage"></xsl:value-of>
<xsl:value-of select="country "></xsl:value-of>
<xsl:value-of select="region"></xsl:value-of>
<xsl:value-of select="teamleader"></xsl:value-of>
<xsl:value-of select="teamleaderfirstname"></xsl:value-of>
<xsl:value-of select="teamleaderlastname"></xsl:value-of>
<xsl:value-of select="actionType"></xsl:value-of>
</xsl:for-each>
</xsl:template>
</xsl:transform>
Once I run the program, I get an error saying "ABAP XML Formatting error in XML node".
I'm new to ABAP-XML parsing. Please help me see where I'm going wrong.
Thanks in advance.
Regards,
Priya
Hi Priya,
you can try the below:
1) Create a local ITAB with the structure of the XML,
TYPES: BEGIN OF t_data,
projectid TYPE char30,
stage TYPE char30,
country TYPE char30,
region TYPE char30,
teamleader TYPE char30,
teamleaderfirstname TYPE char30,
teamleaderlastname TYPE char30,
actiontype TYPE char30,
END OF t_data.
2) Create an XSLT prog in "STRANS" with the below code,
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:sap="http://www.sap.com/sapxsl" version="1.0">
<xsl:strip-space elements="*"/>
<xsl:template match="/">
<asx:abap xmlns:asx="http://www.sap.com/abapxml" version="1.0">
<asx:values>
<L_DATA>
<xsl:apply-templates select="//projectInfo"/>
</L_DATA>
</asx:values>
</asx:abap>
</xsl:template>
<xsl:template match="projectInfo">
<conceptRevDecisionXml>
<PROJECTID>
<xsl:value-of select="projectId"/>
</PROJECTID>
<STAGE>
<xsl:value-of select="stage"/>
</STAGE>
<COUNTRY>
<xsl:value-of select="country"/>
</COUNTRY>
<REGION>
<xsl:value-of select="region"/>
</REGION>
<TEAMLEADER>
<xsl:value-of select="teamleader"/>
</TEAMLEADER>
<TEAMLEADERFIRSTNAME>
<xsl:value-of select="teamleaderfirstname"/>
</TEAMLEADERFIRSTNAME>
<TEAMLEADERLASTNAME>
<xsl:value-of select="teamleaderlastname"/>
</TEAMLEADERLASTNAME>
<ACTIONTYPE>
<xsl:value-of select="actionType"/>
</ACTIONTYPE>
</conceptRevDecisionXml>
</xsl:template>
</xsl:transform>
3) Call the transformation as shown below,
CALL TRANSFORMATION zxslt_project ---> "Name of the XSLT prog created above
SOURCE XML l_xml_str ---> Source XML string
RESULT l_data = l_data. ---> ITAB as in step 1 above
Regards,
Chen -
Apple Maps received a poor performance rating just after the introduction of the iPhone 5. I am running the Google Maps app on the phone. SIRI cannot seem to get me to a specific address. Where does the problem lie? Also, can anyone tell me the hierarchy of use between Apple Maps, SIRI, and Google Maps when the app is on the phone? How do you choose one over the other as the default for maps? Or better still, how do you stop SIRI from using the Apple Maps app when requesting a "go to"?
I placed an address in the CONTACTS list, and when I ask SIRI to "take me there" it found a TOTALLY different location in the metro area with the same street name. I had included the address, the quadrant (NE), and the ZIP code in the CONTACTS entry. As it turns out, no amount of cancelling the trip or re-entering the address in the CONTACTS list would prevent SIRI from taking me to this bogus location. FINALLY I typed in Northeast instead of NE (NE being the accepted method of defining the USPS location quadrant), cancelled the current map route, and it finally found the correct address. This problem would normally not demand such a response from me, but the address is that of a hospital in the center of town, and this hospital HAS a branch location in a similar part of town (NOT the original address SIRI was trying to take me to). This screw-up could be dangerous, if not catastrophic, for someone looking for a hospital location fast who did not know of these two similar locations. After all, the whole POINT of directions is not just whimsical pastime or convenience. In a pinch people need to rely on this function. OR are my expectations set too high?
How does the iPhone select between one app or the other (Apple Maps or Google Maps) as it relates to SIRI finding and showing a map route?
Why does SIRI return an address that is NOT the correct address nor is the returned location in the requested ZIP code?
Is there a known bug in the CONTACTS list that demands the USPS quadrant ID be spelled out, as opposed to abbreviated, to permit SIRI to do its routing?
Thanks for any clarification on these matters.
SIRI will only use Apple Maps; this cannot be changed. You could try Google voice in the Google app.
-
General poor performance of my Mac mini
I have been using my Mac mini for over a year, and it continues to frustrate me how slow and unresponsive it can be even though I do very little with it, i.e. organise photos and browse the web - usually looking for solutions to the poor performance problem!
I came across this app that has given me lots of information; unfortunately, with my limited IT know-how there is nothing here that gives me any clues as to the cause of the problem. I guess the red font is not good?
I'm sorry I can't be more specific about the problem, other than to say that iPhoto and Safari, the two programmes I use way more often than any other, keep giving me the beach ball. My internet speed is OK: 25 Mbps down, 4.5 Mbps up.
Is there anything here that stands out as being wrong? If so, any suggestions/instructions for what I should do?
P.S. I had to quit Safari while writing this and have a report for it, if that helps.
Any help much appreciated.
Problem description:
My mac is just generally very slow. I only use it to look at the internet and manage photos.
EtreCheck version: 2.1.5 (108)
Report generated 3 January 2015 16:03:59 GMT
Click the [Support] links for help with non-Apple products.
Click the [Details] links for more information about that line.
Click the [Adware] links for help removing adware.
Hardware Information: ℹ️
Mac mini (Late 2012) (Verified)
Mac mini - model: Macmini6,2
1 2.3 GHz Intel Core i7 CPU: 4-core
4 GB RAM Upgradeable
BANK 0/DIMM0
2 GB DDR3 1600 MHz ok
BANK 1/DIMM0
2 GB DDR3 1600 MHz ok
Bluetooth: Good - Handoff/Airdrop2 supported
Wireless: en1: 802.11 a/b/g/n
Video Information: ℹ️
Intel HD Graphics 4000
DELL U2412M 1920 x 1200
System Software: ℹ️
OS X 10.10.1 (14B25) - Uptime: 5 days 22:14:3
Disk Information: ℹ️
APPLE HDD HTS541010A9E662 disk0 : (1 TB)
EFI (disk0s1) <not mounted> : 210 MB
Macintosh HD (disk0s2) / : 999.35 GB (759.01 GB free)
Recovery HD (disk0s3) <not mounted> [Recovery]: 650 MB
USB Information: ℹ️
HP Photosmart B110 series
Apple Inc. BRCM20702 Hub
Apple Inc. Bluetooth USB Host Controller
Apple, Inc. IR Receiver
Thunderbolt Information: ℹ️
Apple Inc. thunderbolt_bus
Gatekeeper: ℹ️
Mac App Store and identified developers
Problem System Launch Daemons: ℹ️
[killed] com.apple.AssetCacheLocatorService.plist
[killed] com.apple.coreservices.appleid.passwordcheck.plist
[killed] com.apple.ctkd.plist
[killed] com.apple.wdhelper.plist
[killed] com.apple.xpc.smd.plist
5 processes killed due to memory pressure
Launch Daemons: ℹ️
[loaded] com.adobe.fpsaud.plist [Support]
User Login Items: ℹ️
iTunesHelper Application (/Applications/iTunes.app/Contents/MacOS/iTunesHelper.app)
Dropbox Application (/Applications/Dropbox.app)
Wondershare Helper Compact Application (/Users/[redacted]/Library/Application Support/Helper/Wondershare Helper Compact.app)
Internet Plug-ins: ℹ️
Silverlight: Version: 5.1.30514.0 - SDK 10.6 [Support]
FlashPlayer-10.6: Version: 16.0.0.235 - SDK 10.6 [Support]
Flash Player: Version: 16.0.0.235 - SDK 10.6 [Support]
QuickTime Plugin: Version: 7.7.3
Unity Web Player: Version: UnityPlayer version 4.5.5f1 - SDK 10.6 [Support]
Default Browser: Version: 600 - SDK 10.10
3rd Party Preference Panes: ℹ️
Flash Player [Support]
Time Machine: ℹ️
Auto backup: YES
Volumes being backed up:
Macintosh HD: Disk size: 999.35 GB Disk used: 240.33 GB
Destinations:
Time Capsule [Network]
Total size: 2.00 TB
Total number of backups: 56
Oldest backup: 2014-09-30 04:29:13 +0000
Last backup: 2015-01-03 15:37:35 +0000
Size of backup disk: Adequate
Backup size 2.00 TB > (Disk used 240.33 GB X 3)
Top Processes by CPU: ℹ️
110% com.apple.WebKit.Plugin.64
6% WindowServer
1% Activity Monitor
1% coreaudiod
1% sysmond
Top Processes by Memory: ℹ️
1.11 GB com.apple.WebKit.Plugin.64
73 MB iTunes
49 MB com.apple.WebKit.WebContent
39 MB mds
39 MB WindowServer
Virtual Memory Information: ℹ️
68 MB Free RAM
1.07 GB Active RAM
1.02 GB Inactive RAM
893 MB Wired RAM
63.93 GB Page-ins
2.15 GB Page-outs
Diagnostics Information: ℹ️
Jan 1, 2015, 11:43:39 AM /Library/Logs/DiagnosticReports/com.apple.WebKit.Plugin.64_2015-01-01-114339_[redacted].cpu_resource.diag [Details]
Jan 1, 2015, 11:32:46 AM /Library/Logs/DiagnosticReports/WindowServer_2015-01-01-113246_[redacted].crash
Jan 1, 2015, 11:00:30 AM /Library/Logs/DiagnosticReports/com.apple.WebKit.Plugin.64_2015-01-01-110030_[redacted].cpu_resource.diag [Details]
Jan 1, 2015, 10:19:03 AM /Library/Logs/DiagnosticReports/iBooks_2015-01-01-101903_[redacted].cpu_resource.diag [Details]
Hi Linc Davis
Thank you for taking the time to respond.
I got this information at a time when I was opening up iPhoto and then going to the iCloud folder. To be honest, it wasn't the most disastrous of events: I did get a few beachballs, and when I opened a shared folder the photos weren't loaded as normal. As I get more lengthy delays in carrying out any activities I will post back the console results.
Just as a bit more info: I don't consider my problem a Yosemite-update problem; if anything, performance has improved since updating from Mavericks.
Do you have any comments about upgrading RAM from the 4GB I have?
Should I act on any of the messages in red from the EtreCheck report?
Many Thanks
Duncan
03/01/2015 19:30:18.331 com.apple.iCloudHelper[12481]: objc[12481]: Class FALogging is implemented in both /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/FamilyCircl e and /System/Library/PrivateFrameworks/FamilyNotification.framework/Versions/A/Famil yNotification. One of the two will be used. Which one is undefined.
03/01/2015 19:30:18.391 com.apple.xpc.launchd[1]: (com.apple.imfoundation.IMRemoteURLConnectionAgent) The _DirtyJetsamMemoryLimit key is not available on this platform.
03/01/2015 19:30:19.162 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:30:19.162 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:30:19.163 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:30:19.163 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:30:19.186 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:30:19.187 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:30:25.446 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 10.70 seconds (server forcibly re-enabled them after 1.00 seconds)
03/01/2015 19:30:33.548 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:30:33.919 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:30:34.077 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:30:34.605 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:30:34.774 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:30:34.853 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:30:47.562 pkd[12123]: FCIsAppAllowedToLaunchExt [343] -- *** _FCMIGAppCanLaunch timed out. Returning false.
03/01/2015 19:31:10.695 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:31:10.807 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:31:16.961 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:31:18.514 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:31:18.593 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:31:22.538 discoveryd[49]: Basic Sockets SetDelegatePID() failed for PID[3491] errno[3] result[-1]
03/01/2015 19:31:27.912 mds[32]: (DiskStore.Normal:2376) 6052001 1.780097
03/01/2015 19:31:56.831 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:31:57.224 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:31:57.281 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:31:57.405 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:31:58.507 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:31:58.541 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:31:59.160 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:32:09.891 com.apple.InputMethodKit.UserDictionary[12456]: -[PFUbiquitySetupAssistant canReadFromUbiquityRootLocation:](1492): CoreData: Ubiquity: Error attempting to read ubiquity root url: file:///Users/Duncan/Library/Mobile%20Documents/com~apple~TextInput/Dictionarie s/.
Error: Error Domain=NSCocoaErrorDomain Code=134323 "The operation couldn’t be completed. (Cocoa error 134323.)" UserInfo=0x7f90e152a380 {NSAffectedObjectsErrorKey=<PFUbiquityLocation: 0x7f90e152a2e0>: /Users/Duncan/Library/Mobile Documents/com~apple~TextInput}
userInfo: {
NSAffectedObjectsErrorKey = "<PFUbiquityLocation: 0x7f90e152a2e0>: /Users/Duncan/Library/Mobile Documents/com~apple~TextInput";
03/01/2015 19:32:12.862 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:32:20.818 com.apple.SecurityServer[53]: Killing auth hosts
03/01/2015 19:32:20.819 com.apple.SecurityServer[53]: Session 100281 destroyed
03/01/2015 19:32:20.974 com.apple.SecurityServer[53]: Session 100577 created
03/01/2015 19:32:34.833 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:32:35.767 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:32:35.789 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:32:35.980 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:32:36.273 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:32:36.487 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:32:47.035 WindowServer[7810]: disable_update_timeout: UI updates were forcibly disabled by application "iPhoto" for over 1.00 seconds. Server has re-enabled them.
03/01/2015 19:32:50.555 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 4.52 seconds (server forcibly re-enabled them after 1.00 seconds)
03/01/2015 19:32:51.578 WindowServer[7810]: disable_update_timeout: UI updates were forcibly disabled by application "iPhoto" for over 1.00 seconds. Server has re-enabled them.
03/01/2015 19:32:52.745 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 2.17 seconds (server forcibly re-enabled them after 1.00 seconds)
03/01/2015 19:32:53.098 mds[32]: (DiskStore.Normal:2376) 6052001 1.279589
03/01/2015 19:33:03.332 BezelServices 245.23[7816]: ASSERTION FAILED: dvcAddrRef != ((void *)0) -[DriverServices getDeviceAddress:] line: 2602
03/01/2015 19:33:03.332 BezelServices 245.23[7816]: ASSERTION FAILED: dvcAddrRef != ((void *)0) -[DriverServices getDeviceAddress:] line: 2602
03/01/2015 19:33:11.961 mds[32]: (DiskStore.Normal:2376) 6052001 1.939804
03/01/2015 19:33:29.541 WindowServer[7810]: disable_update_timeout: UI updates were forcibly disabled by application "iPhoto" for over 1.00 seconds. Server has re-enabled them.
03/01/2015 19:33:32.354 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 3.81 seconds (server forcibly re-enabled them after 1.00 seconds)
03/01/2015 19:33:44.549 discoveryd[49]: Basic DNSResolver dropping message because it doesn't match the one sent Port:53 MsgID:20602
03/01/2015 19:33:54.055 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:34:05.143 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:34:05.143 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:34:05.165 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:34:05.165 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:34:05.165 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:34:05.499 discoveryd[49]: WCFNameResolvesToAddr: called
03/01/2015 19:34:05.499 discoveryd[49]: WCFNameResolvesToAddr: entering
03/01/2015 19:34:07.001 discoveryd[49]: Basic Sockets SetDelegatePID() failed for PID[3491] errno[3] result[-1]
03/01/2015 19:34:13.809 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:34:13.821 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:34:14.124 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:34:14.766 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:34:14.867 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
03/01/2015 19:34:17.769 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:34:18.186 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
03/01/2015 19:34:18.186 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End) -
Poor performance of the BDB cache
I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
Overview
Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
The Database
Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
sequences (maintains record IDs for all other tables)
urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
downloads (p), downloads.values (s), downloads.xfer (s)
agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
search (p), search.values (s), search.hits (s)
users (p), users.values (s), users.hits (s), users.groups.hits (sf)
errors (p), errors.values (s), errors.hits (s)
dhosts (p), dhosts.values (s)
statuscodes (HTTP status codes)
totals.daily (31 days)
totals.hourly (24 hours)
totals (one record)
countries (a couple of hundred countries)
system (one record)
visits.active (active visits - variable length)
downloads.active (active downloads - variable length)
All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
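The primary/secondary layout described above can be pictured with plain in-memory maps. This is only an illustrative sketch (the class and field names are invented, and URL hash collisions are ignored for brevity), not the actual SSW or BDB code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class UrlStore {
    // Plain-map stand-ins for the databases described above:
    //   urls        - primary: record ID -> record
    //   urls.values - secondary: hash of URL -> record ID (value lookups)
    //   urls.hits   - secondary: hits -> record IDs (ordering reports by hits)
    static final class UrlRecord {
        final long id; final String url; final int hits;
        UrlRecord(long id, String url, int hits) { this.id = id; this.url = url; this.hits = hits; }
    }

    private long nextId = 1;  // plays the role of the "sequences" database
    private final Map<Long, UrlRecord> urls = new HashMap<>();
    private final Map<Integer, Long> urlsValues = new HashMap<>();
    private final TreeMap<Integer, List<Long>> urlsHits = new TreeMap<>();

    long put(String url, int hits) {
        long id = nextId++;
        urls.put(id, new UrlRecord(id, url, hits));
        urlsValues.put(url.hashCode(), id);  // collisions ignored for brevity
        urlsHits.computeIfAbsent(hits, k -> new ArrayList<>()).add(id);
        return id;
    }

    // Look a URL up by value through the hash index, the way urls.values is used.
    UrlRecord findByUrl(String url) {
        Long id = urlsValues.get(url.hashCode());
        return id == null ? null : urls.get(id);
    }

    // Fetch the most-hit URL through the hits index, the way urls.hits is used.
    UrlRecord top() {
        return urls.get(urlsHits.lastEntry().getValue().get(0));
    }

    public static void main(String[] args) {
        UrlStore s = new UrlStore();
        s.put("/index.html", 42);
        s.put("/about.html", 7);
        System.out.println(s.top().url + " " + s.findByUrl("/about.html").hits);  // prints /index.html 7
    }
}
```

In BDB the secondary databases are kept consistent automatically once attached via Db::associate; the sketch just shows what each index is for, and hints at why every put touches several structures.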
Database Size
One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
urls (p):
8192 Underlying database page size
2031 Overflow key/data size
1471636 Number of unique keys in the tree
1471636 Number of data items in the tree
193 Number of tree internal pages
577738 Number of bytes free in tree internal pages (63% ff)
55312 Number of tree leaf pages
145M Number of bytes free in tree leaf pages (67% ff)
2620 Number of tree overflow pages
16M Number of bytes free in tree overflow pages (25% ff)
urls.hits (s)
8192 Underlying database page size
2031 Overflow key/data size
2 Number of levels in the tree
823 Number of unique keys in the tree
1471636 Number of data items in the tree
31 Number of tree internal pages
201970 Number of bytes free in tree internal pages (20% ff)
45 Number of tree leaf pages
243550 Number of bytes free in tree leaf pages (33% ff)
2814 Number of tree duplicate pages
8360024 Number of bytes free in tree duplicate pages (63% ff)
0 Number of tree overflow pages
The Testbed
I'm running all these tests using the latest BDB (v4.6) built from the source on Win2K3 server (release version). The test machine is 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like described before at a speed of 20K records/sec.
BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
I also used a code profiler to analyze SSW and BDB performance.
The Problem
Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
Overall, the 20K rec/sec quoted above drop down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
The Tests
SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
In memory mode, the entire BDB is dumped to disk at the end of the run. First, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see OS cache left alone, but once BDB cache was filed up, processing speed was as good as stopped.
Then I flipped options and used DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance and even though OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options lead to never-ending tests.
In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
Some of the other things I tried/observed:
* I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
* I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
* I experimented with page size, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
* The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
* I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.
I have been able to improve processing speed up to
6-8 times with these two techniques:
1. A separate trickle thread was created that would
periodically call DbEnv::memp_trickle. This works
especially good on multicore machines, but also
speeds things up a bit on single CPU boxes. This
alone improved speed from 2K rec/sec to about 4K
rec/sec.
Hello Stone,
I am facing a similar problem, and I too hope to resolve the same with memp_trickle. I had these queries.
1. What was the % of clean pages that you specified?
2. At what interval were you calling this thread to invoke memp_trickle?
This would give me a rough idea about how to tune my app. Would really appreciate it if you can answer these queries.
Regards,
Nishith.
>
2. Maintaining multiple secondary databases in real
time proved to be the bottleneck. The code was
changed to create secondary databases at the end of
the run (calling Db::associate with the DB_CREATE
flag), right before the reports are generated, which
use these secondary databases. This improved speed
from 4K rec/sec to 14K rec/sec.