Performance question on small XML content but with large volume

Hi all,
I am new to Berkeley DB XML.
I have the following simple XML content:
<s:scxml xmlns:s="http://www.w3.org/2005/07/scxml">
<s:state id="a"/>
<s:state id="b"/>
<s:state id="c"/>
</s:scxml>
Each document is about 1.5 KB, but the total number of such documents is large (5 million+ records).
This is a typical query:
query 'count(collection("test.dbxml")/s:scxml/s:state[@id="a"]/following-sibling::s:state[@id="e"])'
where the id attribute is used heavily.
I've tested with about 10,000 records and the following indexes:
Index: edge-attribute-equality-string for node {}:id
Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
Index: edge-element-presence-none for node {}:scxml
Index: edge-element-presence-none for node {}:state
but the query took just under one minute to complete. Is this the expected performance? It seems slow. Is there any way to speed it up?
In addition, the total size of the XML content is about 12 MB, but ~100 MB of data is generated in the log.xxxxxxxxxx files. Is this expected?
Thanks.

Hi Ron,
Yes, I noticed the URI issue after sending the post and changed the indexes to:
dbxml> listindex
Default Index: none
Index: edge-attribute-equality-string for node {http://www.w3.org/2005/07/scxml}:id
Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
Index: edge-element-presence-none for node {http://www.w3.org/2005/07/scxml}:scxml
Index: edge-element-presence-none for node {http://www.w3.org/2005/07/scxml}:state
5 indexes found.
I added more records (total ~30,000), but the query still took about 1 minute 20 seconds to run. Here is the query plan:
dbxml> queryplan 'count(collection("test.dbxml")/s:scxml/s:state[@id="start"]/following-sibling::s:state[@id="TryToTransfer"])'
<XQuery>
  <Function name="{http://www.w3.org/2005/xpath-functions}:count">
    <DocumentOrder>
      <DbXmlNav>
        <QueryPlanFunction result="collection" container="test.dbxml">
          <OQPlan>n(P(edge-element-presence-none,=,root:http://www.sleepycat.com/2002/dbxml.scxml:http://www.w3.org/2005/07/scxml),P(edge-element-presence-none,=,scxml:http://www.w3.org/2005/07/scxml.state:http://www.w3.org/2005/07/scxml))</OQPlan>
        </QueryPlanFunction>
        <DbXmlStep axis="child" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="scxml" nodeType="element"/>
        <DbXmlStep axis="child" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="state" nodeType="element"/>
        <DbXmlFilter>
          <DbXmlCompare name="equal" join="attribute" name="id" nodeType="attribute">
            <Sequence>
              <AnyAtomicTypeConstructor value="start" typeuri="http://www.w3.org/2001/XMLSchema" typename="string"/>
            </Sequence>
          </DbXmlCompare>
        </DbXmlFilter>
        <DbXmlStep axis="following-sibling" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="state" nodeType="element"/>
        <DbXmlFilter>
          <DbXmlCompare name="equal" join="attribute" name="id" nodeType="attribute">
            <Sequence>
              <AnyAtomicTypeConstructor value="TryToTransfer" typeuri="http://www.w3.org/2001/XMLSchema" typename="string"/>
            </Sequence>
          </DbXmlCompare>
        </DbXmlFilter>
      </DbXmlNav>
    </DocumentOrder>
  </Function>
</XQuery>
I noticed the indexes with a URI were not used, so I added back the indexes without a URI:
dbxml> listindex
Default Index: none
Index: edge-attribute-equality-string for node {}:id
Index: edge-attribute-equality-string for node {http://www.w3.org/2005/07/scxml}:id
Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
Index: edge-element-presence-none for node {}:scxml
Index: edge-element-presence-none for node {http://www.w3.org/2005/07/scxml}:scxml
Index: edge-element-presence-none for node {}:state
Index: edge-element-presence-none for node {http://www.w3.org/2005/07/scxml}:state
8 indexes found.
Here is the query plan with the above indexes:
dbxml> queryplan 'count(collection("test.dbxml")/s:scxml/s:state[@id="start"]/following-sibling::s:state[@id="TryToTransfer"])'
<XQuery>
  <Function name="{http://www.w3.org/2005/xpath-functions}:count">
    <DocumentOrder>
      <DbXmlNav>
        <QueryPlanFunction result="collection" container="test.dbxml">
          <OQPlan>n(P(edge-element-presence-none,=,root:http://www.sleepycat.com/2002/dbxml.scxml:http://www.w3.org/2005/07/scxml),P(edge-element-presence-none,=,scxml:http://www.w3.org/2005/07/scxml.state:http://www.w3.org/2005/07/scxml),V(edge-attribute-equality-string,state:http://www.w3.org/2005/07/scxml.@id,=,'start'),V(edge-attribute-equality-string,state:http://www.w3.org/2005/07/scxml.@id,=,'TryToTransfer'))</OQPlan>
        </QueryPlanFunction>
        <DbXmlStep axis="child" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="scxml" nodeType="element"/>
        <DbXmlStep axis="child" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="state" nodeType="element"/>
        <DbXmlFilter>
          <DbXmlCompare name="equal" join="attribute" name="id" nodeType="attribute">
            <Sequence>
              <AnyAtomicTypeConstructor value="start" typeuri="http://www.w3.org/2001/XMLSchema" typename="string"/>
            </Sequence>
          </DbXmlCompare>
        </DbXmlFilter>
        <DbXmlStep axis="following-sibling" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="state" nodeType="element"/>
        <DbXmlFilter>
          <DbXmlCompare name="equal" join="attribute" name="id" nodeType="attribute">
            <Sequence>
              <AnyAtomicTypeConstructor value="TryToTransfer" typeuri="http://www.w3.org/2001/XMLSchema" typename="string"/>
            </Sequence>
          </DbXmlCompare>
        </DbXmlFilter>
      </DbXmlNav>
    </DocumentOrder>
  </Function>
</XQuery>
The indexes are used in this case, and the execution time dropped to about 40 seconds. I set the namespace with setNamespace when the session is created. Is that why the indexes without a URI are used?
Any other hints for improving performance?
Thanks,
Ken
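
A note on the namespace behaviour seen above: the id attributes in the sample documents are unprefixed and therefore in no namespace, so only the {}:id attribute index can ever match @id, while the s:scxml and s:state elements are in the SCXML namespace and need the URI-qualified element indexes. setNamespace only binds the s prefix for use in queries; it does not change which namespace the stored nodes are in. As an illustration, a minimal dbxml shell script for that index set might look like the following (the openContainer/setNamespace/addIndex/# comment syntax is assumed from the 2.x shell, so verify it with "help addIndex" before relying on it):

# Sketch only -- verify the command syntax with "help addIndex" in your shell.
openContainer test.dbxml
# Bind the prefix used in queries; this does not affect the stored nodes.
setNamespace s http://www.w3.org/2005/07/scxml
# The elements are in the SCXML namespace, so their indexes carry that URI.
addIndex http://www.w3.org/2005/07/scxml scxml edge-element-presence-none
addIndex http://www.w3.org/2005/07/scxml state edge-element-presence-none
# The unprefixed id attributes are in no namespace, so use an empty URI here.
addIndex "" id edge-attribute-equality-string
listindex

If that matches your shell, the unused {http://www.w3.org/2005/07/scxml}:id index could then be dropped again with the corresponding deleteIndex command.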

Similar Messages

  • To create 3 diff files with same content but with diff names in same target

    Hi SapAll.
    I have a requirement where PI needs to create 3 different files with the same content but with different names, under the same target, from a single IDoc.
    It's an IDoc to 3-file interface.
    Can anybody suggest different solutions for this without the use of any script execution?
    I will be waiting for a response.
    regards.
    Varma

    > I want to use only one communication channel to produce 3 different file names with the same content, so here I should use only one message mapping in 3 operation mappings.
    It is not possible to produce 3 different file names with a single CC. You have to use 3 different CCs, unless you are going to use some other trick, e.g. a script to rename the file.
    As I suggested in my previous reply, use multi-mapping, or create 3 different Interface Mappings (using the same MM).
    Note: you have to create 3 different Inbound Message Interfaces (you can use the same Inbound Message Type); otherwise, creating the 3 Interface Determinations won't be allowed because of the same Outbound & Inbound Message Interface. It will simply say the interface already exists.
    So just use multi-mapping, which is the best solution in my opinion, because the benefits of using multi-mapping are:
    1. You have to create only a single Message Mapping
    2. A single Interface Mapping
    3. A single Receiver Determination
    4. A single Interface Determination
    5. 3 Receiver CCs (you have to use 3 in any case)
    6. Performance-wise it is good (read the blog's last 2 paragraphs)
    7. Last but not least, the scenario is easy to maintain.

  • To create multiple files with same content but with different names

    Hi SapAll.
    Here I have a tricky situation in an IDoc-to-file scenario.
    In my IDoc-to-file interface, there is a requirement to create multiple files with different file names but the same content, based on one IDoc segment.
    That means there will be one Z-segment with two fields in the IDoc, where one field's content refers to the name the file name should start with. So if this segment is repeated 3 times, PI should create 3 files in the same directory with the same content but with different file names (taken from that field).
    For now I am using one receiver file communication channel.
    Can anybody give me a quick answer?
    regards.
    Varma

    What do you mean by different names?
    When you make the proper settings in the Receiver Channel for how to create the filename (what to append), such as adding a timestamp, counter, date, or message ID, even in this case you will have files with different names, and that too from the same File channel.
    You can perform multi-mapping in XI/PI and then your File channel will place the files in the target folder with the relevant names. You cannot use Dynamic Configuration with multi-mapping!
    If you intend to use different File channels, then do the configuration as required (normal); even there you can follow multi-mapping.
    Do not use a BPM!
    Regards,
    Abhishek.

  • Ability to process several raw files with the same content but with different exposure into the single picture

    Can you add to Lightroom the ability to process several raw files with the same content but with different exposures into a single picture?
    The base raw files could come from exposure bracketing during shooting, for example.
    The goal is to get maximum detail in darks and lights (if we use "lights recovery" or "fill light" we lose quality, because a single raw file simply doesn't have all the required information).
    A similar (but not the same, only in idea) thing is High Dynamic Range photography in Adobe Photoshop.
    Thank you

    The plugin LR/Enfuse does this already. And of course Photomatix has a plugin available for Lightroom. This essentially amounts to pixel editing, which is beyond the range of Lightroom's metadata editing.

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with the issues, particularly when a large volume of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues.  The main point is that there will be no shortcut for major schema and index changes.  You will need at least 120% free space to create a clustered index and facilitate
    major schema changes.
    I suggest an incremental approach to address your biggest pain points.  You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process which require full scans of the 650 million row table.  Perhaps
    some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using?  You'll have more options with Enterprise (partitioning, row/page compression). 
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column.  Then create a new table (using SELECT INTO) that has strongly
    typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one.  You can follow up later to address column data corrections and/or transformations.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • In OSB , xquery issue with large volume data

    Hi,
    I am facing a problem with an XQuery transformation in OSB.
    There is one XQuery transformation where I compare all the records and, if there are similar records, I club them under the same first node.
    I am reading the input file from the FTP process. This works perfectly for small input data. It also works when the input data is large, but it takes a huge amount of time, the file moves to the error directory, and I see duplicate records created for the same input data. I don't see anything related to this file in the error log or the normal log.
    How can I check what exactly is causing the issue here, why the file is moving to the error directory, and why I am getting duplicate data for large input (approx. 1 GB)?
    My XQuery is something like the one below.
    <InputParameters>{
      for $choice in $inputParameters1/choice
      let $withSamePrimaryID := $inputParameters1/choice[PRIMARYID eq $choice/PRIMARYID]
      let $withSamePrimaryID8 := $inputParameters1/choice[FIRSTNAME eq $choice/FIRSTNAME]
      return
        <choice>{
          if (data($withSamePrimaryID[1]/ClaimID) = data($withSamePrimaryID8[1]/ClaimID)) then
            let $claimID := $withSamePrimaryID[1]/ClaimID
            return <ClaimID>{ $claimID }</ClaimID>
          else
            <ClaimID>{ data($choice/ClaimID) }</ClaimID>
        }</choice>
    }</InputParameters>
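    One common way to reduce the cost of this pattern is to build each output record once per key instead of re-scanning $inputParameters1 for every <choice>. Below is a simplified sketch of that idea using only the names visible in the fragment above (choice, PRIMARYID, ClaimID) and ignoring the FIRSTNAME comparison, so treat it as an illustration of the technique rather than a drop-in replacement:
    <InputParameters>{
      for $id in distinct-values($inputParameters1/choice/PRIMARYID)
      let $group := $inputParameters1/choice[PRIMARYID eq $id]
      return
        <choice>{
          (: take ClaimID from the first record sharing this PRIMARYID;
             the other clubbed fields can be taken from $group the same way :)
          <ClaimID>{ data($group[1]/ClaimID) }</ClaimID>
        }</choice>
    }</InputParameters>
    Even with fewer scans, a ~1 GB input document still has to be held in memory by OSB, so the heap sizing point in the reply below still applies.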

    Hi,
    I understand your use case is:
    a) read the file (from an FTP location; a .txt file, hopefully)
    b) process the file (your XQuery, although I won't get into the details)
    c) do something with the file (send it to a backend system via a Business Service?)
    Also, as noted, large files take a long time to be processed. This depends on the memory/heap assigned to your JVM.
    I can say that is expected behaviour.
    On the other point, the file being moved to the error directory could be the error handler doing its job (if you have one).
    If there are no error handlers, look at the timeout and error condition scenarios on your service.
    HTH

  • I need to be able to edit Japanese content, but with English UI

    I am a native English speaker, working in Japan and quite often I need to be able to create and edit Japanese content as well as English. My Japanese isn't quite up to navigating the Japanese language version of InDesign (and Illustrator).
    In the previous version of CC I could fool InDesign and Illustrator into giving me the extra Japanese settings (Kinsoku etc.) in English by installing as Japanese language in the CC settings, but having my Mac OS language set to English. This worked fine. But now I've tried the same with the new CC 2014 apps, and if I install the CC 2014 apps with the CC language set to Japanese, the applications have their UIs set to Japanese (Photoshop actually gives me the option to change to English, whereas Illustrator and InDesign do not).
    Is there any way I can use the English language InDesign and Illustrator yet have those settings as before? There must be plenty of people working on multilingual documents out there.
    Am I missing an option somewhere, or a similar trick?
    Wouldn't it be better to have special language formatting available as options in all localised versions?

    Thanks again, but I can't see any way to do it. If I'm changing languages in CC it installs the version with the language I have selected, and changing the language doesn't change anything, or asks me to update so that it can install that language.
    The tips on the page you linked to are actually how I originally got it working in the previous CC version, and it's been working fine. But it seems things have changed in CC 2014. It just follows the language you have selected when you install these apps, not the OS language.
    Paying an extra $180 on top of an already expensive application for Harb's World Tools Pro just seems like a kick in the pants, given that the abilities are already there in the application; they're just hidden for some unknown reason. I've sent Adobe a feature request. And until there's a solution I guess I'll be stuck on the old version. It's not the end of the world, I guess.

  • Dreaded Question Mark... But with a Catch!

    The other night my mid 2009 MBP froze. I got the pinwheel and it was unresponsive. I gave it 10 minutes to come to, but it never did. I held the power button to turn it off, and then turned it back on. After a couple of minutes of blank white screen, I got the blinking question mark/folder. After searching the forums, I tried everything. CMD+R, the option key, etc. Nothing worked. The biiiig problem was that my last TM backup was corrupted, and I couldn't be bothered to spend the time doing a fresh backup. Well I pulled my HD, put it in an external enclosure and booted from it. Guess what? It worked perfectly. I was able to do a fresh TM backup, and used the computer this way to surf the internet today. I thought maybe it was some kind of fluke and put the HD back in my MBP. I checked the sata cable where it meets the motherboard, and it was fine. I booted it up, and I'm back to the question mark. Why would my HD work as an external, but not as an internal?

    Kris,
    Sorry. Someone else asked this question with almost exactly the same wording, including the "with a catch." Shop around a little; you should be able to find it for less than $60.
    http://www.powerbookmedic.com/Hard-Drive-Cable-w--IR-Sensor-for-MacBook-Pro-13-Unibody-p-17336.html
    Make sure this is right for your machine. I just looked for a 13" mid-2009.

  • Time Machine: Can I use a smaller external hard drive with larger internal?

    Can I use a 250 GB external hard drive with a 500 GB, mostly unfilled, internal drive with Time Machine? Or will Time Machine require a 500 GB drive? I don't plan on filling the internal drive for a long time and don't want to buy a new external drive right now.

    Yes, but you are very likely to get in trouble very quickly.
    The problem is that TimeMachine saves multiple versions of any file modified, and if that file happens to be large, you can quickly fill up your TimeMachine drive such that it is throwing away older versions faster than you would desire.
    Also if your boot drive's storage usage gets even close to the 250GB external drive's capacity, TimeMachine is likely to stop working.
    If possible, I would suggest an external drive that is twice as large as your boot drive, or at least 1.5 times larger.
    I guess you could repartition your boot drive so it is smaller than your external disk so you would be less likely to use more space than could fit on the external.
    Personally, my opinion about backups is that much of my data is impossible to replace (family pictures, etc.), and spending money on backup hardware is a small price to pay for securing those memories. I also try to back it up in more than one location and in more than one way.

  • How to improve performance(insert,delete and search) of table with large data.

    Hi,
    I have a table which is used for maintaining history; it holds a large amount of data that keeps increasing or decreasing based on the business rules.
    I am getting performance issues with this table when searching for records or inserting new data into it. I have already put an index on this table, but I am still facing a lot of performance issues.
    Also, we usually insert data into this table in bulk.
    Is there any solution to achieve this? Any solutions are greatly appreciated.
    Thanks in Advance!

    Please do not duplicate your posts across forums.  It's considered bad practice and rude, as people will not know what answers you've already received and may end up duplicating the effort.
    Locking this thread - answer on other thread please

  • Bad Performance/OutOfMemory Error in CMP Entity Bean with Large DB

    Hello:
    I have a CMP entity bean deployed on WLS 7.0.
    The entity bean maps to a table that has 97,480 records.
    It has a finder: findAll() -- SELECT OBJECT(e) FROM Equipment e
    I have a JSP client that invokes the findALL()
    The performance is very poor: ~150 seconds just to perform the findAll() (benchmarked from within the JSP code).
    If more than one simultaneous call is made, I get an OutOfMemory error.
    WLS is started with max memory of 512MB
    EJB is deployed with <max-beans-in-cache>100000</max-beans-in-cache>
    (without max-beans-in-cache directive the performance is worse)
    Is there any documentation available to help us in deploying CMP entity beans
    with a very large number of records (instances)?
    Any help is greatly appreciated.
    Regards
    Rajan

    Hi
    You should use a select method; it supports cursors.
    Or a combination of a home method and a select method.
    Regards
    Thomas

  • Email and XML **Figured out with another post's suggestion**

    Hello all,
    First let me say thanks in advance for anyone taking the time to read this and post a response. A double big thanks for a response that works.
    I have created a form that, once filled in and saved, I want to be able to mail to a group of people. I have created this group and saved them in a unique email. I am able to send the file to the group, and they can see my file is there, but the trouble is that it is saved and sent to them as XML, not PDF. They are then unable to read or open it, despite having Adobe 7 or greater. I feel this is a simple oversight on my part, and whatever is under my nose I miss. So I would appreciate any help on this matter.
    I have an additional question attached to the subject of email. I want to know if it is possible to have a form be routed once a group fills in their appropriate section, so A fills it out and sends it to B, and so on and so forth, so that by the time it gets to G it is complete and can be finalized. Again, if this makes your eyes roll in "OMG, what a noob," understand that the only programming I do is my TiVo and my car radio, and that sometimes foils me too ;)

    Please, can someone help me?
    I am converting a Word document to a PDF. I added a reference to Acrobat.dll to a C# project and then wrote the following method. It works fine, but I am not sure whether it would work with large-volume data. Is this code thread-safe? I tried this code and it runs very slowly.
    private void WordToPdf(string inputDocumentName, string outputPDFName)
    {
        // Create an Acrobat Application object
        Type AcrobatAppType = Type.GetTypeFromProgID("AcroExch.App");
        Acrobat.CAcroApp oAdobeApp = (Acrobat.CAcroApp)Activator.CreateInstance(AcrobatAppType);
        // Create an Acrobat AV Document object
        Type AcrobatAvType = Type.GetTypeFromProgID("AcroExch.AVDoc");
        Acrobat.CAcroAVDoc oAdobeAVDoc = (Acrobat.CAcroAVDoc)Activator.CreateInstance(AcrobatAvType);
        // Create an Acrobat PD Document object
        Type AcrobatPDDocType = Type.GetTypeFromProgID("AcroExch.PDDoc");
        Acrobat.CAcroPDDoc oAdobePDDoc = (Acrobat.CAcroPDDoc)Activator.CreateInstance(AcrobatPDDocType);
        if (oAdobeAVDoc.Open(inputDocumentName, ""))
        {
            // Save the opened document to the path requested by the caller
            oAdobePDDoc = (Acrobat.CAcroPDDoc)oAdobeAVDoc.GetPDDoc();
            oAdobePDDoc.Save(1, outputPDFName);
            oAdobePDDoc.Close();
            oAdobeAVDoc.Close(1);
        }
        oAdobeApp.CloseAllDocs();
        oAdobeApp.Exit();
    }

  • In IE if I set the zoom at, let's say, 150%, it will stay that way even if I shut down. But with Firefox, it goes back to small print every time I view a new page. Is there a way to set the zoom to stay at a larger size? Thanks, EH

    If you need to adjust the font size on websites then look at:
    * Default FullZoom Level - https://addons.mozilla.org/firefox/addon/6965
    * NoSquint - https://addons.mozilla.org/firefox/addon/2592
    Your above posted system details show outdated plugin(s) with known security and stability risks.
    *Shockwave Flash 10.0 r45
    Update the Flash plugin to the latest version.
    *http://www.adobe.com/software/flash/about/

  • Crosstab with multiple rowset xml content

    I have multiple rowsets (XML files) from which I want to calculate subtotals.  Each XML data set has identical columns.  If I union all the files together, the XML content contains multiple rowsets and the Crosstab function does not give me a summed value for each column; instead it creates a column for each column in each rowset.
    The Normalize and Rowset Combiner transforms both combine rowsets by appending the second dataset as new columns; is there any way to append the data as new rows instead?
    Because my final file is going to have something over 30,000 rows (14 MB), I am reluctant to use a repeater on each row of each file to combine it into a new rowset.  Is there an efficient way to handle this calculation?
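    As a generic illustration of the row-wise append described above (outside MII's own actions), this is roughly what the combined document could look like if it were built with a small XQuery. The Rowsets/Rowset/Columns/Row names and the $docs variable are assumed here for the sake of the example, not taken from the thread:
    (: Assumed layout: each input document is
       <Rowsets><Rowset><Columns>...</Columns><Row>...</Row>...</Rowset></Rowsets>.
       Keep the column definitions from the first document and append every Row. :)
    declare variable $docs as document-node()* external;
    <Rowsets>
      <Rowset>
        { $docs[1]/Rowsets/Rowset/Columns }
        { $docs/Rowsets/Rowset/Row }
      </Rowset>
    </Rowsets>
    Within MII itself, the reply below points to the Join action (combined with other actions) as the easier-to-maintain route to an equivalent result.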

    Sue,
    I believe that we are off on the wrong foot here...all things aside...
    Join will work for your scenario when combined with other actions for your calculation, and it will be easier to maintain than a stylesheet, which will be beneficial to you in the end.  Please do not be too quick to judge the solution.
    As for the error message, that's one for support; what was the error in the logs?
    -Sam

  • How to display string with XML content in 4.6?

    Hi,
    I'd like to know how to display a string with XML content in it in 4.6.
    4.6 does not have the method parse_string.
    And an example like this is not helpful:
      DATA: lo_mxml    TYPE REF TO cl_xml_document.
      CREATE OBJECT lo_mxml.
      CALL METHOD lo_mxml->parse_string
        EXPORTING
          stream = gv_xml_string.
      CALL METHOD lo_mxml->display.
    Thank you.

    Hi,
    May be you can use fm SAP_CONVERT_TO_XML_FORMAT. But it have some issues with memory usage, the program consumed tons of memory during convert.
