XMLType's poor performance? (vs. Relational data retrieval)

Hi,
I'm currently doing a proof of concept comparing XMLType vs. relational storage.
I have TABLE_A which has the following DDL:
CREATE TABLE "WEBUSER"."TABLE_A"
( "HEADER_NUM" NUMBER(10,0),
"DETAIL_NUM" NUMBER(10,0),
"XML_RESPONSE" "SYS"."XMLTYPE"
) XMLTYPE COLUMN "XML_RESPONSE" STORE AS RELATIONAL XMLSCHEMA "MySchema.xsd" ELEMENT "test"
I have TABLE_B, which has a regular relational table structure (with each column representing a node in MySchema.xsd). The schema has around 85-100 elements.
Now I'm running queries as follows:
A) TO RETRIEVE XMLTYPE COLUMN
     StringBuilder queryStr = new StringBuilder();
     queryStr.append("SELECT X.* FROM TABLE_A X WHERE ROWNUM <= 10000 ORDER BY X.DETAIL_NUM");
     pstmt = conn.prepareStatement(queryStr.toString());
     rs = pstmt.executeQuery();
     int totalRecordsFiltered = 0;
     // just walk the result set without reading any column values
     while (rs.next()) {
         totalRecordsFiltered++;
     }
     System.out.println(totalRecordsFiltered);
B) TO RETRIEVE VALUES FROM RELATIONAL TABLE
     StringBuilder queryStr = new StringBuilder();
     queryStr.append("SELECT X.* FROM TABLE_B X WHERE ROWNUM <= 10000 ORDER BY X.DETAIL_NUM");
     pstmt = conn.prepareStatement(queryStr.toString());
     rs = pstmt.executeQuery();
     int totalRecordsFiltered = 0;
     // just walk the result set without reading any column values
     while (rs.next()) {
         totalRecordsFiltered++;
     }
     System.out.println(totalRecordsFiltered);
When the number of rows in both tables is 10,000:
     - the while loop in (A) completes in about 5 minutes
     - the while loop in (B) completes in about 10 seconds
Note that I'm really not doing anything inside the while loop; I just want to see how long it takes to loop through the result set.
Can you please tell me why we see such a difference between the two types of data storage? Am I doing something wrong in the way the result set is handled?
I'm using Oracle 10g / ojdbc14.jar with the JDBC thin driver / WebSphere Application Server

First, to answer your question...
1. There is no such option as STORE AS RELATIONAL; I assume you meant OBJECT RELATIONAL.
2. We would need to see the XML schema to get an idea of how complex the object model defined by the XML schema is. We would also need to see the instance documents to see how much of that model the instance document(s) are actually populating.
3. Remember that your statement is going to retrieve all of the data (which may require joining multiple tables to get all the content if the XML contains collections; we can't tell without seeing your schema) and then re-assemble the XML from the data stored in the object-relational tables. This is far more expensive than simply fetching a number of columns from one table.
4. Note that in general, if you are interested in fetching all of the XML rather than fragments or scalar values, the binary XML storage model (11g+) is a better choice for XML persistence. The object-relational model is more suited to situations where the access patterns on the XML will be primarily leaf-level selects and updates.
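For reference, on 11g and later the binary XML storage mentioned in point 4 would look roughly like the sketch below (table and column names reused from the original post; this is only a sketch, and it is not available on the poster's 10g system):
CREATE TABLE "WEBUSER"."TABLE_A_BIN"
( "HEADER_NUM"   NUMBER(10,0),
  "DETAIL_NUM"   NUMBER(10,0),
  "XML_RESPONSE" "SYS"."XMLTYPE"
) XMLTYPE COLUMN "XML_RESPONSE" STORE AS SECUREFILE BINARY XML;  -- SECUREFILE assumes an ASSM tablespace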
To answer Dan's comment...
If you have an object-relational storage model and write an XML query using XMLTable to access leaf-level nodes, and the query is rewritten into SQL, it should be (nearly) as fast as a relational query for the same set of data. In some cases the XML can be faster, since it allows us to do master-detail joins and only bring back the master data once for the result set, rather than once for each row in the result set.
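To illustrate the kind of leaf-level access meant here, a query along the following lines is what gets rewritten into plain relational access against the underlying object-relational tables (the element names under "test" are invented, since the schema was not posted):
-- hypothetical leaf-level query; replace detailId/responseCode with real elements from MySchema.xsd
SELECT x.header_num,
       v.detail_id,
       v.response_code
  FROM table_a x,
       XMLTABLE('/test'
                PASSING x.xml_response
                COLUMNS detail_id     NUMBER       PATH 'detailId',
                        response_code VARCHAR2(30) PATH 'responseCode') v
 WHERE x.detail_num <= 10000;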

Similar Messages

  • MacBook Pro poor performance in relation to NVIDIA GeForce 9600M GT

    Hi All who may be having similar issues.
    I am now on my third replacement custom MacBook Pro 2.66GHz with 7200rpm hard drive and 4GB of RAM, and yes, I am so peeved that this has happened again. I am aware of the article in the 'Inquirer' about the 'bad bumps' of the NVIDIA GeForce 9600M GT in these machines and realise it is not all of them, but judging from the various postings and refurbished MacBook Pros... quite a volume!
    First off, I also own an older PowerPC iMac 1.8GHz with 2GB of RAM, and at work I use a 3.0GHz Mac Pro with 6GB of RAM and a 2008 Intel iMac 2.4GHz with 4GB of RAM. I am fully competent and primarily use them for photographic programs such as Capture One, Photoshop, Lightroom and Aperture, and these new MacBook Pro models have failed me in so many ways.
    I have read many threads regarding the possible reasons why these cards are failing or at fault, and am encountering an attitude from Apple that treats this problem as unique to me. It obviously is not, judging from what I find when I google it or go into the discussion rooms.
    the issue?
    make sure you are on the higher performance graphics card.
    simple test.. take a screen grab (of anything in Safari or even of the desktop) and press fn+F11, or however you configure your keypad to go to the desktop. You may notice nothing happens.. try again.. wait. You will see a delay, and the commands follow when the computer catches up.
    If all my other computers can do the easiest of tasks, why can't this one? Seems incredible to me!
    Also, after installing any of the above-mentioned software and beginning to process raw files (in all the programs, to make sure there was no software issue), the computer becomes very hot.. I get beach-balling and regular crashes.. also the fact that my computers at work seem to work amazingly well with the above software versions. To add, all machines have up-to-date Apple software.
    In the first machine I had, it got so bad the Dock disappeared and Spaces crashed.. the trackpad froze and that was it. It took forever to start up and shut down.. it appeared to work on the lower performance setting initially, but over time became buggy and crashed on that side too!
    These last 2 models have behaved exactly the same... all the same custom configurations I have mentioned above. But I recognised the problem straight off and asked Apple for replacements.. I am now on the third and will call them tomorrow with the same issue. I just want a machine that works smoothly and reflects the price I have paid for it.
    Has anyone else experienced this? Any support or advice would be deeply appreciated.. I can't keep getting dodgy Shanghai custom models!!

    I read your thread Nee Kun.. and thanks, yes that is the basics of my problem, but that is just a major signal that something is seriously wrong. The screen grab is the first test to see if a new MacBook Pro may have this underlying issue. What I found with my first 2 models was that this issue then developed into all the other issues I described in my original post, so it's not really the screen grab I am bothered about... however annoying it is.. but the fact that these machines, over time and on the higher performance graphics card, become dysfunctional and crash periodically, say every 20 min of light photography retouching and processing. There is something fundamentally flawed in these overpriced and under-tested machines. I'm not a happy bunny.
    However, I have been in discussions with Apple and they are listening to my issue and we are trying to resolve it, but if not... it will have to be some form of compensation, as I simply cannot do any high-end work on these machines that a 3yr old PowerBook, a 5yr old PowerPC iMac and all the new range of iMacs and Mac Pros I have access to can! It is utterly bewildering!
    hmmmm... will post an update as and when.

  • Performance tuning for data retrieval for PCL4 cluster

    Hi all,
    I am using the PCL4 cluster to read the history of employee data. The function modules used are HR_INFOTYPE_LOG_GET_LIST and HR_INFOTYPE_LOG_GET_DETAIL.
    Currently it is taking a lot of time. Do we have any better methods?
    Thanks in advance.
    Regards,
    tjgupta

    Hi All,
    I am also facing performance problem with PCL4 Audit cluster.
    Please guide me how to do it in an efficient way.
    I am using it as follows:
    SELECT  client relid srtfd srtf2 histo aedtm uname pgmid versn clustr INTO CORRESPONDING FIELDS OF TABLE it_pcl4
    FROM pcl4  WHERE   relid EQ 'LA'
                      AND     srtf2 EQ '00'
                      AND     aedtm IN s_aedtm.
    Where, s_aedtm-low = '20000101' & s_aedtm-high = '99991231'
    Regards

  • Poor performance showing BW data in NW portal

    Hi all,
    we have a problem with showing data from BW in a NetWeaver Portal here.
    There is a table with several different entries displayed on a portal page. When you click one entry some details for this are shown in an area below the table.
    Now the problem is that every time you click an entry it takes around 5 seconds until the details are shown, which is much too long.
    The data to be shown comes from a BW system.
    I know that there are a lot of parameters you can check in BW to optimize performance. But maybe I have a common problem here and one of you can tell me which parameters should be checked first.
    Thanks for all answers in advance!
    Best Regards,
    Torben

    Dear Torben,
    The basic things which we need to check in our BW system is:
    1) Increase the parallel processing during extraction
    2) Selective loading.
    3) Every job will have a priority (A, B, C – A being the highest and C being the lowest); choose this based on your scenario.
    4) Check with the basis team for the sizing of your server
    5) You can increase the number of background processes during data loads. This can be done by converting dialog processes into background processes.
    For this you need Basis input. (This is done in some profile settings that make the system behave differently during loads - something like day mode/night mode.)
    6) There are some maintenance jobs that should run regularly in any SAP box to ensure proper functioning.
    7) Use of start routines is preferred instead of update routines.
    Regards,
    KK.

  • N580GTX Poor Performance

    I recently bought a 580GTX Lightning and out of the box I was experiencing fairly poor performance, especially in DX11 games and benchmarks. This was with the MSI OC clocks (832MHz core etc.). I down-clocked to standard 580 values and performance immediately increased to levels you would expect a 580 card to be capable of. It seemed at the time as if the GPU voltage was set too low...
    In playing around with the card, I flipped the BIOS DIP switch to the LN2 setting and found that the clock settings were (by default) the standard 580GTX settings. Performance was in this case again poor despite the lower clocks. I then increased clocks to the MSI OC values and performance jumped again to where a 580GTX card should be. Odd.
    Next I flipped the BIOS back to the original setting, restarted and left the OC at the MSI values. Performance remains excellent.
    At the moment it seems I get good performance when I am using Afterburner and have the "Apply Overclocking at System Startup" option applied.
    My question is why is this? Will this card only work correctly when used in conjunction with Afterburner? Any thoughts on why this is?
    GPU BIOS: 70.10.17.00.06
    PSU: Silverstone Strider Gold 750W
    Nvidia Drivers: 270.61
    OS: Windows 7 SP1

    Quote
    My question is why is this? Will this card only work correctly when used in conjunction with Afterburner? Any thoughts on why this is?
    AB is just a software tool that allows you to manipulate the clocks and voltages, easily. Nothing more.
    Quote
    At the moment it seems I get good performance when I am using Afterburner and have the "Apply Overclocking at System Startup" option applied.
    That setting will apply an Overclock that you manually set and then saved as a user profile within Afterburner. If you did not save a user profile, then it will be the same settings as what your card's factory clocks are. i.e. it will not apply anything.
    Quote
    I recently bought a 580GTX Lightning and out of the box I was experiencing fairly poor performance especially in DX11 games and benchmarks.
    Poor performance is relative. This needs to be quantified and a measuring standard is needed, as well as comparisons to the same or similar cards to establish a consistent baseline.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
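    Not a definitive answer, but here is a sketch of what joining the code tables at the SQL level could look like (using the RECORDS and CODES tables defined above): shred the repeating elements with XMLTABLE and join CODES as an ordinary relational table, so the optimizer is free to use a hash join. Note this returns relational rows, so the XML would still need to be re-assembled (e.g. with XMLELEMENT/XMLAGG) if the final output must be a document.
    -- sketch only: code lookups as plain SQL joins instead of ora:view() inside the XQuery
    SELECT x.elem_no,
           x.code1, c1.description AS description1,
           x.code2, c2.description AS description2,
           x.code3, c3.description AS description3
      FROM records r,
           XMLTABLE('/Root/Element'
                    PASSING r.xmlrec
                    COLUMNS elem_no FOR ORDINALITY,
                            code1   VARCHAR2(4) PATH 'Subelement1/Code',
                            code2   VARCHAR2(4) PATH 'Subelement2/Code',
                            code3   VARCHAR2(4) PATH 'Subelement3/Code') x,
           codes c1, codes c2, codes c3
     WHERE r.ssn   = '10000'
       AND c1.code = x.code1
       AND c2.code = x.code2
       AND c3.code = x.code3;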

  • Multi server data retrieval performance

    Hi experts,
    I have a question regarding the data retrieval performance (EVDRE) on a multi server installation environment on Microsoft SQL Server 2008.
    We have succesfully migrated Outlooksoft 4.2 SP03 to SAP BPC 7.0 SP07 for a customer. During this project we have also set up a complete new server environment consisting of:
    Development server: dedicated single server, Windows 2003 Standard SP2 32 bit, SQL Server 2008 SP1 with cumulative update package 6, SAP BPC 7.0 SP07, 2 quad core processors, 4 GB RAM
    QA server: dedicated multi servers - 1 database server (SQL/OLAP), Windows 2003 Standard SP2 64 bit, SQL Server 2008 SP1 with cumulative update package 6, 2 quad core processors, 32 GB RAM - 1 dedicated application/web server, Windows 2003 Standard SP2 32 bit, SQL Server 2008 SP1 with cumulative update package 6 (shared components / reporting services), SAP BPC 7.0 SP07, 2 quad core processors, 4 GB RAM
    Production server: dedicated multi servers - 1 database server (SQL/OLAP/Reporting services), Windows 2003 Standard SP2 64 bit, SQL Server 2008 SP1 with cumulative update package 6, 2 quad core processors, 32 GB RAM - 2 dedicated application/web server, Windows 2003 Standard SP2 32 bit, SQL Server 2008 SP1 with cumulative update package 6 (shared components), SAP BPC 7.0 SP07, 2 quad core processors, 4 GB RAM
    Furthermore, two terminal servers with the SAP BPC client.
    All servers have good performance and we have great times on cube processing and SQL processing. However, to our great surprise we find that the single development server is much faster with a single user retrieving data using EVDRE than the multi-server environment. About 2x as fast. A reporting book with more than 10 sheets and about 25 EVDREs takes about 42 seconds on the development server and 93 seconds on the multi server.
    It seems that EVDRE is taking up a lot of time to communicate between the application server and the database server in a multi-server environment while being much faster on a single server. This is not what we want :-). The network speed in the domain consists of all 1 Gb lines, so that should not be the issue.
    Do you have any experience with this? How can we upgrade the speed of the multi server, are there specific settings?
    Hope to get some useful answers. Thanks in advance.
    Damien

    Hi,
       You also have to activate the EVDRE logs at the client and server level, just to understand where the problem is coming from (appserver-DB communication or client-appserver communication). You also have to check whether there is any proxy or firewall between the client and the application server.
        In case you are using NLB, please verify that affinity is set to true.
        The performance problems can also come from the DB level. Did you verify how many records you have in the WB table for the specific application? Are you keeping the DB in full recovery mode? How big is the log of the database?
        There are a lot of things that can have an impact on this, but it looks to be a setup problem.
    Hope this can help you,
    Mihaela

  • How to retrieve relational data from an XMLType column in Oracle 10g R2

    Hi
    I want to know how to retrieve data that is in an XML document stored in an XMLType column in a table (or in an XMLType table which holds the XML document). This XML document has to be queried with XQuery as relational data (not as an XML document).
    If anybody has some ideas, please share them ASAP.
    Please share an example for this, because I am new to XQuery.
    Thanks in Expectation,
    Selva.

    Got it working now. I used the 'extract' function in my select statement, but had to add the .getStringVal() function. The extract function, just by itself, returns an XMLType. The call for the column in the SQL statement looked like this.
    extract(XML_CONTENT, '/ROOTOBJECT').getStringVal() xml_content
    Thanks so much for your help. Problem solved!
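    For anyone else looking for the relational-style retrieval the original question asked about, an XMLTABLE query is another option besides extract(): it shreds the document into ordinary columns. The table, column and element names below are made up purely for illustration.
    -- hypothetical example: expose /ROOTOBJECT/ITEM elements as relational rows
    SELECT t.item_id, t.item_name
      FROM my_table m,
           XMLTABLE('/ROOTOBJECT/ITEM'
                    PASSING m.xml_content
                    COLUMNS item_id   NUMBER        PATH 'ID',
                            item_name VARCHAR2(100) PATH 'NAME') t;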

  • Poor performance with WebI and BW hierarchy drill-down...

    Hi
    We are currently implementing a large HR solution with BW as backend
    and WebI and Xcelcius as frontend. As part of this we are experiencing
    very poor performance when doing drill-down in WebI on a BW hierarchy.
    In general we are experiencing ok performance during selection of data
    and traditional WebI filtering - however when using the BW hierarchy
    for navigation within WebI, response times are significantly increasing.
    The general solution setup are as follows:
    1) Business Content version of the personnel administration
    infoprovider - 0PA_C01. The Infoprovider contains 30.000 records
    2) Multiprovider to act as semantic Data Mart layer in BW.
    3) Bex Query to act as Data Mart Query and metadata exchange for BOE.
    All key figure restrictions and calculations are done in this Data Mart
    Query.
    4) Traditional BO OLAP universe 1:1 mapped to the Bex Data Mart query. No
    calculations etc. are done in the universe.
    5) WebI report with limited objects included in the WebI query.
    As we are aware that performance is a very subjective issue, we have
    created several case scenarios with different dataset sizes, various
    filter criteria and modeling techniques in BW.
    Furthermore we have tried to apply various traditional BW performance
    tuning techniques including aggregates, physical partitioning and pre-
    calculation - all without any luck (pre-calculation doesn't seem to
    work at all as WebI apparently isn't using the BW OLAP cache).
    In general the best result we can get is with a completely stripped WebI report without any variables etc.
    and a total dataset of 1000 records transferred to WebI. Even in this scenario we can't get
    each navigational step (when using drill-down on the Organizational Unit
    hierarchy - 0ORGUNIT) to perform faster than a minimum of 15-20 seconds per
    navigational step.
    That is, each navigational step takes 15-20 seconds
    with only 1000 records in the WebI cache when using drill-down on the org.
    unit hierarchy!
    Running the same Bex query from Bex Analyzer with a full dataset of
    30.000 records on the lowest level of detail gives a response time of 1-2
    seconds per navigational step, thus ruling out a BW
    modeling issue.
    As our productive scenario obviously involves a far larger dataset as
    well as separate data from CATS and PT infoproviders we are very
    worried if we will ever be able to utilize hierarchy drill-down from
    WebI?
    The question is therefore whether there are any known performance issues
    related to the use of BW hierarchy drill-down from WebI and, if so, whether
    there are any ways to get around them.
    As an alternative we are currently considering changing our reporting
    strategy by creating several higher aggregated reports to avoid
    hierarchy navigation altogether. However, we still need to support specific
    divisions and their need to navigate the WebI dataset without
    limitations, which makes this issue critical.
    Hope that you are able to help.
    Thanks in advance
    /Frank
    Edited by: Mads Frank on Feb 1, 2010 9:41 PM

    Hi Henry, thank you for your suggestions, although I don't agree with you that 20 seconds is pretty good for that navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
    Actions
    suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
    tick use structure elements in RSRT: Done it.
    enable query stripping in WebI: Done it.
    upgrade your BW to SP09: Does SP09 have some improvements in relation to this point?
    use more runtime query filters. : Not possible. Very simple query.
    Others:
    RSRT combination H-1-3-3-1 (Expand nodes/Permanent Cache BLOB)
    Uncheck preliminary hierarchy presentation in the query; only selected.
    Check "Use query drill" in webi properties.
    Sorry for the mixed message, but while I was answering I tried what you suggested in relation to suppressing unassigned nodes and it works perfectly. This is what was causing the bottleneck!! Incredible...
    Thanks a lot
    J.Casas

  • Poor Performance in ETL SCD Load

    Hi gurus,
    We are facing some serious performance problems during an UPDATE step, which is part of a SCD type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
    Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
    Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
    The OOTB mapping for this step is a simple SELECT command - which retrieves historical records from the dimension to be updated - and the target table (W_ASSET_D), with no Update Strategy. The session is configured to always perform UPDATEs. We have also set $$UPDATE_ALL_HISTORY to "N" in DAC: this way we are only selecting the most recent records from the dimension history, and the only columns that are effectively updated are the SCD system columns (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
    The problem is that the UPDATE command is executed individually by Informatica PowerCenter for each record in W_ASSET_D. For 2.486.000 UPDATEs, we had ~2h of processing - very poor performance for a single ETL step. Our W_ASSET_D has ~150M records today.
    Some questions for the above:
    - is this an expected average execution duration for this number of records?
    - record-by-record updates are not optimal; this could easily be overcome by a BULK COLLECT/FORALL approach (see the PL/SQL sketch after this post). Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it in DAC?
    Thanks in advance,
    Guilherme
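    Regarding the BULK COLLECT/FORALL idea above, a minimal PL/SQL sketch could look like the following. The WHERE clause and the assigned values are only placeholders for whatever the SIL mapping really does; the actual logic would have to be copied from the Informatica session.
    DECLARE
      CURSOR c_hist IS
        SELECT rowid
          FROM w_asset_d
         WHERE current_flg = 'Y';          -- placeholder for the mapping's selection of history rows
      TYPE t_rowid_tab IS TABLE OF ROWID;
      l_rowids t_rowid_tab;
    BEGIN
      OPEN c_hist;
      LOOP
        FETCH c_hist BULK COLLECT INTO l_rowids LIMIT 10000;  -- fetch in batches to keep PGA usage bounded
        EXIT WHEN l_rowids.COUNT = 0;
        FORALL i IN 1 .. l_rowids.COUNT                       -- one array UPDATE per batch instead of row-by-row
          UPDATE w_asset_d
             SET effective_to_dt = SYSDATE,
                 current_flg     = 'N'
           WHERE rowid = l_rowids(i);
        COMMIT;
      END LOOP;
      CLOSE c_hist;
    END;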

    Hi,
    Thank you for posting in Windows Server Forum.
    Initially please check the configuration & requirement part for RemoteFX. You can follow below article for further research.
    RemoteFX vGPU Setup and Configuration Guide for Windows Server 2012
    http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support

  • Poor performance of BLOB queries using ODBC

    I'm getting very poor performance when querying a BLOB column using ODBC. I'm using an Oracle 10g database and the Oracle 10g ODBC driver on Windows XP.
    I create two tables:
    create table t1 ( x int primary key, y raw(2000) );
    create table t2 ( x int primary key, y blob );
    Then I load both tables with the same data. Then I run the following queries using ODBC:
    SELECT x, y FROM t1;
    SELECT x, y FROM t2;
    I find that the BLOB query takes about 10 times longer than the RAW query to execute.
    However, if I execute the same queries in SQL*Plus, the BLOB query is roughly as fast as the RAW query. So the problem seems to be ODBC-related.
    Has anyone else come across this problem ?
    Thanks.

    Hi Biren,
    By GUID, are you referring to the Oracle Portal product?

  • Poor performance of WD Abap/ Adobe

    Dear sirs,
    I would like to know if anybody has experienced very poor performance of WD ABAP with Adobe interactive forms. Our client has paid for a 2-3 page interactive form in WDA and is complaining about very poor performance. As a result, no users are using this application.
    Can anybody point out what the problem could be? A development problem? A Basis issue? Any experience related to WDA Adobe performance? Thank you all, Otto

    Update: SAP OSS message was opened regarding this problem.
    We got a list of patches to install, notes to apply etc. All of it was done, applied, patched. The performance didn't get better; if anything it improved by a percent or two, but nothing that would make the customer less angry.
    The result: this technology is promising, but a) it needs a strong client PC and b) it will get better (I hope it gets better soon).
    Our Basis team checked all the times (of the actions that have to be performed to load/use the app) and the memory needed both on the server and on the client. On some client PCs, just starting Adobe Reader took half a minute or more. If you add the time for WD, for WD/Adobe communication and for the data transfer, the time to start working with a WD ABAP Adobe app can be more than a minute. That is not very usable.
    Otto

  • BPC 10 - EPM data retrieval very slow!

    Hi BPCers,
    We are using Excel EPM Input Schedules as a Resource Management tool, using VBA to provide the functionality we need.
    Performance is generally good, but quickly deteriorates when handling larger data sets - even 500-600 rows of transactional data is enough to slow data retrieval from our BPC cube to EPM to an unusable speed. This is, in relative terms, pretty small, so there should be some option for optimisation.
    Does anybody have any experience with this? All suggestions welcome. We are operating on EPM Service Pack 7 Patch 1, but I'm not sure that EPM is necessarily the problem here.
    Thanks,
    Tom

    Thanks Gersh,
    Had a look through fiddler and have identified the job that is causing the delay - some rooting around in the ABAP debugger produced the answer as to why adding more data slows processing speed so dramatically.
    When we take data from the back end, we select a couple of parameters which limit the range of data that we are pulling through - a certain set of people, and a certain range of days. Once this is pulled through, allocations are made to any combination of person and day within this range, which generates an extra two properties - a project ID and a work status.
    This makes 4 properties, and when BPC pulls data it attempts to find every combination of every one of the properties that exists within this range - so the more allocations are made, the more this slows down, as it dramatically increases the number of combinations.
    The result is that BPC runs through a couple of hundred thousand generated tables, most of which are nonsense.
    Not sure what to do from here. This is how BPC reads data so approaching a fix could be difficult.
    Tom

  • Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running google maps app on the phone. Siri cannot seem to get me to a specific address. Where does the problem lie? Thanks.

    Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running Google Maps app on the phone. SIRI cannot seem to get me to a specific address. Where does the problem lie? Also can anyone tell me the hierarchy of use between the Apple Maps, SIRI, and Google maps when the app is on the phone? How do you choose one over the other as the default map usage? Or better still how do you suppress SIRI from using the Apple maps app when requesting a "go to"?
    I have placed an address location into the CONTACTS list and when I ask SIRI to "take me there" it found a TOTALLY different location in the metro area with the same street name. I have included the address, the quadrant, (NE) and the ZIP code into the CONTACTS list. As it turns out, no amount of canceling the trip or relocating the address in the CONTACTS list line would prevent SIRI from taking me to this bogus location. FINALLY I typed in Northeast for NE in the CONTACTS list (NE being the accepted method of defining the USPS location quadrant) , canceled the current map route and it finally found the correct address. This problem would normally not demand such a response from me to have it fixed but the address is one of a hospital in the center of town and this hospital HAS a branch location in a similar part of town (NOT the original address SIRI was trying to take me to). This screw up could be dangerous if not catastrophic to someone who was looking for a hospital location fast and did not know of these two similar locations. After all the whole POINT of directions is not just whimsical pasttime or convenience. In a pinch people need to rely on this function. OR, are my expectations set too high? 
    How does the iPhone select between one app or the other (Apple Maps or Google Maps) as it relates to SIRI finding and showing a map route?
    Why does SIRI return an address that is NOT the correct address nor is the returned location in the requested ZIP code?
    Is there a known bug in the CONTACTS list that demands the USPS quadrant ID be spelled out, as opposed to abbreviated, to permit SIRI to do its routing?
    Thanks for any clarification on these matters.

    Siri will only use Apple Maps; this cannot be changed. You could try voice search in the Google app.

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key) and the URL itself, plus grouped data such as the number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from the source on Win2K3 server (release version). The test machine is 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like described before at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory (since only one process ever has access to the database).
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drop down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. First, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed was as good as stopped.
    Then I flipped options and used DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance and even though OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options lead to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    > I have been able to improve processing speed up to 6-8 times with these two techniques:
    > 1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multi-core machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What percentage of clean pages did you specify?
    2. At what interval were you calling this thread to invoke memp_trickle?
    These would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
    > 2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports are generated, which use these secondary databases. This improved speed from 4K rec/sec to 14K rec/sec.

    On my client i m populating profit center and COGS profit center field on (A/R Invoice + Payment) screen. The problem is that while posting the transaction system apart from populating revenue and COGS with the profit centers it also populates in fro