How to Improve XML Serializer Performance?

I am writing a test program to generate a large XML document and then serialize it via SAX.
The document structure is like this:
<a>
<b> 1M data </b>
<b> 1M data </b>
<b> 1M data </b>
</a>
I tried to use org.apache.xml.serialize.Serializer for the output (via Serializer.asContentHandler()), but found that it is very slow (it took several minutes to write out the whole document), mainly because of how its characters() method is designed.
Does anyone here have experience with other APIs for serializing or writing out an XML document, or other ways to improve the performance?
Or does anyone know how to use Xalan's Serializer.asContentHandler() for document serialization?
Thanks!

The only thing I can suggest is that you direct the serializer's output to something that is buffered.
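For example, here is a minimal sketch of that suggestion (the file name, buffer size, and data sizes are placeholders, and it uses the same deprecated org.apache.xml.serialize API mentioned in the question): wrap the target stream in a BufferedOutputStream before handing it to the serializer, and pass characters() large chunks of data rather than many small ones.

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import java.util.Arrays;
    import org.apache.xml.serialize.OutputFormat;
    import org.apache.xml.serialize.XMLSerializer;
    import org.xml.sax.ContentHandler;
    import org.xml.sax.helpers.AttributesImpl;

    public class BufferedSerializerDemo {
        public static void main(String[] args) throws Exception {
            // Buffer the underlying stream so characters() does not turn into
            // thousands of tiny writes to the file.
            OutputStream out = new BufferedOutputStream(
                    new FileOutputStream("big.xml"), 64 * 1024);

            OutputFormat format = new OutputFormat("XML", "UTF-8", false); // no indenting
            XMLSerializer serializer = new XMLSerializer(out, format);
            ContentHandler handler = serializer.asContentHandler();

            AttributesImpl noAttrs = new AttributesImpl();
            char[] chunk = new char[8192];
            Arrays.fill(chunk, 'x');

            handler.startDocument();
            handler.startElement("", "a", "a", noAttrs);
            for (int b = 0; b < 3; b++) {
                handler.startElement("", "b", "b", noAttrs);
                // Pass character data in large chunks (roughly 1M characters per <b> here).
                for (int i = 0; i < 128; i++) {
                    handler.characters(chunk, 0, chunk.length);
                }
                handler.endElement("", "b", "b");
            }
            handler.endElement("", "a", "a");
            handler.endDocument();
            out.close(); // closing flushes the buffer
        }
    }

Whether buffering alone brings the time down from minutes to seconds depends on where the time is actually going, but an unbuffered FileOutputStream or FileWriter target is one of the most common causes of slow serialization.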

Similar Messages

  • How to improve the query performance at the report level and designer level

    How do I improve query performance at the report level and at the designer level?
    Please let me know in detail.

    First, it all depends on the design of the database, the universe, and the report.
    At the universe level, check your contexts carefully to get optimal performance, and review your joins; keeping joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on).
    When you create a parameter, try to match it to the key fields in the database.
    Good luck
    Amr

  • How to improve the load performance while using Datasources for the Invoice

    Hi All,
    How can I improve the load performance while using DataSources for the invoice? My invoice load (approx. 0.4 M records) is taking a very long time, nearly 16 to 18 hours, to update data from R/3 to 0ASA_DS01.
    If I load through a flat file, the same amount of data loads within ~20 minutes.
    Please suggest how to improve the load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards
    Srininivasarao.Namburi.

    Hi Srinivas,
    Please refer to my blog posting [/people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction|/people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction] which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh
    Edited by: Divyesh Jain on Jul 20, 2010 8:47 PM

  • How to improve the OpenGL performance for AE

    I upgraded my display card from an Nvidia 8600GT to a GTX260+, hoping to get better and smoother scrubbing of the timeline in AE. But to my disappointment, there is absolutely no improvement at all. I checked the OpenGL benchmark of the 2 cards with the Cinebench software and the results are almost the same for both cards.
    I wonder why the GTX260+ costs about 3 times as much as the 8600GT when the OpenGL performance is almost the same.
    Any idea how to improve the OpenGL performance, please?
    Regards

    juskocf wrote:
    But to scrub the timeline smoothly, I think OpenGL plays an important role.
    No, not necessarily. General things like footage I/O performance can be much more critical in that case. Generally speaking, AE only uses OpenGL in 2 specific situations: when navigating 3D space and with hardware-accelerated effects. It doesn't do so consistently, though, as any non-accelerated function, such as a specific effect, or exhaustion of the available resources, can negate that.
    juskocf wrote:
    Also, some 3D plugins such as Boris Continuum 6 need OpenGL to smoothly maneuver the 3D objects. I just wonder why the OpenGL performance of such an expensive card should be so weak.
    It's not the card, it's what the card does. See my comment above. Specific to the Boris stuff: geometry manipulation is far simpler than pixel shaders. Most cards will allow you to manipulate bazillions of polygons; as long as they are untextured and only use simple shading, you will not see any impact on performance. Things get dicey when the card needs to use textures and load those textures into its memory. Either loading those textures takes longer than the shading calculations, or, if you use multitexturing (different images combined with transparencies or blend modes), you'll at some point reach the maximum. It's really a mixed bag. Ultimately the root of all evil is that AE is not built around OpenGL (it didn't exist at the time); rather, OpenGL was plugged on at some point, and now there are a number of situations where one gets in the way of the other...
    Mylenium

  • How to improve query & loading performance.

    Hi All,
    How to improve query & loading performance.
    Thanks in advance.
    Regards
    shoba

    Hi Shoba
    There are a lot of things you can do to improve query and loading performance.
    Please refer to OSS Note 557870: Frequently asked questions on query performance
    also refer to
    weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    This is the OSS FAQ note on query performance:
    1. What kind of tools are available to monitor the overall Query Performance?
    1. BW Statistics
    2. BW Workload Analysis in ST03N (Use Export Mode!)
    3. Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
    RSA1, choose Tools -> BW statistics for InfoCubes
    (Choose OLAP and WHM for your relevant Cubes)
    3. What kinds of tools are available to analyze a specific query in detail?
    1. Transaction RSRT
    2. Transaction RSRTRACE
    4. Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all Info Cubes.
    ii. You need to run ST03N in expert mode to get these values
    5. What can I do if the database proportion is high for all queries?
    Check:
    1. If the database statistics strategy is set up properly for your DB platform (above all for the BW-specific tables)
    2. If the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
    3. If Buffers, I/O, CPU, memory on the database server are exhausted?
    4. If Cube compression is used regularly
    5. If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1. If the CPUs on the application server are exhausted
    2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1. Again you can use ST03N -> BW System Load
    2. Depending on the time frame you select, you get historical data or current data.
    3. To get to a specific query you need to drill down using the InfoCube name
    4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N
    values for a specific query?
    (Use Details to get the runtime segments)
    1. High Database Runtime
    2. High OLAP Runtime
    3. High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
    2. Check if the database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
    3. Check if the read mode of the query is unfavourable; the recommended read mode is (H)
    11. What can I do if a query has a high OLAP runtime?
    1. Check if a high number of cells is transferred to the OLAP engine (use "All data" to get the value "No. of Cells")
    2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
    3. Check if a user exit is involved in the OLAP runtime.
    4. Check if large hierarchies are used and whether the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the successor and predecessor columns to see which entry level of the hierarchy is used.
    5. Check if a proper index on the inclusion table exists
    12. What can I do if a query has a high frontend runtime?
    1. Check if a very high number of cells and formats is transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
    2. Check if the frontend PCs are within the recommendations (RAM, CPU MHz)
    3. Check if the bandwidth of the WAN connection is sufficient
    and some threads:
    how can i increse query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    may be helpful
    Regards
    C.S.Ramesh
    [email protected]

  • How to improve database link performance?

    Hello all,
    We use DB links to do DML operations on remote databases. For OLTP applications we are facing performance problems with transactions that depend on data in the remote database.
    For legal and business reasons we cannot store all the data locally.
    Could anybody suggest how to improve database link performance, or suggest methods/procedures/techniques to speed up OLTP applications that go against remote databases?
    Thanks
    Sky

    AQ is as reliable as Oracle-- the guarantees about delivery of queued messages are the same as the guarantees about committed transactions (i.e. ACID). AQ is designed for asynchronous operation, though. If you are batching transactions, it sounds like you are already doing some sort of asynchronous operations-- I've generally found AQ a lot easier to administer & maintain than rolling your own batching system.
    If you want to tune the Oracle side of things, you'll need to explain more about the system(s) involved here. Architecture, data flow, operations that involve the dblink, etc. If you're not comfortable posting that sort of information to a public forum, feel free to send me mail directly [email protected]
    As an aside, I'm interested in how you can legally pull data from the remote system to display to your users but that you can't legally cache that data in your system via replication. Sounds like an odd constraint.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • How to improve the query performance

    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
    ;WITH Dates AS
    (
    SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
    WHERE DateValue + 1 <= @EndDate
    )
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    -------WHERE TSK.ProjectID = @Project-----
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
    Hi,
    My SP is as above. I connected this SP to a dataset in an SSRS report. Per my logic, a Portfolio contains many Programs and a Program contains many Projects.
    When I selected the ALL value for the Program and Project parameters, I was unable to get output, but when I selected values for all 3 parameters I got output. I also set default values for the parameters.
    So I commented out the WHERE condition in the SP as shown above:
    --------where TSK.ProjectID=@Project-------------
    Now I am getting output when selecting the ALL value for the parameters.
    But the issue here is performance: it takes 10 seconds to retrieve a single project when I execute the SP.
    How can I create an index on a temp table in this SP, and how can I improve the query performance?
    Please help.
    Thanks in advance,
    lucky

    Didn't I provide you a solution in the other thread?
    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
    ;WITH Dates AS
    (
    SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
    WHERE DateValue + 1 <= @EndDate
    )
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    WHERE (TSK.ProjectID = @Project OR @Project = -1)
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • How to Improve Report View performance

    Hi All, I have a Webi report which runs for about 3 minutes. But when I click View, the report takes about 21 seconds (on average) to open up. Any ideas on how to improve the report view performance? Does it have anything to do with server load? Any server settings to tweak to speed it up? Any ideas are appreciated.
    The requirement is that my web team has to strip off the Business Objects logo etc. (using the SDK) and display the report in my company web page, so it's looking sort of ugly as the web page takes about 21 seconds just to display the report.
    Some report statistics:
    Report size is about 90 MB, as it has about 300 k rows of data (which I am aggregating using formulas)
    Report has about 15 simple division formulas
    Report is in Drill Mode. There are about 5 drill filters
    Thanks,
    Kon

    Hi Larry,
    I'll assume you are scheduling this report and viewing the instance in ~21 seconds.  Is that correct?
    We definitely need some environment info to go along with this post.  Like Simone said, Product Version, Patch Level, and other OS, Hardware, App Server details would help as well.
    There are certain properties of a document that can slow down the rendering of a report but we generally have to look at the logs to determine what part of the report is taking the longest time to process.  Assuming this is an instance, I would be curious to know if it is quicker to come up if you immediately view it a second time?
    If you were to turn on a trace, you would see a number of lines like this:
    2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut
    2011/06/15 20:11:54.153|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:ContextPromptList_StreamUnit_SerializeOut: 0
    2011/06/15 20:11:54.153|>=| | | 7676|7436|{|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:cdbSQLStreamUnit_SerializeOut: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_DPSerialization:QTDP_StreamUnit_SerializeOut: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveMe_Serial: 0.015
    2011/06/15 20:11:54.168|>=| | | 7676|7436|}|||||||||||||||C3_QTDataprovider:SaveAll_Serial: 0.015
    The numbers at the end are how long the function took to run.  Generally the function gives us an idea of what the engine was doing.
    When evaluating performance issues, you can occasionally find a function that is taking long to run within the logs and based on the function and module names, it can sometime lead you to the reason it is taking longer than expected.
    Another good test might be to run a very basic report to see how long it takes to come up.  Even a report without a datasource would suffice as that will give you your baseline time on how long it takes to load the viewer, convert the WID file to XML and send it up through the application server to your browser.  If a test report takes 15 seconds to view, then you are really only looking at 6 seconds for this other report.
    Hope this helps and gets you started.  More environment info would help take it further.
    Thanks
    Jb

  • How to improve slow PowerPivot performance when adding/modifying measures, calculated columns or Relationships?

    I have been using PowerPivot for a couple of months now and whilst it is extremely quick when pulling in data to populate Pivot Tables, it is extremely slow to make the following kind of changes to the Data Model:
    - Add a Measure / Calculated Field
    - Add a Calculated Column
    - Rename a Calculated Field
    - Re-name a Calculated Column
    - Modify a relationship
    - Change a tables properties
    - Update a table
    In the status bar of excel I get a very quick 'calculating', then it spends a lot of time 'reading data',
    then it 'finalises' after which nothing is in the status bar but it still takes approx. 45 seconds before the program becomes responsive again. This waiting time does not change depending on the action, it is the same if I rename a
    column as it is if I add a new measure.
    My question is what affects performance of these actions and how do I improve it?
    To give you an idea of where my data comes from, I have:
    - 7 tables that feed into the Data Model directly from within the workbook which contains the data model itself. These are a combination of static tables and tables that connect to a MySQL database.
    - 6 separate workbooks which contain static data that is updated manually periodically (copied and pasted from another source)
    - 5 separate workbooks which contain dynamic tables that are linked to our MySQL database and update when opened.
    Now I realise that this is probably where my issue is, however I have no idea how to fix it. You do not seem to be able to connect to a MySQL database directly within the PowerPivot window itself so there is no way to generate and update tables without
    first creating them either in a worksheet or separate workbook (as far as I know).  If I try to create all of the tables directly within the single workbook containing the Data Model I get performance and crashing issues hence why I separate tables into
    individual workbooks.
    Any advice on how to improve performance would be tremendously appreciated. I'm new and keen to learn, I'm aware this set-up is far from best practice.
    Hardware wise I am using:
    - Windows 8 64-bit
    - Excel 2013 64-bit
    - Intel Core i7 processor
    - 6 GB Ram
    Thanks,
    James

    Darren,
    I think the point I was making is it's in memory, geez... BTW, what do all applications do when they run out of paged memory? If PowerPivot is using all available memory, then wouldn't this force the other applications to use virtual memory, essentially writing back and forth to the disks? I think virtual memory writes to disk??, lol. Also, there are parts of the architecture of Excel 2013 that require memory when importing data into PowerPivot, and when working in SharePoint the PowerPivot data is cached to disk unless recently refreshed... But this conversation isn't helping James, who asked the question, and as much as I would love to continue, it's become a little boring..
    Hi James,
    If you download one of the ODBC MySQL Connectors from http://dev.mysql.com/downloads/connector/odbc/ (I believe yours is the first one listed for x64 systems) and connect directly to the data, you should be able to reduce the number of workbooks you are opening. These connections are refreshed automatically by default (the original post included a screenshot highlighting, in red, the differences between PowerPivot 2010 and 2013).
    You should notice a lot of improvement, especially when refreshing data. Please let us know how it goes...
    After registering the ODBC Driver
    Click Add on the User DSN tab, choose the “MySQL ODBC 5.x driver”, fill in the credentials, choose a database (from the select menu) and a data source name, and you’re done.
    Back in Excel you go on the PowerPivot section of the ribbon and open the PowerPivot window  (the green icon on the left side). In the ‘Home’ section of that window you will see a small gray cylindrical symbol (the international
    symbol for “database”) which will suggest to you different data sources to choose from. Take the one where it says “ODBC”.
    In the next dialog you click on create, choose the adapter, and then Ok. Back in the assistant you can check the connection and proceed.
    Now you have the choice between importing the data from tables using the import assistant or via a query, depending on your skill set.
    Cheers,
    Ivan
    Ivan Sanders | My LinkedIn: http://www.linkedin.com/in/iasanders | My Blog: http://msmvps.com/blogs/ivansanders | @iasanders: http://twitter.com/iasanders | BI in SP2013: http://shop.oreilly.com/product/0790145372703.do | SP2013 Content Packs: http://sharepointdemobuilds.codeplex.com

  • How to improve the write performance of the database

    Our application is write-intensive; it may write 2 MB/second of data to the database. How can we improve the performance of the database? We mainly write to 5 tables of the database.
    Currently, the database gets no response and the CPU is 100% utilized.
    How do we tune this? Thanks in advance.

    Your post says more by what is not provided than by what is provided. The following is the minimum list of information needed to even begin to help you.
    1. What hardware (server, CPU, RAM, and NIC and HBA cards if any pointing to storage).
    2. Storage solution (DAS, iSCSCI, SAN, NAS). Provide manufacturer and model.
    3. If RAID which implementation of RAID and on how many disks.
    4. If NAS or SAN how is the read-write cache configured.
    5. What version of Oracle software ... all decimal points ... for example 11.1.0.6. If you are not fully patched then patch it and try again before asking for help.
    6. What, in addition to the Oracle database, is running on the server?
    2MB/sec. is very little. That is equivalent to inserting 500 VARCHAR2(4000)s. If I couldn't do 500 inserts per second on my laptop I'd trade it in.
    SQL> create table t (
      2  testcol varchar2(4000));
    Table created.
    SQL> set timing on
    SQL> BEGIN
      2    FOR i IN 1..500 LOOP
      3      INSERT INTO t SELECT RPAD('X', 3999, 'X') FROM dual;
      4    END LOOP;
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.07
    SQL>Now what to do with the remaining 0.93 seconds. <g> And this was on a T61 Lenovo with a slow little 7500RPM drive and 4GB RAM running Oracle Database 11.2.0.1. But I will gladly repeat it using any currently supported version of the product.
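    For what it's worth, the same point holds from a client program. Here is a hedged JDBC sketch (not from the poster's system; the connection details are placeholders and the table is borrowed from the example above) showing how batching the inserts keeps round trips, and therefore CPU and network overhead, down:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.Arrays;

    public class BatchInsertDemo {
        public static void main(String[] args) throws Exception {
            // Connection URL and credentials are placeholders; the Oracle JDBC driver must be on the classpath.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                con.setAutoCommit(false); // commit once per batch instead of once per row

                char[] filler = new char[3999];
                Arrays.fill(filler, 'X');
                String payload = new String(filler); // roughly the same row size as the PL/SQL example

                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO t (testcol) VALUES (?)")) {
                    for (int i = 0; i < 500; i++) {
                        ps.setString(1, payload);
                        ps.addBatch();
                    }
                    ps.executeBatch(); // one round trip for all 500 rows
                }
                con.commit();
            }
        }
    }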

  • How to improve and maintain performance of droid phones

    I've read bits and pieces about how to make the phone faster, but what's the best way to improve the phone's performance, and maintain that performance, without overclocking or putting on custom ROMs?

    The biggest thing to do is keep the applications' cache cleared out. I recommend checking once a week, depending on usage.
    Keep an eye on your internal storage. Anything below 30 MB needs some serious cleaning of applications, cache, call history, and text messages, in that order. I try to keep my internal storage at 50 MB or higher.
    Also pay attention to your Dialer Storage. It holds the call history and text messages, and it can grow quickly. I found that out the hard way. I had some ringtones saved in a text message thread but rarely looked at them. Then one weekend, almost six months after I got the phone, I was looking at the thread a lot because a new message had been sent from that number. The Dialer Storage went from 5 MB to 21 MB in a couple of days. Even after deleting the entire thread it only went down 1 MB. There was no way to clear data for that app, so I ended up doing a factory reset. Now Dialer Storage is a baby-sized 64 KB!
    I never used a task killer, only task managers.
    I have a battery monitor and have seen no big difference. However, I don't use Facebook or Twitter, so I don't have those constantly updating.

  • How to Improve ASM IO Performance

    How can I improve ASM I/O performance? Are there any parameters for this?
    I am using 11.2.0.3 on Linux x86-64.

    Hello;
    There's a paper on this here:
    www.orafaq.com/papers/tuning_asm.pdf
    You will have to judge how good it is.
    Best Regards
    mseberg

  • How to improve Domain Gateway Performance

    Sometimes too many services queue up in the Domain Gateway, and so some services time out.
    How can I improve Domain Gateway performance?

    The messages are outbound to another domain.
    Scott Orshan <[email protected]> wrote:
    Tuxedo 8.1 has improved Domain Gateway performance relative to 8.0.
    Are the messages ones that are outbound to another domain, or inbound
    from another domain?
    zhangjr wrote:
    Sometimes too many services queue up in the Domain Gateway, and so some services time out.
    How can I improve Domain Gateway performance?

  • How to improve stored procedure performance?

    Hi,
    Suppose I have a stored procedure which contains 30 insert/update statements. How do I know whether the stored procedure is running slowly or has no performance issues? And how do I improve its performance?
    Thanks in advance.
    Anujit Karmakar Sr. Software Engineer

    Stored Procedures Optimization Tips
    Use stored procedures instead of heavy-duty queries.
    This can reduce network traffic, because your client will send only the stored procedure name (perhaps with some parameters) to the server instead of the text of large, heavy-duty queries. Stored procedures can also be used to enhance security and conceal underlying data objects. For example, you can give users permission to execute the stored procedure to work with a restricted set of columns and data.
    Include the SET NOCOUNT ON statement into your stored procedures to stop the message indicating the number of rows affected by a Transact-SQL statement.
    This can reduce network traffic, because your client will not receive the message indicating the number of rows affected by a Transact-SQL statement.
    Call stored procedure using its fully qualified name.
    The complete name of an object consists of four identifiers: the server name, database name, owner name, and object name. An object name that specifies all four parts is known as a fully qualified name. Using fully qualified names eliminates any confusion about
    which stored procedure you want to run and can boost performance because SQL Server has a better chance to reuse the stored procedures execution plans if they were executed using fully qualified names.
    Consider returning an integer value via a RETURN statement instead of as part of a recordset.
    The RETURN statement exits unconditionally from a stored procedure, so the statements following RETURN are not executed. Though the RETURN statement is generally used for error checking, you can use it to return an integer value for any other reason.
    Using the RETURN statement can boost performance because SQL Server will not create a recordset.
    Don't use the prefix "sp_" in the stored procedure name if you need to create a stored procedure to run in a database other than the master database.
    The prefix "sp_" is used in the system stored procedures names. Microsoft does not recommend to use the prefix "sp_" in the user-created stored procedure name, because SQL Server always looks for a stored procedure beginning with "sp_"
    in the following order: the master database, the stored procedure based on the fully qualified name provided, the stored procedure using dbo as the owner, if one is not specified. So, when you have the stored procedure with the prefix "sp_" in the
    database other than master, the master database is always checked first, and if the user-created stored procedure has the same name as a system stored procedure, the user-created stored procedure will never be executed.
    Use the sp_executesql stored procedure instead of the EXECUTE statement.
    The sp_executesql stored procedure supports parameters, so using it instead of the EXECUTE statement improves the readability of your code when many parameters are used. When you use sp_executesql to execute a Transact-SQL statement that will be reused many times, the SQL Server query optimizer will reuse the execution plan it generates for the first execution when the only variation is the change in parameter values.
    Use sp_executesql stored procedure instead of temporary stored procedures.
    Microsoft recommends using temporary stored procedures when connecting to earlier versions of SQL Server that do not support the reuse of execution plans. Applications connecting to SQL Server 7.0 or SQL Server 2000 should use the sp_executesql system stored procedure instead of temporary stored procedures to have a better chance of reusing the execution plans.
    If you have a very large stored procedure, try to break down this stored procedure into several sub-procedures, and call them from a controlling stored procedure.
    The stored procedure will be recompiled when any structural change is made to a table or view referenced by the stored procedure (for example, an ALTER TABLE statement), or when a large number of INSERTs, UPDATEs, or DELETEs are made to a table referenced by the stored procedure. So, if you break down a very large stored procedure into several sub-procedures, there is a chance that only a single sub-procedure will be recompiled while the other sub-procedures will not be.
    Try to avoid using temporary tables inside your stored procedure.
    Using temporary tables inside a stored procedure reduces the chance of reusing the execution plan.
    Try to avoid using DDL (Data Definition Language) statements inside your stored procedure.
    Using DDL statements inside a stored procedure reduces the chance of reusing the execution plan.
    Add the WITH RECOMPILE option to the CREATE PROCEDURE statement if you know that your query will vary each time it is run from the stored procedure.
    The WITH RECOMPILE option prevents reusing the stored procedure execution plan, so SQL Server does not cache a plan for this procedure and the procedure is recompiled at run time. Using the WITH RECOMPILE option can boost performance if your query will vary
    each time it is run from the stored procedure because in this case the wrong execution plan will not be used.
    Use SQL Server Profiler to determine which stored procedures have been recompiled too often.
    To check whether a stored procedure has been recompiled, run SQL Server Profiler and choose to trace the event in the "Stored Procedures" category called "SP:Recompile". You can also trace the event "SP:StmtStarting" to see at what point in the procedure it is being recompiled. When you identify these stored procedures, you can take corrective action to reduce or eliminate the excessive recompilations.
    http://www.mssqlcity.com/tips/stored_procedures_optimization.htm
    Ahsan Kabir Please remember to click Mark as Answer and Vote as Helpful on posts that help you. This can be beneficial to other community members reading the thread. http://www.aktechforum.blogspot.com/
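    To make the first tip concrete from the client side (a hedged sketch; the database, procedure, and parameter names below are hypothetical), calling a stored procedure by its fully qualified name through JDBC sends only the call and its parameter values over the wire instead of a large ad-hoc query text:
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CallProcDemo {
        public static void main(String[] args) throws Exception {
            // Connection string and credentials are placeholders.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost:1433;databaseName=Sales", "appuser", "secret")) {
                // Only the fully qualified procedure name and two parameter values
                // travel to the server, not the text of a heavy-duty query.
                try (CallableStatement cs = con.prepareCall(
                        "{call Sales.dbo.usp_UpdateOrders(?, ?)}")) {
                    cs.setInt(1, 42);           // hypothetical @OrderId
                    cs.setString(2, "SHIPPED"); // hypothetical @Status
                    cs.execute();
                }
            }
        }
    }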

  • How to improve X11 apps performance?

    Hi all,
    I'm looking for advice on how to improve the performance of X11 apps on Lion (10.7.3), specifically apps running in the (Amazon) cloud, because local X11 apps (e.g. GIMP) run just fine.
    Some of the apps I have tried include: (basic) xfontsel, Firefox on an Amazon EC2 Linux64 AMI, and Chromium on an Amazon EC2 Ubuntu AMI. These mostly seem sluggish compared to the average performance I've come to expect from most OS X apps, including X11 ones like, again, GIMP.
    Thanks in advance.

    You wrote "VNC into the Linux system and run the X11 session local to the virtual machine".
    On my headless remote Linux virtual machine, I have it configured so vncserver is started at boot time on port 5951 (because I want to use display 51), running a GNOME session via ~username/.vnc/xstartup:
    #!/bin/sh
    # Bob Harris $HOME/.vnc/xstartup
    [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
    xsetroot -solid grey
    vncconfig -iconic &     # needed for clipboard support.
    # run gnome as my session manager.
    /usr/bin/gnome-session &
    I modify
    /etc/sysconfig/vncservers
    and add
    VNCSERVERS="51:myusername"
    VNCSERVERSARGS[51]="-geometry 800x600"
    Then run the command
    sudo chkconfig --level 345 vncserver on
    to configure the vncserver so it is started in runlevel's 3, 4, and 5 after booting.
    Do i understand correctly that you VNC into a shell?
    As stated above, vncserver is started at boot time via one of the /etc/sysconfig/vncservers file and the chkconfig command.
    I ssh into a shell session on my remote Linux system (often several ssh shell sessions), but they are not involved with the VNC sessions (except when I did my initial vncserver configuration work).
    and the VM uses a vnc server that is not x11vnc.
    While I have played with x11vnc, I do not need it for my headless Linux system.
    Where I have found x11vnc useful is when I want to mirror a "real" monitor attached to a Linux workstation.  Generally speaking, the vncserver will NOT attach to a real monitor.  But if you use the Linux workstation while in the office and then want to take over the active sessions when you go home at night, or are working from home the next day, then x11vnc is useful.  There are several people in our office that only come into work a few days a week and want the ability to continue working where they left off while at work.
    My remote Linux system does not have a display head, so the default vncserver is perfectly OK.
    And running X11 local on VM means startx from that shell?
    That means I use a VNC client on my Mac that connects to the remote Linux vncserver, started as specified above.  This VNC session gives me access to the desktop started by the remote Linux vncserver.  From there I can start X11 GUI sessions local to the remote Linux box, start xterm sessions, etc... all of which are presented to me via the VNC session.
    As for Mac VNC clients.  There is always the built-in Mac OS X VNC client:  Finder -> Go -> Connect to server -> vnc://address.of.remote.Linux:5951.  Or you can use Chicken (formally known as Chicken of the VNC), RealVNC, JollysFastVNC, and if you want to you can even use TightVNC via MacPorts.org that will use the Mac OS X X11 as its display.  There are most likely other Mac VNC clients, but these are the ones I'm familiar with.
    To recap.  I have VNC started on my remote system via configuration options.  I connect using a Mac OS X VNC client, then through the VNC client, I start X11 sessions that run local to the remote Linux box and allow VNC to show me the image.
    I also ssh into a bash shell session on my remote Linux box where I mostly edit via Vim sources, do compiles, source code control, etc...  All the typical software developer activities.
    And for some very specific X11 applications (mostly gvimdiff) I will allow the X11 display to be exported to my Mac across an ssh -Y connection, but ONLY because the work network connection to that facility 2,000 miles away is a very fat, very fast connection, AND because gvimdiff is not as X11-chatty as a lot of other X11 GUI applications.  But I do not use gvimdiff across the internet if my connection is slow.  For example, if I'm at home, my home network connection is not all that fast, so I then use a VNC session for that kind of thing.  But since I mostly go into the office, I only really play with VNC from home when I'm sick or need to be home for a delivery or for repair people working at the house.
    Hopefully you understand my VNC vs ssh vs X11 usage now.
