Performance issue - Loading and Calculating

Hi,
I have about 5 GB of data. It takes 1 hour to load and 30 minutes to calculate. I did the following to improve performance:
1) Sorted the data and loaded it with the largest sparse dimensions first, followed by the smaller ones and then the dense dimensions.
2) Enabled parallel load, with 6 threads for prepare and 4 for writing.
3) Increased the data file cache to 400 MB, the data cache to 50 MB, and the index cache to 100 MB.
4) Calculated only 4 of the 9 dimensions; of those, 2 are dense and 2 are sparse.
5) Ran the calculation as a parallel calculation with 3 threads and CALCTASKDIMS set to 2.

But I am not getting any improvement. While running the calculation I got the following messages in the log, and I suspect CALCTASKDIMS is not working:

[Fri Jan  6 22:01:54 2006]Local/tcm2006/tcm2006/biraprd/Info(1012679)
Calculation task schedule [2870,173,33,10,4,1]

[Fri Jan  6 22:01:54 2006]Local/tcm2006/tcm2006/biraprd/Info(1012680)
Parallelizing using [1] task dimensions. Usage of Calculator cache caused reduction in task dimensions

[Fri Jan  6 22:33:54 2006]Local/tcm2006/tcm2006/biraprd/Info(1012681)
Empty tasks [2434,115,24,10,2,0]

Can anyone explain what these log messages mean and what else can be done to improve performance?

Regards,
prsan

It's not a problem with your CALCTASKDIMS setting.

Calculation task schedule [2870,173,33,10,4,1] indicates that your parallel calculation can start with 2870 tasks in parallel, after which 173 can be performed in parallel, then 33, 10, 4 and 1.

Empty tasks [2434,115,24,10,2,0] means that many of those tasks don't need any calculation, either because there is no data or because they are marked clean by intelligent calc.

The problem lies with your calculator cache setting. Try increasing the calculator cache settings in your essbase.cfg file and use the calc cache HIGH setting in your calc script.

Hope this works.
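For example, something along these lines might be a starting point (a rough sketch only: the cache sizes below are illustrative byte values, not recommendations, and essbase.cfg changes require an application restart; the parallel settings are just the ones you already use):

In essbase.cfg:
CALCCACHEHIGH 200000000
CALCCACHEDEFAULT 100000000
CALCCACHELOW 50000000

At the top of the calc script:
SET CACHE HIGH;
SET CALCPARALLEL 3;
SET CALCTASKDIMS 2;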

Similar Messages

  • Known performance issue bugs and patches for R12.1.3

    Hi Team,
    We have upgraded Oracle Applications from 12.1.1 to 12.1.3.
    I want to apply the patches for known performance bugs in R12.1.3.
    Please let me know of any details.
    Thanks,

    Are you currently facing any performance issues on 12.1.3?
    Tuning All Layers Of E-Business Suite – Performance Topics
    http://www.oracle.com/technetwork/apps-tech/collab2011-tuning-ebusiness-421966.pdf
    • Start with Best Practices : (note: 1121043.1)
    • SQL Tuning
    – Trace files
    – SQLT output (note: 215187.1)
    – Trace Analyzer (note: 224270.1)
    – AWR Report (note: 748642.1)
    – AWR SQL Report (awrsqrpt.sql)
    – 11g SQL Monitoring
    – SQL Tuning Advisor
    • PL/SQL Tuning
    – Product logs
    – PL/SQL Profiler (note: 808005.1)
    • Reports Tracing
    – note: 111311.1
    • Database Tuning
    – AWR Report (note: 748642.1)
    – ADDM report (note: 250655.1)
    – Automated Session History (ASH) Report
    – LTOM output (note: 352363.1)
    • Forms Tuning
    • Forms Tracing (note: 373548.1)
    • FRD Log (note: 445166.1)
    – Generic note: 438652.1
    • Middletier Tuning
    – JVM Logs
    – JVM Sizing (note: 362851.1)
    – JDBC Tuning (note: 278868.1)
    • OS
    – OSWatcher (note: 301137.1)

  • Required info on SQL Server Performance Issue Analysis and Troubleshoot way

    Dear All,
    I am going to prepare simple documentation of the steps for SQL Server performance issue analysis and troubleshooting. I am struggling to put this documentation together, since we have several different checks (network latency, disk latency, memory/processor pressure, SQL query tuning, etc.) to validate once an application performance issue is reported by the customer. So I am looking for expert documents or links.
    Your input will help me prepare the document in a better way.
    Thanks in advance.

    Hi,
    Recommendations and Guidelines on configuring disk partitions for SQL Server
    http://support.microsoft.com/kb/2023571
    Disk and File Layout for SQL Server
    https://blogs.technet.com/b/dataplatforminsider/archive/2012/12/19/disk-and-file-layout-for-sql-server.aspx
    Microsoft SQL Server 2012 Performance Tuning: Implementing Physical Database Structure
    http://www.packtpub.com/article/sql-server-2012-implementing-physical-database-strusture
    Database Mirroring Best Practices and Performance Considerations
    http://technet.microsoft.com/en-us/library/cc917681.aspx
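    As one concrete starting point for the wait and disk-latency items on such a checklist (a sketch only, using the standard DMVs; the TOP 10 cutoff is arbitrary):
    -- Top wait types accumulated since the last instance restart
    SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;
    -- Average I/O latency per database file, for the disk-latency check
    SELECT DB_NAME(vfs.database_id) AS database_name, vfs.file_id,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;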
    Hope the information helps.
    Tracy Cai
    TechNet Community Support

  • Performance issue: Java and XSLT

    I have a performance issue concerning Java and XSLT: my goal is to transform an XML file (source.xml) using a given XSL file (transformation.xsl). As a result I would like to get a String object containing the output of the transformation (HTML code), so that I can display it in a browser. The problem is how long the code below takes to run.
    import java.io.File;
    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    File xml = new File("C:\\source.xml");
    StreamSource xmlSource = new StreamSource(xml);
    File xslt = new File("C:\\transformation.xsl");
    StreamSource xsltSource = new StreamSource(xslt);
    TransformerFactory transFact = TransformerFactory.newInstance();
    Transformer trans = transFact.newTransformer(xsltSource);
    StringWriter stringWriter = new StringWriter();
    StreamResult streamResult = new StreamResult(stringWriter);
    trans.transform(xmlSource, streamResult);
    String output = stringWriter.toString();
    stringWriter.close();
    Before, I made the same transformation in an xml development environment, named Cooktop
    (see http://xmlcooktop.com/). The transformation took about 2 seconds. With the code above in Java it
    takes about 20 seconds.
    Is there a way to make the transformation in Java faster?
    Thanks in advance,
    Marcello
    Oldenburg, Germany
    [email protected]

    I haven't tried it, but if you can use Java 6 you could try the new StAX API for XML stream loading.
    Take a look at:
    http://javaboutique.internet.com/tutorials/staxxsl/
    Then, you could cache the xslt in templates:
    ---8<---
    Templates templates = transformerFactory.newTemplates( xsltSource );
    Transformer transformer = templates.newTransformer();
    (here you could probably also cache the Transformer object, but I think it's not thread safe so it's a little trickier..)
    StreamResult result = new StreamResult( System.out );
    transformer.transform(xmlSource, result);
    And don't transform your result to a String; use a Stream or something, so the transformer can start pumping out HTML while working. And if you get an out of memory error, it looks like you have a pretty big XML file...
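    Putting both ideas together (compile the stylesheet once into a Templates object, then stream the result instead of building a String), a minimal sketch might look like this; the source/XSL paths are just the ones from your post, the output file name is made up, and error handling is kept crude:
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import javax.xml.transform.Templates;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class CachedXsltTransform {
        // Compile the stylesheet once; a Templates object is thread safe and reusable.
        private static final Templates TEMPLATES;
        static {
            try {
                TEMPLATES = TransformerFactory.newInstance()
                        .newTemplates(new StreamSource(new File("C:\\transformation.xsl")));
            } catch (Exception e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        public static void transform(File xml, OutputStream out) throws Exception {
            // Transformers are cheap to create from cached Templates but are not thread safe,
            // so create a fresh one per transformation and stream the result instead of
            // collecting it into a String.
            Transformer transformer = TEMPLATES.newTransformer();
            transformer.transform(new StreamSource(xml), new StreamResult(out));
        }

        public static void main(String[] args) throws Exception {
            try (OutputStream out = new FileOutputStream("C:\\result.html")) {
                transform(new File("C:\\source.xml"), out);
            }
        }
    }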
    If you use JSP you could try the built-in JSP tag library for XML, which I think is rather good, and it has support for varReader, which implements StreamSource IIRC.
    /perty

  • ITunes on Windows 7 performance issues - slow and hangs

    Is there any way to improve performance of iTunes running on Windows 7?
    It has always been slow with lots of application hangs and stalls that will resolve themselves if you wait for iTunes to catch up, but I find it incredibly frustrating.
    For example, when iTunes starts and if it is checking podcasts, the entire application is unresponsive until it finishes checking and refreshing podcasts.
    Alternately, when viewing my iPad Air and looking at the list of apps installed via iTunes it can take 30 seconds before iTunes responds.
    Considering that this is running on a system with a quad-core Haswell processor at 3.2Ghz, 16GB of RAM and Windows 7 x64 it is pretty unacceptable.
    The efficiency at which iTunes operates is just appalling.  I can run Adobe Photoshop and many more memory hungry and processor intensive applications than iTunes and get better performance than what I see in iTunes.
    This is not specific to my system either.  All of my family members have iPhones and their own computers and this performance issue exists on every computer I have ever run iTunes on.
    Is there any way to manually tune performance because this is pretty crappy.  It's been bugging me for years, but today just kind of feels like the last straw.

    I guess the answer is, "It's terribad, live with it"

  • Optimizing data load and calculation

    Hi,
    I have a cube that takes more than 2 hours to load and more than 3 hours to calculate (at its fastest build). There are times when my cube loads and calculates for more than 8 hours. My calculation only uses Calc All. I am very new to Essbase and couldn't find a way to minimize the build time of my cube.
    Can anybody help? Here are some stats about my cube. I hope this helps.
    Dimension Name Type Declared Size Actual Size
    ===================================================================
    ALL_ACCOUNTS DENSE 7038 6141 Accounts <5> (Dynamic Calc)
    ALL_LEDGERS SPARSE 4 3 <1> (Label Only)
    ALL_YEARS SPARSE 3 1 <1> (Label Only)
    ALL_MONTHS SPARSE 22 22 Time <7> (Active Dynamic Time Series Members: Y-T-D, Q-T-D)
    ALL_FUNCTIONS SPARSE 55 54 <9>
    ALL_AFFILIATES SPARSE 715 696 <4>
    ALL_BUSINESS_UNITS SPARSE 452 440 <3>
    ALL_MCC SPARSE 1557 1536 <3>
    Any suggestions would be greatly appreciated.
    Thanks!
    Joe

    Joe,
    There are too many potential optimizations to list and not enough detail to make any one or two suggestions. I can see some potential areas for improvement, but your best bet is to bring in a knowledgeable consultant for a couple of days to review the cube and make changes. For example, at one client I made changes that brought a calculation down from 4+ hours to 5 minutes. It took changes to load rules, calc scripts and how they loaded their data. So it was not one thing, but multiple changes.
    If you look at Jason's Hyperion Blog http://www.jasonwjones.com/?m=200908 , he describes taking a calculation down from 20 minutes to a few seconds. Again, not a single change, but a combination.

  • Performance Issue : Application and oracle database in different sub-net

    Hi,
    We have a 24/7 application that uses Oracle 11g R2 as the back-end database. The application sits on a separate box and the Oracle database sits on a separate box.
    Unless we keep both machines in the same sub-net, the throughput of the application becomes very slow and it is nearly unusable in performance setups.
    Under full load the application inserts around 12K records per minute into the database. In that scenario, restarting the application takes much longer (more than 2 hours) when the Oracle server is on a different network than the application. In the real world, Oracle will be in a separate dedicated network and the DBAs are reluctant to have the application in the same sub-net.
    Is there a way we can keep the application and the Oracle database server in different networks (but in the same location) and achieve the same throughput/performance as when both servers are on the same subnet?
    Thanks,
    Krishna

    871609 wrote:
    Is there a way we can keep the application and the Oracle database server in different networks (but in the same location) and achieve the same throughput/performance as when both servers are on the same subnet?
    Have the DBAs explained why they resist having the apps and db servers in the same subnet? Every place I've ever worked configured it exactly that way ... db and apps servers on different machines in the same subnet.

  • Performance issue loading data out of AS400

    Hi,
    For loading data out of AS400, I have created a view containing a join between three AS400 tables connected with a database link (And some more changes in the file tnsnames.ora and the listener. Hell of a job with Oracle, but it works finally)
    When I use the tool Toad, the results of this query will be shown in about 20 seconds.
    When I use this view in OWB to load this data into a target table, then the load takes about 15 MINUTES!
    Why is this so slow?
    Do I have to configure something in OWB to make this load faster?
    Other loads when I'm using views (to Oracle tables) to load data are running fast.
    It seems that Oracle internally does more than just running the view.
    Who knows?
    Regards,
    Maurice

    Maurice,
    OWB generates optimized code based on whether sources are local or remote. With remote sources, Warehouse Builder will generate code that uses inline views in order to minimize network traffic.
    In your case, you confuse the code generation by creating a view that does remote/local joins while telling OWB that the object is local (which is only partly true).
    Perhaps what you could do is create one-to-one views and leave it up to OWB to join the objects. One additional advantage you gain with this approach is that you can base your impact analysis on your source tables, rather than on views that sit on the tables with flat text queries.
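    For example (a sketch only; the AS400 table names and the database link name below are placeholders), the one-to-one views could be as simple as:
    CREATE OR REPLACE VIEW v_as400_orders AS
      SELECT * FROM orders@as400_link;
    CREATE OR REPLACE VIEW v_as400_order_lines AS
      SELECT * FROM order_lines@as400_link;
    CREATE OR REPLACE VIEW v_as400_customers AS
      SELECT * FROM customers@as400_link;
    You would then import these views into OWB and define the three-way join in the mapping itself, so the generator knows what it is dealing with.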
    Mark.

  • Sudden performance issues, crashing and beach ball

    The last week or so Safari slowed down dramatically and kept giving me the spinning beach ball. Now this is happening throughout my system: after a restart it seems to work OK for a few minutes, then will barely function at all in any application, usually resulting in a spinning beach ball that never stops, or the system freezes completely.
    I'm running 10.8.4, have the RAM maxed out, plenty of free HD space. I've done some basic troubleshooting, including verifying the disk, and also ran DiskWarrior from an external DVD.
    Still having the same issues.
    I work at home and this is my main computer so I'm trying to get it fixed soon.
    thanks in advance!
    Jeff

    Back up all data immediately as your boot drive may be failing.
    If you have more than one user account, these instructions must be carried out as an administrator. I've tested them only with the Safari web browser. If you use another browser, they may not work as described.
    Triple-click anywhere in the line below on this page to select it:
    syslog -k Sender kernel -k Message CReq 'Channel t|GPU D|I/O|n Cause: -' | tail | open -ef
    Copy the selected text to the Clipboard (command-C).
    Launch the Terminal application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Terminal in the icon grid.
    Paste into the Terminal window (command-V).
    The command may take a noticeable amount of time to run. Wait for a new line ending in a dollar sign (“$”) to appear.
    A TextEdit window will open with the output of the command. Normally the command will produce no output, and the window will be empty. If the TextEdit window (not the Terminal window) has anything in it, post it — the text, please, not a screenshot. The title of the TextEdit window doesn't matter, and you don't need to post that.

  • Performance issue loading 4000 records from XML

    Hello, I'm trying to load records into a table, using the SQL statements shown further below, from an XML document with content of this type:
    <?xml version="1.0" encoding="UTF-8"?>
    <custom-objects xmlns="http://www.mysite.com/xml/impex/customobject/2006-10-31">
        <custom-object type-id="NEWSLETTER_SUBSCRIBER" object-id="[email protected]">
      <object-attribute attribute-id="customer-no"><value>BLY00000001</value></object-attribute>
      <object-attribute attribute-id="customer_type"><value>registered</value></object-attribute>
            <object-attribute attribute-id="title"><value>Mr.</value></object-attribute>
            <object-attribute attribute-id="first_name"><value>Jean paul</value></object-attribute>
            <object-attribute attribute-id="is_subscribed"><value>true</value></object-attribute>
            <object-attribute attribute-id="last_name"><value>Pennati Swiss</value></object-attribute>
            <object-attribute attribute-id="address_line_1"><value>newsletter ADDRESS LINE 1 data</value></object-attribute>
            <object-attribute attribute-id="address_line_2"><value>newsletter ADDRESS LINE 2 data</value></object-attribute>
            <object-attribute attribute-id="address_line_3"><value>newsletter ADDRESS LINE 3 data</value></object-attribute>
            <object-attribute attribute-id="housenumber"><value>newsletter HOUSENUMBER data</value></object-attribute>
            <object-attribute attribute-id="city"><value>newsletter DD</value></object-attribute>
            <object-attribute attribute-id="post_code"><value>6987</value></object-attribute>
            <object-attribute attribute-id="state"><value>ASD</value></object-attribute>
            <object-attribute attribute-id="country"><value>ES</value></object-attribute>
            <object-attribute attribute-id="phone_home"><value>0044 1234567 newsletter phone_home</value></object-attribute>
            <object-attribute attribute-id="preferred_locale"><value>fr_CH</value></object-attribute>
            <object-attribute attribute-id="exported"><value>true</value></object-attribute>
            <object-attribute attribute-id="profiling"><value>true</value></object-attribute>
            <object-attribute attribute-id="promotions"><value>true</value></object-attribute>
            <object-attribute attribute-id="source"><value>https://www.mysite.com</value></object-attribute>
            <object-attribute attribute-id="source_ip"><value>10.10.1.1</value></object-attribute>
            <object-attribute attribute-id="pr_product_serial_number"><value>000123345678 product serial no.</value></object-attribute>
            <object-attribute attribute-id="pr_purchased_from"><value>Store where product to be registered was purchased</value></object-attribute>
            <object-attribute attribute-id="pr_date_of_purchase"><value></value></object-attribute>
            <object-attribute attribute-id="locale"><value>fr_CH</value></object-attribute> 
        </custom-object>
        <custom-object type-id="NEWSLETTER_SUBSCRIBER" object-id="[email protected]">
       <object-attribute attribute-id="customer-no"><value></value></object-attribute>
       <object-attribute attribute-id="customer_type"><value>unregistered</value></object-attribute>
            <object-attribute attribute-id="title"><value>Mr.</value></object-attribute>
            <object-attribute attribute-id="first_name"><value>Jean paul</value></object-attribute>
            <object-attribute attribute-id="is_subscribed"><value>true</value></object-attribute>
            <object-attribute attribute-id="last_name"><value>Pennati Swiss</value></object-attribute>
            <object-attribute attribute-id="address_line_1"><value>newsletter ADDRESS LINE 1 data</value></object-attribute>
            <object-attribute attribute-id="address_line_2"><value>newsletter ADDRESS LINE 2 data</value></object-attribute>
            <object-attribute attribute-id="address_line_3"><value>newsletter ADDRESS LINE 3 data</value></object-attribute>
            <object-attribute attribute-id="housenumber"><value>newsletter HOUSENUMBER data</value></object-attribute>
            <object-attribute attribute-id="city"><value>newsletter CASLANO</value></object-attribute>
            <object-attribute attribute-id="post_code"><value>6987</value></object-attribute>
            <object-attribute attribute-id="state"><value>TICINO</value></object-attribute>
            <object-attribute attribute-id="country"><value>CH</value></object-attribute>
            <object-attribute attribute-id="phone_home"><value>0044 1234567 newsletter phone_home</value></object-attribute>
            <object-attribute attribute-id="preferred_locale"><value>fr_CH</value></object-attribute>
            <object-attribute attribute-id="exported"><value>true</value></object-attribute>
            <object-attribute attribute-id="profiling"><value>true</value></object-attribute>
            <object-attribute attribute-id="promotions"><value>true</value></object-attribute>
            <object-attribute attribute-id="source"><value>https://www.mysite.com</value></object-attribute>
            <object-attribute attribute-id="source_ip"><value>85.219.17.170</value></object-attribute>
            <object-attribute attribute-id="pr_product_serial_number"><value>000123345678 product serial no.</value></object-attribute>
            <object-attribute attribute-id="pr_purchased_from"><value>Store where product to be registered was purchased</value></object-attribute>
            <object-attribute attribute-id="pr_date_of_purchase"><value></value></object-attribute>
            <object-attribute attribute-id="locale"><value>fr_CH</value></object-attribute> 
        </custom-object>
    </custom-objects>
    I use the following sequence of queries to do the insert (XML_FILE is passed to the procedure as an XMLType):
    INSERT INTO DW_CUSTOMER.NEWSLETTERS (
       BRANDID,
       CUSTOMER_EMAIL,
   DW_WEBSITE_TAG)
    Select
    p_brandid as BRANDID,
    CUSTOMER_EMAIL,
    p_website
    FROM
    (select XML_FILE from dual) p,
    XMLTable(
    xmlnamespaces(default 'http://www.mysite.com/xml/impex/customobject/2006-10-31'),
    '/custom-objects/custom-object' PASSING p.XML_FILE
    COLUMNS
    customer_email PATH '@object-id'
    ) CUSTOMER_LEVEL1;
    INSERT INTO DW_CUSTOMER.NEWSLETTERS_C_ATT (
       BRANDID, 
       CUSTOMER_EMAIL,
       CUSTOMER_NO, 
       CUSTOMER_TYPE,
       TITLE,
       FIRST_NAME,
       LAST_NAME,
       PHONE_HOME,
       BIRTHDAY,
       ADDRESS1,
       ADDRESS2,
       ADDRESS3,
       HOUSENUMBER,
       CITY,
       POSTAL_CODE,
       STATE,
       COUNTRY,
       IS_SUBSCRIBED,
       PREFERRED_LOCALE,
       PROFILING,
       PROMOTIONS,
       EXPORTED,
       SOURCE,
       SOURCE_IP,
       PR_PRODUCT_SERIAL_NO,
       PR_PURCHASED_FROM,
       PR_PURCHASE_DATE,
       LOCALE,
       DW_WEBSITE_TAG)
        with mainq as (
            SELECT
            CUST_LEVEL1.customer_email as CUSTOMER_EMAIL,
            CUST_LEVEL2.*
            FROM
            (select XML_FILE from dual) p,
            XMLTable(
            xmlnamespaces(default 'http://www.mysite.com/xml/impex/customobject/2006-10-31'),
            '/custom-objects/custom-object' PASSING p.XML_FILE
            COLUMNS
            customer_email PATH '@object-id',
            NEWSLETTERS_C_ATT XMLType PATH 'object-attribute'
            ) CUST_LEVEL1,
            XMLTable(
            xmlnamespaces(default 'http://www.mysite.com/xml/impex/customobject/2006-10-31'),
            '/object-attribute' PASSING CUST_LEVEL1.NEWSLETTERS_C_ATT
            COLUMNS
            attribute_id PATH '@attribute-id',
            thevalue PATH 'value'
            ) CUST_LEVEL2
        )
        select
        p_brandid
        ,customer_email
        ,nvl(max(decode(attribute_id,'customer_no',thevalue)),SET_NEWSL_CUST_ID) customer_no   
        ,max(decode(attribute_id,'customer_type',thevalue)) customer_type
        ,max(decode(attribute_id,'title',thevalue)) title
        ,substr(max(decode(attribute_id,'first_name',thevalue)) ,1,64)first_name
        ,substr(max(decode(attribute_id,'last_name',thevalue)) ,1,64) last_name
        ,substr(max(decode(attribute_id,'phone_home',thevalue)) ,1,64) phone_home
        ,max(decode(attribute_id,'birthday',thevalue)) birthday
        ,substr(max(decode(attribute_id,'address_line1',thevalue)) ,1,100) address_line1
        ,substr(max(decode(attribute_id,'address_line2',thevalue)) ,1,100) address_line2
        ,substr(max(decode(attribute_id,'address_line3',thevalue)) ,1,100) address_line3   
        ,substr(max(decode(attribute_id,'housenumber',thevalue)) ,1,64) housenumber
        ,substr(max(decode(attribute_id,'city',thevalue)) ,1,128) city
        ,substr(max(decode(attribute_id,'post_code',thevalue)) ,1,64) postal_code
        ,substr(max(decode(attribute_id,'state',thevalue)),1,256) state
        ,substr(max(decode(attribute_id,'country',thevalue)),1,32) country
        ,max(decode(attribute_id,'is_subscribed',thevalue)) is_subscribed
        ,max(decode(attribute_id,'preferred_locale',thevalue)) preferred_locale
        ,max(decode(attribute_id,'profiling',thevalue)) profiling
        ,max(decode(attribute_id,'promotions',thevalue)) promotions
        ,max(decode(attribute_id,'exported',thevalue)) exported   
        ,substr(max(decode(attribute_id,'source',thevalue)),1,256) source   
        ,max(decode(attribute_id,'source_ip',thevalue)) source_ip       
        ,substr(max(decode(attribute_id,'pr_product_serial_number',thevalue)),1,64) pr_product_serial_number
        ,substr(max(decode(attribute_id,'pr_purchased_from',thevalue)),1,64) pr_purchased_from   
        ,substr(max(decode(attribute_id,'pr_date_of_purchase',thevalue)),1,32) pr_date_of_purchase
        ,max(decode(attribute_id,'locale',thevalue)) locale
        ,p_website   
        from
        mainq
        group by customer_email, p_website
    I CANNOT MANAGE TO INSERT 4000 records in less than 30 minutes!
    Can you help or advise how to reduce this to reasonable timings?
    Thanks

    Simplified example on a few attributes :
    -- INSERT INTO tmp_xml VALUES ( xml_file );
    INSERT ALL
      INTO newsletters (brandid, customer_email, dw_website_tag)
      VALUES (p_brandid, customer_email, p_website)
      INTO newsletters_c_att (brandid, customer_email, customer_no, customer_type, title, first_name, last_name)
      VALUES (p_brandid, customer_email, customer_no, customer_type, title, first_name, last_name)
    SELECT o.*
    FROM tmp_xml t
       , XMLTable(
           xmlnamespaces(default 'http://www.mysite.com/xml/impex/customobject/2006-10-31')
         , '/custom-objects/custom-object'
           passing t.object_value
           columns customer_email varchar2(256) path '@object-id'
                 , customer_no    varchar2(256) path 'object-attribute[@attribute-id="customer-no"]/value'
                 , customer_type  varchar2(256) path 'object-attribute[@attribute-id="customer_type"]/value'
                 , title          varchar2(256) path 'object-attribute[@attribute-id="title"]/value'
                 , first_name     varchar2(64)  path 'object-attribute[@attribute-id="first_name"]/value'
                 , last_name      varchar2(64)  path 'object-attribute[@attribute-id="last_name"]/value'
         ) o

  • ADF-JSF: Application Performance Issue

    Hello!
    My question, or set of questions, will be a bit vague... I am simply not sure where to look for the problem(s). So here is what I have. The application is implemented with ADF-JSF (JDev version 10.1.3.2.0). It basically has 5 pages. Each page contains a user input form, a commandButton and a result table. Functionally, each page is a 'search page' that returns results based on what the user specified in the form. Components on each page are bound to a VO that is based on an EO (DB table). The tables have from 2.5M up to 16M records. Certain indexes exist (for the most common searches) to improve performance. However, performance issues have been found, and largely they fall into the following groups:
    1. The user is on page A, performs a search, goes to page B (via a link) and performs another search, then goes back to A, where a similar search takes much longer to return results. It seems to me that this might be related to memory. Maybe the results of the previous search are cached and the new search takes longer because the VO cache needs to be cleared first. Does that make sense?
    2. The user is on page A and then goes to B, leaves the browser for 10-20 minutes and tries to go back to A. It takes up to a minute before the page reloads with the previously displayed results. I am thinking this has to be related to the page lifecycle, where the AM tries to re-execute bindings (I do not think it is a passivation issue, though). What is the best practice for controlling the lifecycle?
    Any pointer on where to look for the solution is very welcome.
    Rade

    Carl,
    To use Tom Kyte's analogy, you are firing a gun into a room full of people hoping to hit the bad guy. You don't seem to have gathered any information about where the performance issues lie. It could be in the DB, network, ADF Business Components, JSF layer, other stuff monopolizing resources, etc, etc. I have ADF BC apps developed in 10.1.3.3 that run quite well.
    So, I would recommend you spend some time investigating where the performance problems lie. Try turning on logging output, check machine utilization - use your investigative techniques to find the bad guy so you can then work on fixing him.
    John

  • Performance issue in first run

    Hi Experts,
    I have a performance issue. In the first run of a Z program, performance is very poor, but in the second run it is fast. Performance is affected by one SELECT query on table FAGLFLEXA. No buffering is selected at the technical settings level for this table. Please advise on this case.

    Hello Swapnil,
    Please turn on an SQL trace in ST05 when you experience the performance issue again, and ask your developer to tune the Z program.
    Thanks,
    Siva Kumar

  • Property Loader and then Save File

    I have an application where I'm programmatically building my sequence file from a spec document.  I generate a CSV that has all of my properties, and build a dummy sequence from template functions.  I can import the properties to this file using Import/Export, save the file, and then run the rest of my generator sequence, but I need to automate the whole process.
    I am able to use the property loader step from the sequence that I'm actually generating and the parameters are loaded without a problem.  I attempted to move this into my sequence that builds that sequence, and it didn't work.  I GetSequenceFileEX before loading to make sure it's open, perform the load, and then Save the sequence file and ReleaseSequenceFileEX, but it's not doing anything.  Is what I'm trying to do possible, and if so, what am I doing wrong?
    Thanks,
    Bryan

    Hi Bryan,
    First let me make sure I understand your situation.  It sounds like you've already built the sequence that can programmatically build a sequence of template functions then populate using a Property Loader step in the generated sequence.  You are hoping to remove that Property Loader from the generated sequence and instead have the generating sequence load the properties and embed them as defaults in the generated sequence.  This way you would avoid having to run the Property Load step every time you ran this sequence.  Is this correct?
    Assuming it is, I believe this exceeds the capabilities of the Property Loader functionality.  It isn't designed to "push" properties to another sequence.  Therefore your options are to either stick with the Property Loader step at the beginning of your generated sequence or implement the parsing and assigning of the CSV to manually populate the template steps.
    Please post back if I misunderstood you or you have additional questions.

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes. It's urgent.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. It's urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Generic extraction loading and performances issues

    Hi,
    Can anyone give details about generic extraction loading as well as its performance issues?
    Thanks in advance.
    Regards,
    Praveen

    Hi,
    When there is no suitable Business Content DataSource, we go for creating a generic DataSource.
    By using a generic DataSource we can extract data present in a single table or in multiple tables.
    If the data is present in a single table, we create a generic DataSource extracting from a table.
    If the data to be extracted is present in multiple tables, the relation between the tables is one-to-one, and there is a common field in both tables, we create a view on those tables and create a generic DataSource extracting from the view.
    If you want to extract data from different tables and there is no common field, we create an InfoSet on those tables and create a generic DataSource extracting from a query.
    If you want to extract data from different tables and the relation is one-to-many or many-to-many, we create a generic DataSource from a function module.
    If we extract from a function module, at run time it has to execute the code and bring the data to BW, so it degrades the loading performance.
    Regards,
