JSF 1.1 performance, especially UIData and Data Table

Hi,
Does anybody have any JSF 1.1 (Sun reference implementation) performance experiences to share? I am currently looking at the data table component and the use of UIData. My initial observation is that an incredible amount of memory is churned while rendering the data table, with the following classes being the main culprits:
java.util.HashMap$KeyIterator
javax.faces.component.UIComponentBase$ChildrenListIterator
java.util.AbstractList$Itr
char[]
java.util.ArrayList
javax.faces.component.UIComponentBase$FacetsMapKeySetIterator
javax.faces.component.UIComponentBase$FacetsMapKeySet
javax.faces.component.UIComponentBase$FacetsMapValues
javax.faces.component.UIComponentBase$FacetsAndChildrenIterator
To render 50 rows with 10 columns (each column only having a simple outputText component) I'm seeing 1.3 MB of memory churned and 0.8 seconds of processing time.
To render 100 rows with the same columns and components I'm seeing nearly 2 MB churned and 2 seconds of processing time.
UIData.setRowIndex is a large culprit.
I'm really after hearing your experiences with JSF performance and its scalability.
Any help here is appreciated.
Thanks - JJ
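
(For anyone wanting to reproduce numbers like these, here is a minimal, hedged sketch of a before/after heap probe in plain Java. The doWork() method is a made-up stand-in, not the actual JSF render pass, and System.gc() is only a hint to the JVM, so treat the output as a rough indicator rather than a precise measurement.)

public class RenderChurnProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // a hint only; the JVM may ignore it
        long before = rt.totalMemory() - rt.freeMemory();
        long start = System.currentTimeMillis();
        doWork(); // placeholder for the data table render pass being profiled
        long elapsedMs = System.currentTimeMillis() - start;
        long churnedBytes = (rt.totalMemory() - rt.freeMemory()) - before;
        System.out.println("allocated ~" + churnedBytes + " bytes in " + elapsedMs + " ms");
    }

    private static void doWork() {
        // Hypothetical workload: build 50 rows x 10 columns of output text.
        StringBuilder sb = new StringBuilder();
        for (int row = 0; row < 50; row++) {
            for (int col = 0; col < 10; col++) {
                sb.append("cell-").append(row).append('-').append(col);
            }
        }
    }
}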

Similar Messages

  • Issue while performing the Task and Data Audit in Workspace

    Hi All,
    When we try to perform the task and data audit functions in Workspace, it throws the following error. Please advise how to fix this issue.
    An error has occurred. Please contact your administrator.
    Show Details:
    Error Reference Number: {110FD8BF-AD33-4257-BA3D-46FF30208EA2};User Name: admin@Native Directory
    Num: 0x80040e31;Type: 1;DTime: 6/24/2010 3:42:55 AM;Svr: USNSAP03EX;File: CHsvSystemActivity.cpp;Line: 479;Ver: 11.1.1.2.0.2207;DStr: Timeout expired;
    Num: 0x80004005;Type: 0;DTime: 6/24/2010 3:42:55 AM;Svr: USNSAP03EX;File: SystemInfoUsersActivities.cpp;Line: 617;Ver: 11.1.1.2.0.2207;
    Num: 0x80004005;Type: 0;DTime: 6/24/2010 3:42:55 AM;Svr: USNSAP03EX;File: CHsvSystemInfo.cpp;Line: 2664;Ver: 11.1.1.2.0.2207;
    Num: 0x80004005;Type: 0;DTime: 6/24/2010 3:42:55 AM;Svr: USNSAP03EX;File: CHFMwSystemInfo.cpp;Line: 1846;Ver: 11.1.1.2.0.2207;
    Thanks in advance,

    If you have any SELECT FOR UPDATE statements in your code, they will take locks, and a lock is not released until the session that locked the record releases it. This can be one reason why you see this behavior.
    You need to trace through your code to see whether it explicitly locks one or more rows.
    Tony
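
    As a hedged illustration of Tony's point (the connection string, credentials, and the tasks table below are made up for the example): a row locked by SELECT FOR UPDATE stays locked until the session commits or rolls back, and any other session asking for the same row will block or time out, much like the "Timeout expired" in the error above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class RowLockDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details, for illustration only.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "pass");
            con.setAutoCommit(false);
            PreparedStatement ps = con.prepareStatement(
                    "SELECT status FROM tasks WHERE id = ? FOR UPDATE");
            ps.setLong(1, 42L);
            ResultSet rs = ps.executeQuery(); // the row is now locked by this session
            // ... any work done here keeps the lock held ...
            rs.close();
            ps.close();
            con.commit(); // releases the lock; rollback() or close() would too
            con.close();
        }
    }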

  • View Customization Table changes and Data  table base changes in a report

    Hi All,
    How do I view customization table changes and data table base changes in a report?
    Are SCU3 or RSVTPROT the right transactions?
    Also, please explain the concept of an audit trail.
    Thanks
    SD

    Hi,
    Changes to master data objects must be captured for compliance purposes. The audit log allows you to view and print changes to master data objects for a chosen period. It is very common for external auditors to focus on what has changed from one year or quarter to the next to help determine the nature, extent, and population for testing. To configure and access your audit log, perform the actions listed with each of the following utilities.
    Audit Trail is used to track the record changes.
    The report RPUAUD00 lists all the changes made to master data by any user at any time.
    But before using this audit trail, please ensure that the system hardware is well equipped, as activating audit trails can later become a performance issue: the logs occupy a lot of space over time.
    Best Regards,
    Venkat.

  • Splitting of catalogue and data tables in their core BO management database

    Hi All,
    Could you please share your thoughts on the recommendation below from our DBAs:
    "It is recommended to split catalogue tables from data tables and other data objects (e.g. indexes) at the storage level."
    Is splitting the catalogue and data tables in the core BO management database a recommendable solution?
    Many Thanks,
    Madhu
    Edited by: Madhu P on Jun 11, 2008 11:56 AM

    Dear Madhu,
    it is really safe to separate the BO management tables (the repository) from the tables containing the data.
    We went a step further and created a BO database (on a dedicated server) containing only the repository.
    Because the BO database may provide metadata for different universes pointing to different databases containing the data, this separation gives us easier administration and some performance advantages.
    bye
    yk

  • Improve Performance of Dimension and Fact table

    Hi All,
    Can anyone explain the steps to improve the performance of dimension and fact tables?
    Thanks in advance...
    redd

    Hi!
    There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
    First of all, try to compress as many requests as possible in the fact table, and do so regularly.
    Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
    Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
    Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
    Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
    To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects. The ones that have low cardinality, that is, those that relate to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension and separate the dimension.
    Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
    Generate database statistics regularly using process chains, or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
    Good luck!
    Kind Regards
    Andreas

  • Weird problem with mysql query and data table buttons !!!!

    Hi,
    I'm using JSC 2 Update 1 on Windows and MySQL 4.1. I have a page with a data table. One column of the data table contains "Details" buttons.
    Source query for the table is :
    SELECT tbl_tesserati.idtbl_tesserati idTesserato,
    tbl_tesserati.num_tessera,
    tbl_tesserati.nome,
    tbl_societa.codice_meccanografico
    FROM tbl_tesserati
    INNER JOIN tbl_rel_tesserato_discipline_societa ON tbl_tesserati.idtbl_tesserati = tbl_rel_tesserato_discipline_societa.id_tesserato
    INNER JOIN tbl_cariche ON      tbl_rel_tesserato_discipline_societa.id_carica = tbl_cariche.idtbl_cariche
    INNER JOIN tbl_qualifiche ON      tbl_rel_tesserato_discipline_societa.id_qualifica = tbl_qualifiche.idtbl_qualifiche
    INNER JOIN tbl_discipline ON      tbl_rel_tesserato_discipline_societa.id_disciplina = tbl_discipline.idtbl_discipline
    INNER JOIN tbl_societa ON      tbl_rel_tesserato_discipline_societa.id_societa = tbl_societa.idtbl_societa
    LEFT JOIN tbl_province ON tbl_societa.provincia_sede_sociale = tbl_province.idtbl_province
    LEFT JOIN tbl_comuni ON tbl_societa.comune_sede_sociale = tbl_comuni.idtbl_comuni
    LEFT JOIN tbl_rel_tesserato_discipline_praticate ON tbl_rel_tesserato_discipline_praticate.tessera_id=
    tbl_rel_tesserato_discipline_societa.idtbl_rel_tesserato_discipline
    LEFT JOIN tbl_discipline_praticate ON tbl_discipline_praticate.idtbl_disciplina_praticate=tbl_rel_tesserato_discipline_praticate.disciplina_praticata_id
    WHERE
    tbl_tesserati.cognome LIKE ?
    AND tbl_tesserati.nome LIKE ?
    AND tbl_rel_tesserato_discipline_societa.id_societa LIKE ?
    AND tbl_tesserati.idtbl_tesserati LIKE ?
    AND tbl_cariche.idtbl_cariche LIKE ?
    AND tbl_qualifiche.idtbl_qualifiche LIKE ?
    AND tbl_tesserati.data_nascita >= ?
    AND tbl_tesserati.data_nascita<= ?
    AND tbl_discipline.idtbl_discipline LIKE ?
    AND codice_affiliazione LIKE ?
    AND tbl_societa.denominazione LIKE ?
    AND YEAR(tbl_rel_tesserato_discipline_societa.data_scadenza) LIKE ?
    AND (tbl_province.nome LIKE ? OR tbl_province.nome IS NULL)
    AND ( tbl_comuni.nome LIKE ? OR tbl_comuni.nome IS NULL)
    The tbl_tesserati.data_nascita is a mysql date field.
    The click event handler code for the "Details" Button is:
    public String btnModificaTesserato_action() {
        try {
            TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRowTesserati");
            getRequestBean1().setId_tesserato((Long) rowData.getValue("idTesserato"));
        } catch (Exception ex) {
            log("errore nella query", ex); // "error in the query"
        }
        return "dettaglioTesseratoSocieta";
    }
    When I run the project and open the page, the table is correctly rendered and populated with some rows. But when I click a Details button nothing happens; the page is simply reloaded.
    If I set a breakpoint on the line TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRowTesserati"); the debugger does not stop the code execution, as if the button was never clicked!
    I tried to modify the source query to :
    SELECT tbl_tesserati.idtbl_tesserati idTesserato,
    tbl_tesserati.num_tessera,
    tbl_tesserati.nome,
    tbl_societa.codice_meccanografico
    FROM tbl_tesserati
    INNER JOIN tbl_rel_tesserato_discipline_societa ON tbl_tesserati.idtbl_tesserati = tbl_rel_tesserato_discipline_societa.id_tesserato
    INNER JOIN tbl_cariche ON      tbl_rel_tesserato_discipline_societa.id_carica = tbl_cariche.idtbl_cariche
    INNER JOIN tbl_qualifiche ON      tbl_rel_tesserato_discipline_societa.id_qualifica = tbl_qualifiche.idtbl_qualifiche
    INNER JOIN tbl_discipline ON      tbl_rel_tesserato_discipline_societa.id_disciplina = tbl_discipline.idtbl_discipline
    INNER JOIN tbl_societa ON      tbl_rel_tesserato_discipline_societa.id_societa = tbl_societa.idtbl_societa
    LEFT JOIN tbl_province ON tbl_societa.provincia_sede_sociale = tbl_province.idtbl_province
    LEFT JOIN tbl_comuni ON tbl_societa.comune_sede_sociale = tbl_comuni.idtbl_comuni
    LEFT JOIN tbl_rel_tesserato_discipline_praticate ON tbl_rel_tesserato_discipline_praticate.tessera_id=
    tbl_rel_tesserato_discipline_societa.idtbl_rel_tesserato_discipline
    LEFT JOIN tbl_discipline_praticate ON tbl_discipline_praticate.idtbl_disciplina_praticate=tbl_rel_tesserato_discipline_praticate.disciplina_praticata_id
    WHERE
    tbl_tesserati.cognome LIKE ?
    AND tbl_tesserati.nome LIKE ?
    AND tbl_rel_tesserato_discipline_societa.id_societa LIKE ?
    AND tbl_tesserati.idtbl_tesserati LIKE ?
    AND tbl_cariche.idtbl_cariche LIKE ?
    AND tbl_qualifiche.idtbl_qualifiche LIKE ?
    AND tbl_tesserati.data_nascita >= ?
    OR tbl_tesserati.data_nascita<= ?
    AND tbl_discipline.idtbl_discipline LIKE ?
    AND codice_affiliazione LIKE ?
    AND tbl_societa.denominazione LIKE ?
    AND YEAR(tbl_rel_tesserato_discipline_societa.data_scadenza) LIKE ?
    AND (tbl_province.nome LIKE ? OR tbl_province.nome IS NULL)
    AND ( tbl_comuni.nome LIKE ? OR tbl_comuni.nome IS NULL)
    Using this query everything works well! The click handler works, and so does the debugger.
    The only change was the AND to an OR!
    I also tried changing the MySQL connector driver, but that did not solve the problem.
    Can someone help me?
    Thanks
    Giorgio

    You'll find that this has more to do with the way MySQL deals with dates than anything else! Depending on how your date field is set up, try using a BETWEEN statement for those two lines in your first query, e.g.
    AND ( tbl_tesserati.data_nascita BETWEEN ? AND ?)
    The date column needs to be in ISO format for this to work. If you examine your second query's output, you might discover that it effectively honors only one parameter (probably the OR one). Did you manage to view the output logs from the application server? You would have got an idea from there, with a message stating something like 'conversion error'.
    Alternatively, you could try using the to_days() function and convert it directly to a number which would be a lot easier to deal with. For example:
    AND to_days(tbl_tesserati.data_nascita) >= to_days(?)
    AND to_days(tbl_tesserati.data_nascita) <= to_days(?)
    Or try the BETWEEN version with to_days() and see what you get.
    More info about date formatting (v5) here:
    http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html#function_to-days
    Before I forget: sometimes you may need to treat dates as Strings rather than as 'Long', as you did.
    As a matter of interest, did you try your query in a different piece of software?
    If my queries are a little more complicated, I tend to try MySQL queries out in the free MySQL Query Browser and also double-check in another tool to verify certain issues. I find it easier to develop SQL in a separate program and then import the final version into JSC, making the required modifications for parameters.
    Message was edited by:
    aerostra
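
    A hedged sketch of the BETWEEN variant with explicit date binding (the connection and the full query are assumed; the table and column names follow the query above, and the bounds are made up):

    import java.sql.Connection;
    import java.sql.Date;
    import java.sql.PreparedStatement;

    public class DateRangeBinding {
        // Binding the bounds as java.sql.Date (ISO yyyy-MM-dd) sidesteps
        // the String-versus-Long ambiguity discussed above.
        static PreparedStatement bindRange(Connection con) throws Exception {
            PreparedStatement ps = con.prepareStatement(
                "SELECT nome FROM tbl_tesserati WHERE data_nascita BETWEEN ? AND ?");
            ps.setDate(1, Date.valueOf("1950-01-01")); // hypothetical lower bound
            ps.setDate(2, Date.valueOf("2006-12-31")); // hypothetical upper bound
            return ps;
        }
    }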

  • Business Graphics and Data Tables

    Hi All,
    I have a SAPGUI application which uses the SAP Chart to display a pretty simple column chart with a data table underneath.
    With the SAPGUI Chart Options this is set as so: [Chart Options |http://www.assetcentric.com.au/images/chart-options.png]
    The result is a chart looking like so:
    [Chart Example|http://www.assetcentric.com.au/images/chart-example.png]
    Does anybody know whether this is possible to achieve with the IGS from WebDynpro?
    Regards
    Andrew

    Hi,
    If you are using Web Dynpro for ABAP, then we can use the Business Graphics UI element, which offers different chart options (bar, pie, lines, etc.) for the graphs.
    BG in Webdynpro -
    http://help.sap.com/saphelp_nw70/helpdata/EN/ed/258841a79f1609e10000000a155106/content.htm
    For WD4Java we need a separate IGS server to render the graphs, but in WD4A we don't need it. We can design Web Dynpro applications in SE80 and run them directly in the browser.
    We can use the data from a BAPI by making a service call to that BAPI, and bind the tables to this Business Graphics UI element for display.
    Regards
    Lekha

  • How to make a jsf parameter form with a selectOneChoice and data control

    I want to make a parameter form with a selectOneChoice and store the chosen value in a backing bean, so I can use it as a bind variable in a query. I already made it work without a data control.
    Can someone show how to do it with an ADF data control? With the wizard, it tries to put the value in a view object, but I don't have a base or destination view object. I can change the pagedef so the selectOneChoice gets filled, but then I want to store the selected value in a session bean.
    Here is my solution with the application module and request/session beans; I don't think it is the right solution.
    Thanks, Edwin
    JSF page:
    <af:selectOneChoice label="Desk" value="#{selectDesk.desk}"
    id="DeskID"
    binding="#{selectDesk.deskBinding}"
    autoSubmit="true"
    valueChangeListener="#{selectDesk.deskChangeListener}">
    <f:selectItems value="#{selectDesk.deskSelectItems}"/>
    </af:selectOneChoice>
    SelectDesk backing bean:
    public SelectDesk() {
        FacesContext facesContext = FacesContext.getCurrentInstance();
        ValueBinding valueBinding = facesContext.getApplication().createValueBinding("#{userInfo}");
        userInfo = (UserInfo) valueBinding.getValue(facesContext);
    }

    public List<SelectItem> getDeskSelectItems() {
        if (userInfo.getSelectItems() != null) {
            selectItems = userInfo.getSelectItems();
            return selectItems;
        }
        if (getBindings() != null) {
            userInfo.setBindings(getBindings());
        } else {
            setBindings(userInfo.getBindings());
        }
        if (selectItems == null) {
            selectItems = new ArrayList<SelectItem>();
            DCBindingContainer bc = getBindings();
            SicmaService sicma = (SicmaService) bc.findDataControl("SicmaServiceDataControl").getDataProvider();
            ViewObject desk = sicma.findViewObject("DeskSelectView1");
            desk.executeQuery();
            RowSet rows = desk.getRowSet();
            while (rows.hasNext()) {
                Row a = rows.next();
                String dskId = a.getAttribute(0).toString();
                String dskOms = a.getAttribute(1).toString();
                selectItems.add(new SelectItem(dskId, dskOms));
            }
            rows.closeRowSet();
            userInfo.setSelectItems(selectItems);
        }
        return selectItems;
    }
    faces-config.xml
    <managed-bean>
    <managed-bean-name>selectDesk</managed-bean-name>
    <managed-bean-class>org.tennet.sicma.view.backing.SelectDesk</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
    <managed-property>
    <property-name>bindings</property-name>
    <property-class>oracle.adf.model.binding.DCBindingContainer</property-class>
    <value>#{bindings}</value>
    </managed-property>
    </managed-bean>

    The SRDemo has an example of a selectOneRadio bound to a parameter that is passed to an ExecuteWithParams action (to automatically feed a view object's named bind variable). This is in the SRStaffSearch.jspx page.
    Also, if you check out example # 72 from my blog:
    http://radio.weblogs.com/0118231/stories/2004/09/23/notYetDocumentedAdfSampleApplications.html#72
    There is another example of using a selectOneListbox to do the same type of thing.
    The steps involved in creating something like this are to:
    (1) Define your named bind variables on your view object
    (2) Drop the "ExecuteWithParams" action in the operations folder of that view object from the Data Control Palette to your page as a "Parameter Form"
    (3) Drop the specific attribute (a nested child of the "ExecuteWithParams" action from step (2)) as whatever kind of selectOneXXXX control you want
    (4) Delete the extra field on the form you don't want.
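
    For completeness, a minimal sketch of the deskChangeListener that the page snippet above references (hypothetical: it assumes a setDesk(String) property on the session-scoped UserInfo bean and lives inside the SelectDesk backing bean):

    import javax.faces.event.ValueChangeEvent;

    public void deskChangeListener(ValueChangeEvent event) {
        // Stash the newly selected desk on the session bean so it can
        // later be used as a bind-variable value in the query.
        Object newDesk = event.getNewValue();
        userInfo.setDesk(newDesk == null ? null : newDesk.toString());
    }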

  • Transport of infotype and data table to other client

    Hi all,
    I have made a custom infotype, 9002, and a database table, zpa9002, and put some entries in this table. By mistake I saved both the infotype and the table in local objects ($TMP). I have since changed the package of the database table and created a transport request for it.
    However, I am not able to change the attributes of infotype 9002, i.e. I cannot change its package from $TMP to something else. Can anyone please let me know how to change that, and what else to do to transport the complete infotype to the other client?
    Another thing I want to ask: if I transport the database table to the other client, will all the contents (data) also be transported, or do I have to do that manually?
    Thanks to all,
    Ribhu

    Hi,
    If you transport a table, by default the data will not be transported; you will have to do it manually.
    You can include the entry R3TR TABU <table name> in the piecelist.
    Regards,
    Anand Mandalika.

  • Performance tuning of Master Data Table: VBAK LIPS VBFA MSEG MKPF

    Hi All,
    How can I improve the performance of the following inner-join SELECT?
    select LIPS~VGBEL LIPS~VBELN MSEG~MATNR MKPF~BUDAT
           MKPF~USNAM MSEG~LBKUM MSEG~BWART MSEG~WERKS
           VBAK~IHREZ MSEG~MBLNR VBAK~AUART LIPS~PSTYV
           MSEG~LGPLA MSEG~MEINS MSEG~MJAHR MKPF~MJAHR
    into corresponding fields of table it_out
    from ( VBAK
           inner join LIPS
           on LIPS~VGBEL = VBAK~VBELN
           inner join VBFA
           on VBFA~POSNV = LIPS~POSNR
           and VBFA~VBELV = LIPS~VBELN
           inner join MSEG
           on MSEG~MATNR = VBFA~MATNR
           and MSEG~MBLNR = VBFA~VBELN
           inner join MKPF
           on MKPF~MBLNR = MSEG~MBLNR )
          where VBAK~AUART in S_AUART
             and VBAK~IHREZ in S_IHREZ
             and LIPS~PSTYV in S_PSTYV
             and LIPS~VBELN in S_VBELN
             and LIPS~VGBEL in S_VGBEL
             and MSEG~BWART in S_BWART
             and MSEG~LBKUM in S_LBKUM
             and MSEG~MATNR in S_MATNR
             and MSEG~MBLNR in S_MBLNR
             and MSEG~MENGE in S_MENGE
             and MSEG~WERKS in S_WERKS
             and MKPF~BUDAT in S_BUDAT
             and MKPF~USNAM in S_USNAM.
    Thank you very much.

    Thanks to all of you for your suggestions.
    I have now split my original SELECT statement into two parts:
    1. Select from LIPS, VBAK and VBFA into an internal table it_A.
    2. Select from MKPF and MSEG into an internal table it_B, FOR ALL ENTRIES IN it_A.
    3. Stop using "into corresponding fields of table".
    After that, my performance improved by roughly a factor of 10.
    Any other suggestions are welcome.
    Performance is a forever topic, I think :)

  • How to set different colors for even odd rows in a data table

    Hi,
    In my project I have a data table, and I am using JSF. I want the rows of the data table to have different colors for even and odd rows. Can I do it just by setting the row class property, or is there some other way? Please help.
    sailajoy
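
    In plain JSF the row class property mentioned above does exist: the dataTable's rowClasses attribute takes a comma-separated list of CSS classes and cycles through them row by row, e.g. <h:dataTable rowClasses="oddRow,evenRow" ...>. A minimal programmatic sketch of the same thing (assuming the CSS classes oddRow and evenRow are defined in a stylesheet):

    import javax.faces.component.html.HtmlDataTable;

    public class StripedTableSetup {
        static HtmlDataTable stripedTable() {
            HtmlDataTable table = new HtmlDataTable();
            // The renderer applies oddRow, evenRow, oddRow, ... to successive rows.
            table.setRowClasses("oddRow,evenRow");
            return table;
        }
    }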

    Hope this helps
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html>
    <head>
    <meta http-equiv="content-type" content="text/html; charset=iso-8859-1" />
    <title></title>
    <style type='text/css'>
    .scrollContent {
    height: 100px;
    overflow-x: hidden;
    overflow-y: auto;
    }
    .scrollContent tr {
    height: auto;
    white-space: nowrap;
    }
    .scrollContent tr td:last-child {
    padding-right: 20px;
    }
    .fixedHeader tr {
    position: relative;
    height: auto;
    top: expression( this.parentNode.parentNode.parentNode.scrollTop + 'px' );
    }
    div.TableContainer {
    border: 1px solid #7DA87D;
    }
    .headerFormat {
    background-color: white;
    color: #C8D7B5;
    margin: 3px;
    padding: 1px;
    white-space: nowrap;
    font-family: Helvetica;
    font-size: 16px;
    text-decoration: none;
    font-weight: bold;
    }
    .headerFormat tr td {
    border: 1px solid #C8D7B5;
    background-color: #666633;
    }
    .bodyFormat tr td {
    color: #000000;
    margin: 3px;
    padding: 1px;
    border: 0px none;
    font-family: Helvetica;
    font-size: 12px;
    }
    .alternateRow {
    background-color: #C8D7B5;
    }
    </style>
    <style type="text/css">
    div.TableContainer {
    height: 121px;
    overflow-x: hidden;
    overflow-y: auto;
    }
    </style>
    </head>
    <body>
    <table cellpadding="0" cellspacing="0" border="0"><tr><td><div id="TableContainer" class="TableContainer" style="height:230px;">
    <table class="scrollTable">
    <thead class="fixedHeader headerFormat">
    <tr class="title">
    <td title="Sort" align="center"><b>NAME</b> </td>
    <td title="Sort" align="center"><b>Amt</b> </td>
    <td title="Sort" align="center"><b>Lvl</b> </td>
    <td title="Sort" align="center"><b>Rank</b> </td>
    <td title="Sort" align="center"><b>Position</b> </td>
    <td title="Sort" align="center"><b>Date</b></td>
    </tr>
    </thead>
    <tbody class="scrollContent bodyFormat" style="height:200px;">
    <tr class="alternateRow">
    <td>Maha</td>
    <td>
    <input type="text" name="textfield" size="7" />
    </td>
    <td align="right"><input type="text" name="textfield2" size="7" /></td>
    <td align="right"><input type="text" name="textfield3" size="7" /></td>
    <td><input type="text" name="textfield4" size="7" /></td>
    <td align="right"><input type="text" name="textfield5" size="7" /></td>
    </tr>
    <tr>
    <td>Thrawl</td>
    <td align="right">$9,550</td>
    <td align="right">159</td>
    <td align="right">100%</td>
    <td>Co-Owner</td>
    <td align="right">11/07/2003</td>
    </tr>
    <tr class="alternateRow">
    <td>Marhanen</td>
    <td align="right">$223.04</td>
    <td align="right">83</td>
    <td align="right">99%</td>
    <td>Banker</td>
    <td align="right">06/27/2006</td>
    </tr>
    <tr>
    <td>Peter</td>
    <td align="right">$121</td>
    <td align="right">567</td>
    <td align="right">23423%</td>
    <td>FishHead</td>
    <td align="right">06/06/2006</td>
    </tr>
    <tr class="alternateRow">
    <td>Jones</td>
    <td align="right">$15</td>
    <td align="right">11</td>
    <td align="right">15%</td>
    <td>Bubba</td>
    <td align="right">10/27/2005</td>
    </tr>
    <tr>
    <td>Supa-De-Dupa</td>
    <td align="right">$145</td>
    <td align="right">91</td>
    <td align="right">32%</td>
    <td>momma</td>
    <td align="right">12/15/1996</td>
    </tr>
    <tr class="alternateRow">
    <td>ClickClock</td>
    <td align="right">$1,213</td>
    <td align="right">23</td>
    <td align="right">1%</td>
    <td>Dada</td>
    <td align="right">1/30/1998</td>
    </tr>
    <tr>
    <td>Mrs. Robinson</td>
    <td align="right">$99</td>
    <td align="right">99</td>
    <td align="right">99%</td>
    <td>Wife</td>
    <td align="right">07/04/1963</td>
    </tr>
    <tr class="alternateRow">
    <td>Maha</td>
    <td align="right">$19,923.19</td>
    <td align="right">100</td>
    <td align="right">100%</td>
    <td>Owner</td>
    <td align="right">01/02/2001</td>
    </tr>
    <tr>
    <td>Thrawl</td>
    <td align="right">$9,550</td>
    <td align="right">159</td>
    <td align="right">100%</td>
    <td>Co-Owner</td>
    <td align="right">11/07/2003</td>
    </tr>
    <tr class="alternateRow">
    <td>Marhanen</td>
    <td align="right">$223.04</td>
    <td align="right">83</td>
    <td align="right">59%</td>
    <td>Banker</td>
    <td align="right">06/27/2006</td>
    </tr>
    <tr>
    <td>Peter</td>
    <td align="right">$121</td>
    <td align="right">567</td>
    <td align="right">534.23%</td>
    <td>FishHead</td>
    <td align="right">06/06/2006</td>
    </tr>
    <tr class="alternateRow">
    <td>Jones</td>
    <td align="right">$15</td>
    <td align="right">11</td>
    <td align="right">15%</td>
    <td>Bubba</td>
    <td align="right">10/27/2005</td>
    </tr>
    <tr>
    <td>Supa-De-Dupa</td>
    <td align="right">$145</td>
    <td align="right">91</td>
    <td align="right">42%</td>
    <td>momma</td>
    <td align="right">12/15/1996</td>
    </tr>
    <tr class="alternateRow">
    <td>ClickClock</td>
    <td align="right">$1,213</td>
    <td align="right">23</td>
    <td align="right">2%</td>
    <td>Dada</td>
    <td align="right">1/30/1998</td>
    </tr>
    <tr>
    <td>Mrs. Robinson</td>
    <td align="right">$99</td>
    <td align="right">99</td>
    <td align="right">(-10.42%)</td>
    <td>Wife</td>
    <td align="right">07/04/1963</td>
    </tr>
    <tr class="alternateRow">
    <td>Maha</td>
    <td align="right">-$19,923.19</td>
    <td align="right">100</td>
    <td align="right">(-10.01%)</td>
    <td>Owner</td>
    <td align="right">01/02/2001</td>
    </tr>
    <tr >
    <td>Thrawl</td>
    <td align="right">$9,550</td>
    <td align="right">159</td>
    <td align="right">-10.20%</td>
    <td>Co-Owner</td>
    <td align="right">11/07/2003</td>
    </tr>
    <tr class="alternateRow" >
    <td><strong>TOTAL</strong>:</td>
    <td align="right"><strong>999999</strong></td>
    <td align="right"><strong>9999999</strong></td>
    <td align="right"><strong>99</strong></td>
    <td > </td>
    <td align="right"> </td>
    </tr>
    </tbody>
    </table>
    </div></td></tr></table>
    </body>
    </html>

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain, and let me know the T-codes. This is urgent.
    What are the data-loading performance issues we need to take care of? Please explain, and let me know the T-codes. This is urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8) Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9) Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Cube Performance and Data Explosion

    Hi Experts,
    One of our partners developed a data warehouse application, and the DW application has a performance issue:
    when a report queries the high-level dimensions, the performance is okay, but when a query reaches the very detailed data in the cube, the performance gets bad.
    The aggregations and the detail data are all stored in the cube, and the cube data explodes quite quickly, since some detailed transaction data needs to be queried and stored in the cube too.
    So, experts, do you have any good suggestions on this issue, or perhaps a better design for the cube? E.g. the cube could store only the aggregations, or summaries of coarse-grained data and measures, while fine-grained data is fetched from the ODS.
    Another question: I googled architecture solutions for the above issue, and someone said that if the DW is designed as a hypercube there may be a data explosion issue, and that a multicube should be used instead. So I wonder whether a multicube can solve the data explosion issue, and how; and whether a multicube has better performance than a hypercube and can also handle detail-data queries.
    Last question: do you have any experience with DW implementations on TB-scale data, and any good suggestions for architecture design using Oracle OLAP or Essbase for good performance?
    Thanks,
    Royal.
    Edited by: Royal on 2012-11-4, 4:01 AM

    You have not asked any specific technical question. In my opinion, all Oracle data warehouses should use the Oracle OLAP option for their aggregation strategy. Significant improvements have been made in 11.2.0.2 (and later versions); it has become much easier to create and maintain dimensions/cubes. On the reporting side, OBIEE 11g now understands OLAP metadata, and other reporting tools can use the CUBE_TABLE views.
    Here are some links that you may find useful.
    Comparing MVs and OLAP... Oracle White paper
    http://www.oracle.com/technetwork/database/bi-datawarehousing/comparison-aw-mv-11g-twp-130903.pdf
    Oracle OLAP Support page
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1107593.1
    Three demos done by OLAP Development which explains how OLAP can help in a DW.
    http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_1/OLAP_Features_and_Use_Cases_1.html
    http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_2/OLAP_Features_and_Use_Cases_2.html
    http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_3/OLAP_Features_and_Use_Cases_3.html
    Main OLAP page at Oracle OTN site
    http://www.oracle.com/technetwork/database/options/olap/index.html
    Recommended Releases for Oracle OLAP
    http://www.oracle.com/technetwork/database/options/olap/olap-certification-092987.html
    Accelerating Data Warehouses using OLAP option
    http://www.oracle.com/technetwork/issue-archive/2008/08-may/o38olap-085800.html
    What's new in 11.2.0.2 database OLAP option
    http://docs.oracle.com/cd/E11882_01/olap.112/e17123/whatsnew.htm
    Oracle 11.2 OLAP Documentation (scroll down to OLAP section)
    http://www.oracle.com/pls/db112/portal.portal_db?selected=6&frame=#online_analytical_processing_%28olap%29
    Excel reporting from OLAP using Simba tool. This was developed in partnership with Oracle.
    http://www.simba.com/MDX-Provider-for-Oracle-OLAP.htm
    There is a good demo for Simba Excel tool at:
    http://www.simba.com/demos/MDX-Provider-for-Oracle-OLAP-web-demo.html

  • I performed a Time Machine backup without plugging my laptop into a power source. My computer died and all the settings were changed, i.e. the clock and date went back to 2001. So I tried to restore my computer using a previous Time Machine backup.

    I performed a Time Machine backup without plugging my laptop into a power source. My computer died and all the settings were changed, i.e. the clock and date were changed back to 2001. So I tried to restore my computer using a previous Time Machine backup (which I now know was wrong). However, when Time Machine tried to restore, it said there was not enough room to do a backup. It seems it did a half backup, because some essential files such as System Profiler are now missing. Can I undo this restore? What can I do to fix this?

    You need to do a full system restore, per Time Machine - Frequently Asked Question #14.
    If that sends a message, please note the exact wording.

  • DAQ vi to perform digital write and read measurements using 32 bits binary data saved in a file

    Hi,
    I need a DAQ VI to perform digital write and read measurements using 32-bit binary data saved in a file.
    Two main sections:
    1) Perform write and read operations to and from different spreadsheet files, such that each file has a single row of 32 bits of binary data (analogous to a 1D array) where the left-most bit is the MSB. I don't want to manually enter the 32 bits of binary data; I want the data written or read just by opening a file saved with the intended data.
    2) Using test patterns implemented with the digital pattern generator, the build-digital-data functions, or otherwise, I need to ensure that the binary data written to a spreadsheet file (or any supported file type) and then sent through the NI USB-6509 is the same as the data read back.
    I'm aware I can't use the simulated device to read data written to any port, but if the write part of the VI works, I'm sure the read part will work on the physical device, which I'll buy later.
    My plan of action:
    I've created a basic write/read file task and a write/read DAQ task for the NI USB-6509, and both combine in a while loop to form a work-in-progress VI, but I'm confused about how to proceed with the implementation.
    My greatest problem is linking the two together with the correct functions or operators such that there are no syntax/execution errors, and thus achieving my intended result.
    This project is one of my many assignments for my master's thesis, so I'll appreciate any help. I'm not really proficient with LabVIEW programming, but I prefer it because it is fun and interesting once you get to know it.
    Currently I'm practicing with the LabVIEW 8.6 / NI-DAQmx 8.8 demo versions and an NI USB-6509 simulated device.
    Please see the attached file for my novice progress. Thanks in advance for the support.
    Rgds
    Paul
    Attachments:
    DIO_write_read DAQ from file.vi 17 KB

    What does your file look like? The DAQmx write expects a single U32 value, not an array of I64.
    Message Edited by vt92 on 09-16-2009 02:42 PM
    "There is a God shaped vacuum in the heart of every man which cannot be filled by any created thing, but only by God, the Creator, made known through Jesus." - Blaise Pascal
