Dynamic lists performance impact

Hi
I am interested in finding out what the performance impact of using Site Studio dynamic list fragments on a web page is. Is the query executed each time the page is loaded? If I place more than one dynamic list on a page, how will that impact performance?
Thx,
Vaylee

Hi Vaylee,
unless you cache your results, the query executes each time. I guess the impact will not be that high: since pages are processed before they are served from UCM, this will add only a little to the existing process.
cheers,
sapan

Similar Messages

  • Performance impacts of activating MDTB for MRP lists

    Does anyone have any input on the performance impacts of activating MDTB? Clearly it will impact performance, but by how much? What are the main drivers of performance - the number of MRP elements / MDTB records?
    What methods can be used to mitigate it? Increasing the DB buffering?
    Thanks

    Hi,
    I doubt there could be a generic answer to your query. It would depend on a lot of parameters, e.g. the number of materials being planned, the frequency of planning, the system resources available, etc.
    All I can say is: load the entire set of materials into the system and then work with your Basis personnel to fine-tune the system.
    Regards,
    Vivek

  • Performance Impact with OR concatenation / Inlist Iterator

    Hello guys,
    is there any performance impact of using OR concatenations versus IN-lists?
    The function of both is the "same":
    1) Concatenation (OR-processing)
    SELECT * FROM emp WHERE mgr# = 1 OR job = 'YOURS';
    - Similar to the query being rewritten into 2 separate queries
    - which are then 'concatenated'
    2) Inlist Iterator
    SELECT * FROM dept WHERE d# IN (10,20,30);
    - Iteration over an enumerated value list
    - every value executed separately
    - same as a concatenation of 3 "OR-ed" values
    So I want to know if there is any performance impact from using IN-lists instead of OR concatenations.
    Thanks and Regards
    Stefan

    The note is very misleading and far from complete, but there is one critical point of difference that you need to observe. It's talking about using a tablescan to deal with an IN-list (and that's NOT "in-list iteration"); my comments start by saying "if there is a suitable indexed access path."
    The note, by the way, describes a transformation to a UNION ALL - clearly that would be inefficient if there were no indexed access path. (Given the choice between one tablescan and several consecutive tablescans, which option would you choose?)
    The note, in effect, is just about a slightly more subtle version of "why isn't Oracle using my index". For "shorter" lists you might get an indexed iteration; for "longer" lists you might get a tablescan.
    Remember, Metalink is not perfect; most of it is just written by ordinary people who learned about Oracle in the normal fashion.
    Quick example to demonstrate the difference between concatenation and iteration:
    drop table t1;
    create table t1 as
    select
         rownum     id,
         rownum     n1,
         rpad('x',100)     padding
    from
         all_objects
    where
         rownum <= 10000
    ;
    create index t1_i1 on t1(id);
    execute dbms_stats.gather_table_stats(user,'t1')
    set autotrace traceonly explain
    select
         /*+ use_concat(t1) */
         n1
    from
         t1
    where
         id in (10,20,30,40,50,60,70,80,90,100)
    ;
    set autotrace off
    The execution plan I got from 8.1.7.4 was as follows - showing the transformation to a UNION ALL - this is concatenation and required 10 query block optimisations (which were all done three times):
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=20 Card=10 Bytes=80)
       1    0   CONCATENATION
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       5    4       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       6    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       7    6       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       8    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       9    8       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      10    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      11   10       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      12    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      13   12       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      14    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      15   14       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      16    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      17   16       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      18    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      19   18       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      20    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      21   20       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
    This is the execution plan I got from 9.2.0.8, which doesn't transform to the UNION ALL, and only needs to optimise one query block.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=80)
       1    0   INLIST ITERATOR
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=3 Card=10 Bytes=80)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=2 Card=10)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • How to create dynamic list element in Site Studio designer?

    Hi all,
    I have installed Site Studio Designer (10gR4). The dynamic list element that I have added in the contributor region executes a query to search for and display all the files in a particular folder. In contributor mode, when I try to add/edit the dynamic list element it gives an error saying:
    Unable to perform the action due to the following reasons:
    [+] Unable to retrieve search results. Unable to retrieve search results. Unable to create result set for query 'SELECT IdcColl2.dID, dDocName, dDocTitle, dDocType, dRevisionID, dSecurityGroup, dDocAuthor, dDocAccount, dRevLabel, dFormat, dOriginalName, dExtension, dWebExtension, dInDate, dOutDate, dCreateDate, dPublishType, dRendition1, dRendition2, VaultFileSize, WebFileSize, URL, dFullTextFormat, dFullTextCharset, DocMeta.* FROM IdcColl2, DocMeta WHERE IdcColl2.dID=DocMeta.dID AND (((((( xCollectionID >= 14 AND xCollectionID <= 14 ) AND NOT ( (CONTAINS(xDontShowInListsForWebsites, '{DIPP_Sample}') > 0) )))))) ORDER BY dDocTitle desc'. ORA-20000: Oracle Text error: DRG-10599: column is not indexed
    I have enabled full-text search on the content server and also included the xWebsites and xWebsiteObjectType columns to be fully indexed in the Zone Fields Configuration. Is there any other setting to be done? Please help.
    Thanks,
    nithya

    Hi
    Include xDontShowInListsForWebsites in the zone fields as well and then update it. Then test it out.
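    If you want to double-check from SQL that the column actually ended up with an Oracle Text index after updating the zone fields, a query along these lines may help (a sketch; run as the content server's schema owner):
    -- List Oracle Text (CONTEXT) domain indexes visible to the schema:
    select index_name, table_name, ityp_name
    from   user_indexes
    where  ityp_name = 'CONTEXT';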
    Hope it helps
    Srinath

  • Isolation level and performance impact?

    Hi
    I'm new to BDB JE and building some prototypes to evaluate it.
    Given a simple use case of storing the key/value pair <String, List<Event>>, mapping a user to his/her list of events, in the db. New events are added for a user; this happens (although fairly rarely) concurrently.
    Using Serializable isolation will prevent any corruption of the list of events, since the events are effectively added serially for the user. I was wondering:
    1. if there are any lesser levels of isolation that would still be adequate
    2. using Serializable isolation, is there a performance impact on updating users non-concurrently (i.e. there's no lock contention, since for the majority of cases concurrent updates won't happen) vs the default isolation level?
    3. building on 2, is there a performance impact (other than obtaining and releasing locks) of using transactions with X isolation during updates of existing entries if there is no lock contention (i.e., no concurrent updates) vs not using transactions at all?
    Thanks!
    Peter

    Have you seen this section of the Getting Started Guide on isolation levels in JE? http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html
    Our default is Repeatable Read, and that could be sufficient for your application depending on your access patterns, and the semantic sense of the items in your list. I think you're saying that the data portion of a record is the list of events itself. With RepeatableRead, you'll always see only committed data, and retrieving that record from a JE database will always return a consistent view of a given list. See http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html#serializable for an explanation of what additional guarantee you get with Serializable.
    > 2. using Serializable isolation, is there a performance impact on updating users non-concurrently (i.e. there's no lock contention since for the majority of cases concurrent updates won't happen) vs the default isolation level?
    Yes, there is an additional cost. When using Serializable isolation, additional locks are taken on adjacent data records. In addition to the cost of acquiring the lock (which would be low in a non-contention case), there may be additional I/O needed to fetch adjacent data records.
    > 3. building on 2, is there a performance impact (other than obtaining and releasing locks) of using transactions with X isolation during updates of existing entries if there is no lock contention (i.e., no concurrent updates) vs not using transactions at all?
    In (2) we compared the cost of Serializable to Repeatable Read. In (3), we're comparing the cost of non-transactional access to the default Repeatable Read transaction.
    Non-transactional is always a bit cheaper, even if there is no lock contention. On top of the cost of acquiring the locks, transactional operations use more memory and disk space, and execute some transaction setup and teardown code. If there are concurrent operations, even if there is no contention on a given lock, there could be some stress on the lock table latches and transaction tables. That said, if your application is I/O bound, the CPU differences between non-txnal and txnal operations become more of a secondary factor. If you're I/O bound, the memory and disk space overhead does matter, because the cache is used more efficiently with non-txnal operations.
    Regards,
    Linda

  • Rec/client parameter & Performance Impact

    Hi all,
    We have been asked by our audit team to set the parameter rec/client=300 (our production client) in our production system.
    But when we read a few forums & notes, we feel that setting rec/client=300 will definitely impact performance.
    So before setting this parameter we need to evaluate the performance impact on our ECC 6.0 system. Can anyone help us evaluate the performance impact of this parameter setting in our system below?
    - ECC 6.0 / oracle 10g / HPUX - 64 bit / 3500 users / 2.5 terabyte data
    Thanks
    Senthil

    Hello
    Normally only customizing tables should have the logging flag. You can verify it in SE13 -> <table> -> Log data changes.
    To list all tables with logging on, you can use this select statement:
    SQL> select tabname, protokoll from sapr3.dd09l where protokoll = 'X';
    As long as only so-called customizing tables are logged, you should be fine. If you have some heavy-traffic Z* tables, then two things might happen:
    - performance might suffer
    - the logging table DBTABLOG will explode
    So please make sure only the necessary tables are logged and, if possible, test on the QAS system whether the logging leads to performance problems.
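    To keep an eye on the logging table's growth over time, something like the following can help (a sketch against the Oracle data dictionary; the owning schema varies per installation):
    -- Current size of the change-log table DBTABLOG:
    select owner, segment_name, round(bytes/1024/1024) as mb
    from   dba_segments
    where  segment_name = 'DBTABLOG';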
    Best regards
    Michael

  • Performance impact of using Web Services?

    As BEA and other vendors continue to add Web Services support
    to their enterprise software, what is your plan for
    quantifying the performance impact and the functional
    correctness of using web services before going live with the
    final application?
    Empirix is hosting a free one hour web event discussion on
    web services testing and automated web services testing
    solutions on Thursday, January 17, 2-3pm Eastern time.
    To sign up for this web event or learn about other web
    events being offered by Empirix this month, go to:
    http://webevents.empirix.com
    For your convenience, here is the complete abstract:
    The advent of web services has brought the promises of
    integrating multiple software applications from
    heterogeneous networks and for exchanging information
    from vendor-to-vendor or vendor-to-consumer in a
    standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be
    critical that web services undergo performance testing.
    As with any enterprise software project, the adoption of
    proper test methodologies and use of testing tools will
    play a key part in the overall success or failure of
    projects utilizing web services. In a compressed
    software project schedule, an organization must
    quickly determine if its web services will operate
    successfully under a variety of load conditions. Like other
    web-based technologies, successful web services will need
    to respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing
    challenges created by this emerging technology, along with
    the variety of testing solutions available. Automated
    web service testing will be discussed and demonstrated
    using FirstACT, the first web services performance testing solution available
    on the market. Using a sample web
    service, automatic test case creation, scalability testing,
    and results analysis will be explored.
    If you wish to download FirstACT prior to the web event, you can do so at:
    http://www.empirix.com/downloads/FirstACT


  • Performance impact of Web Services

    As WebLogic adds support for Web Services to its platform, what is
    your plan for quantifying the performance impact and the functional
    correctness of using web services before going live with the final
    application?
    Empirix is hosting a free one hour web event discussion on web
    services testing and automated web services testing solutions on
    Thursday, January 17, 2-3pm Eastern time.
    To register for this web event or learn about other web events being
    offered by Empirix this month, go to:
    http://webevents.empirix.com
    The complete abstract is below:
    The advent of web services has brought the promises of integrating
    multiple software applications from heterogeneous networks and for
    exchanging information from vendor-to-vendor or vendor-to-consumer in
    a standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be critical that
    web services undergo performance testing. As with any enterprise
    software project, the adoption of proper test methodologies and use of
    testing tools will play a key part in the overall success or failure
    of projects utilizing web services. In a compressed software project
    schedule, an organization must quickly determine if its web services
    will operate successfully under a variety of load conditions. Like
    other web-based technologies, successful web services will need to
    respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing challenges
    created by this emerging technology, along with the variety of testing
    solutions available. Automated web service testing will be discussed
    and demonstrated using FirstACT, the first web services performance
    testing solution available on the market. Using a sample web service,
    automatic test case creation, scalability testing, and results
    analysis will be explored.

    Hi,
    We tested several frameworks and found that JAXB 2.0 usually performs better than XMLBeans, but that is not a strict rule.
    Regards,
    LG

  • Select List Performance Issue in Apex 3.1.2

    Hi
    It would be of great help if you could advise us on the following.
    We are facing some performance issues with select lists in Apex 3.1.2.
    We have 6 select lists, all cascaded, i.e. the values of each select list are based on the previous list's value.
    The number of values in each select list is huge, and because of this the performance is very slow.
    It takes a long time to fetch the data based on the other list values.
    We cross-verified in the backend: the same query takes less time than it does in the application.
    Any recommendation to fine-tune this?
    Thanks in advance
    Vijay

    If your select lists are very large then it could be the browser that is causing the slowdown.
    Try to create an example using a static HTML file containing a select list of the same size, and you may find the same performance issue.
    A quick way to create the test would be to save the source of your APEX page as an HTML file.
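    Another quick sanity check - a sketch, assuming a query-based cascading LOV; the table and item names here are made up for illustration - is to temporarily cap the number of rows the list returns and see whether the page speeds up:
    -- Hypothetical cascading LOV query, capped while testing list-size impact:
    select ename as d, empno as r
    from   emp
    where  deptno = :P1_DEPTNO   -- parent select list item (hypothetical)
    and    rownum <= 200;        -- temporary cap for the test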
    Craig
    [http://www.oracleapplicationexpress.com]

  • Referencing a "checked record" from my ADDT Dynamic List using a SPRY Menu

    I have just posted a Beta of my site to:
    http://www.clearwave.biz/Beta/T1COElogin.cfm
    - Username is: Beta
    - Password is: 123
    - Once you are logged in you should see the ADDT list w/one
    order listed.
    - Click on the Printer icon & the Order Agreement will
    open in PDF. (this works fine)
    - Click on the InfoSheet link & the Report will open in
    PDF as well. (this works fine)
    - Everything works fine if you use the links on the same line
    as the record.
    - But if you check the checkbox on the left, then choose
    Reports/T1 Agreement from the Spry menu above, the report will not work.
    Question: What should the Link be to reference the checked
    item below, pass the CustID value to the report & print the
    PDF?
    (Note: the CF report has the following line in the SQL:
    (tblT1OrderProcessing2.T1CustID = #param.T1CustID#) and the report
    prints fine in Report Builder if I pass it the T1CustID, as well as
    if I click the links on the same line as mentioned above, so this
    is just an issue of grabbing the CustID value from the checked line
    & passing it to the report link in the Spry menu)
    Here is a visual picture of my Dynamic List page:
    http://cerberus.clearwave.com/jerry/Order_Management_Main_Page.jpg
    Thanks in advance for the help,
    jlig

    Here is the Report URL that works perfectly when clicking the
    Printer icon on the right of my List:
    http://www.clearwave.biz/Beta/reports/T1_Service_Agreement.cfm?T1CustID=1508
    and the link to the InfoSheet:
    http://www.clearwave.biz/Beta/reports/T1_Information_Sheet.cfr?T1CustID=1508
    Both of these work perfectly by grabbing the T1CustID value
    of 1508 from the line.

  • Dynamic list of values in CR 2008 - request of login to database

    I am using CR2008 with VB2005. I created a simple application which generates reports on different SQL Server 2005 databases.
    The report uses an OLE DB connection to SQL Server; I pass logon information in VB using the "sa" username and password. The same simple report works great on different databases, with one exception: if a parameter (dynamic list of values) is used in the report, it runs correctly only on the database it was created against (the connection to this database is saved in the report and can be seen in Database -> Set Datasource Location). Changing the connection to another database in VB at runtime causes CR to ask for a username and password (in the standard parameter window). It looks like Crystal has one connection for the report and another for the dynamic-list-of-values parameter. I pass connection information to the report using ConnectionInfo(). I loop through all tables in the report and apply the connection:
    For Each crTable In crTables
                    crTableLogOnInfo = crTable.LogOnInfo
                    crTableLogOnInfo.ConnectionInfo = crConnectionInfo
                    crTable.ApplyLogOnInfo(crTableLogOnInfo)
    Next
    I tried to pass the connection information to the parameter as well, but I couldn't find a way to do so.

    This is a known issue (tracking number ADAPT01333806) and a note has been written; unfortunately it is not yet published. Below is the note content, including a work-around / resolution. You may also want to try FP 2.3 and see if that helps. (I'll break this post into two as I'll lose the formatting if I don't.)
    Reproducing the Issue
    Use Crystal Reports 2008 SP2 to create a report with dynamic parameter(s)
    Use the following code from the Crystal Reports SDK for VS .NET
    Dim crDatabase As Database
    Dim crTables As Tables
    Dim crTable As Table
    Dim crTableLogOnInfo As TableLogOnInfo
    Dim crConnectionInfo As ConnectionInfo
    crReportDocument.Load("<path>")
    crReportDocument.Refresh()
    crConnectionInfo = New ConnectionInfo()
    With crConnectionInfo
         .ServerName = "<New Server Name>"
         .Password = "<password>"
    End With
    crDatabase = crReportDocument.Database
    crTables = crDatabase.Tables
    For Each crTable In crTables
         crTableLogOnInfo = crTable.LogOnInfo
         crTableLogOnInfo.ConnectionInfo = crConnectionInfo
         crTable.ApplyLogOnInfo(crTableLogOnInfo)
    Next
    CrystalReportViewer1.ReportSource = crReportDocument
    The above code works with Crystal Reports 2008 SP 1

  • Index creation online - performance impact on database

    hi,
    I have oracle 11.1.0.7 database running on Linux as 3 node RAC.
    I have a huge table which has more than 255 columns and is about 400GB in size which is also highly fragmented because of constant DML activities.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database if I create an index online on a single column of table 'TBL' while applications are active against the same table? So basically my question is: will index creation on an object during DML operations on the same object have a performance impact on the database? And is there a major difference in database performance between creating the index online and not online?
    2. I tried to build an index on a column which has NULL values, on this same table 'TBL' which has more than 255 columns, is about 400GB in size, is highly fragmented, and has about 140 million rows.
    I requested the applications to be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a Pre-Prod database which holds an exported and imported copy of the Prod data, so Pre-Prod is a highly defragmented copy of Prod.
    When I created the same index on the same column with NULLs there, it took only 15 minutes to complete.
    I am not sure why it took more than 6 hours on the highly fragmented copy in Prod, compared to 15 minutes on the highly defragmented copy in Pre-Prod.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
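    One way to answer the in-memory / single pass / multi pass question is to sample the session's workarea statistics before and after the build - a sketch, assuming SELECT access to v$mystat and v$statname:
    -- Workarea executions for the current session: optimal = done in memory,
    -- onepass / multipass = spilled to temp one or more times.
    select sn.name, ms.value
    from   v$statname sn, v$mystat ms
    where  ms.statistic# = sn.statistic#
    and    sn.name in ('workarea executions - optimal',
                       'workarea executions - onepass',
                       'workarea executions - multipass');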
    Regards
    Jonathan Lewis

  • Performance impact related to workbook

    Hi All,
    Please let me know the performance impact on workbooks if I insert 13 worksheets.
    How much of a performance impact will there be?
    If I instead split all the reports across 2 worksheets,
    how much of a performance impact will there be?
    Please let me know which is the better way.
    Urgent!...
    Thanks,
    Sarau

    Hello Sarau,
    The performance is purely based on the query selection, since that decides the number of records to be fetched.
    The more you filter the query, the better the performance will be.
    Since you have many queries in a workbook, make sure that all of them have a common variable screen.
    Thanks
    Chandran

  • How to add a column to a list created with the Dynamic List Wizard to display the values of the field

    Hi,
    ADDT, Vista, WAMP5.0
    We have 2 tables: clients_cli (id_cli, name_cli, tel_cli, and several more fields) and cases_cas (id_cas, idcli_cas, court_cas, and a lot of other fields).
    Clients may have many cases, so the table cases_cas has a foreign key named idcli_cas, just to determine which case belongs to which client.
    We designed the lists of the two tables with the Dynamic List Wizard and the corresponding forms with Dynamic Form Wizard.
    These two forms are linked with the Convert Dynamic List and Form Wizards, which added a button to the clients list named "add case".
    We add a client and the system returns to the clients list displaying all clients; we look for the new client just added and press "add case", which opens the Dynamic Form for cases. We enter all the case details and everything processes OK.
    However, when we view the cases list it displays all the details of the case, including the column and values for the foreign key idcli_cas. As you can imagine, it is quite difficult for a human to remember client ids.
    So, in the cases list we added another column, named Name, to display the names of the clients along with the case details. We also created another recordset rsCli, selected the clients_cli table displaying all columns, set the filter id_cli = Form Variable = idcli_cas, then pressed the Test button, and everything displayed perfectly. Press OK.
    Then we position the cursor inside the corresponding cell of the new Name column, go to Bindings, click on name_cli and then click insert. The dynamic field is inserted into the table cell as expected. Save the page, and test in the browser.
    The browser calls the cases list but fails to display the values of the Name column. The Name column is simply empty.
    This issue creates a huge problem that makes our application too difficult to use.
    What are we doing wrong?
    Please help.
    Charles
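    (For what it's worth, the usual relational way to avoid a second recordset entirely is to join the two tables in the list's own query - a sketch using the table and column names above:)
    -- Join cases to clients so the list query returns the client name directly:
    SELECT c.*, cli.name_cli
    FROM   cases_cas c
    JOIN   clients_cli cli ON cli.id_cli = c.idcli_cas
    ORDER  BY c.id_cas;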

    1.     Start transaction PM01, Create Infotype, by entering the transaction code.
    You access the Create Infotype screen.
    2.     Choose List Screen.
    3.     In the Infotype no. field, enter the four-digit number of the infotype you want to create.
    When you specify the infotype number, please remember to enter any leading zeros.
    4.     In the Screen Number field, enter the screen number of the list screen you want to enhance.
    5.     Choose Create.
    The Dictionary: Initial screen appears:
    6.     Create the list screen structure.
    7.     Choose Activate.
    8.     Return to the Enhance List Screen in the Enhance Infotypes transaction (PM01).
    9.     Choose Create All.
    The additional fields are displayed on the list screen, however, they contain no data.
    The fields can be filled in the FORM routine FILL-LISTSTRUCT in the generated program ZPnnnn00. The FORM routine is called for each data record in the list.
    Structure ZPLIS is identified when it is generated with a TABLES statement in the program ZPnnnn00.
    The fields can be filled from the Pnnnn structure or by reading text tables.

  • Performance impact using nested tables and object

    Hi,
    I am using Oracle 11g.
    While creating a package, I am using a lot of nested tables created based on objects, which will be passed between multiple functions in the package.
    Will it have any performance impact, since all the data is stored in memory?
    How can I measure the performance impact when the data grows?
    Regards,
    Oracle User
    Edited by: user9080289 on Jun 30, 2011 6:07 AM
    Edited by: user9080289 on Jun 30, 2011 6:42 AM

    user9080289 wrote:
    > While creating a package, iam using lot of nested tables created based on objects which will be passed between multiple functions in the package..
    Not the best of ideas in general, in PL/SQL. This is not client code that can lay sole claim to most of the memory. It is server code, and one of many server processes that need to share the available resources. So capitalism is fine on a client, but you need socialism on the server? {noformat} ;-) {noformat}
    > Will it have any performance impact since all the data is stored in the memory.
    Interestingly, yes. Usually crunching data in memory is better. In this case it may not be so. The memory used is the most expensive memory Oracle can use - the PGA. Private process memory. This means each process copy running that code will need lots of memory.
    If you're not passing the data structures by reference, it means even bigger demands on memory, as the data structure needs to be copied into the call stack and duplicated.
    The worst-case scenario is that such code consumes so much free server memory, and makes such huge demands on having it in physical memory, that it trashes memory management, as the swap daemons are unable to keep up with the demand of swapping virtual memory pages into and out of memory. Most CPU time is spent by the swap daemons.
    I have seen servers crash due to this. I have seen a single PL/SQL process causing this.
    > How can i measure the performance impact when the data grows ?
    Well, you need to look at the impact of your code on PGA memory. It is not SQL performance or I/O performance that is a factor - just how much private process memory your code needs in order to execute.
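    As a rough sketch of how to measure this (assuming SELECT access to v$mystat and v$statname), you can sample the session's PGA figures before and after exercising the package; and IN OUT NOCOPY parameters are the usual way to avoid the call-stack copy. The type and procedure names below are made up for illustration:
    -- Session PGA consumption; run before and after calling the package code:
    select sn.name, round(ms.value/1024/1024) as mb
    from   v$statname sn, v$mystat ms
    where  ms.statistic# = sn.statistic#
    and    sn.name in ('session pga memory', 'session pga memory max');
    -- Hypothetical collection type and procedure: IN OUT NOCOPY asks PL/SQL to
    -- pass the collection by reference instead of copying it on each call.
    create or replace type t_row_tab as table of varchar2(100);
    /
    create or replace procedure process_rows (p_rows in out nocopy t_row_tab) is
    begin
      null; -- real processing would go here
    end;
    /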
