Large number of JSP performance

Hi,
a colleague of mine ran tests with a large number of JSPs and identified a
performance problem.
I believe I have found a solution to his problem. I tested it with WLS 5.1 SP2
and SP3 and the MS jview SDK 4.
The issue was the duration of the initial call to the nth JSP, which matters
in our situation as we are doing site hosting.
The solution sustains around 14 initial invocations/s whether the invocation
is the first one or the 3000th, and throughput can reach 108 JSPs/s once the
JSPs are already loaded; the JSPs were the snoopservlet example copied 3000
times.
The ratios matter more than the absolute values, as the test machine (client
and WLS 5.1) was a 266 MHz laptop.
I repeat Marc's post of 2/11/2000 here, as it is an old one:
Hi all,
I'm wondering if any of you has experienced performance issues when deploying
a lot of JSPs.
I'm running Weblogic 4.51 SP4 with the performance pack on NT4 and JDK 1.2.2.
I deployed over 3000 JSPs (identical but with distinct names) on my server.
I took care to precompile them off-line.
To run my tests I used a servlet that randomly selects one of them and
forwards the request:
getServletContext().getRequestDispatcher(randomUrl).forward(request,response);
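For reference, such a test servlet can be as small as this (a sketch; the
page naming scheme and class name are my assumptions, Marc's actual servlet
may differ):

import java.io.IOException;
import java.util.Random;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RandomJspServlet extends HttpServlet {
    private final Random random = new Random();

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Pick one of the 3000 identical pages at random and forward to it.
        String randomUrl = "/jsp/page" + random.nextInt(3000) + ".jsp";
        getServletContext().getRequestDispatcher(randomUrl).forward(request, response);
    }
}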
The response time slows down dramatically as the number of distinct JSPs
invoked grows (up to 100 times the initial response time).
I made some additional tests.
When you set the properties:
weblogic.httpd.servlet.reloadCheckSecs=-1
weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
Then the response time for a new JSP seems linked to a "capacity increase
process" and depends on the number of previously activated JSPs. If you
invoke a previously loaded page, the server answers really fast, with no
delay.
If you set the previous properties to any other value (0 for example), the
response time remains bad even when you invoke a previously loaded page.

SOLUTION DESCRIPTION
Intent
The package described below is designed to allow:
* Fast invocation even with a large number of pages (which can be the case
with web hosting)
* Dynamic update of compiled JSPs
Implementation
The current implementation has been tested with JDK 1.1 only and works with
the MS SDK 4.0.
It has been tested with WLS 5.1 with service packs 2 and 3.
It should work with most application servers, as its requirements are
limited: it only requires that a JSP be able to invoke a class loader.
Principle
For fast invocation, it does not support dynamic compilation as described in
the JSP model.
There is no automatic recognition of modifications. Instead, a JSP is made
available to invalidate pages which must be updated.
We assume pages managed through this package are declared in
weblogic.properties as
weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
This definition means that, when a servlet or JSP with a .ocg extension is
requested, it is forwarded to the package.
This implies two things:
* Regular JSP handling and package-based handling can coexist in the same
application server instance.
* It is possible to extend the implementation to support many extensions,
with as many package instances.
The package (ocgLoaderPkg) contains 2 classes:
* ocgServlet, a servlet instantiating JSP objects using a class loader.
* ocgLoader, the class loader itself.
A single class loader object is created.
Both the JSP instances and the classes are cached in hashtables.
The invalidation JSP is named jspUpdate.jsp.
To invalidate a JSP, it simply removes the instance and class entries from
the caches.
ocgServlet
* Lazily creates the class loader.
* Retrieves the target JSP instance from the cache, if possible.
* Otherwise, it uses the class loader to retrieve the target JSP class,
creates a target JSP instance and stores it in the cache.
* Forwards the request to the target JSP instance.
ocgLoader
* If the requested class does not have the extension ocgServlet is
configured to process, it behaves as a regular class loader and forwards
the request to the parent or system class loader.
* Otherwise, it retrieves the class from the cache, if possible.
* Otherwise, it loads the class.
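Here is a minimal sketch of the two classes. The directory path, the
URL-to-class-name mapping and the cache sizes are illustrative assumptions,
not my exact code, and error handling is cut down:

// Both classes live in package ocgLoaderPkg; sketched here as two files.

// --- ocgServlet.java ---
import java.io.IOException;
import java.util.Hashtable;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ocgServlet extends HttpServlet {
    // Sized so 3000 entries never trigger a rehash (3000 / 0.75 < 4096).
    static final Hashtable instances = new Hashtable(4096);
    static ocgLoader loader; // created lazily on the first request

    public void service(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // Illustrative mapping: "/foo.ocg" -> class "foo".
        String path = req.getServletPath();
        String name = path.substring(path.lastIndexOf('/') + 1,
                                     path.lastIndexOf('.'));
        HttpServlet page = (HttpServlet) instances.get(name);
        if (page == null) {
            synchronized (instances) {
                page = (HttpServlet) instances.get(name);
                if (page == null) {
                    if (loader == null) loader = new ocgLoader();
                    try {
                        page = (HttpServlet)
                            loader.loadClass(name, true).newInstance();
                        page.init(getServletConfig());
                        instances.put(name, page);
                    } catch (Exception e) {
                        throw new ServletException("cannot load " + name);
                    }
                }
            }
        }
        page.service(req, res); // hand the request to the target JSP instance
    }

    // Called by jspUpdate.jsp: drop the instance and the class so the
    // next request reloads the page from disk.
    public static void invalidate(String name) {
        instances.remove(name);
        if (loader != null) loader.classes.remove(name);
    }
}

// --- ocgLoader.java ---
public class ocgLoader extends ClassLoader {
    static final String ROOT = "/weblogic/myserver/ocgclasses/"; // assumption
    final Hashtable classes = new Hashtable(4096);

    public synchronized Class loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        Class c = (Class) classes.get(name);
        if (c == null) {
            java.io.File f = new java.io.File(ROOT + name + ".class");
            // Deterministic: if it is not one of our page classes,
            // delegate immediately to the system class loader.
            if (!f.exists()) return findSystemClass(name);
            try {
                byte[] b = readFile(f);
                c = defineClass(name, b, 0, b.length);
            } catch (java.io.IOException e) {
                throw new ClassNotFoundException(name);
            }
            classes.put(name, c);
        }
        if (resolve) resolveClass(c);
        return c;
    }

    private static byte[] readFile(java.io.File f) throws java.io.IOException {
        byte[] b = new byte[(int) f.length()];
        java.io.FileInputStream in = new java.io.FileInputStream(f);
        try {
            int off = 0;
            while (off < b.length) {
                int n = in.read(b, off, b.length - off);
                if (n < 0) throw new java.io.IOException("truncated file");
                off += n;
            }
        } finally {
            in.close();
        }
        return b;
    }
}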
Do you think it is a good solution?
I believe this solution is faster than the standard WLS one because it is a
very small piece of code, but also because:
- My class loader is deterministic: if the file has the right extension, I
don't call the class loader hierarchy first.
- I don't try to support jars. This was one of the hardest design decisions.
We definitely need a way to update a specific page, but at the same time
someone posted that NT could have problems handling 3000 files in the same
directory (it seems he was wrong).
- I don't try to check whether a class has been updated. For now a refresh
has to be requested through a JSP, but it could be an EJB.
- I don't try to check whether a source has been updated.
- As I know the number of JSPs, I can set the initial capacity of the
hashtables I use as caches fairly accurately, so I avoid rehashing (see the
example below).
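With the default load factor of 0.75, a hashtable first rehashes when it
grows past capacity x 0.75, so for 3000 pages an initial capacity of at
least 3000 / 0.75 = 4000 guarantees no rehash ever happens (the figure is
an illustration, not my exact value):

// 4096 > 3000 / 0.75, so the table never reaches its rehash threshold
Hashtable instanceCache = new Hashtable(4096);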

Use a profiler to find the bottlenecks in the system. You need to determine where the performance problems (if you even have any) are happening. We can't do that for you.

Similar Messages

  • TableView performance with large number of columns

    I notice that it takes a while for table views to populate when they have a large number of columns (> 100 or so, subjectively).
    Running VisualVM based on CPU Samples, I see that the largest amount of time is spent here:
    javafx.scene.control.TableView.getVisibleLeafIndex() 35.3% 8,113 ms
    next is:
    javafx.scene.Parent$1.onProposedChange() 9.5% 2,193 ms
    followed by
    javafx.scene.control.Control.loadSkinClass() 5.2% 1,193 ms
    I am using JavaFx 2.1 co-bundled with Java7u4. Is this to be expected, or are there some performance tuning hints I should know?
    Thanks,
    - Pat

    We're actually doing some TableView performance work right now; I wonder if you could file an issue with a simple reproducible test case? I haven't seen the same data you have here in our profiles (nearly all time is spent on reapplying CSS), so I would be interested in your exact test to be able to profile it and see what is going on.
    Thanks
    Richard
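
    A minimal reproducible case along these lines might look like this (the 150-column by 1000-row sizes and the class name are illustrative, not Pat's actual test):

    import javafx.application.Application;
    import javafx.beans.property.SimpleStringProperty;
    import javafx.beans.value.ObservableValue;
    import javafx.collections.FXCollections;
    import javafx.collections.ObservableList;
    import javafx.scene.Scene;
    import javafx.scene.control.TableColumn;
    import javafx.scene.control.TableColumn.CellDataFeatures;
    import javafx.scene.control.TableView;
    import javafx.stage.Stage;
    import javafx.util.Callback;

    // Builds a 150-column x 1000-row TableView; startup time grows with
    // the column count, which should reproduce the slow population.
    public class WideTableTest extends Application {
        public void start(Stage stage) {
            TableView<String[]> table = new TableView<String[]>();
            for (int c = 0; c < 150; c++) {
                final int col = c;
                TableColumn<String[], String> column =
                    new TableColumn<String[], String>("C" + col);
                column.setCellValueFactory(
                    new Callback<CellDataFeatures<String[], String>, ObservableValue<String>>() {
                        public ObservableValue<String> call(CellDataFeatures<String[], String> row) {
                            return new SimpleStringProperty(row.getValue()[col]);
                        }
                    });
                table.getColumns().add(column);
            }
            ObservableList<String[]> rows = FXCollections.observableArrayList();
            for (int r = 0; r < 1000; r++) {
                String[] row = new String[150];
                for (int c = 0; c < 150; c++) row[c] = r + "," + c;
                rows.add(row);
            }
            table.setItems(rows);
            stage.setScene(new Scene(table, 800, 600));
            stage.show();
        }

        public static void main(String[] args) { launch(args); }
    }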

  • RE: Tab Groups. 1. What will erase saved tab groups unintentionally? E.g.: Clearing Cache, running CCleaner? 2. Does keeping a large number of tab groups active degrade my computer's performance? 3. Are tab groups saved during back-ups?

    RE: Tab Groups.
    1. What will erase saved tab groups unintentionally? E.g. : Clearing Cache, running CCleaner, other actions?
    2. Does keeping a large number of tab groups active degrade my computer's performance?
    3. Are tab groups saved during back-ups?
    Running Win 7 Pro, browsing Firefox 7.0.1

    App (pinned) tabs and Tab Groups (Panorama) are stored as part of the session data in the file sessionstore.js in the Firefox profile folder.
    Make sure that you do not use "Clear Recent History" to clear the "Browsing History" when Firefox is closed because that prevails and prevents Firefox from opening tabs from the previous session.
    * https://support.mozilla.com/kb/Clear+Recent+History
    If you use cleanup software like CCleaner then make sure that Session is unchecked in the settings for the Firefox application.

  • BerkeleyDB + Tomcat + large number of databases.

    Hi all,
    for my bioinformatics project, I'd like to transform a large number of SQL databases (see http://hgdownload.cse.ucsc.edu/goldenPath/hg18/database/ ) to a set of read only BerkeleyDB JE databases.
    In my web application, the Environment would be loaded in Tomcat and one can imagine a servlet/JSP querying/browsing each database.
    Then I wonder what are the best practices ?
    Should I open each JE Database for each http request and close it at the end of the request ?
    Or should I just let each Database open once it has been opened ? Wouldn't it be a problem if all the database and secondary databases are all open ? Can I share one Database for some multiple threads ?
    Something else ?
    Many thanks for your help
    Thanks in advance
    Pierre

    Hi Pierre,
    Normally you should keep the Environment and all Databases open for the duration of the process, since opening and closing a database (and certainly an environment) per request is expensive and unnecessary. However, each open database takes some memory, so if you have an extremely large number of databases (thousands or more), you should consider opening and closing the databases at each request, or for better performance keeping a cache of open databases. Whether this is necessary depends on how much memory you have and how many databases.
    You'll find the answer to your multi-threading question in the getting started guide.
    Please read the docs and also search the forum.
    --mark
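
    As a sketch of the "cache of open databases" idea (assuming a single shared Environment and a recent JE release where DatabaseException is unchecked; names and sizes are illustrative):

    import java.io.File;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    // One Environment for the whole process, plus an LRU cache that closes
    // the least recently used Database once MAX_OPEN are open.
    public class DbCache {
        private static final int MAX_OPEN = 100; // tune to available memory
        private final Environment env;
        private final Map<String, Database> open =
            new LinkedHashMap<String, Database>(MAX_OPEN, 0.75f, true) {
                protected boolean removeEldestEntry(Map.Entry<String, Database> eldest) {
                    if (size() > MAX_OPEN) {
                        eldest.getValue().close();
                        return true;
                    }
                    return false;
                }
            };

        public DbCache(File home) {
            EnvironmentConfig config = new EnvironmentConfig();
            config.setReadOnly(true); // the databases are read-only here
            env = new Environment(home, config);
        }

        public synchronized Database get(String name) {
            Database db = open.get(name);
            if (db == null) {
                DatabaseConfig dc = new DatabaseConfig();
                dc.setReadOnly(true);
                db = env.openDatabase(null, name, dc);
                open.put(name, db);
            }
            return db;
        }
    }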

  • Communicate large number of parameters and variables between Veristand and Labview Model

    We have a dyno setup with a PXI-E chassis running Veristand 2014 and Inertia 2014. In order to enhance the capabilities and timing of Veristand, I would like to use Labview models to perform tasks not possible in Veristand and Inertia. An example of this is to determine the maximum of a large number of thermocouples. Veristand has a compare function, but it compares only two values at a time. This makes for some lengthy and inflexible programming. Labview, on the other hand, has a function which allows one to get the maximum of the elements in an array in a single step. To use Labview I need to "send" the 50 or so thermocouples to the Labview model. In addition to the variables which need to be communicated between Veristand and Labview, I also need to present Labview with the threshold and configuration parameters. From the forums and user manuals I understand that one has to use the connector pane in Labview and mapping in Veristand System Explorer to expose the inports and outports. The problem is that the Labview connector pane is limited to 27 I/O. How do I overcome that limitation?
    BTW, I am fairly new to Labview and Veristand.
    Thank you.
    Richard
    Solved!
    Go to Solution.

    @Jarrod:
    Thank you for the help. I created a simple test model and now understand how I can use clusters for a large number of variables. Regarding the mapping process: Can one map a folder of user channels to a cluster (one-step mapping)? Alternatively, I understand one can import a mapping (text) file in System Explorer. Is this import partial or does it replace all the mapping? The reason I am asking is that, if it is partial, then I can have separate mapping files for different configurations and my final mapping can be a combination of imported mapping files.
    @SteveK:
    Thank you for the hint on using a Custom Device. I understand that the Custom Device will be much more powerful and can be more generic. The problem at this stage is that my limitations in programming in Labview are far greater than Labview models' limitations in Veristand. I'll definitely consider the Custom Device route once I am more proficient with LabView. Hopefully I'll be able to re-use some of the VIs I created for the LabView models.
    Thanks
    Richard

  • Problem fetching a large number of records

    Hi
    I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and a query fetches 10,000 records from it. I use the secondary database as an index and move through it until I have fetched all of the information that matches my condition, but the performance of this loop is terrible.
    I know that when I use DB_MULTIPLE I fetch all of the information at once and performance improves, but
    I read that I cannot use this flag when I use a secondary database as an index.
    Please help me: is there a flag or implementation that fetches all of the information together, so that I can manage the data in my language?
    Thanks a lot
    regards
    saeed

    Hi Saeed,
    Could you post here your source code, that is compiled and ready to be executed, so we can take a look at the loop section ?
    You won't be able to do a bulk fetch, that is, retrieval with DB_MULTIPLE, given the fact that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation would be to position a cursor in the secondary on the first record with the secondary key 'master1', retrieve all the duplicate data (primary keys in the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
    Though, there may be another option that should be taken into consideration, if you are willing to handle more work in your source code, that is, having a database that acts as a secondary, in which you'll update the records manually, with regard to the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want to) as data. Note that for every modification that your perform on the std_info database you'll have to perform the corresponding modification on this database as well. You'll then be able to do the DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
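    For the first approach, the loop might look like this in the Java API (a sketch only; the C API is analogous, using the secondary cursor's pget calls). Note that a secondary cursor hands back the primary key and the primary record's data together:

    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.LockMode;
    import com.sleepycat.db.OperationStatus;
    import com.sleepycat.db.SecondaryCursor;
    import com.sleepycat.db.SecondaryDatabase;

    // Sketch: walk every record whose secondary key is "master1".
    public class MasterScan {
        public static void scan(SecondaryDatabase index) throws DatabaseException {
            DatabaseEntry skey = new DatabaseEntry("master1".getBytes());
            DatabaseEntry pkey = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry();
            SecondaryCursor c = index.openSecondaryCursor(null, null);
            try {
                OperationStatus s = c.getSearchKey(skey, pkey, data, LockMode.DEFAULT);
                while (s == OperationStatus.SUCCESS) {
                    // process pkey / data here
                    s = c.getNextDup(skey, pkey, data, LockMode.DEFAULT);
                }
            } finally {
                c.close();
            }
        }
    }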
    I have another question: is there any way to fetch information by record number?
    For example, fetch the information located at the third record of my database.
    I guess you're referring to logical record numbers, like the relational database's ROW_ID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified) this is not possible directly. You could do this if you use a cursor and iterate through the records, stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your database had operated with logical record numbers (BTree with DB_RECNUM, Queue or Recno) this would have been possible directly:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
    Regards,
    Andrei

  • Approach to parse large number of XML files into the relational table.

    We are exploring the option of XML DB for processing a large number of files arriving on the same day.
    The objective is to parse the XML file and store in multiple relational tables. Once in relational table we do not care about the XML file.
    The file cannot be stored on the file server and needs to be stored in a table before parsing, due to security issues. A third party system will send the file and will store it in the XML DB.
    File size can be between 1MB and 50MB, and high performance is very much expected, otherwise the solution will be tossed.
    Although we do not have XSD, the XML file is well structured. We are on 11g Release 2.
    Based on my reading, this is my approach:
    1. CREATE TABLE XML_DATA
    (xml_col XMLTYPE)
    XMLTYPE xml_col STORE AS SECUREFILE BINARY XML;
    2. The third party will store the data in the XML_DATA table.
    3. Create XMLINDEX on the unique XML element
    4. Create views on XMLTYPE
    CREATE OR REPLACE FORCE VIEW V_XML_DATA(
       Stype,
       Mtype,
       MNAME,
       OIDT)
    AS
       SELECT x."Stype",
              x."Mtype",
              x."Mname",
              x."OIDT"
       FROM   data_table t,
              XMLTABLE (
                 '/SectionMain'
                 PASSING t.data
                 COLUMNS Stype VARCHAR2 (30) PATH 'Stype',
                         Mtype VARCHAR2 (3) PATH 'Mtype',
                         MNAME VARCHAR2 (30) PATH 'MNAME',
                         OIDT VARCHAR2 (30) PATH 'OID') x;
    5. Bulk load the parsed data into the staging table based on the index column.
    Please comment on the above approach and any suggestions that can improve the performance.
    Thanks
    AnuragT

    Thanks for your response. It gives me more confidence.
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    Example XML
    <SectionMain>
    <SectionState>Closed</SectionState>
    <FunctionalState>CP FINISHED</FunctionalState>
    <CreatedTime>2012-08</CreatedTime>
    <Number>106</Number>
    <SectionType>Reel</SectionType>
    <MachineType>CP</MachineType>
    <MachineName>CP_225</MachineName>
    <OID>99dd48cf-fd1b-46cf-9983-0026c04963d2</OID>
    </SectionMain>
    <SectionEvent>
    <SectionOID>99dd48cf-2</SectionOID>
    <EventName>CP.CP_225.Shredder</EventName>
    <OID>b3dd48cf-532d-4126-92d2</OID>
    </SectionEvent>
    <SectionAddData>
    <SectionOID>99dd48cf2</SectionOID>
    <AttributeName>ReelVersion</AttributeName>
    <AttributeValue>4</AttributeValue>
    <OID>b3dd48cf</OID>
    </SectionAddData>
    <SectionAddData>
    <SectionOID>99dd48cf-fd1b-46cf-9983</SectionOID>
    <AttributeName>ReelNr</AttributeName>
    <AttributeValue>38</AttributeValue>
    <OID>b3dd48cf</OID>
    </SectionAddData>
    <BNCounter>
    <SectionID>99dd48cf-fd1b-46cf-9983-0026c04963d2</SectionID>
    <Run>CPFirstRun</Run>
    <SortingClass>84</SortingClass>
    <OutputStacker>D2</OutputStacker>
    <BNCounter>54605</BNCounter>
    </BNCounter>
    I was not aware of virtual columns, but it looks like we can use one and avoid creating views by just inserting directly into
    the staging table using the virtual column.
    Suppose OID is the unique identifier of each XML file and I created a virtual column:
    CREATE TABLE po_Virtual OF XMLTYPE
    XMLTYPE STORE AS BINARY XML
    VIRTUAL COLUMNS
    (OID_1 AS (XMLCAST(XMLQUERY('/SectionMain/OID'
    PASSING OBJECT_VALUE RETURNING CONTENT)
    AS VARCHAR2(30))));
    1. My question is: how will I then write this query, NOT USING COLUMN XML_COL?
    SELECT x."SECTIONTYPE",
    x."MACHINETYPE",
    x."MACHINENAME",
    x."OIDT"
    FROM po_Virtual t,
    XMLTABLE (
    '/SectionMain'
    PASSING t.xml_col                          <--WHAT WILL PASSING HERE SINCE NO XML_COL
    COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
    MachineType VARCHAR2 (3) PATH 'MachineType',
    MachineName VARCHAR2 (30) PATH 'MachineName',
    OIDT VARCHAR2 (30) PATH 'OID') x;
    2. Instead of creating the view, can I then do
    insert into STAGING_table_yyy ( col1 ,col2,col3,col4,
    SELECT x."SECTIONTYPE",
    x."MACHINETYPE",
    x."MACHINENAME",
    x."OIDT"
    FROM xml_data t,
    XMLTABLE (
    '/SectionMain'
    PASSING t.xml_col                         <--WHAT WILL PASSING HERE SINCE NO XML_COL
    COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
    MachineType VARCHAR2 (3) PATH 'MachineType',
    MachineName VARCHAR2 (30) PATH 'MachineName',
    OIDT VARCHAR2 (30) PATH 'OID') x
    where oid_1 = '99dd48cf-fd1b-46cf-9983';<--VIRTUAL COLUMN
    insert into STAGING_table_yyy ( col1 ,col2,col3
    SELECT x."SectionOID",
    x."EventName",
    x."OIDT"
    FROM xml_data t,
    XMLTABLE (
    '/SectionMain'
    PASSING t.xml_col                         <--WHAT WILL PASSING HERE SINCE NO XML_COL
    COLUMNS SectionOID PATH 'SectionOID',
    EventName VARCHAR2 (30) PATH 'EventName',
    OID VARCHAR2 (30) PATH 'OID',
    ) x
    where oid_1 = '99dd48cf-fd1b-46cf-9983';<--VIRTUAL COLUMN
    Same insert for other tables usind the OID_1 virtual coulmn
    3. Finally, once done, how can I delete the XML document from the XML table?
    If I am using the virtual column then I believe it will be easy:
    DELETE FROM po_Virtual WHERE oid_1 = '99dd48cf-fd1b-46cf-9983';
    But in case we cannot use the virtual column, how can we delete the data?
    Thanks in advance
    AnuragT

  • Large number of http posts navigating between forms

    Hi,
    i'm not really a forms person (well, not since v3/4 running character mode on a mainframe!), so please be patient if I'm not providing the most useful information.
    An Oracle Forms 10 system that I have fallen into supporting has, to me, very poor performance in doing simple things like navigating between forms/tabs.
    Looking at the Java console (running Sun JRE 1.6.0_17) and turning on network tracing, I can see a much larger number of POST requests than I would expect (I looked here first as initially we had an issue with every request going via a proxy server, and I wondered if we had lost the bypass-proxy setting). Only a normal number of GETs, though.
    Moving to one particular detail form from a master record is generating over 300 POST requests - I've confirmed this looking at the Apache logs on the server. This is the worst one I have found, but in general the application appears to be extremely 'chatty'.
    The only other system I work with which uses Forms doesn't generate anything like these numbers of requests, which makes me think this isn't normal (as well as the fact this particular form is very slow to open).
    This is a third party application, so I don't have access to the source unfortunately.
    Is there anything we should look at in our setup, or is this likely to be an application coding issue? This app is a recent conversion from a Forms 6 client/server application (which itself ran OK, at least this bit of the application did, with no delays in navigation between screens).
    I'm happy to go back to the supplier, but it might help if I can point them in some specific directions, plus I'd like to know what's going on too!
    Regards,
    Carl

    Sounds odd. 300 requests is by far too much. As it was a C/S application: did they do anything else except the recompile on 10g? Moving from C/S to 10g webforms seems to be easy, as you just need to recompile, but in fact it isn't. There are many things which didn't matter in a C/S environment but have disastrous effects once the form is deployed over the web - the synchronize built-in, for example. In C/S, calls to synchronize weren't that bad; but when you are using web-deployed forms, each call to synchronize is a round trip. The usage of timers is also best kept at a low level in webforms, for example.
    A good starting point for the whole do's and dont's when moving forms to the web is the forms upgrade center:
    http://www.oracle.com/technetwork/developer-tools/forms/index-095046.html
    If you don't have the source code available, that's unfortunate; but if you want to know what's happening behind the scenes, there is the possibility to trace a forms session:
    http://download.oracle.com/docs/cd/B14099_19/web.1012/b14032/tracing002.htm#i1035515
    maybe this sheds some light upon what's going on.
    cheers

  • Slow record selection in tableView component with large number of records

    Hi experts,
    we have a Business Server Page (flow logic) with several htmlb:inputField's. As known from the SAP standard, we would like to offer a value helper (F4) to the users for ease of record selection.
    We use the onValueHelp() method of the inputField to open an extra browser window through JavaScript. In the popup another HTML page is called, containing a tableView component with all available records. We use the SINGLESELECT mode for the table view.
    Everything works perfectly and efficiently, unless the tableView contains too many entries. If the number of possible entries is large, the whole component performs very, very slowly. For example, the selection of a record can take more than one minute. Also the navigation between pages through the buttons at the bottom of the component takes a lot of time. It seems that the tableView component cannot handle so many entries.
    We tried to switch between stateful and stateless mode, without success. Is there a way to perform the tableView selection without doing a server round trip? Any ideas and comments will be appreciated.
    Best regards,
    Sebastian

    Hi Raja,
    thank you for your hint. I took a look at sbspext_table/TableViewClient.bsp but did not really understand how the JavaScript coding works. Where is the JavaScript code in that example? Which file contains it?
    Meanwhile I implemented another way to avoid the server round trip:
    - Switch page mode of the popup window to "Stateful"
    - Use OnInitialization method like OnCreate (as shown in [using OnInitialization like OnCreate])
    - Limit the results of the SELECT statement with UP TO 1000 ROWS
    Best regards,
    Sebastian

  • Analyze table after insert a large number of records?

    For performance purposes, is it good practice to execute an 'analyze table' command after inserting a large number of records into a table in Oracle 10g, if there is a complex query following the insert?
    For example:
    Insert into foo ...... //Insert one million records to table foo.
    analyze table foo COMPUTE STATISTICS; //analyze table foo
    select * from foo, bar, car...... //Execute a complex query without hints
    //after 1 million records inserted into foo
    Does this strategy help to improve the overall performance?
    Thanks.

    Different execution plans will most frequently occur when the ratio of the number of records in the various tables involved in the select has changed tremendously. This happens above all if 'fact' tables are growing while 'lookup' tables stay constant.
    This is why you shouldn't test an application with a small number of 'fact' records.
    This can happen both with analyze table and dbms_stats.
    The advantage of dbms_stats is that it will export the current statistics to a stats table, so you can always revert to them using dbms_stats.import_stats.
    You can even overrule individual table and column statistics by artificial values.
    Hth
    Sybrand Bakker
    Senior Oracle DBA

  • Should we create a large number of folders within a list?

    For a custom list having 50K items it is more convenient for the user to create folders to segregate custom list items logically. But if we consider performance, is it recommended to create a large number of folders within a custom list?
    What are the pros and cons of creating folders within a custom list?

    Hi SunilKumar,
    In a SharePoint list, a folder can also be seen as a list item; considering the large number of list items existing in your list, the influence of these extra folders
    on your site won't be so apparent.
    However, for better item management, using extra columns for grouping items would be more recommended.
    Best regards,
    Patrick
    Patrick Liang
    TechNet Community Support

  • Fastest Way to Delete a large number of data

    I'm trying to delete a large number of rows of data.
    This proc will be running every day without disturbing the other processes.
    SET @sql = 'DELETE TOP (100) FROM ' + @TableName + ' WHERE Id IN (SELECT DISTINCT Id FROM [state] WHERE ToDel = 1)';
    SET @Deleted_Rows = 1;
    SET @Rows = 0;
    WHILE (@Deleted_Rows > 0)
    BEGIN
        BEGIN TRAN;
        EXEC sp_executesql @sql;
        SET @Deleted_Rows = @@ROWCOUNT;
        SET @Rows = @Rows + @Deleted_Rows;
        COMMIT TRAN;
        WAITFOR DELAY '00:00:00:01';
    END
    Do you have any idea how I can optimize my query?
    Thank you very much

    Hi, following are a few thoughts.
    1. If you can manage without dynamic SQL it may yield better performance; apart from the table name, there is no need for it in your query.
    2. Ensure indexes are in place.
    3. If the table in question is not participating in replication then you may go for a slightly higher batch size; again, it depends on volume.
    Please take a look at the tweaked code.
    declare @Deleted_Rows int, @Rows int;
    SET @Deleted_Rows = 1;
    SET @Rows = 0;
    WHILE (@Deleted_Rows > 0)
    BEGIN
    BEGIN TRAN;
    DELETE TOP (100) t1
    FROM
    tabName as t1
    WHERE exists (Select * FROM [state] as t2 WHERE t2.ToDel = 1 and t2.Id = t1.id);
    SET @Deleted_Rows = @@ROWCOUNT ;
    SET @Rows = @Rows + @Deleted_Rows;
    COMMIT TRAN;
    WaitFor DELAY '00:00:00:01';
    END
    Print 'Total rows deleted : '+convert(varchar(50), @Rows);

  • Sending large number of emails from SharePoint

    Hello,
    I am in a situation where we have to develop a solution using client-side/OOTB features of SP 2013 to send emails to a large number of users from a SharePoint list. The list will have 10000+ user records, logically grouped. The user should be able
    to select any one or many groups to send email notifications. Each group can contain close to 5000+ user emails. The list can contain email IDs of people outside the organization, so SP groups cannot be used. Can you suggest a clean solution that will not
    have a serious performance impact? Nintex workflows can also be considered if it will not affect performance and if it is reliable in this situation. Appreciate any pointers/approach.
    Thanks,
    Aj
    aju

    Hi,
    create a PowerShell script to send these mails:
    http://www.ehow.com/how_8510132_send-email-powershell.html
    If you need more details, let me know
    Romeo Donca, Orange Romania (MCSE, MCITP, CCNA) Please Mark As Answer if my post solves your problem or Vote As Helpful if the post has been helpful for you.

  • Filtering large number of members

    Hi:
    I have a dimension (D1) which has a large number of members, currently 40K, and I expect it to grow.
    I have another dimension (D2) which has about 1400 members, is also a property of D1, and is an authorization object. Each user has access to either 2 or 3 members. When the users log in and open any schedule, they are restricted to only the authorized members.
    In my Evdre, I need the users to enter data by D1, so I filter D1 based on which D2 member is selected. This works fine; however, it takes a very long time to expand when opening the schedule.
    Any idea how the filtering takes place? Where does this filtering take place - server or client?
    Are there any options to improve the performance?
    Thanks,
    Subramania

    Hi Subramania,
    While getting data from the database, BPC does not filter at the database level. It will retrieve all the members from the back end and will filter on the client side using Excel functionality.
    To my knowledge, for now, BPC does not support filtering at the database level.
    Hope someone else can help on how to improve performance.
    Regards,
    Kranthi
