Updating a Large Number of Topics
Hello All,
I'm looking for some advice from the community regarding something my business users are looking at doing. They have an upcoming project that requires updating a large number of procedures within a RoboHelp project, and during a project cutover weekend this large batch of changes will go live.
Up until the cutover weekend, as users update the procedures, none of these changes can be made available; i.e. the changes must remain hidden. Alongside the changes for the cutover weekend, users will also be making daily changes and publishing them.
What's the best way to work on these bulk changes and have them appear come cutover weekend? Are there options to mark them as drafts so that, as the project is generated, these changes are not incorporated into the generated output? Also, for any updated topic, the previous version needs to be archived and available under a separate archived section within the same project. Finally, any links to the topic need to be maintained; i.e. if a topic is updated and replaced with a newer version, all links to the existing topic need to point to the new topic.
Just wondering if the community had some advice on how best to accomplish this.
Thanks in advance for the assistance!
Dave
Hi Dave. You have the following options:
1. Publish the help to a new, separate location. Then at the appointed time change the link to the help file to point to it. This has the advantage that you have the archive and, of course, the information stays hidden from your users until the appointed time. You don't say how the help is accessed, but the disadvantage of this approach is that all links (i.e. any inside an application) would have to be changed.
2. Do not publish the output at all until the appointed time. You'd have to manually take a copy of the existing help for your archive. You could use the command line to generate the output at a specific time if need be.
My preferred option is solution 1.
The RoboColum(n)
@robocolumn
Colum McAndrew
Similar Messages
-
Issue in updating a large number of rows, which is taking a long time
Hi all,
I am new to the Oracle forums. First I will explain my problem below:
1) I have a table of 350 columns for which I have two indexes: one on the primary key's id
and the other a composite index (a combination of two functional ids).
2) Through my application, the user can calculate some functional conditions, and the result
is updated in the same table.
3) The table consists of all input, intermediate and output columns.
4) All calculation is done through update statements. The problem is, for one
complete process, the total number of update statements hitting the db is around 1000.
5) Of the two indexes, one indexed column is mandatory in every update's where clause. So one
is always present, but the other is optional.
6) Updating the table takes a long time if the row count exceeds 1 lakh (100,000).
7) I will now explain the scenario:
a. Say there are 500,100 records in the table, in which mandatory indexed column id 1 has
100 records and id 2 has 5 lakh (500,000) records.
b. If I process id 1, it is very fast and executes within 10 seconds. But if I process id 2,
it takes more than 4 minutes to update.
Is there any way to increase the speed of the update statements? I am using Oracle 10g.
Please help me with this, since I am a developer and don't have much knowledge of Oracle.
Thanks in advance.
Regards,
Sethu
Refer to the link:
http://hoopercharles.wordpress.com/2010/03/09/vsession_longops-wheres-my-sql-statement/ -
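For the pattern described above (around 1000 separate UPDATE statements per process), the usual first step is to collapse row-by-row updates into a single set-based statement where the updates are independent. A minimal, hypothetical illustration using Python's sqlite3 as a stand-in for Oracle (table and column names are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, func_id INTEGER, val REAL)")
conn.executemany("INSERT INTO results VALUES (?, ?, ?)",
                 [(i, i % 2, float(i)) for i in range(10)])
# Index on the mandatory where-clause column, as in the thread.
conn.execute("CREATE INDEX results_func_idx ON results (func_id)")

# Row-by-row (slow): one statement and one index probe per row.
ids = [r[0] for r in conn.execute("SELECT id FROM results WHERE func_id = 1")]
for rowid in ids:
    conn.execute("UPDATE results SET val = val * 2 WHERE id = ?", (rowid,))

# Set-based (fast): one statement updates every matching row in a single pass.
conn.execute("UPDATE results SET val = val * 2 WHERE func_id = 1")
conn.commit()
```

Whether this applies depends on whether the ~1000 updates are truly independent; if each depends on the previous one's result, they can often still be merged using CASE expressions or a MERGE.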
I have a query as follows:
UPDATE TABLE_1 A SET COLUMN_1 = (SELECT COLUMN_1 FROM TABLE_2 B WHERE A.COLUMN_2 = B.COLUMN_2)
Both tables have 400k to 500k rows and the update is taking a long time. How can I improve this update statement? Can I use a parallel query? How about using hints?
Thanks
How can I improve this update statement?
You can add a WHERE clause to make sure you don't overwrite an existing column with NULL when no row is found in the subquery:
UPDATE TABLE_1 A
SET a.COLUMN_1 = (SELECT b.COLUMN_1
FROM TABLE_2 B
WHERE A.COLUMN_2 = B.COLUMN_2)
WHERE EXISTS (SELECT b.COLUMN_1
FROM TABLE_2 B
WHERE A.COLUMN_2 = B.COLUMN_2)
;
For the performance, you'll need to look at (and post) an explain plan to see what's going on. -
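A quick runnable illustration of why the WHERE EXISTS guard above matters, using Python's sqlite3 as a stand-in for Oracle (table names follow the thread; the data is invented):

```python
import sqlite3

def fresh():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE table_1 (column_1 TEXT, column_2 INTEGER)")
    conn.execute("CREATE TABLE table_2 (column_1 TEXT, column_2 INTEGER)")
    conn.execute("INSERT INTO table_1 VALUES ('old_a', 1), ('old_b', 2)")
    conn.execute("INSERT INTO table_2 VALUES ('new_a', 1)")  # no row matches column_2 = 2
    return conn

# Unguarded correlated update: the unmatched row is clobbered to NULL.
c = fresh()
c.execute("""UPDATE table_1 SET column_1 =
             (SELECT b.column_1 FROM table_2 b WHERE table_1.column_2 = b.column_2)""")
unguarded = c.execute("SELECT column_1 FROM table_1 ORDER BY column_2").fetchall()
# unguarded == [('new_a',), (None,)]

# Guarded with WHERE EXISTS: the unmatched row keeps its old value.
c = fresh()
c.execute("""UPDATE table_1 SET column_1 =
             (SELECT b.column_1 FROM table_2 b WHERE table_1.column_2 = b.column_2)
             WHERE EXISTS (SELECT 1 FROM table_2 b
                           WHERE table_1.column_2 = b.column_2)""")
guarded = c.execute("SELECT column_1 FROM table_1 ORDER BY column_2").fetchall()
# guarded == [('new_a',), ('old_b',)]
```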
How to handle a large number of query parameters for a Browse screen
I need to implement an advanced search functionality in a browse screen for a large table. The table has 80+ columns and therefore will have a large number of possible query parameters. The screen will be built on a modeled query with all
of the parameters marked as optional. Given the large number of parameters, I am thinking that it would be better to use a separate screen to receive the parameter input from the user, rather than a Popup. Is it possible for example to have a search
button on the browse screen (screen a) open a new screen (screen b) that contains all of the search parameters, have the user enter the parameters they want, then click a button to send all of the parameters back to screen a where the query is executed and
the search results are returned to the table control? This would effectively make screen b an advanced modal window for screen a. In addition, if the user were to execute the query, then want to change a parameter, they would need to be able to
re-open screen b and have all of their original parameters still set. How would you implement this, or otherwise deal with a large number of optional query parameters in the html client? My initial thinking is to store all of the parameters in
an object and use beforeShown/afterClosed to pass them between the screens, but I'm not quite sure how to make that work. TIA
Wow Josh, thanks. I have a lot of reading to do. What I ultimately plan to do with this (my other posts relate to this too) is have a separate screen for advanced filtering that also allows the user to save their queries if desired.
There is an excellent way to get at all of the query information in the Query_Executed() method. I just put an extra Boolean parameter in the query called "SaveQuery" and when true, the Query_Executed event triggers an entry into a table with
the query name, user name, and parameter value pairs that the user entered. Upon revisiting the screen, I want the user to be able to select from their saved queries and load all the screen parameters (screen properties) from their selected query.
I almost have it working. It may be as easy as marking all of the screen properties that are query parameters as screen parameters (not required), then passing them in from the saved query data (filtered by username, queryname, and selected
item). I'll post an update once I get it. Probably will have some more questions as I go through it. Thanks again! -
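The save/restore bookkeeping described above is framework-agnostic; a minimal sketch of the idea (parameter pairs stored per user and query name, reloaded later into screen properties; all names are invented for illustration):

```python
# (user, query_name) -> {parameter name: value}
saved_queries = {}

def save_query(user, query_name, params):
    """What the Query_Executed handler does when SaveQuery is true."""
    saved_queries[(user, query_name)] = dict(params)

def load_query(user, query_name):
    """Restore a user's saved parameters, e.g. back into screen properties."""
    return dict(saved_queries.get((user, query_name), {}))

save_query("dave", "open orders", {"Status": "Open", "Region": "West"})
params = load_query("dave", "open orders")   # {'Status': 'Open', 'Region': 'West'}
```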
I have a large number of photos imported into iPhoto with the dates wrong. How can I adjust multiple photos (with varying dates) to the same, correct, date?
If I understand you correctly, when you enter a date in the Adjust Date and Time window, the picture does not update with the date you enter. If that is the case then something is wrong with iPhoto or perhaps your library.
How large a date change are you putting in? iPhoto currently has an issue with date changes beyond about 60 years at a time. If the difference between the current date on the image and the date you are entering is beyond that range that may explain why this is not working.
If that is not the case:
Move the following to the Trash, restart the computer and try again:
Home > Library > Caches > com.apple.iphoto
Home > Library > Preferences > com.apple.iPhoto (There may be more than one. Remove them all.)
---NOTE: to get to "Home > Library", hold down Option on the keyboard and click "Go" > "Library" in the Finder.
Let me know the results. -
How can I delete all my junk mail or a large number of emails in one go?
Please can someone tell me if I can delete all the mail in my junk folder, or a large number of emails (100+), in one go? I'm so fed up of spending 5-10 mins selecting each mail and deleting it one by one!
Please help or apple update ASAP. Even old school phones have a delete all button!
Thanks :)
Lulu6094 wrote:
Even old school phones have a delete all button!
But the iPad mail app does not have one. You must select them one at a time.
http://www.apple.com/feedback/ipad.html -
How to show data from a table having large number of columns
Hi ,
I have a report with a single row having a large number of columns. I have to use a scroll bar to see all the columns.
Is it possible to design the report in the below format (half of the columns on one side of the page, half on the other):
Column1 Data | Column11 Data
Column2 Data | Column12 Data
Column3 Data | Column13 Data
Column4 Data | Column14 Data
Column5 Data | Column15 Data
Column6 Data | Column16 Data
Column7 Data | Column17 Data
Column8 Data | Column18 Data
Column9 Data | Column19 Data
Column10 Data | Column20 Data
I am using Apex 4.2.3 on Oracle 11g XE.
user2602680 wrote:
Please update your forum profile with a real handle instead of "user2602680".
Yes, this can be achieved using a custom named column report template. -
How to calculate the area of a large number of polygons in a single query
Hi forum
Is it possible to calculate the area of a large number of polygons in a single query using a combination of SDO_AGGR_UNION and SDO_AREA? So far, I have tried doing something similar to this:
select sdo_geom.sdo_area(
  (select sdo_aggr_union(sdoaggrtype(mg.geoloc, 0.005))
   from mapv_gravsted_00182 mg
   where mg.dblink = 521 or mg.dblink = 94 or mg.dblink = 38 <many many more....>),
  0.0005) calc_area
from dual
The table MAPV_GRAVSTED_00182 contains 2 fields - geoloc (SDO_GEOMETRY) and dblink (an id field) needed for querying specific polygons.
As far as I can see, I need to first somehow get a single SDO_GEOMETRY object and use this as input for the SDO_AREA function. But I'm not 100% sure, that I'm doing this the right way. This query is very inefficient, and sometimes fails with strange errors like "No more data to read from socket" when executed from SQL Developer. I even tried with the latest JDBC driver from Oracle without much difference.
Would a better approach be to write some kind of stored procedure, that adds up all the single geometries by adding each call to SDO_AREA on each single geometry object - or what is the best approach?
Any advice would be appreciated.
Thanks in advance,
Jacob
Hi
I am now trying to update all my spatial tables with SRIDs. To do this, I try to drop the spatial index first and recreate it after the update. But for a lot of tables I can't drop the spatial index. Whenever I try to DROP INDEX <spatial index name>, I get this error - anyone know what this means?
Thanks,
Jacob
Error starting at line 2 in command:
drop index BSSYS.STIER_00182_SX
Error report:
SQL Error: ORA-29856: error occurred in the execution of ODCIINDEXDROP routine
ORA-13249: Error in Spatial index: cannot drop sequence BSSYS.MDRS_1424B$
ORA-13249: Stmt-Execute Failure: DROP SEQUENCE BSSYS.MDRS_1424B$
ORA-29400: data cartridge error
ORA-02289: sequence does not exist
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 27
29856. 00000 - "error occurred in the execution of ODCIINDEXDROP routine"
*Cause: Failed to successfully execute the ODCIIndexDrop routine.
*Action: Check to see if the routine has been coded correctly.
Edit - just found the answer for this in MetaLink note 241003.1. Apparently there is some internal problem when dropping spatial indexes; some objects get dropped that shouldn't be. The solution is to manually create the sequence it complains it can't drop, then it works... Weird error. -
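Back on the original area question: if the polygons are known not to overlap, the area of the union is simply the sum of the individual areas, so calling SDO_AREA per row and SUMming the result avoids building one huge aggregate geometry (if polygons can overlap, the union is still needed to avoid double counting). A pure-Python sketch of the idea using the shoelace formula (coordinates invented for illustration):

```python
def polygon_area(vertices):
    """Shoelace formula: area of a simple polygon given (x, y) vertices."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Two disjoint unit squares: total area is just the sum of the parts,
# no union computation needed.
squares = [
    [(0, 0), (1, 0), (1, 1), (0, 1)],
    [(2, 0), (3, 0), (3, 1), (2, 1)],
]
total = sum(polygon_area(p) for p in squares)   # 2.0
```

In SQL terms the equivalent would be something like `select sum(sdo_geom.sdo_area(mg.geoloc, 0.005)) from mapv_gravsted_00182 mg where ...`, which should be far cheaper than aggregating the union first.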
Best practice for handling data for a large number of indicators
I'm looking for suggestions or recommendations for how to best handle a UI with a "large" number of indicators. By large I mean enough to make the block diagram quite large and ugly after the data processing for each indicator is added. The data must be "unpacked" and then decoded, e.g., booleans, offset-binary bit fields, etc. The indicators are updated once per second. I am leaning towards a method that worked well for me previously, that is, binding network shared variables to each indicator, then using several sub-VIs to process the particular piece of data and write to the appropriate variables.
I was curious what others have done in similar circumstances.
Bill
“A child of five could understand this. Send someone to fetch a child of five.”
― Groucho Marx
Solved!
Go to Solution.
I can certainly feel your pain.
Note that's really what is going on in that PNG. You can see the Action Engine responsible for updating the display to the far right.
In my own defence: the FP concept was presented to the client's customer before they had a person familiar with LabVIEW identified, so I worked it this way through no choice of my own. I knew it would get ugly before I walked in the door and chose to meet the challenge head on anyway. Defer Panel Updates was my very good friend. The sensors these objects represent were constrained to pass info via a single ZigBee network, so I had the benefit of fairly low data rates, but even changing views (yes, there is a display mode that swaps what information is displayed for each sensor) was fast enough that the user still got a responsive GUI.
(The GUI did scale poorly though! That is a lot of wires! I was grateful to Jack for the idea to make align and distribute work on wires.)
Jeff -
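The unpack-and-decode step Bill mentions (booleans, offset-binary bit fields) is easy to sketch outside LabVIEW; a hypothetical Python illustration with an invented 16-bit status-word layout:

```python
# Decode one hypothetical 16-bit status word into indicator values:
#   bits 0-3  -> four boolean flags
#   bits 4-11 -> an 8-bit offset-binary value (offset 128, so 0x80 means 0)
def decode_status(word):
    flags = [bool(word & (1 << i)) for i in range(4)]
    raw = (word >> 4) & 0xFF           # extract the 8-bit field
    temperature = raw - 128            # undo the offset-binary encoding
    return {"flags": flags, "temperature": temperature}

sample = (0b1010_0000 << 4) | 0b0101   # raw field 0xA0 (= +32), flags 1,0,1,0
decoded = decode_status(sample)        # decoded["temperature"] == 32
```

In the LabVIEW design described, each sub-VI would do the equivalent of one such decode and write the results to the bound shared variables.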
Mail freezes when updating large RSS feed boxes
This is something I encountered, and here is the work around I formulated. I have discovered that Mail will freeze when updating large RSS feed boxes.
A little history: after discovering RSS and how feeds work, I began building a collection of feeds. They included feeds from many major news providers. Over the years, the number of articles from some of these sources went into the thousands.
As my database of articles grew, I encountered more severe and more frequent freezes of the application Mail. I began to take the computer off-line in order to work in these files. But inevitably, a situation would arise that would lead to the Mail application freezing.
I isolated the issue to the RSS feed boxes within Mail. The freeze would not occur when the RSS feed boxes were collapsed. Also, the freeze only affected the Mail application. Mac OS was not affected, and I was always able to close Mail from the Force Quit menu. The Force Quit menu also confirmed that Mail was indeed frozen by listing it as "not responding."
Work around: to resolve this issue, I first chose to remove RSS feeds that I had subscribed to but used very infrequently. Second, I changed the setting for deleting old feed articles from "never" to "every two weeks" within the Mail preferences menu (RSS sub-menu).
I think it took a while for Mail to fully delete messages older than two weeks. In fact, when I began deleting whole feeds, it took some time for those feeds to be removed from my mail box tree. Within the Activity Monitor application, I could see that a lot of disk use was occurring, even though the OS was allowing me to continue to use Mail and other applications.
To assist this process, I took my computer off-line and stepped away from it. Upon my return, disk use was down to normal, the number of articles in many RSS boxes was greatly reduced, and my disk had recovered over a GB of space. Also, Mail seems to be behaving properly, with smooth and quick performance.
If you found this article, I hope the information provided has been helpful! After a quick search of previous posts, an entirely similar post was not found. However, others are finding the Mail application will freeze, but not necessarily for the same reason.
Since I don't want to download any attachments from RSS feeds in Mail.app, is there any way to turn off the attachment download once and for all? I also get the beach ball for minutes when an item has a big attachment, and I fear my HD is cluttered with files I don't use.
-
Loading FF4 wiped out a large number of tabs, how do I restore?
I thought I was loading a security update. WHAT?! ALL MY TABS I WAS USING ARE WIPED OUT? Really? Or have you added a way to restore them?
Apparently the default setting in FF4 is to destroy the preserved tabs during the update.
My browser had a large number of tabs preserved. FF4 apparently wiped them out without asking. WTF people, why pull such unexpected stuff on your users? (Yes, I'm really pissed. It will take me hours to track down a couple of those pages again.) It wouldn't have been so bad if I knew I was getting FF4. Then I could have bookmarked the open pages. Again, I thought it was a security update.
The fix: have FF4 detect preserved tabs once (during the update), then display a warning message to allow users to bring the tabs into the new updated browser.
Thanks. I assumed I needed to set in and out points. I have not done an image sequence in FCP. If I have a 10 second image and a 2 second transition, but then change the duration of the transition to say 4 seconds, will FCP let me alter the transition duration?
With regards to movie clips, what happens with setting global in and out points? How would I do that?
Thanks in advance -
Problem fetching a large number of records
Hi
I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and a query fetches 10,000 records from it. I use the secondary database as an index and iterate through it until I have fetched all of the records that match my condition, but the performance of this loop is terrible.
I know that DB_MULTIPLE fetches all of the information at once and improves performance, but
I read that I cannot use this flag when I use a secondary database as an index.
Please suggest a flag or an approach that fetches all of the matching records together so that I can process the data in my language.
Thanks a lot
regards
Saeed
Hi Saeed,
Could you post here your source code, that is compiled and ready to be executed, so we can take a look at the loop section ?
You won't be able to do bulk fetch, that is retrieval with DB_MULTIPLE, given the fact that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation would be to position a cursor in the secondary on the first record with the secondary key 'master1', retrieve all the duplicate data (primary keys in the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
Though, there may be another option that should be taken into consideration, if you are willing to handle more work in your source code, that is, having a database that acts as a secondary, in which you'll update the records manually, with regard to the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want to) as data. Note that for every modification that your perform on the std_info database you'll have to perform the corresponding modification on this database as well. You'll then be able to do the DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
I have another question: is there any way to fetch a record by its number?
For example, fetch the record located at the third position in my database.
I guess you're referring to logical record numbers, like a relational database's ROWID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified), this is not possible directly. You could do it with a cursor, iterating through the records and stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your database had been configured with logical record numbers (BTree with DB_RECNUM, Queue or Recno), this would have been possible directly:
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
Regards,
Andrei -
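Andrei's manually-maintained "secondary" can be sketched with plain dictionaries standing in for the Berkeley DB databases (the master/std_id/name fields follow the thread; everything else is invented for illustration):

```python
# primary: std_id -> record; manual "secondary": master -> {std_id: name}
primary = {}
secondary = {}

def put(std_id, master, name):
    """Every write to the primary must also update the manual secondary."""
    old = primary.get(std_id)
    if old is not None:                      # keep the secondary consistent
        secondary[old["master"]].pop(std_id, None)
    primary[std_id] = {"master": master, "name": name}
    secondary.setdefault(master, {})[std_id] = name

def fetch_by_master(master):
    """One bulk lookup replaces the per-record cursor loop."""
    return secondary.get(master, {})

put(1, "master1", "alice")
put(2, "master1", "bob")
put(3, "master2", "carol")
put(2, "master2", "bob")   # moving a record must keep both structures in sync
```

The cost, as Andrei notes, is that every modification to the primary must be mirrored by hand; the benefit is that all data for one master key lives together and can be fetched in bulk.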
Large number of JSP performance
Hi,
a colleague of mine ran tests with a large number of JSPs and identified a
performance problem.
I believe I found a solution to his problem. I tested it with WLS 5.1 SP2
and SP3 and MS jview SDK 4.
The issue was related to the duration of the initial call of the nth JSP,
which is our situation as we are doing site hosting.
The solution is able to perform around 14 initial invocations/s no matter if
the invocation is the first one or the
3000th one and the throughput can go up to 108 JSPs/s when the JSP are
already loaded, the JSPs being the
snoopservlet example copied 3000 times.
The ratios are of more interest than the absolute values, as the test machine (client and
WLS 5.1) was a 266MHz laptop.
I repeat the post of Marc on 2/11/2000 as it is an old one:
Hi all,
I'm wondering if any of you has experienced performance issues when deploying
a lot of JSPs.
I'm running Weblogic 4.51SP4 with performance pack on NT4 and Jdk1.2.2.
I deployed over 3000 JSPs (identical but with a distinct name) on my server.
I took care to precompile them off-line.
To run my tests I used a servlet selecting randomly one of them and
redirecting the request.
getServletContext().getRequestDispatcher(randomUrl).forward(request,response);
The response time slows down dramatically as the number of distinct JSPs
invoked grows
(up to 100 times the initial response time).
I made some additional tests.
When you set the properties:
weblogic.httpd.servlet.reloadCheckSecs=-1
weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
Then the response-time for a new JSP seems linked to a "capacity increase
process" and depends on the number of previously activated JSPs. If you
invoke a previously loaded page the server answers really fast with no
delay.
If you set previous properties to any other value (0 for example) the
response-time remains bad even when you invoke a previously loaded page.
SOLUTION DESCRIPTION
Intent
The package described below is designed to allow
* Fast invocation even with a large number of pages (which can be the case
with Web Hosting)
* Dynamic update of compiled JSP
Implementation
The current implementation has been tested with JDK 1.1 only and works with
MS SDK 4.0.
It has been tested with WLS 5.1 with service packs 2 and 3.
It should work with most application servers, as its requirements are
limited. It requires
a JSP to be able to invoke a class loader.
Principle
For a fast invocation, it does not support dynamic compilation as described
in the JSP
model.
There is no automatic recognition of modifications. Instead a JSP is made
available to
invalidate pages which must be updated.
We assume pages managed through this package to be declared in
weblogic.properties as
weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
This definition means that, when a servlet or JSP with a .ocg extension is
requested, it is
forwarded to the package.
It implies 2 things:
* Regular JSP handling and package based handling can coexist in the same
Application Server
instance.
* It is possible to extend the implementation to support many extensions
with as many
package instances.
The package (ocgLoaderPkg) contains 2 classes:
* ocgServlet, a servlet instantiating JSP objects using a class loader.
* ocgLoader, the class loader itself.
A single class loader object is created.
Both the JSP instances and classes are cached in hashtables.
The invalidation JSP is named jspUpdate.jsp.
To invalidate a JSP, it simply removes the object and cache entries from
the caches.
ocgServlet
* Lazily creates the class loader.
* Retrieves the target JSP instance from the cache, if possible.
* Otherwise it uses the class loader to retrieve the target JSP class,
create a target JSP
instance and stores it in the cache.
* Forwards the request to the target JSP instance.
ocgLoader
* If the requested class does not have the extension ocgServlet is configured to
process, it
behaves as a regular class loader and forwards the request to the parent
or system class
loader.
* Otherwise, it retrieves the class from the cache, if possible.
* Otherwise, it loads the class.
Do you think it is a good solution?
I believe that solution is faster than the standard WLS one, because it is a
very small piece of code, but also because:
- my class loader is deterministic, if the file has the right extension I
don't call the classloader hierarchy first
- I don't try to support jars. It has been one of the hardest design
decisions. We definitely need a way to
update a specific page, but at the same time someone told us NT could have
problems handling
3000 files in the same directory (it seems he was wrong).
- I don't try to check if a class has been updated. I have to ask for
refresh using a JSP now but it could be an EJB.
- I don't try to check if a source has been updated.
- As I know the number of JSPs, I can set pretty accurately the initial
capacity of the hashtables I use as caches. I
avoid rehashing.
Use a profiler to find the bottlenecks in the system. You need to determine where the performance problems (if you even have any) are happening. We can't do that for you.
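The caching scheme described above (lazy load, hashtable caches, explicit invalidation instead of staleness checks on the hot path) is a general pattern; a minimal Python sketch, with a loader function standing in for the custom class loader:

```python
class LazyCache:
    """Lazy-loading cache with explicit invalidation, mirroring the
    ocgServlet/ocgLoader design: no file-modification checks on the hot path."""
    def __init__(self, loader):
        self._loader = loader
        self._cache = {}   # in Java this is a pre-sized Hashtable to avoid rehashing
        self.loads = 0     # counts actual (slow) load operations

    def get(self, name):
        if name not in self._cache:          # load only on first request
            self._cache[name] = self._loader(name)
            self.loads += 1
        return self._cache[name]

    def invalidate(self, name):
        """What jspUpdate.jsp does: drop the entry so the next get() reloads."""
        self._cache.pop(name, None)

cache = LazyCache(lambda name: f"compiled:{name}")
cache.get("page1.ocg")
cache.get("page1.ocg")          # second hit is served from the cache
cache.invalidate("page1.ocg")   # page was updated on disk
cache.get("page1.ocg")          # reloaded exactly once after invalidation
```

The trade-off is the one the post names: updates are only picked up when someone explicitly invalidates, which is what keeps every other invocation fast.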
-
Large number of JSP performance [repost for grandemange]
Cheers - Wei
I don't know the upper limit, but I think 80 is too much. I have never used more than 15-20. For nav attributes, separate tables are created, which causes the performance issue as it results in a new join at query run time. Just ask your business guy if these can be reduced. One way could be to model these attributes as separate characteristics. It will certainly help.
Thanks...
Shambhu
DBA Reports large number of inactive sessions with 11.1.1.1
All,
We have installed System 11.1.1.1 on some 32-bit Windows test machines running Windows Server 2003. Everything seems to be working fine, but recently the DBA is reporting that there are a large number of inactive sessions, throwing alarms that we are reaching our Max Allowed Processes on the Oracle Database server. We are running Oracle 10.2.0.4 on AIX.
We also have some System 9.3.1 Development servers that point at separate schemas in this environment, and we don't see the same high number of inactive connections.
Most of the inactive connections are coming from Shared Services and Workspace. Anyone else see this or have any ideas?
Thanks for any responses.
Keith
Just a quick update. Originally I said this was only with 11.1.1.1, but we see the same high number of inactive sessions in 9.3. Anyone else see a large number of inactive sessions? They show up in Oracle as JDBC_Connect_Client. Do Shared Services, Planning, Workspace etc. utilize persistent connections, or do they just abandon sessions when the Windows service associated with an application is shut down? Any information or thoughts are appreciated.
Edited by: Keith A on Oct 6, 2009 9:06 AM
Hi,
Not the answer you are looking for, but have you logged it with Oracle? You might not get many answers to this question on here.
Cheers
John
http://john-goodwin.blogspot.com/