Large number of entries in Queue BW0010EC_PCA_1

Dear BW experts,
Our BW system (2004s) is extracting data from an R/3 700 system. I am a Basis person, and I am observing a large number of entries in SMQ1 in the R/3 system under queue BW0010EC_PCA_1. I see a similar number of entries in RSA7 for 0EC_PCA_1.
This queue in SMQ1 receives 50,000+ entries every day. The extraction job in BW runs every morning at 5:00 AM, but it only clears data that is two days old (for example, on 10.09.2010 it extracts the data of 08.09.2010).
My questions:
1. Is it OK that such a large number of entries lie in the queue in SMQ1 and are extracted only once a day by a batch job? In that case there is no point in the scheduler pushing these entries.
2. Any idea why the extraction job only fetches data from two days before? Is a setting missing somewhere?
Many thanks in advance for your valuable comments.

Hi,
The entries lying in RSA7 and SMQ1 are one and the same. In SMQ1, the entry BW0010EC_PCA_1 means that this data is waiting to be sent across to your BW system (client 001). In RSA7, the same data is displayed as 0EC_PCA_1.
1. Is it OK that such a large number of entries lie in the queue in SMQ1 and are extracted only once a day by a batch job? In that case there is no point in the scheduler pushing these entries.
As far as I can tell from the data lying in SMQ1 in your R/3 system, this DataSource uses the Direct Delta update mode. SAP recommends Queued Delta if the number of postings for a particular application is greater than 100,000 per day; since in your system it is only some tens of thousands, the BI team presumably kept Direct Delta. So the entries lying in SMQ1 are not a problem at all. As for the scheduler on the BI side, it picks these entries up every morning and clears the queue of the previous data.
2. Any idea why the extraction job only fetches data from two days before? Is a setting missing somewhere?
I don't think it is only fetching data from two days before. The delta concept works in such a way that once you have pulled a delta load from RSA7, that data remains there in the repeat-delta section until the next delta load has finished successfully.
Since data is pulled only once a day in your system, even though today's data load has pulled yesterday's data, it will still be visible in the system until tomorrow's delta load from RSA7 is successful.
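As a toy model of this repeat-delta behaviour (purely illustrative Java, not SAP code): extracted entries are not deleted immediately, they stay in a repeat buffer until the next successful delta load confirms them.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class DeltaQueue {
    private final Deque<String> pending = new ArrayDeque<>(); // entries waiting in the queue
    private final List<String> repeat = new ArrayList<>();    // last extracted load ("repeat delta")

    void post(String entry) { pending.add(entry); }

    // A successful delta load confirms the previous one (the repeat buffer is
    // cleared) and moves the pending entries into the repeat buffer.
    List<String> deltaLoad() {
        repeat.clear();
        while (!pending.isEmpty()) repeat.add(pending.poll());
        return new ArrayList<>(repeat);
    }

    int visibleEntries() { return pending.size() + repeat.size(); }

    public static void main(String[] args) {
        DeltaQueue q = new DeltaQueue();
        q.post("doc1");
        q.post("doc2");
        q.deltaLoad();                          // pulls doc1 and doc2
        System.out.println(q.visibleEntries()); // still 2: they sit in the repeat buffer
        q.post("doc3");
        q.deltaLoad();                          // next successful delta clears the old repeat
        System.out.println(q.visibleEntries()); // now 1 (only doc3)
    }
}
```

Mapped back to the question: the entries still visible after the 5:00 AM job are the copy kept for a possible repeat, not unextracted data.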
Hope this was helpful.

Similar Messages

  • Large number of event Log entries: connection open...

    Hi,
    I am seeing a large number of entries in the event log of the type:
    21:49:17, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
    Are these anything I should be concerned about? I have tried a couple of forum and Google searches, but I don't quite know where to start beyond pasting the first part of the message, and I haven't found anything obvious from those searches.
    The DHCP table lists 192.168.1.78 as the desktop PC on which I'm writing this.
    Could you please point me in the direction of any resources that will help me work out whether I should be worried about this?
    A slightly longer extract is shown below:
    21:49:17, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/SYN_SENT ppp0 NAPT)
    21:49:11, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] TIME_WAIT/CLOSED ppp0 NAPT)
    21:49:03, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [178.190.63.75:55535] CLOSED/SYN_SENT ppp0 NAPT)
    21:49:00, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [2.96.4.85:23939] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:59, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.144.143.222:21617] CLOSED/TIME_WAIT ppp0 NAPT)
    21:48:58, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28188] ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28288] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:18048] ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:54199] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:55, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.144.91.49:60704] ppp0 NAPT)
    21:48:55, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:50875] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:45, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:57656] ppp0 NAPT)
    21:48:39, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:56975] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:29, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [79.99.145.46:8368] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:27, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [90.192.249.173:45250] ppp0 NAPT)
    21:48:16, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [212.17.96.246:62447] ppp0 NAPT)
    21:48:10, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [82.16.198.117:49942] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:08, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:04, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [89.153.251.9:53729] TIME_WAIT/CLOSED ppp0 NAPT)
    21:47:54, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:37150] ppp0 NAPT)

    Hi,
    Thank you for the response. I think, but can't remember for sure, that UPnP was already switched off when I captured that log. Even if it wasn't, it is now, so I will see what gets captured in my logs.
    I've just had to restart my Home Hub because of other connection issues, and I notice that the first few entries are also odd:
    19:35:16, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:34:45, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:34:31, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:34:31, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:34:04, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:46, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:33:46, 12 Mar.
    IN: BLOCK [12] Spoofing protection (IGMP 86.164.178.188->224.0.0.22 on ppp0)
    19:33:45, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:39, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:33, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:33:29, 12 Mar.
    IN: BLOCK [15] Default policy (UDP 111.252.36.217:26328->86.164.178.188:12708 on ppp0)
    19:33:16, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 193.113.4.153:80->86.164.178.188:49572 on ppp0)
    19:33:14, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:14, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:44266 on ppp0)
    19:33:14, 12 Mar.
    ( 164.240000) CWMP: session completed successfully
    19:33:13, 12 Mar.
    ( 163.700000) CWMP: HTTP authentication success from https://pbthdm.bt.mo
    19:33:05, 12 Mar.
    BLOCKED 106 more packets (because of Default policy)
    19:33:05, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:05, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 213.1.72.209:80->86.164.178.188:49547 on ppp0)
    19:33:05, 12 Mar.
    BLOCKED 94 more packets (because of Default policy)
    19:33:05, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:33:05, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 199.59.148.87:443->86.164.178.188:49531 on ppp0)
    19:33:05, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:33:04, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:04, 12 Mar.
    ( 155.110000) CWMP: Server URL: https://pbthdm.bt.mo; Connecting as user: ACS username
    19:33:04, 12 Mar.
    ( 155.090000) CWMP: Session start now. Event code(s): '1 BOOT,4 VALUE CHANGE'
    19:32:59, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:54, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:32:53, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:52, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:32:51, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:48, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:47, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:46, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:46, 12 Mar.
    BLOCKED 4 more packets (because of First packet is Invalid)
    19:32:45, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49461->199.59.149.232:443 on ppp0)
    19:32:44, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:44, 12 Mar.
    BLOCKED 1 more packets (because of First packet is Invalid)
    19:32:43, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49398->193.113.4.153:80 on ppp0)
    19:32:42, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:42, 12 Mar.
    BLOCKED 3 more packets (because of First packet is Invalid)
    19:32:42, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49277->119.254.30.32:443 on ppp0)
    19:32:41, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:41, 12 Mar.
    BLOCKED 1 more packets (because of First packet is Invalid)
    19:32:41, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:38, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
    19:32:36, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:34, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:32:30, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:47022 on ppp0)
    19:32:30, 12 Mar.
    ( 120.790000) CWMP: session closed due to error: WGET TLS error
    19:32:30, 12 Mar.
    ( 120.140000) NTP synchronization success!
    19:32:30, 12 Mar.
    BLOCKED 1 more packets (because of Default policy)
    19:32:29, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49458->217.41.223.234:80 on ppp0)
    19:32:28, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
    19:32:26, 12 Mar.
    ( 116.030000) NTP synchronization start
    19:32:25, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49442->74.125.141.91:443 on ppp0)
    19:32:25, 12 Mar.
    OUT: BLOCK [15] Default policy (TCP 192.168.1.78:49310->204.154.94.81:443 on ppp0)
    19:32:25, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 88.221.94.116:80->86.164.178.188:49863 on ppp0)

  • Problem in iterating through huge number of entries ... memory issue

    I have a very large number of entries (several lakhs, i.e. hundreds of thousands).
    I need to check each entry against the previous ones, so I need to iterate through every entry that precedes the current entry.
    Currently I put all the entries in an ArrayList, iterate through it, and validate the current entry.
    Because I store all the entries in the ArrayList, the system uses a lot of memory, since everything is held in memory at once.
    Is there any way to resolve this memory issue?

    If you have a hundred thousand entries, verifying them all means comparing each later entry with every earlier one, so each entry is accessed once for every entry that follows it:
    Entry 1 = 99,999 accesses
    Entry 2 = 99,998 accesses
    Entry 5 = 99,995 accesses
    Entry 10 = 99,990 accesses
    Entry 99,990 = 10 accesses
    Entry 99,995 = 5 accesses
    Entry 99,999 = 1 access (only compared with the 100,000th entry)
    In total that is n(n-1)/2 comparisons, or roughly 5 billion for n = 100,000.
    It would then make sense to keep a certain (fixed) number of entries in memory, those which get accessed the most, and write those which get accessed least frequently to a file. Have a method that decides, based on the index of the entry, whether to get it from the ArrayList or to read it from the file. That will give you good performance.
    BufferedReader would be a good candidate for reading the file, because it can skip many bytes at once should you need a line very late in the file. Also consider RandomAccessFile and seek(entryNumber * ENTRY_SIZE) if your entries can be represented with a fixed size in bytes.
    Edited by Looce at 2009-03-25 20:33:11 GMT [Added BufferedReader advice]
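    A minimal sketch of the fixed-size-record idea with RandomAccessFile (the record size, file name, and record format here are made up for illustration):

    ```java
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;

    public class FixedRecordFile {
        static final int ENTRY_SIZE = 32; // assumed fixed record size in bytes

        public static void main(String[] args) throws IOException {
            try (RandomAccessFile f = new RandomAccessFile("entries.dat", "rw")) {
                // Write 1000 records, each padded to exactly ENTRY_SIZE bytes.
                for (int i = 0; i < 1000; i++) {
                    byte[] rec = new byte[ENTRY_SIZE];
                    byte[] payload = ("entry-" + i).getBytes(StandardCharsets.US_ASCII);
                    System.arraycopy(payload, 0, rec, 0, payload.length);
                    f.write(rec);
                }
                // Jump straight to entry 742 without reading anything before it.
                f.seek(742L * ENTRY_SIZE);
                byte[] rec = new byte[ENTRY_SIZE];
                f.readFully(rec);
                System.out.println(new String(rec, StandardCharsets.US_ASCII).trim());
            }
        }
    }
    ```

    Because every record has the same size, seek(index * ENTRY_SIZE) reaches any entry in constant time, so only the hot entries need to live in the ArrayList.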

  • Unable to delete entries from queue (SMQ2)

    Hi
    I am unable to delete entries from the inbound queue because the number of entries is too high (approximately 160,000). Deletion in dialog mode produces a timeout error. Is there any way to delete these entries in background mode, or any other ideas for this situation?
    Your valuable time and answers are appreciated.
    Thanks
    Rakesh

    Hi Rakesh,
       Check these threads on SMQ2; they may be of some help to you:
       Queue issue
       deleting the inbound queue tc smq2
       Also check this link:
      http://help.sap.com/saphelp_nw04s/helpdata/en/d9/b9f2407b937e7fe10000000a1550b0/frameset.htm
    Regards,
    Subhasha Ranjan

  • How to handle a large number of query parameters for a Browse screen

    I need to implement an advanced search functionality in a browse screen for a large table.  The table has 80+ columns and therefore will have a large number of possible query parameters.  The screen will be built on a modeled query with all
    of the parameters marked as optional.  Given the large number of parameters, I am thinking that it would be better to use a separate screen to receive the parameter input from the user, rather than a Popup.  Is it possible for example to have a search
    button on the browse screen (screen a) open a new screen (screen b) that contains all of the search parameters, have the user enter the parameters they want, then click a button to send all of the parameters back to screen a where the query is executed and
    the search results are returned to the table control?  This would effectively make screen b an advanced modal window for screen a.  In addition, if the user were to execute the query, then want to change a parameter, they would need to be able to
    re-open screen b and have all of their original parameters still set.  How would you implement this, or otherwise deal with a large number of optional query parameters in the html client?  My initial thinking is to store all of the parameters in
    an object and use beforeShown/afterClosed to pass them between the screens, but I'm not quite sure how to make that work.  TIA

    Wow Josh, thanks.  I have a lot of reading to do.  What I ultimately plan to do with this (my other posts relate to this too), is have a separate screen for advanced filtering that also allows the user to save their queries if desired. 
    There is an excellent way to get at all of the query information in the Query_Executed() method.  I just put an extra Boolean parameter in the query called "SaveQuery" and when true, the Query_Executed event triggers an entry into a table with
    the query name, user name, and parameter value pairs that the user entered.  Upon revisiting the screen, I want the user to be able to select from their saved queries and load all the screen parameters (screen properties) from their selected query. 
    I almost have it working.  It may be as easy as marking all of the screen properties that are query parameters as screen parameters (not required), then passing them in from the saved query data (filtered by username, queryname, and selected
    item).  I'll post an update once I get it.  Probably will have some more questions as I go through it.  Thanks again! 

  • Problem fetch large number of records

    Hi
    I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and a query fetches 10,000 of them. I use the secondary database as an index and move through it until I have fetched all the records that match my condition, but the performance of this loop is terrible.
    I know that fetching with DB_MULTIPLE retrieves everything at once and improves performance, but I have read that I cannot use this flag with a secondary index database.
    Please suggest a flag or an approach that fetches all the records together, so that I can process the data in my language.
    Thanks a lot.
    Regards,
    Saeed

    Hi Saeed,
    Could you post your source code here, compiled and ready to be executed, so we can take a look at the loop section?
    You won't be able to do a bulk fetch, that is, retrieval with DB_MULTIPLE, given that the records in the primary are not ordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation is to position a cursor in the secondary on the first record with the secondary key 'master1', retrieve the duplicate data (the primary keys of the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
    There may be another option worth considering, if you are willing to do more work in your source code: maintain a database that acts as a secondary, in which you update the records manually to mirror the modifications performed on the primary db, without ever associating it with the primary database. This "secondary" would have <master> as its key, and <std_id>, <name> (and other fields if you want) as its data. Note that for every modification you perform on the std_info database, you will have to perform the corresponding modification on this database as well. You will then be able to do DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
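    The manual "secondary" idea can be sketched with a plain in-memory map, just to illustrate how duplicate records group under one key; the Berkeley DB put/get calls themselves are omitted here, and the helper names are hypothetical:

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    public class ManualSecondary {
        // <master> -> list of (<std_id>, <name>) pairs, maintained by hand
        // alongside every modification of the primary database.
        static final Map<String, List<String[]>> secondary = new TreeMap<>();

        static void put(String master, String stdId, String name) {
            secondary.computeIfAbsent(master, k -> new ArrayList<>())
                     .add(new String[] { stdId, name });
        }

        public static void main(String[] args) {
            put("master1", "s1", "Ann");
            put("master2", "s2", "Bob");
            put("master1", "s3", "Carol");
            // One lookup yields all duplicates for the key at once, which is
            // the kind of bulk retrieval DB_MULTIPLE provides on a real db.
            for (String[] rec : secondary.get("master1")) {
                System.out.println(rec[0] + " " + rec[1]);
            }
        }
    }
    ```

    The cost of this design is the double write on every modification; the benefit is that all data for one master key is retrievable in a single bulk call.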
    I have another question: is there any way to fetch a record by its record number? For example, fetch the record located at the third position of my database.
    I guess you're referring to logical record numbers, like a relational database's ROW_ID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified), this is not possible directly. You could do it with a cursor by iterating through the records and stopping at the record whose number is the one you want (using an incrementing counter to keep track of the position). If your database had been configured for logical record numbers (BTree with DB_RECNUM, Queue, or Recno), this would have been possible directly:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
    Regards,
    Andrei

  • Large number of JSP performance

    Hi,
    a colleague of mine ran tests with a large number of JSPs and identified a performance problem.
    I believe I found a solution to his problem. I tested it with WLS 5.1 SP2
    and SP3 and MS jview SDK 4.
    The issue was related to the duration of the initial call of the nth JSP, which matters in our situation as we are doing site hosting.
    The solution can perform around 14 initial invocations/s no matter whether the invocation is the first one or the 3000th, and throughput can go up to 108 JSPs/s when the JSPs are already loaded (the JSPs being the snoopservlet example copied 3000 times).
    The ratios are more interesting than the absolute values, as the test machine (client and WLS 5.1) was a 266 MHz laptop.
    I repeat the post of Marc on 2/11/2000 as it is an old one:
    Hi all,
    I'm wondering if any of you have experienced performance issues when deploying a lot of JSPs.
    I'm running WebLogic 4.51 SP4 with the performance pack on NT4 and JDK 1.2.2.
    I deployed over 3000 JSPs (identical but with distinct names) on my server.
    I took care to precompile them off-line.
    To run my tests I used a servlet selecting randomly one of them and
    redirecting the request.
    getServletContext().getRequestDispatcher(randomUrl).forward(request,response);
    The response time slows down dramatically as the number of distinct JSPs invoked grows (up to 100 times the initial response time).
    I made some additional tests.
    When you set the properties:
    weblogic.httpd.servlet.reloadCheckSecs=-1
    weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
    Then the response time for a new JSP seems linked to a "capacity increase process" and depends on the number of previously activated JSPs. If you invoke a previously loaded page, the server answers really fast with no delay.
    If you set the previous properties to any other value (0, for example), the response time remains bad even when you invoke a previously loaded page.
    SOLUTION DESCRIPTION
    Intent
    The package described below is designed to allow:
    * Fast invocation even with a large number of pages (which can be the case with web hosting)
    * Dynamic update of compiled JSPs
    Implementation
    The current implementation has been tested with JDK 1.1 only and works with MS SDK 4.0.
    It has been tested with WLS 5.1 with service packs 2 and 3.
    It should work with most application servers, as its requirements are limited: it only requires that a JSP be able to invoke a class loader.
    Principle
    For fast invocation, it does not support dynamic compilation as described in the JSP model.
    There is no automatic recognition of modifications. Instead, a JSP is provided to invalidate pages which must be updated.
    We assume pages managed through this package to be declared in
    weblogic.properties as
    weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
    This definition means that, when a servlet or JSP with a .ocg extension is
    requested, it is
    forwarded to the package.
    It implies 2 things:
    * Regular JSP handling and package based handling can coexist in the same
    Application Server
    instance.
    * It is possible to extend the implementation to support many extensions
    with as many
    package instances.
    The package (ocgLoaderPkg) contains 2 classes:
    * ocgServlet, a servlet instantiating JSP objects using a class loader.
    * ocgLoader, the class loader itself.
    A single class loader object is created.
    Both the JSP instances and classes are cached in hashtables.
    The invalidation JSP is named jspUpdate.jsp.
    To invalidate a JSP, it simply removes the instance and class entries from the caches.
    ocgServlet
    * Lazily creates the class loader.
    * Retrieves the target JSP instance from the cache, if possible.
    * Otherwise it uses the class loader to retrieve the target JSP class, creates a target JSP instance, and stores it in the cache.
    * Forwards the request to the target JSP instance.
    ocgLoader
    * If the requested class does not have the extension ocgServlet is configured to process, it behaves as a regular class loader and forwards the request to the parent or system class loader.
    * Otherwise, it retrieves the class from the cache, if possible.
    * Otherwise, it loads the class.
    Do you think it is a good solution?
    I believe this solution is faster than the standard WLS one because it is a very small piece of code, but also because:
    - my class loader is deterministic: if the file has the right extension, I don't call the class loader hierarchy first
    - I don't try to support jars. This was one of the hardest design decisions. We definitely need a way to update a specific page, but at the same time someone told us NT could have problems handling 3000 files in the same directory (it seems he was wrong).
    - I don't try to check whether a class has been updated. For now I have to ask for a refresh using a JSP, but it could be an EJB.
    - I don't try to check whether a source has been updated.
    - As I know the number of JSPs, I can set the initial capacity of the hashtables I use as caches fairly accurately, so I avoid rehashing.
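    The caching scheme described above (lazy creation, hashtable caches, explicit invalidation via jspUpdate.jsp) can be sketched generically. This is only an illustration of the pattern; the class and method names are made up and this is not the actual ocgLoaderPkg code:

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    public class PageCache {
        private final Map<String, String> instances = new ConcurrentHashMap<>();
        private final Function<String, String> factory; // stands in for the class loader
        int loads = 0; // counts expensive creations

        PageCache(Function<String, String> factory) { this.factory = factory; }

        // Lazily create and cache, like ocgServlet does with JSP instances.
        String get(String name) {
            return instances.computeIfAbsent(name, n -> { loads++; return factory.apply(n); });
        }

        // Explicit invalidation, like jspUpdate.jsp: drop the cache entry so
        // the next request rebuilds the page.
        void invalidate(String name) { instances.remove(name); }

        public static void main(String[] args) {
            PageCache cache = new PageCache(n -> "compiled:" + n);
            cache.get("a.ocg");
            cache.get("a.ocg");          // served from cache, no second load
            cache.invalidate("a.ocg");
            cache.get("a.ocg");          // rebuilt after invalidation
            System.out.println(cache.loads); // expensive loads actually performed
        }
    }
    ```

    The point of the design is the same as in the post: after the first request, every invocation is a hashtable lookup, independent of how many pages exist.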

    Use a profiler to find the bottlenecks in the system. You need to determine where the performance problems (if you even have any) are happening. We can't do that for you.

  • Large number of JSP performance [repost for grandemange]

    Hi,
    a colleague of me made tests with a large number of JSP and identified a
    performance problem.
    I believe I found a solution to his problem. I tested it with WLS 5.1 SP2
    and SP3 and MS jview SDK 4.
    The issue was related to the duration of the initial call of the nth JSP,
    our situation as we are doing site hosting.
    The solution is able to perform around 14 initial invocations/s no matter if
    the invocation is the first one or the
    3000th one and the throughput can go up to 108 JSPs/s when the JSP are
    already loaded, the JSPs being the
    snoopservlet example copied 3000 times.
    The ratio have more interest than the values as the test machine (client and
    WLS 5.1) was a 266Mhz laptop.
    I repeat the post of Marc on 2/11/2000 as it is an old one:
    Hi all,
    I'm wondering if any of you has experienced performance issue whendeploying
    a lot of JSPs.
    I'm running Weblogic 4.51SP4 with performance pack on NT4 and Jdk1.2.2.
    I deployed over 3000 JSPs (identical but with a distinct name) on myserver.
    I took care to precompile them off-line.
    To run my tests I used a servlet selecting randomly one of them and
    redirecting the request.
    getServletContext().getRequestDispatcher(randomUrl).forward(request,response);
    The response-time slow-down dramaticaly as the number of distinct JSPs
    invoked is growing.
    (up to 100 times the initial response time).
    I made some additional tests.
    When you set the properties:
    weblogic.httpd.servlet.reloadCheckSecs=-1
    weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
    Then the response-time for a new JSP seems linked to a "capacity increase
    process" and depends on the number of previously activated JSPs. If you
    invoke a previously loaded page the server answers really fast with no
    delay.
    If you set previous properties to any other value (0 for example) the
    response-time remains bad even when you invoke a previously loaded page.SOLUTION DESCRIPTION
    Intent
    The package described below is design to allow
    * Fast invocation even with a large number of pages (which can be the case
    with Web Hosting)
    * Dynamic update of compiled JSP
    Implementation
    The current implementation has been tested with JDK 1.1 only and works with
    MS SDK 4.0.
    It has been tested with WLS 5.1 with service packs 2 and 3.
    It should work with most application servers, as its requirements are
    limited. It requires
    a JSP to be able to invoke a class loader.
    Principle
    For a fast invocation, it does not support dynamic compilation as described
    in the JSP
    model.
    There is no automatic recognition of modifications. Instead a JSP is made
    available to
    invalidate pages which must be updated.
    We assume pages managed through this package to be declared in
    weblogic.properties as
    weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
    This definition means that, when a servlet or JSP with a .ocg extension is
    requested, it is
    forwarded to the package.
    It implies 2 things:
    * Regular JSP handling and package based handling can coexist in the same
    Application Server
    instance.
    * It is possible to extend the implementation to support many extensions
    with as many
    package instances.
    The package (ocgLoaderPkg) contains 2 classes:
    * ocgServlet, a servlet instantiating JSP objects using a class loader.
    * ocgLoader, the class loader itself.
    A single class loader object is created.
    Both the JSP instances and classes are cached in hashtables.
    The invalidation JSP is named jspUpdate.jsp.
    To invalidate an JSP, it has simply to remove object and cache entries from
    the caches.
    ocgServlet
    * Lazily creates the class loader.
    * Retrieves the target JSP instance from the cache, if possible.
    * Otherwise, uses the class loader to retrieve the target JSP class, creates a target JSP instance, and stores it in the cache.
    * Forwards the request to the target JSP instance.
    ocgLoader
    * If the requested class does not have the extension ocgServlet is configured to process, it behaves as a regular class loader and forwards the request to the parent or system class loader.
    * Otherwise, it retrieves the class from the cache, if possible.
    * Otherwise, it loads the class.
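    The loader behavior described above can be sketched as a small ClassLoader subclass. This is a minimal illustration under stated assumptions, not the original ocgLoaderPkg code: the class name OcgLoader, the naming check in isManaged, and the classes directory are all hypothetical stand-ins.

```java
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Hashtable;

// Minimal sketch of the caching class loader described above.
public class OcgLoader extends ClassLoader {

    private final Hashtable<String, Class<?>> classCache = new Hashtable<>();
    private final String classDir;

    public OcgLoader(String classDir) {
        this.classDir = classDir;
    }

    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {
        // Deterministic: only classes matching our naming convention are
        // handled here; everything else goes straight to the parent/system
        // loader without consulting our cache first.
        if (!isManaged(name)) {
            return super.loadClass(name);
        }
        Class<?> cached = classCache.get(name);
        if (cached != null) {
            return cached;
        }
        byte[] bytes = readClassFile(name);
        Class<?> c = defineClass(name, bytes, 0, bytes.length);
        classCache.put(name, c);
        return c;
    }

    // Invalidation: the jspUpdate.jsp page would call this to drop the
    // cached class so the next request reloads it from disk.
    public void invalidate(String name) {
        classCache.remove(name);
    }

    private boolean isManaged(String name) {
        // Stand-in for "has the extension ocgServlet is configured to
        // process"; the real check depends on the deployment.
        return name.startsWith("ocg");
    }

    private byte[] readClassFile(String name) throws ClassNotFoundException {
        String path = classDir + "/" + name.replace('.', '/') + ".class";
        try (InputStream in = new FileInputStream(path);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}
```

    In this sketch, jspUpdate.jsp would call invalidate(name) (alongside removing the servlet's instance-cache entry) so that the next request for that page loads the freshly compiled class.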
    Do you think it is a good solution?
    I believe this solution is faster than the standard WLS one, because it is a very small piece of code, but also because:
    - My class loader is deterministic: if the file has the right extension, I don't call the class loader hierarchy first.
    - I don't try to support jars. That was one of the hardest design decisions. We definitely need a way to update a specific page, but at the same time someone told us NT could have problems handling 3000 files in the same directory (it seems he was wrong).
    - I don't try to check whether a class has been updated. For now, a refresh has to be requested through a JSP, but it could be an EJB.
    - I don't try to check whether a source has been updated.
    - Since I know the number of JSPs, I can set the initial capacity of the hashtables I use as caches fairly accurately, and so avoid rehashing.
    Cheers - Wei

    I don't know the upper limit, but I think 80 is too many; I have never used more than 15-20. For navigational attributes, separate tables are created, which causes a performance issue because each one results in a new join at query run time. Ask your business users whether these attributes can be reduced. One option could be to model them as separate characteristics; it will certainly help.
    Thanks...
    Shambhu

  • Mail - how to send to large number of recipients?

    How can I send an email to a large number of recipients that I hold addresses for in a document without a) adding them to my contacts, and b) so that they can't all see each others addresses?

    I thought about using BCC, but it seems a little messy although it would do the job.
    Messy how?  That's exactly what it's for.  In fact, there's no other way to hide addresses from other recipients of the message.  The only other way to do it would be to script automated sending of one message per address, and that would get quite messy!
    What is the maximum number of recipients that Mail can handle?
    There's no limit enforced by Mail, AFAIK.  The limits come from the SMTP server you're using.
    One of the issues I had when doing this with Outlook on Windows, copying and pasting addresses from a text document, was that if there was an invalid/non-existent address in the list, it would just return an error, and I was never quite sure whether the whole thing didn't send or only the message to the dodgy address(es).
    In Mail, you'll just get a bounce message for any addresses that don't exist.  The one exception to that is addresses on the same server that you're sending from...  often, the server will simply reject the attempt to send to an address that it knows doesn't exist.  I'm not sure what kind of message the server returns in that case, though, and suspect it depends on the server.  It's been a while since I've seen such a problem.

  • Add tcode to large number of roles

    Hi all,
    Is there a way to add a tcode to a large number of roles (around 20) using PFCG? It would be great if I could do it once instead of modifying role by role.
    Thanks.

    Yes, you could actually write an ABAP program, but the bottom line is that you can't edit more than one role at a time in PFCG.
    The program would also have to edit one role at a time.

  • I want to change the sharing and permissions of a large number of photos. How can I do this in bulk rather than one at a time?

    I want to change the sharing and permissions of a large number of photos. How can I do this in bulk rather than one at a time?

    Does this involve iPhoto in some way?

  • I am trying to delete a large number of songs from my Itunes library without having to delete them one at a time.  How do I do that?

    I am trying to delete a large number of songs from my iTunes library without having to do it one at a time. How can I do this? All of my songs have check marks by them. Help!

    Then I suspect you do not have iTunes Match enabled. Using that "delete all" command in the Settings does not remove the music from the cloud, nor does it disable iTunes Match. With iTunes Match enabled you will still see all the music.
    So go to Settings > iTunes and App Store and make sure iTunes Match is set to ON (green).

  • I have a large number of photos imported into iPhoto with the dates wrong.  How can I adjust multiple photos (with varying dates) to the same, correct, date?

    I have a large number of photos imported into iPhoto with the dates wrong.  How can I adjust multiple photos (with varying dates) to the same, correct, date?

    If I understand you correctly, when you enter a date in the Adjust Date and Time window, the picture does not update with the date you enter. If that is the case, then something is wrong with iPhoto or perhaps with your library.
    How large a date change are you entering? iPhoto currently has an issue with date changes beyond about 60 years at a time. If the difference between the current date on the image and the date you are entering is beyond that range, that may explain why this is not working.
    If that is not the case:
    Move the following to the Trash, restart the computer, and try again:
    Home > Library > Caches > com.apple.iphoto
    Home > Library > Preferences > com.apple.iPhoto (There may be more than one. Remove them all.)
    ---NOTE: to get to Home > Library, hold down the Option key and click Go > Library in the Finder.
    Let me know the results.

  • How can I move a large number of pics from icloud Photos Beta to a PC without having to select each photo?

    How can I move a large number of pics from icloud Photos Beta to a PC without having to select each photo?

    Hi,
    can you please explain why the solution from steve is not the right solution for you.
    regards
    Peter

  • TS1702 how can I remove large number of photos on my iPhone 4s from iTunes using a PC - Lenovo Think Pad - so that I can reduce the space used on the phone by 50%?

    how can I remove large number of photos on my iPhone 4s from iTunes using a PC - Lenovo Think Pad - so that I can reduce the space used on the phone by 50%?

    You don't need iTunes for that. Simply connect the iPhone to the PC where it will be recognized as a camera. You can then select the photos you wish to transfer, transfer them, and delete them.
