Large number of lwp_suspend calls.

Hi,
I have a sample test program that moves files between directories; the code is pasted below.
The mv is taking more time on one of the servers compared to the others.
A truss reveals an unusually high number of lwp_suspend system calls on the problematic server.
What could be the reason for this, and how can we troubleshoot it?
import java.io.File;
import java.util.Calendar;
import java.util.Date;

public class Test {
     /**
      * @param args InputDir OutputDir
      */
     public static void main(String[] args) {
          if (args.length != 2) {
               System.out.println("Usage:: InputDir OutputDir");
               return;
          }
          String commandEx;
          String inDir = args[0];
          File inDirFile = new File(inDir);
          String outDir = args[1];
          while (true) {
               File[] files = inDirFile.listFiles();
               if (files != null && files.length != 0) {
                    System.out.println("No of files = " + files.length);
                    Date startTime = Calendar.getInstance().getTime();
                    System.out.println("Time before moving files is: " + startTime);
                    for (int j = 0; j < files.length; j++) {
                         commandEx = "/bin/mv " + files[j] + " " + outDir;
                         Runtime rt = Runtime.getRuntime();
                         Process result = null;
                         try {
                              result = rt.exec(commandEx);
                              result.waitFor();
                         } catch (Exception e) {
                              System.out.println("Encountered exception while trying to move files");
                         }
                         if (result == null || result.exitValue() != 0) {
                              System.out.println("Failed to move file");
                         }
                    }
                    Date endTime = Calendar.getInstance().getTime();
                    System.out.println("Time after moving all the files: " + endTime);
                    long diff = endTime.getTime() - startTime.getTime();
                    System.out.println("Time to move: " + diff / 1000 + " secs");
               } else {
                    System.out.println("No files in the input directory");
                    return;
               }
          }
     }
}

Quite frankly, I'm surprised this code isn't causing a noticeable headache on all your machines, not just one. It's hard to imagine they are all the same kind of system, with comparable loads, yet only one is struggling with this task.
How many CPUs on the system? If your load averages, as shown in the prstat output you gave, exceed the number of CPUs, the box in question is probably saturated.
I'd first be curious how many LWPs are attributable to the Java process itself. I'm finding it hard to take my eyes off this section:
                    for (int j = 0; j < files.length; j++) {
                         commandEx = "/bin/mv " + files[j] + " " + outDir;
                         Runtime rt = Runtime.getRuntime();
                         Process result = null;
The code forks a separate process for every single mv(1) (Runtime.getRuntime() returns a singleton, but each exec() spawns a new process). That can scale to fork-bomb status depending on the number of files you're moving. And since each mv runs as its own process, it's going to fight all other concurrent processes for access to the target directory's lock, in order to place and record its file entry safely. The many lwp_suspend calls may be attributable to all but one of those processes being told to wait until the directory lock is freed by the current owner.
It would be good for all systems to economize here. Why this approach would afflict one system but not others that are effectively identical is a puzzler.
I'd suggest looking at prstat -L, but with 16k+ LWPs it's going to output a small phonebook of entries.
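Rather than forking /bin/mv once per file, the same move can be done in-process with java.nio.file.Files.move, which avoids process creation entirely. A minimal sketch (not the poster's code; it assumes source and target are on the same filesystem, so each move is a cheap rename):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class MoveFiles {
    public static void main(String[] args) throws IOException {
        Path inDir = Paths.get(args[0]);
        Path outDir = Paths.get(args[1]);
        long start = System.currentTimeMillis();
        int moved = 0;
        // Iterate the input directory and rename each entry into the target.
        // No fork/exec happens: each move is a single rename(2)-style call.
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(inDir)) {
            for (Path file : entries) {
                Files.move(file, outDir.resolve(file.getFileName()),
                           StandardCopyOption.REPLACE_EXISTING);
                moved++;
            }
        }
        System.out.println("Moved " + moved + " files in "
                + (System.currentTimeMillis() - start) + " ms");
    }
}
```

Whether or not it explains the one slow server, this removes the per-file process creation entirely, so the lwp_suspend pattern should change if process contention is the cause.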

Similar Messages

  • How to handle a large number of query parameters for a Browse screen

    I need to implement an advanced search functionality in a browse screen for a large table.  The table has 80+ columns and therefore will have a large number of possible query parameters.  The screen will be built on a modeled query with all
    of the parameters marked as optional.  Given the large number of parameters, I am thinking that it would be better to use a separate screen to receive the parameter input from the user, rather than a Popup.  Is it possible for example to have a search
    button on the browse screen (screen a) open a new screen (screen b) that contains all of the search parameters, have the user enter the parameters they want, then click a button to send all of the parameters back to screen a where the query is executed and
    the search results are returned to the table control?  This would effectively make screen b an advanced modal window for screen a.  In addition, if the user were to execute the query, then want to change a parameter, they would need to be able to
    re-open screen b and have all of their original parameters still set.  How would you implement this, or otherwise deal with a large number of optional query parameters in the html client?  My initial thinking is to store all of the parameters in
    an object and use beforeShown/afterClosed to pass them between the screens, but I'm not quite sure how to make that work.  TIA

    Wow Josh, thanks.  I have a lot of reading to do.  What I ultimately plan to do with this (my other posts relate to this too), is have a separate screen for advanced filtering that also allows the user to save their queries if desired. 
    There is an excellent way to get at all of the query information in the Query_Executed() method.  I just put an extra Boolean parameter in the query called "SaveQuery" and when true, the Query_Executed event triggers an entry into a table with
    the query name, user name, and parameter value pairs that the user entered.  Upon revisiting the screen, I want the user to be able to select from their saved queries and load all the screen parameters (screen properties) from their selected query. 
    I almost have it working.  It may be as easy as marking all of the screen properties that are query parameters as screen parameters (not required), then passing them in from the saved query data (filtered by username, queryname, and selected
    item).  I'll post an update once I get it.  Probably will have some more questions as I go through it.  Thanks again! 

  • How to calculate the area of a large number of polygons in a single query

    Hi forum
    Is it possible to calculate the area of a large number of polygons in a single query using a combination of SDO_AGGR_UNION and SDO_AREA? So far, I have tried doing something similar to this:
    select sdo_geom.sdo_area((
    select sdo_aggr_union (   sdoaggrtype(mg.geoloc, 0.005))
    from mapv_gravsted_00182 mg 
    where mg.dblink = 521 or mg.dblink = 94 or mg.dblink = 38 <many many more....>),
    0.0005) calc_area from dual
    The table MAPV_GRAVSTED_00182 contains 2 fields - geoloc (SDO_GEOMETRY) and dblink (an ID field) needed for querying specific polygons.
    As far as I can see, I need to first somehow get a single SDO_GEOMETRY object and use this as input for the SDO_AREA function. But I'm not 100% sure, that I'm doing this the right way. This query is very inefficient, and sometimes fails with strange errors like "No more data to read from socket" when executed from SQL Developer. I even tried with the latest JDBC driver from Oracle without much difference.
    Would a better approach be to write some kind of stored procedure that adds up the areas by calling SDO_AREA on each single geometry object - or what is the best approach?
    Any advice would be appreciated.
    Thanks in advance,
    Jacob

    Hi
    I am now trying to update all my spatial tables with SRIDs. To do this, I try to drop the spatial index first and recreate it after the update. But for a lot of tables I can't drop the spatial index. Whenever I try DROP INDEX <spatial index name>, I get the error below - does anyone know what this means?
    Thanks,
    Jacob
    Error starting at line 2 in command:
    drop index BSSYS.STIER_00182_SX
    Error report:
    SQL Error: ORA-29856: error occurred in the execution of ODCIINDEXDROP routine
    ORA-13249: Error in Spatial index: cannot drop sequence BSSYS.MDRS_1424B$
    ORA-13249: Stmt-Execute Failure: DROP SEQUENCE BSSYS.MDRS_1424B$
    ORA-29400: data cartridge error
    ORA-02289: sequence does not exist
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 27
    29856. 00000 - "error occurred in the execution of ODCIINDEXDROP routine"
    *Cause:    Failed to successfully execute the ODCIIndexDrop routine.
    *Action:   Check to see if the routine has been coded correctly.
    Edit - just found the answer to this in MetaLink note 241003.1. Apparently there is some internal problem when dropping spatial indexes: some objects get dropped that shouldn't be. The solution is to manually create the sequence it complains it can't drop; then the drop works... Weird error.

  • How can I form a group in address book. I need to transfer a large number of emails from an excel spread sheet to form a group to send emails to.

    I need to transfer a large number of emails from an excel spread sheet to form a group to send emails to. I can either use address book or transfer them to BTYahoo contacts and send from there.

    Hello, if you have the font that Photoshop is supposed to use to write the barcode, and each image is also listed in the spreadsheet, you can use the little-known feature called variables: http://help.adobe.com/en_US/photoshop/cs/using/WSfd1234e1c4b69f30ea53e41001031ab64-7417a.html
    see this video: 
    http://www.youtube.com/watch?v=LMAeX5pexNk
    Or this one:
    http://tv.adobe.com/watch/adobe-evangelists-julieanne-kost/pscs5-working-with-variables/

  • Problem fetch large number of records

    Hi
    I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100000 records and a query fetches 10000 of them. I use the secondary database as an index and iterate over it until I have fetched all of the records that match my condition, but the performance of this loop is terrible.
    I know that DB_MULTIPLE fetches all of the information at once and improves performance, but
    I read that I cannot use this flag when I use a secondary database as an index.
    Please tell me which flag or approach fetches all of the information together, so that I can manage the data in my language.
    Thanks a lot
    regards
    saeed

    Hi Saeed,
    Could you post your source code here, compiled and ready to be executed, so we can take a look at the loop section?
    You won't be able to do bulk fetch, that is retrieval with DB_MULTIPLE given the fact that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation would be to position with a cursor in the secondary, on the first record with the secondary key 'master1' retrieve all the duplicate data (primary keys in the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
    Though, there may be another option that should be taken into consideration, if you are willing to handle more work in your source code, that is, having a database that acts as a secondary, in which you'll update the records manually, with regard to the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want to) as data. Note that for every modification that your perform on the std_info database you'll have to perform the corresponding modification on this database as well. You'll then be able to do the DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
    I have another question: is there any way to fetch information by record number? For example, fetch the record located at the third position in my database.
    I guess you're referring to logical record numbers, like a relational database's ROWID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified), this is not possible directly. You could do it with a cursor, iterating through the records and stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your databases had been configured with logical record numbers (BTree with DB_RECNUM, Queue or Recno), this would have been possible directly:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
    Regards,
    Andrei

  • Large number of JSP performance

    Hi,
    a colleague of mine ran tests with a large number of JSPs and identified a performance problem.
    I believe I found a solution to his problem. I tested it with WLS 5.1 SP2
    and SP3 and MS jview SDK 4.
    The issue was related to the duration of the initial call of the nth JSP, which matters in our situation as we are doing site hosting.
    The solution is able to perform around 14 initial invocations/s, no matter whether the invocation is the first or the 3000th, and the throughput can go up to 108 JSPs/s when the JSPs are already loaded (the JSPs being the snoopservlet example copied 3000 times).
    The ratios are more meaningful than the absolute values, as the test machine (client and WLS 5.1) was a 266 MHz laptop.
    I repeat the post of Marc on 2/11/2000 as it is an old one:
    Hi all,
    I'm wondering if any of you has experienced performance issues when deploying a lot of JSPs.
    I'm running Weblogic 4.51SP4 with performance pack on NT4 and Jdk1.2.2.
    I deployed over 3000 JSPs (identical but with distinct names) on my server.
    I took care to precompile them off-line.
    To run my tests I used a servlet that randomly selects one of them and forwards the request.
    getServletContext().getRequestDispatcher(randomUrl).forward(request,response);
    The response time slows down dramatically as the number of distinct JSPs invoked grows (up to 100 times the initial response time).
    I made some additional tests.
    When you set the properties:
    weblogic.httpd.servlet.reloadCheckSecs=-1
    weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
    Then the response time for a new JSP seems linked to a "capacity increase process" and depends on the number of previously activated JSPs. If you invoke a previously loaded page, the server answers really fast with no delay.
    If you set the previous properties to any other value (0 for example), the response time remains bad even when you invoke a previously loaded page.
    SOLUTION DESCRIPTION
    Intent
    The package described below is designed to allow:
    * Fast invocation even with a large number of pages (which can be the case with Web Hosting)
    * Dynamic update of compiled JSPs
    Implementation
    The current implementation has been tested with JDK 1.1 only and works with
    MS SDK 4.0.
    It has been tested with WLS 5.1 with service packs 2 and 3.
    It should work with most application servers, as its requirements are
    limited. It requires
    a JSP to be able to invoke a class loader.
    Principle
    For a fast invocation, it does not support dynamic compilation as described
    in the JSP
    model.
    There is no automatic recognition of modifications. Instead a JSP is made
    available to
    invalidate pages which must be updated.
    We assume pages managed through this package to be declared in
    weblogic.properties as
    weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
    This definition means that, when a servlet or JSP with a .ocg extension is
    requested, it is
    forwarded to the package.
    It implies 2 things:
    * Regular JSP handling and package based handling can coexist in the same
    Application Server
    instance.
    * It is possible to extend the implementation to support many extensions
    with as many
    package instances.
    The package (ocgLoaderPkg) contains 2 classes:
    * ocgServlet, a servlet instantiating JSP objects using a class loader.
    * ocgLoader, the class loader itself.
    A single class loader object is created.
    Both the JSP instances and classes are cached in hashtables.
    The invalidation JSP is named jspUpdate.jsp.
    To invalidate a JSP, it simply removes the object and class entries from the caches.
    ocgServlet
    * Lazily creates the class loader.
    * Retrieves the target JSP instance from the cache, if possible.
    * Otherwise, it uses the class loader to retrieve the target JSP class, creates a target JSP instance, and stores it in the cache.
    * Forwards the request to the target JSP instance.
    ocgLoader
    * If the requested class does not have the extension ocgServlet is configured to process, it behaves as a regular class loader and forwards the request to the parent or system class loader.
    * Otherwise, it retrieves the class from the cache, if possible.
    * Otherwise, it loads the class.
    Do you think it is a good solution?
    I believe this solution is faster than the standard WLS one because it is a very small piece of code, but also because:
    - my class loader is deterministic: if the file has the right extension, I don't call the classloader hierarchy first
    - I don't try to support jars. That was one of the hardest design decisions. We definitely need a way to update a specific page, but at the same time someone told us NT could have problems handling 3000 files in the same directory (it seems he was wrong).
    - I don't try to check if a class has been updated. I have to ask for a refresh using a JSP now, but it could be an EJB.
    - I don't try to check if a source has been updated.
    - As I know the number of JSPs, I can set the initial capacity of the hashtables I use as caches fairly accurately and avoid rehashing.
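    The caching scheme described in this post (JSP instances and classes kept in hashtables presized from the known page count, with lazy creation and explicit invalidation) can be sketched roughly as follows. The class here is purely illustrative; the real ocgServlet/ocgLoader pair also does class loading and request forwarding, which is omitted:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of the post's caching idea: presize the cache from the
// known page count so the hashtable never rehashes, create entries lazily,
// and support explicit invalidation (what jspUpdate.jsp does in the post).
public class PageCache<T> {
    private final Map<String, T> cache;
    private final Function<String, T> factory;

    public PageCache(int expectedPages, Function<String, T> factory) {
        // Capacity above expectedPages / default load factor (0.75)
        // guarantees the map never grows, avoiding rehash pauses.
        this.cache = new HashMap<>((int) (expectedPages / 0.75f) + 1);
        this.factory = factory;
    }

    // Retrieve the cached object, creating and storing it on first use.
    public synchronized T get(String pageName) {
        return cache.computeIfAbsent(pageName, factory);
    }

    // Invalidation: drop the stale entry so the next get() recreates it.
    public synchronized void invalidate(String pageName) {
        cache.remove(pageName);
    }
}
```

    In the post's setup, the factory role is played by the custom class loader instantiating the target JSP class, and get() is what ocgServlet does before forwarding the request.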

    Use a profiler to find the bottlenecks in the system. You need to determine where the performance problems (if you even have any) are happening. We can't do that for you.

  • Large number of JSP performance [repost for grandemange]


    I don't know the upper limit, but I think 80 is too much. I have never used more than 15-20. For navigational attributes, separate tables are created, which causes the performance issue, as it results in a new join at query run time. Just ask your business guy if these can be reduced. One way could be to model these attributes as separate characteristics. It will certainly help.
    Thanks...
    Shambhu

  • Hello, how can FMS prevent the client from sending a large number of bytes?

    Hello, how can FMS prevent the client from passing a large number of bytes? For example, if someone puts a 1 GB file in an argument, do I just have to receive it all? Client.setBandwidthLimit() limits the maximum traffic per second, but is there a way to disconnect a client once it exceeds a maximum number of bytes? I assume that by the time I can determine the length, the whole transfer has already finished.

    How can I limit the size of the parameters of a method? I wrote a method in main.asc that the client invokes with NetConnection.call, but if the client maliciously uploads very large data, how do I limit it? I looked through the documentation without finding any clues. I would like the parameters to be limited to 100 KB.

  • Large number of http posts navigating between forms

    Hi,
    I'm not really a Forms person (well, not since v3/4 running character mode on a mainframe!), so please be patient if I'm not providing the most useful information.
    An oracle forms 10 system that I have fallen into supporting has to me very poor performance in doing simple things like navigating between forms/tabs.
    Looking at the Java console (running Sun JRE 1.6.0_17) and turning on network tracing, I can see a much larger number of POST requests than I would expect (I looked here first because we initially had an issue with every request going via a proxy server, and I wondered if we had lost the bypass-proxy setting). Only a normal number of GETs, though.
    Moving to one particular detail form from a master record generates over 300 POST requests - I've confirmed this by looking at the Apache logs on the server. This is the worst one I have found, but in general the application appears to be extremely 'chatty'.
    The only other system I work with which uses forms doesn't generate anything like these numbers of requests, which makes me think this isn't normal (As well as the fact this particular form is very slow to open)
    This is a third party application, so i don't have access to the source unfortunately.
    Is there anything we should look at in our setup, or is this likely to be an application coding issue? This app is a recent conversion from a Forms 6 client/server application (which itself ran OK - at least this part of the application did, with no delays in navigating between screens).
    I'm happy to go back to the supplier, but it might help if I can point them in some specific directions, plus I'd like to know what's going on too!
    Regards,
    Carl

    Sounds odd. 300 requests is by far too much. As it was a C/S application: did they do anything else except the recompile on 10g? Moving from C/S to 10g webforms seems easy, as you just need to recompile, but in fact it isn't. There are many things which didn't matter in a C/S environment but have disastrous effects once the form is deployed over the web - the synchronize built-in, for example. In C/S, calls to synchronize weren't that bad; but when you are using web-deployed forms, each call to synchronize is a roundtrip. The use of timers is also best kept to a minimum in webforms, for example.
    A good starting point for the whole do's and dont's when moving forms to the web is the forms upgrade center:
    http://www.oracle.com/technetwork/developer-tools/forms/index-095046.html
    If you don't have the source code available, that's unfortunate; but if you want to know what's happening behind the scenes, there is the possibility to trace a forms session:
    http://download.oracle.com/docs/cd/B14099_19/web.1012/b14032/tracing002.htm#i1035515
    maybe this sheds some light upon what's going on.
    cheers

  • Internal Error 500 started appearing even after setting a large number for postParametersLimit

    Hello,
    I adopted a CF 9 web-application and we're receiving the Internal 500 Error on a submit from a form that has line items for a RMA.
    The server originally only had Cumulative Hot Fix 1 on it, and I thought that if I installed Cumulative Hot Fix 4, I would be able to adjust the postParametersLimit variable in neo-runtime.xml. So I tried doing this, and I've tried setting the number to an extremely large value (the last try was 40000), but I'm still getting this error. I've tried putting a <cfabort> on the first line of the .cfm file that is being called, but I still get the 500 error.
    As I mentioned, it's an RMA form, and if the RMA has only a few lines, say up to 20 or 25, it will work.
    I've tried increasing the following all at the same time:
    postParameterSize to 1000 MB
    Max size of post data 1000MB
    Request throttle Memory 768MB
    Maximum JVM Heap Size - 1024 MB
    Enable HTTP Status Codes - unchecked
    Here's some extra background on this situation. This is all that happened before I got the server:
    The CF Server is installed as a virtual machine and was originally part of a domain that was exposed to the internet and the internal network. The CF Admin was exposed to the internet.
    AT THIS TIME THE RMA FORM WORKED PROPERLY, EVEN WITH LARGE NUMBER OF LINE ITEMS.
    The CF Server was hacked, so they did the following:
    They took a snapshot of the CF Server
    Unjoined it from the domain and put it in the DMZ.
    The server can no longer connect to the internet outbound, inbound connections are allowed through SSL
    Installed cumulative hot fix 1 and hot fix APSB13-13
    Changed the Default port for SQL on the SQL Server.
    This is when the RMA form stopped working and I inherited the server. Yeah!
    Any ideas on what I can try next, or why this would have suddenly stopped working after making the above changes on the server?
    Thank you

    Start from the beginning. Return to the default values, and see what happens. To do so, proceed as follows.
    Temporarily shut ColdFusion down. Create a back-up of the file neo-runtime.xml, just in case.
    Now, open the file in a text editor and revert postParametersLimit and postSizeLimit to their respective default values, namely,
    <var name='postParametersLimit'><number>100.0</number></var>
    <var name='postSizeLimit'><number>100.0</number></var>
    That is, 100 parameters and 100 MB, respectively. (Note that there is no postParameterSize! If you had included that element in the XML, remove it.)
    Restart ColdFusion. Test and tell.

  • Mail to a large number of

    HI
    I was wondering if there exists software that handles group mails for a large number of recipients (like 400 addresses) at one time.
    I think Mail doesn't accept more than 50 addresses per message.
    I used to use a PC-based program called Sarbacane, designed for mass emailing (not spamming, that's understood!), with charts, statistics, and automatic remailing to addresses that didn't work etc. ... does anything like that exist for the Mac?
    Thank you to anyone who has the answer!!
    Stephanie

    Hello Stephanie
    I do the same, and ran into the same problem. It was my ISP that was limiting how many emails I could send at once. I 'think' (because I'm not a tech person) that Mail takes the message and puts 50 or more addresses at the top. ISPs will see this as possible spam. I worked around this by creating groups of fewer than 50 email addresses (using Smart Groups), but after a while it got very tedious having to sort and create criteria to keep each group to just under 50.
    There are commercial programs that manage large emailings. I use 'Mailings', and it works fine. I know that there are others as well. Most have a free trial version.
    Instead of sending one message with lots of addresses, it sends the email multiple times: once to each person in your list. It takes longer (but it works in the background, so who cares), and I have been able to send out to hundreds of addresses with no issues. Hope this helps.
    Seth
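The two workarounds described above (batches under the ISP's cap, or one send per recipient as the commercial tools do) can be sketched in Python with the standard `smtplib`. The host and sender address are placeholders, and `chunks`/`send_individually` are hypothetical names, not part of Mail or any tool mentioned here:

```python
import smtplib
from email.message import EmailMessage

def chunks(seq, size=50):
    """Split a recipient list into batches below a typical ISP cap."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def send_individually(recipients, subject, body,
                      host="smtp.example.com", sender="me@example.com"):
    """One message per recipient, like the commercial tools do, so the
    ISP never sees a single mail with 400 addresses in the header."""
    with smtplib.SMTP(host) as smtp:
        for rcpt in recipients:
            msg = EmailMessage()
            msg["From"], msg["To"], msg["Subject"] = sender, rcpt, subject
            msg.set_content(body)
            smtp.send_message(msg)
```

Sending one message per recipient is slower, exactly as Seth notes, but it sidesteps the per-message address limit entirely.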

  • Always get MaxMessageSizeExceededException on large number of MBeans

    I have 60 apps deployed on my Cluster (with 2 ManagedServers), and each App has a large number of MBeans. If I go to the custom tree using wlst, I always get this error, even though the MaxMessageSize has been set way way higher.
    weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '10000080' bytes exceeds the configured maximum of: '10000000' bytes for protocol: 't3'
    The ManagedServers are started with the ClusterManager, and each ManagedServer has the Arguments set for ServerStart:
    -Xms4g -Xmx8g -XX:MaxPermSize=7g -d64 -XX:+UseCodeCacheFlushing -XX:NewSizeThreadIncrease=100 -Dweblogic.ThreadPoolPercentSocketReaders=50 -Dweblogic.ThreadPoolSize=100 -Dweblogic.SelfTuningThreadPoolSizeMin=1000 -Dweblogic.MaxMessageSize=20971520 -Djava.library.path=/install/Middleware/wlserver_10.3/server/native/linux/x86_64
    The AdminServer has been set with a similar MaxMessageSize, and the wlst from where I issue the custom() command, has been started with "-Dweblogic.MaxMessageSize=20971520" too.
    To top it off, I checked the JVMs with jinfo to double-check weblogic.MaxMessageSize, and all comply with the specified value; none show the 10M from the error message.
    I am able to reproduce the issue on Linux x64 as well as Solaris 11 Sparc.
    Could it be related to the MBeans (as I assume)? When I only deploy 55 apps, the issue does not occur. Any other tips are welcome.

    Here WLST would be acting as the client. It seems that you have applied the max message size setting on the server side (which is one part of the tuning), but you need to apply the same setting to WLST too, or for that matter to any other client, like an applet or stand-alone Java code.
    In short, you can try the following, if you wish:
    set the environment using setDomainEnv.sh/cmd and then run:
    java -Dweblogic.MaxMessageSize=20971520 weblogic.WLST
    Note: I read your note about providing the "-Dweblogic.MaxMessageSize=20971520" parameter to WLST, but I am not sure whether you checked the WLST process with jinfo or only the WLS server nodes.
    Also, I am not sure what type of client application you are using, but you can consider passing the above property via a System.setProperty() call in the application.
    Additionally, in case the above does not help, I would request you to share some more information: the config (snippet), jinfo output, the WLST output if you try to locate the current MaxMessageSize value of the server by connecting to it, and the complete stack trace (if any).
    Note: I have tried this multiple times and it works like a charm. From your notes, it seems to be a case of parameter values being overridden.
    HTH,
    AJ
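The point in AJ's reply can be made concrete with a toy model: in a length-prefixed protocol like t3, it is the *receiver* of each message that enforces the size cap, so the server's raised limit does nothing for responses flowing back to WLST; the client checks those against its own limit. This is a didactic sketch only (the real t3 framing is more involved, and `read_framed` is a hypothetical name):

```python
import struct

def read_framed(buf, max_message_size=10_000_000):
    """Toy model of the receiver-side check behind
    MaxMessageSizeExceededException: whoever reads the message applies
    the cap, which is why the flag must be set on the WLST/client JVM
    as well as on the servers."""
    (size,) = struct.unpack(">I", buf[:4])  # 4-byte big-endian length prefix
    if size > max_message_size:
        raise ValueError(
            "Incoming message of size: '%d' bytes exceeds the configured "
            "maximum of: '%d' bytes" % (size, max_message_size))
    return buf[4:4 + size]
```

With the default 10M cap a 10000080-byte frame is rejected, exactly as in the error above; with the cap raised to 20971520 on the reading side, the same frame passes.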

  • Number displayed while calling.

    I hate the fact that my number is displayed in LARGE numbers when I call out, but whoever I am calling is down below in 1/4-sized font!
    Why would I care to have my own number shown to me at the top of the screen every time I call out, yet have to use a magnifying glass to see whether I called "Judy", or worse yet, whether I called her home or mobile?
    Is there a way to switch it, or increase the font of who I am calling?
    Seems like nonsense, or backwards, to me.
    The phone works great, but that drives me batty.

    Leslie8120 wrote:
    2manycars wrote:
    I hate the fact that my number is displayed in LARGE numbers when I call out, but whoever I am calling is down below in 1/4-sized font!
    Why would I care to have my own number shown to me at the top of the screen every time I call out, yet have to use a magnifying glass to see whether I called "Judy", or worse yet, whether I called her home or mobile?
    Is there a way to switch it, or increase the font of who I am calling?
    Seems like nonsense, or backwards, to me.
    The phone works great, but that drives me batty.
    I'm not aware that the size of the numbers can be increased or decreased, but you can turn it off if it bothers you.
    Tap on the phone icon and, when it opens, pull down from the top. Tap on "Settings", then select "Show my number" and tap to open it. Now you have the option to turn OFF "Show my number".
    "Show my number" is for caller ID; your number still shows at the top, and the font size is very similar in the 10.3 OS.

  • UTL_HTTP Fails After Large Number of Requests

    Hello,
    The following code issues an HTTP request, obtains the response, and closes the response. After a significantly large number of iterations, the code causes the session to terminate with an "END OF FILE ON COMMUNICATIONS CHANNEL" error (Oracle Version 10.2.0.3). I have the following two questions that I hope someone can address:
    1) Could you please let me know if you have experienced this issue and have found a solution?
    2) If you have not experienced this issue, are you able to successfully run the following code below in your test environment?
    DECLARE
      http_req   utl_http.req;
      http_resp  utl_http.resp;
      i          NUMBER;
    BEGIN
      i := 0;
      WHILE i < 200000
      LOOP
        i := i + 1;
        http_req  := utl_http.begin_request('http://<<YOUR_LOCAL_TEST_WEB_SERVER>>', 'POST', utl_http.HTTP_VERSION_1_1);
        http_resp := utl_http.get_response(http_req);
        utl_http.end_response(http_resp);
      END LOOP;
      dbms_output.put_line('No Errors Occurred. Test Completed Successfully.');
    END;
    Thanks in advance for your help.

    I believe the end_request call is accomplished implicitly through the end_response function, based on the documentation I have reviewed. However, to be sure, I attempted your suggestion, as it had also occurred to me. Unfortunately, after attempting end_request, I received an error, since the request was already implicitly closed. Therefore, the assumption is that the end_request call is not applicable in this context. Thanks for the suggestion, though. If you have any other suggestions, please let me know.
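For comparison, this is what the same stress loop looks like in Python, where the `with` block guarantees that each response (and its underlying connection) is released before the next iteration begins, mirroring the begin_request / get_response / end_response cycle above. The URL is a placeholder and `stress` is a hypothetical name; this does not reproduce the Oracle-side crash, it only illustrates the resource-per-iteration discipline the PL/SQL code is attempting:

```python
import urllib.request

def stress(url, iterations=200000):
    """Re-issue the same POST request repeatedly, always closing the
    response before the next iteration so handles cannot pile up."""
    done = 0
    for _ in range(iterations):
        req = urllib.request.Request(url, data=b"", method="POST")
        with urllib.request.urlopen(req) as resp:  # closed every iteration
            resp.read()
        done += 1
    return done
```

If the equivalent PL/SQL loop still crashes the session even though every response is ended, that points at a server-side defect (e.g. a leak inside the Oracle session) rather than at the calling pattern.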

  • Large number of events (.ics files) caused our problems

    We were plagued with iCal server issues for over a year. iCal server would stop serving to clients, hang, eat CPU time, et al. I had experienced many of the errors in the logs reported here.
    One calendar had over 8000 events in it. We archived the older events and put them in a local iCal calendar (since they're static); this lowered the number of events to below 2000. All the iCal server problems instantly and permanently disappeared.
    I had noticed that certain UNIX commands failed in the 8000+ event calendar's directory because of the large number of files ("argument list too long" error). For example, cat *.ics > All1.txt failed.
    Perhaps Python is similarly limited, or the iCal code calls UNIX commands that are barfing because of the number of files.
    If you're having issues and have a calendar with over 2000 or so events, you may want to try breaking up the calendar to see if that fixes the problems.
    Sam
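The `cat *.ics` failure Sam describes is the kernel's ARG_MAX limit on the expanded argument list, not a flaw in cat itself. A sketch that concatenates any number of event files without ever building a giant command line (`concat_ics` is a hypothetical name, not anything iCal ships):

```python
import glob
import os

def concat_ics(cal_dir, out_path):
    """Concatenate every .ics file in cal_dir into out_path, streaming
    one file at a time; unlike `cat *.ics`, no argument list is ever
    built, so the ARG_MAX limit never comes into play."""
    names = sorted(glob.glob(os.path.join(cal_dir, "*.ics")))
    with open(out_path, "wb") as out:
        for name in names:
            with open(name, "rb") as src:
                out.write(src.read())
    return len(names)
```

A pure-shell alternative is `find . -maxdepth 1 -name '*.ics' -exec cat {} + > All1.txt`, which batches the arguments under the limit for you.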

    Hi,
    First, check whether the standard job SAP_REORG_SPOOL, which deletes old spool files, exists in your system; this job needs to be scheduled in the background on a daily basis. Regarding the note, it is asking you to check the patch levels of the files, which you can do at the OS level in the kernel directory. I am not very familiar with the AS400 directory structure, but normally the kernel path would be /usr/sap/<SID>/SYS/exe/run or /usr/sap/<SID>/DVEBMGS00/exe. In these paths you can find the patch levels of the files.
    Regards,
    Sharath
