Unable to parse properly for a large number of requests

Hi all,
In WebLogic 9.2, when a single request comes in that parses and reads an XML file, there is no problem. But when a large number of requests arrive at the same time to read the XML file, it behaves differently: it is unable to locate a node that does exist, and the reported error is misleading:
java.lang.NullPointerException
     at com.sun.org.apache.xerces.internal.dom.ParentNode.nodeListItem(ParentNode.java:814)
     at com.sun.org.apache.xerces.internal.dom.ParentNode.item(ParentNode.java:828)
     at com.test.ObjectPersistanceXMLParser.getData(ObjectPersistanceXMLParser.java:46)
     at com.test.testservlet.doPost(testservlet.java:634)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
     at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:225)
     at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:127)
     at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:283)
     at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
     at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3214)
     at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
     at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
     at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:1983)
     at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:1890)
     at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1344)
     at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
     at weblogic.work.ExecuteThread.run(ExecuteThread.java:181)
The node is there, and it is found when the same XML file is processed in a standalone Java program. The problem only occurs when a large number of requests arrive at once.
Please suggest.

Yes, I parse the XML once per request, and I do not want to synchronize here. The code below executes for each request. Is there a solution for this?
DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder docBuilder = docBuilderFactory.newDocumentBuilder();
Document configXML = docBuilder.parse(filePath);
I have also tried DOMParser with the following method applied:
setFeature("http://apache.org/xml/features/dom/defer-node-expansion", false)
Please suggest.
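If each request really does build its own DocumentBuilderFactory, DocumentBuilder, and Document as in the snippet above, no DOM object is shared and the NullPointerException inside ParentNode.nodeListItem should not appear; in practice that stack trace usually means a parsed Document (or a cached DocumentBuilder) is being read by several threads at once, which the Xerces deferred DOM does not tolerate. Below is a minimal sketch of one common workaround that avoids explicit synchronization by giving each thread its own parser via ThreadLocal; the class name ConfigXmlReader and the field names are illustrative, not taken from the original code.
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.w3c.dom.Document;

public class ConfigXmlReader {

    // One parser per thread: DocumentBuilder and the Documents it produces are
    // not thread-safe, so they must never be shared between concurrent requests.
    private static final ThreadLocal<DocumentBuilder> BUILDER =
            new ThreadLocal<DocumentBuilder>() {
        protected DocumentBuilder initialValue() {
            try {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                try {
                    // Xerces-specific: expand the DOM fully at parse time instead of
                    // lazily, avoiding the deferred-expansion path seen in the trace.
                    factory.setFeature(
                            "http://apache.org/xml/features/dom/defer-node-expansion", false);
                } catch (ParserConfigurationException ignored) {
                    // The feature is unknown to non-Xerces parsers; that is fine.
                }
                return factory.newDocumentBuilder();
            } catch (ParserConfigurationException e) {
                throw new IllegalStateException("Cannot create DocumentBuilder", e);
            }
        }
    };

    // Parses the file with the calling thread's own builder; the returned
    // Document is private to that request and safe to read without locking.
    public static Document parse(String filePath) throws Exception {
        return BUILDER.get().parse(filePath);
    }
}
If the real servlet instead caches a single Document between requests, the alternatives are to give each request its own copy as above or to guard every read of the shared Document with a common lock.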

Similar Messages

  • Best practice for handling data for a large number of indicators

    I'm looking for suggestions or recommendations for how to best handle a UI with a "large" number of indicators. By large I mean enough to make the block diagram quite large and ugly after the data processing for each indicator is added. The data must be "unpacked" and then decoded, e.g., booleans, offset-binary bit fields, etc. The indicators are updated once per second. I am leaning towards a method that worked well for me previously, that is, binding network shared variables to each indicator, then using several sub-VIs to process the particular piece of data and write to the appropriate variables.
    I was curious what others have done in similar circumstances.
    Bill
    “A child of five could understand this. Send someone to fetch a child of five.”
    ― Groucho Marx

    I can certainly feel your pain.
    Note that's really what is going on in that PNG: you can see the Action Engine responsible for updating the display at the far right.
    In my own defence: the FP concept was presented to the client's customer before they had identified a person familiar with LabVIEW, so I worked it this way through no choice of my own. I knew it would get ugly before I walked in the door and chose to meet the challenge head-on anyway. Defer Panel Updates was my very good friend. The sensors these objects represent were constrained to pass info via a single ZigBee network, so I had the benefit of fairly low data rates as well, but even changing views (yes, there is a display mode that swaps what information is displayed for each sensor) updated fast enough that the user still got a responsive GUI.
    (The GUI did scale poorly, though! That is a lot of wires! I was grateful to Jack for the idea to make Align and Distribute work on wires.)
    Jeff

  • Canvas Resize to Square for a large number of images using script. E.g. image is currently 1020 x 600, I would like to change this to 1020 x 1020. PLEASE HELP

    Hi All,
    I have a large number of images for which I need to resize the canvas to a square; the images are currently different sizes. For example, if an image is 1020 x 600, I would like to change the canvas to 1020 x 1020 so that the image becomes a square. I am using CS3 and all the images are JPEGs. I have done research on scripts, but the ones I have tried have not worked. Please help.
    Thanks.

    Since you do not want to crop your images to a square 1:1 aspect ratio, changing the canvas to be square will not make your images square: they will retain their aspect ratio, and the image will be resized to fit within your 1020 px square, leaving a border on one side or borders on two opposite sides. You do not need a script, because Photoshop ships with a plug-in script that can be used in Actions. What is good about plug-ins is that they support Actions: when you record the action, the plug-in records the settings you use in its dialog into the action's step, and when the action is played the plug-in uses the recorded settings and bypasses its dialog, so the action can be batched. The action you would record has two steps. Step 1: menu File > Automate > Fit Image..., and in the Fit Image dialog enter 1020 in the width and height fields. Step 2: Canvas Size; enter 1020 pixels in width and height (not relative), leave the anchor point centered if you want even borders on two sides, and set the canvas extension color to white. You can then batch the action.
    The script mentioned above will also work: it squares the document and then resizes to 1020 x 1020, whereas the action resizes the image to fit within a 1020 x 1020 area and then adds any missing canvas. Like the action, the script only processes one image, so it would also need to be batched: record the script into an action and batch the action. As the author wrote, the script's resize-canvas step does not specify an anchor point, so the default center anchor point is used; as with the action, canvas will be added to two sides.

  • I have lost all my captioning and keywording for a large number of .dng and raw files on transfer to LR6 CC - any help!

    I have imported my old LR5 catalogue into LR6 CC but appear to have lost a lot of my captioning and keywording for a very large number of files, which are either .dng or other raw formats. I am working on a Mac. Can somebody please help, as I am very worried about losing vital information.
    Thanks!
    Gerry

    OK, I may have pinned down the problem, and it may not be LR6 related. Reimporting my LR5 catalogue shows that the information is there on files that are marked as missing from the catalogue (i.e. marked '!'). These are missing because I have renamed them, which suggests that the renamed file is not linking with the old .xmp file. Do you know of a way I can link the renamed file with the old .xmp file? Would I have to laboriously go through all my .xmp files and rename them, for instance?

  • UTL_HTTP Fails After Large Number of Requests

    Hello,
    The following code issues an HTTP request, obtains the response, and closes the response. After a significantly large number of iterations, the code causes the session to terminate with an "END OF FILE ON COMMUNICATIONS CHANNEL" error (Oracle Version 10.2.0.3). I have the following two questions that I hope someone can address:
    1) Could you please let me know if you have experienced this issue and have found a solution?
    2) If you have not experienced this issue, are you able to successfully run the following code below in your test environment?
    DECLARE
      http_req  utl_http.req;
      http_resp utl_http.resp;
      i         NUMBER;
    BEGIN
      i := 0;
      WHILE i < 200000
      LOOP
        i := i + 1;
        http_req := utl_http.begin_request('http://<<YOUR_LOCAL_TEST_WEB_SERVER>>', 'POST', utl_http.HTTP_VERSION_1_1);
        http_resp := utl_http.get_response(http_req);
        utl_http.end_response(http_resp);
      END LOOP;
      dbms_output.put_line('No Errors Occurred. Test Completed Successfully.');
    END;
    Thanks in advance for your help.

    I believe the end_request call is accomplished implicitly through the end_response function, based on the documentation I have reviewed. To be sure, I did try your suggestion, as it had also occurred to me. Unfortunately, after attempting end_request, I received an error because the request was already implicitly closed, so the assumption is that the end_request call is not applicable in this context. Thanks for the suggestion, though. If you have any other suggestions, please let me know.

  • iTunes Match for a large number of devices, 4 people, and as a backup solution.

    I am interested in using iTunes Match for two central reasons:
    1.  Backing up my music library. Is iTunes Match a solution for this? Or not really? I get the fact that my purchases are protected, but what about the music I uploaded from purchased CDs?
    Question: Can I consider iTunes Match at $24.99/year the cost of backing up 60GB of music to the cloud?
    2.  Sharing my music library across a number of devices. We are a family of 4 with fairly extensive Mac, iPhone, and iPad usage, including:
    1 iMac (music server), 3 PowerBooks (individual work machines), 2 iPads, an Apple TV, and 4 iPhones. We have 4 MobileMe accounts, moving to iCloud.
    The problem I have is that I can never effectively coordinate the music library across the devices. 80GB of iTunes library is too much to load onto all the devices, and I never seem to have the song I'm looking for at the time I want it on a particular device, other than the family iMac server, which has the whole library on it.
    Question: It seems like iTunes Match is an ideal solution, but can it handle the diverse spectrum and number of devices I have? Ideally I want to think of iTunes Match as one central library available on all devices, all the time.
    Thank you in advance for any opinions.
    MikeK

    mek wrote:
    Question: It seems like iTunes Match is an ideal solution, but can it handle the diverse spectrum and number of devices I have? Ideally I want to think of iTunes Match as one central library available on all devices, all the time.
    Think of iTunes Match as a central library from which "satellite" libraries can pull any time they have an Internet connection.
    Functionally, if all those devices are in your house, iTunes Match doesn't give you much that Home Sharing doesn't already offer.  With Home Sharing all your Macs can easily "fill in the gaps" from the "master" library.  Of course, that requires that the machine with the "master" library is always online and iTunes is always running on it, which might not be the case.  AppleTVs and iPads sync to that "master" library and so what is on them is configured from the master library computer, not the device itself.
    What iTunes Match offers you is:
    1.  Your "canonical" library lives "in the cloud" and so you don't have to have a home server up and running iTunes at all times.
    2.  Your "canonical" library lives "in the cloud" and so you can update from it any time you have an internet connection, not just when you are at home.
    3.  The workflow for updating from the canonical library is much better than the clunky "Home Sharing" interface.
    4.  Any "matched" songs (I found that in my library only about 65% were successfully matched; I'm not sure why) will be available at 256kbps AAC, which might be higher quality than what you have in your library currently (especially if you have imperfections like CD skipping, etc, in the tracks).  Of course, anything iTunes can't match to its own library will remain just as imperfect as ever.
    The downside (versus Home Sharing) is:
    1.  Everything needs to get copied onto a device from the Internet, which is likely to take much longer than just Home Sharing over your home wireless network.
    Seems like a pretty good system.  I've signed up for it and am happy with what I've seen so far.

  • Best design pattern for a large number of options?

    Hi,
    I'm faced with the following straightforward problem, but I'm having trouble coming up with a design solution. Any suggestions?
    Two items in the database can be related in three possible ways (temp replacement, permanent replacement, substitute). Each item has three possible stock levels. The user can select one of two items.
    This comes out to 54 different prompts that need to be provided to the user (for example: "The entered item has a preferable temp replacement available that is in stock, sell instead of the entered item?", "The entered item is out of stock, but has a substitute item available, use instead?", etc.).
    Does anybody have a suggestion of a good design pattern to use? In the legacy system it was implemented with a simple case statement, but I'd like to use something more maintainable.
    If anybody has any suggestions, I'd appreciate it.
    thanks,

    In the legacy system it was implemented with a simple case statement, but I'd like to use something more maintainable.
    Is it ever likely to change? If not, then a case statement is pretty maintainable.
    How is the data retrieved? I'm guessing it's a decision tree: if the desired object is in stock, return it, otherwise look for a permanent substitute, &c. In this case, perhaps you have a retrieval object that implements a state machine internally: each call to the retrieval operation causes a transition to the next-best state if unable to fulfill the request.
    If you do retrieve all possible data in a single query (and I hope not, as that would almost certainly be very inefficient), then think of some sort of "preference function" that could be used to order the results, and store them in a TreeMap or TreeSet ordered by that function.
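    A hedged illustration of the table-driven alternative to the legacy case statement: key the prompt text by the combination of conditions, so adding or changing a prompt means editing data rather than control flow. The class name, enum values, and prompt strings below are invented for the example and would need further key dimensions (the second item's stock level, which item the user selected) to cover all 54 combinations.
    import java.util.EnumMap;
    import java.util.Map;

    public class PromptCatalog {

        enum Relationship { TEMP_REPLACEMENT, PERMANENT_REPLACEMENT, SUBSTITUTE }
        enum StockLevel   { IN_STOCK, LOW_STOCK, OUT_OF_STOCK }

        // Prompt text looked up by (relationship, stock level of the replacement item).
        private final Map<Relationship, Map<StockLevel, String>> prompts =
                new EnumMap<Relationship, Map<StockLevel, String>>(Relationship.class);

        public PromptCatalog() {
            for (Relationship r : Relationship.values()) {
                prompts.put(r, new EnumMap<StockLevel, String>(StockLevel.class));
            }
            // In a real system these strings would come from a resource bundle or a table.
            prompts.get(Relationship.TEMP_REPLACEMENT).put(StockLevel.IN_STOCK,
                    "The entered item has a preferable temp replacement available that is in stock, sell instead?");
            prompts.get(Relationship.SUBSTITUTE).put(StockLevel.OUT_OF_STOCK,
                    "The entered item is out of stock, but has a substitute item available, use instead?");
            // ... remaining combinations ...
        }

        // Returns the prompt for a combination, or null if that combination is not configured.
        public String promptFor(Relationship rel, StockLevel stock) {
            return prompts.get(rel).get(stock);
        }
    }
    Loading the same lookup table from the database would keep the prompts maintainable without touching code, and it combines well with the decision-tree retrieval described above.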

  • LENGTH function for a large number returns 40 not the number of digits

    In SQL*Plus:
    SQL> select length(12345678901234567890123456789012345678901234567890)lngth
    2 from dual;
    LNGTH
    40
    SQL> select length('12345678901234567890123456789012345678901234567890')lngth
    2 from dual;
    LNGTH
    50
    It seems that the implicit conversion from number to char in the first query causes an unexpected result. From the documentation of the LENGTH function at http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions181.htm#i79330:
    "If you omit fmt, then n is converted to a VARCHAR2 value exactly long enough to hold its significant digits."
    Perhaps Oracle considers that only 40 digits are significant (?)

    An additional reason: there is an implicit data conversion, as with to_char(number).
    SQL> set numwidth 50
    SQL> select 12345678901234567890123456789012345678901234567890 num from dual;
                                                   NUM
    12345678901234567890123456789012345678900000000000
    SQL> select length(12345678901234567890123456789012345678901234567890) numlen from dual;
                                                NUMLEN
                                                    40
    SQL> select to_char(12345678901234567890123456789012345678901234567890) numlen from dual;
    NUMLEN
    1.2345678901234567890123456789012346E+49
    SQL> select length(to_char(12345678901234567890123456789012345678901234567890)) numlen from dual;
                                                NUMLEN
                                                    40
    SQL> select to_char(12345678901234567890123456789012345678901234567890,
      2                '09000000000000000000000000000000000000000000000000')
      3  num from dual;
    NUM
    12345678901234567890123456789012345678900000000000
    SQL> select length(to_char(12345678901234567890123456789012345678901234567890,
      2                '09000000000000000000000000000000000000000000000000'))
      3  num from dual;
                                                   NUM
                                                    51
    SQL> select to_char(12345678901234567890123456789012345678901234567890,
      2                'fm09000000000000000000000000000000000000000000000000')
      3  num from dual;
    NUM
    12345678901234567890123456789012345678900000000000
    SQL> select length(to_char(12345678901234567890123456789012345678901234567890,
      2                'fm09000000000000000000000000000000000000000000000000'))
      3  num from dual;
                                                   NUM
                                                    50

  • Firmware update for a large number of units.

    We will need to update the firmware on some dozens of E90s.
    The Nokia agent here does not deal with firmware updates, and we want to give final users updated firmware.
    Does anyone have an idea how we can do this without connecting the units one by one to the network and downloading the same firmware to each unit?
    Is there any way to save the download on a computer and just install it on each of the units?
    Thanks in advance.

    03-Oct-2007 12:30 PM
    e90n95 wrote:
    We will need to update the firmware on some dozens of E90s.
    The Nokia agent here does not deal with firmware updates, and we want to give final users updated firmware.
    Do you already have this famous new FW update?
    Domdom

  • Internal Error 500 started appearing even after setting a large number for postParametersLimit

    Hello,
    I adopted a CF 9 web application and we're receiving the Internal 500 Error on submit from a form that has line items for an RMA.
    The server originally only had Cumulative Hot Fix 1 on it, and I thought that if I installed Cumulative Hot Fix 4, I would be able to adjust the postParametersLimit variable in neo-runtime.xml. So I tried doing this, and I've tried setting the number to an extremely large value (the last try was 40000), but I'm still getting this error. I've tried putting a <cfabort> on the first line of the .cfm file that is being called, but I'm still getting the 500 error.
    As I mentioned, it's a RMA form and if the RMA has a few lines say up to 20 or 25 it will work.
    I've tried increasing the following all at the same time:
    postParameterSize to 1000 MB
    Max size of post data 1000MB
    Request throttle Memory 768MB
    Maximum JVM Heap Size - 1024 MB
    Enable HTTP Status Codes - unchecked
    Here's some extra background on this situation. This is all that happened before I got the server:
    The CF server is installed as a virtual machine and was originally part of a domain that was exposed to the internet and the internal network. The CF Admin was exposed to the internet.
    AT THIS TIME THE RMA FORM WORKED PROPERLY, EVEN WITH A LARGE NUMBER OF LINE ITEMS.
    The CF server was hacked, so they did the following:
    They took a snapshot of the CF server.
    Unjoined it from the domain and put it in the DMZ.
    The server can no longer connect to the internet outbound; inbound connections are allowed through SSL.
    Installed Cumulative Hot Fix 1 and hot fix APSB13-13.
    Changed the default port for SQL on the SQL Server.
    This is when the RMA form stopped working and I inherited the server. Yeah!
    Any ideas on what I can try next, or why this would have suddenly stopped working after making the above changes on the server?
    Thank you

    Start from the beginning. Return to the default values, and see what happens. To do so, proceed as follows.
    Temporarily shut ColdFusion down. Create a back-up of the file neo-runtime.xml, just in case.
    Now, open the file in a text editor and revert postParametersLimit and postSizeLimit to their respective default values, namely,
    <var name='postParametersLimit'><number>100.0</number></var>
    <var name='postSizeLimit'><number>100.0</number></var>
    That is, 100 parameters and 100 MB, respectively. (Note that there is no postParameterSize! If you had included that element in the XML, remove it.)
    Restart ColdFusion. Test and tell.

  • Af:table Scroll bars not displayed in IE11 for large number of rows

    Hi. I'm using JDeveloper 11.1.2.4.0.
    The requirements of our application are to display a table potentially containing very large numbers of rows (sometimes in excess of 3 million). While the user does not need to scroll through this many rows, the QBE facility allows drill-down into specific information in the rowset. We moved up to JDeveloper 11.1.2.4.0 primarily so IE11 could be used instead of IE8, to overcome input latency in ADF forms.
    However, it seems that IE11 does not enable the vertical or horizontal scroll bars for the af:table component when the table contains more than approximately 650,000 rows. This is not the case when the Chrome browser is used, nor was it the case on IE8 previously (using JDev 11.1.2.1.0).
    When the table is filtered using the QBE (to a subset < 650,000 rows), the scroll bars are displayed correctly.
    In the code the af:table component is surrounded by an af:panelCollection component which is itself surrounded by an af:panelStretchLayout component.
    Does anyone have any suggestions as to how this behaviour can be corrected? Is it purely a browser problem, or might there be a programmatic workaround in ADF?
    Thanks for your help.

    Thanks for your response. That's no longer an option for us, though...
    Some further investigation into the generated HTML has yielded the following information.
    The missing scroll bars appear to be a consequence of the style set on the horizontal and vertical scroll bars (referenced as vscroller and hscroller in the HTML). The height of the scroll bar appears to be computed by multiplying the estimated number of rows in the iterator on which the table is based by 16, giving a scroll bar size proportional to the amount of data in the table, although it is not obvious why that should be done for the horizontal scroller. If this number is greater than or equal to 10,737,424 pixels, the scroll bars do not display in IE11; 10,737,424 / 16 is roughly 671,000 rows, which is consistent with the observed threshold of around 650,000 rows.
    It would seem better to cap this height at a sensible limiting number of pixels for a large number of rows.
    Alternatively, is it possible to find where this calculation takes place and override its behaviour?
    Thanks.

  • How to design Storage Spaces with a large number of drives

    I am wondering how one might go about designing a Storage Space for a large number of drives. Specifically, I've got 45 x 4TB drives. As I am not extremely familiar with Storage Spaces, I'm a bit confused as to how I should go about designing this. Here is how I would do it with hardware RAID, and I'd like to know how to best match this setup in Storage Spaces. I've been burned twice now by poorly designed storage spaces and I don't want to get burned again. I want to make sure that if a drive fails, I'm able to properly replace it without Storage Spaces tossing its cookies.
    In the hardware RAID world, I would divide these 45 x 4TB drives into three separate 15-disk RAID 6's (thus losing 6 drives to parity). Each RAID 6 would show up as a separate volume/drive to the parent OS. If any disk failed in any of the three arrays, I would simply pull it out, put a new disk in, and the array would rebuild itself.
    Here is my best guess for Storage Spaces. I would create 3 separate storage pools, each containing 15 disks. I would then create a separate dual-parity virtual disk for each pool (also losing 6 drives to parity). Each virtual disk would appear as a separate volume/disk to the parent OS. Did I miss anything?
    Additionally, is there any benefit to breaking up my 45 disks into 3 separate pools? Would it be better to create one giant pool with all 45 disks and then create 3 (or however many) virtual disks on top of that one pool?

    1) Try to avoid parity, and especially double-parity, RAID with a typical VM workload. It's dominated by small reads (OK) and small writes (not OK, as the whole parity stripe gets updated with every read-modify-write sequence), so writes would be dog slow.
    Another nasty parity-RAID characteristic is very long rebuild times: it's pretty easy to get a second (third with double parity) drive failure during the rebuild process, and that would render the whole RAID set useless. The solution is RAID 10: much safer, and faster to work with and rebuild, compared to RAID 5/6, but it wastes half of the raw capacity (with 45 x 4TB drives, roughly 90 TB usable versus about 156 TB from three 15-disk RAID 6 sets).
    2) Creating "islands" of storage is an extremely effective way of stealing IOPS away from your config. A typical modern RAID set runs out of IOPS long before it runs out of capacity, so unless you're planning a file dump of ice-cold data or CCTV storage, you'll absolutely need all IOPS from all spindles at the same time. This again means one big RAID 10, OBR10.
    Hope this helped a bit :) Good luck!
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Communicate a large number of parameters and variables between VeriStand and a LabVIEW model

    We have a dyno setup with a PXI-E chassis running VeriStand 2014 and Inertia 2014. In order to enhance the capabilities and timing of VeriStand, I would like to use LabVIEW models to perform tasks not possible in VeriStand and Inertia. An example of this is determining the maximum of a large number of thermocouples. VeriStand has a compare function, but it compares only two values at a time, which makes for some lengthy and inflexible programming. LabVIEW, on the other hand, has a function that allows one to get the maximum of the elements in an array in a single step. To use LabVIEW I need to "send" the 50 or so thermocouples to the LabVIEW model. In addition to the variables that need to be communicated between VeriStand and LabVIEW, I also need to present LabVIEW with the threshold and configuration parameters. From the forums and user manuals I understand that one has to use the connector pane in LabVIEW and mapping in VeriStand System Explorer to expose the inports and outports. The problem is that the LabVIEW connector pane is limited to 27 I/O. How do I overcome that limitation?
    BTW, I am fairly new to LabVIEW and VeriStand.
    Thank you.
    Richard

    @Jarrod:
    Thank you for the help. I created a simple test model and now understand how I can use clusters for a large number of variables. Regarding the mapping process: can one map a folder of user channels to a cluster (one-step mapping)? Alternatively, I understand one can import a mapping (text) file in System Explorer. Is this import partial, or does it replace all the mapping? The reason I am asking is that, if it is partial, I can have separate mapping files for different configurations and my final mapping can be a combination of imported mapping files.
    @SteveK:
    Thank you for the hint on using a Custom Device. I understand that the Custom Device will be much more powerful and can be more generic. The problem at this stage is that my limitations in programming in LabVIEW are far greater than LabVIEW models' limitations in VeriStand. I'll definitely consider the Custom Device route once I am more proficient with LabVIEW. Hopefully I'll be able to re-use some of the VIs I created for the LabVIEW models.
    Thanks
    Richard

  • Mail to a large number of recipients

    Hi,
    I was wondering if there is software that handles group mails for a large number of recipients (like 400 addresses) at one time.
    I think Mail doesn't accept more than 50 addresses per mail at a time.
    I used to use a PC-based program called Sarbacane designed for mass emailing (not spamming, that's understood!), with charts, statistics, and automatic re-mailing to addresses that did not work, etc. Does something like that exist for Mac?
    Thank you to anyone who has the answer!
    Stephanie

    Hello Stephanie,
    I do the same, and ran into the same problem. It was my ISP that was limiting how many emails I could send at once. I 'think' (because I'm not a tech person) that Mail takes the message and puts 50 or more addresses at the top, and ISPs will see this as possible spam. I worked around this by creating groups of fewer than 50 email addresses (using Smart Groups), but after a while it got very tedious having to sort and create criteria to keep each group to just under 50.
    There are commercial programs that manage large mailings. I use 'Mailings', and it works fine; I know that there are others as well, and most have a free trial version.
    Instead of sending one message with lots of addresses, it sends the email multiple times, once to each person in your list. It takes longer (but it works in the background, so who cares), and I have been able to send to hundreds of addresses with no issues. Hope this helps.
    Seth

  • BUG: Last Image of Large Number Being Moved fails

    This has happened several times while organizing some folders. Moving over 100 images at a time, it seems that one image near the end fails: I get the screen saying that Lightroom can't move the image right now. It's always just one image, and I can move it on its own a second later and it works fine.
    While the Move operation is being fixed, consider that it could go much faster than it does now if the screen didn't have to be refreshed after each file has been moved. I can see the value of the refresh if it's just a few images being moved, but for a large number the refresh isn't helpful anyway.
    Paul Wasserman

    I posted on this last week, and apparently a number of people have experienced this.
    http://forums.adobe.com/thread/690900
    Please report it on this bug report site so that it gets to the developers' attention sooner:
    https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform
    Bob

Maybe you are looking for

  • Copying single page ,region from one application to another application

    Hi, I need information on how to copy a single page or region from one application to another application in the same workspace.

  • Upgraded to iOS 4.1; can't read or write e-mail

    Hi, I just recently did a full system restore and then updated to iOS 4.1 (iPhone 4). Everything appeared to install with no problems; however, now when I go to check my e-mail through the Mail app, on the summary page that lists the e-mail titles and

  • Can I have some simple answers for a newbie Mac user!!

    I've just added some photos of my holiday in iPhoto '08, about 3GB from my Pictures folder. Now, importing them into iPhoto has kind of taken up another 3GB of memory on my hard drive. So 6GB in total for my pictures, which seems to be a waste

  • Photoshop CS3 has begun crashing on launch

    Have been using CS3 for some time, but recently got a new camera (Canon EOS 60D). Photoshop crashed when attempting to open raw (.CR2) files from the camera, then continues to crash on launch. I have been able to convert the .CR2 files to .DNG using

  • Oracle Form read Database

    Hi! Does anybody know of an example of how to interact with a database with OForms? (show data, edit data, and so on) Thanks, Wolfgang