Email Optimization for Large Batches

I'm sending about 7,000 emails to clients each afternoon.
This is all opt-in.
The problem is the time it takes to process the emails out
of the spool, and the effect it has on other emails from the
system.
Regular CF business emails get held up for 30 or 45 minutes
waiting to get through the spool.
I have multiple SMTP servers, so I can spread the messages across
different servers, but I don't know how to assign a higher priority
to emails that are not in the large batch to get them through the
spool faster.
Any ideas?
Thanks.

You could a) dynamically redefine the SMTP server to be used
within each iteration of the loop creating your emails, or b) write
a CF process which only sends out messages in blocks of 200 every
30 seconds, or c) a combination of a and b (I would opt for
'c'; a sketch of that combination follows below).
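
For what it's worth, here is a minimal Java sketch of option (c) using the JavaMail API. The original setup is ColdFusion, so this only illustrates the pattern; the host names, sender address, subject, block size, and pause length are placeholder assumptions, not values from the thread:
<pre>
import java.util.List;
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class BatchMailer {
    // Placeholder hosts; substitute your own SMTP servers.
    private static final String[] SMTP_HOSTS = {"smtp1.example.com", "smtp2.example.com"};
    private static final int BLOCK_SIZE = 200;    // messages per block
    private static final long PAUSE_MS = 30_000L; // pause between blocks

    public static void send(List<String> recipients) throws Exception {
        for (int i = 0; i < recipients.size(); i++) {
            // (a) rotate the SMTP server on each message
            Properties props = new Properties();
            props.put("mail.smtp.host", SMTP_HOSTS[i % SMTP_HOSTS.length]);
            Session session = Session.getInstance(props);

            MimeMessage msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("news@example.com"));
            msg.setRecipient(Message.RecipientType.TO, new InternetAddress(recipients.get(i)));
            msg.setSubject("Daily update");
            msg.setText("...");
            Transport.send(msg);

            // (b) pause after each block so regular mail can clear the spool
            if ((i + 1) % BLOCK_SIZE == 0) {
                Thread.sleep(PAUSE_MS);
            }
        }
    }
}
</pre>
Rotating per message keeps each server's queue short, and the pause after every block of 200 gives regular business mail a window to get through the spool.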

Similar Messages

  • Preparing for Large Batch of Windows Server Updates going back to Aug 2013

    How often do updates to Windows servers interfere with a server's performance, services, and/or software?
    In other words, after installing updates, what level of validation should be involved in making sure the updates are not causing issues on the server? With such a large batch of updates to install this weekend, I want to be prepared for worst-case scenarios
    involving systems not booting up, required services not starting, software conflicts, etc. We have a 2008 R2 server and a 2003 server to test on, but they're not really representative of the production servers in terms of the roles they play and the services they
    run. Just to clarify, this is my first time updating Windows servers.

    1. There is a long-standing mantra: test updates in a test environment before applying them to production systems.
    2. The situation is complicated a bit by the fact that there are "direct" and WSUS update packages.
    3. MS states that updates are tested thoroughly. I do believe they do a good job.
    4. I have worked with Windows for a very long time, and I am pretty sure the quality of the software, including updates, keeps improving.
    5. I understand your hesitation; my first updates caused the same tension.
    6. You can download updates for "direct" application from the update catalog repository. The problem is that you do not know the optimal order of installation. Fortunately, information on problems and their solutions comes quickly from the Internet if you
    listen on the right channels.
    7. From my point of view, the danger stems from some optional updates; drivers in particular may push your server into unexpected behavior. I prefer drivers and firmware from the server/computer vendor, though some irregularities exist.
    HTH
    Milos

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from  WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files.  It can be done now, but it is not reliable.
    Our document library consists of mostly average-sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100MB to 2GB in size.
    I know we can change the settings to "allow" larger-than-default file downloads, but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows,
    SharePoint, or IIS optimizations?  The files will often be downloaded from the Internet, so we will not have control over the download speed.

    SharePoint is capable of sending large files; it is a stateless HTTP system like any other website in that regard. Given your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    I see information like this posted warning against doing it as if large files are going to cause your SharePoint server and SQL to crash. 
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large files in the SharePoint database cause fragmentation problems and large amounts of wasted space that doesn't go away when files are removed, or that the server may run out of memory because downloaded
    files are held in RAM.

  • HotSpot optimization for large amounts of array use

    Greetings all,
    I'm hoping some of the people using Java for crypto and/or image processing/media file processing might be wandering by, so I can get some input.
    We are building a secured storage system for media files, currently using the Twofish algorithm (one of the AES candidates that wasn't selected). We are bulk encrypting files for entry into the storage system, and in that process, I was already using Java for some of the management. Thus I decided to use the JCE framework to get the cryptography working.
    I hate to say it, but this Java evangelist is a bit appalled at how slow it seems to go. My suspicion is the array bounds checking in the code, but I haven't gone byte-code snooping yet. I've tried a few of the symmetric algorithms (DES, AES/Rijndael, and Twofish) and get the same sort of results. I've also tried the Cryptix library for Twofish, as the Sun JCE doesn't have Twofish.
    In straight C code, a G4 700MHz and a US-III 750MHz get around 20-25 MB/s encoding.
    In Java, I'm getting about 1.3-1.9 MB/s. I reduced the Java code to raw crypto without using the JCE, and it didn't make a noticeable difference.
    That's a BIG gap. Not 10%, not 20%, but about a 95% reduction in performance.
    The profiling with -Hprof shows about 90% of the time in the crypto routines, and all of it is compiled via HotSpot.
    The setups were Java 1.3.1 on the G4 and 1.4.0_01 on the UltraSPARC-III, with settings -server -Xmx128m -Xms128m.
    Does the bounds checking impose that high of a penalty?
    Reading the book Java Platform Performance (excellent book BTW), Steve Wilson and Jeff Kesselman indicate that the array checking could be optimized out in certain loop constructs in HotSpot, but that preliminary examination showed that this would only benefit a narrow range of developers.
    With JMF, JCE, and the increasing capabilities of Java2D and 3D, I expect that this sort of processing will increase dramatically, but only if this level of performance penalty can be removed.
    Anything else I might try?
    Dallas

    Thanks for the reply. I've been looking into it further, but with no real advance. I've implemented a Twofish JNI bridge that runs the actual encryption natively, at pretty much native speed. The Twofish cipher copies all elements into local arrays, and the rest of the cipher is bit manipulation on bytes. I haven't dug deep enough into the actual cipher to see where the performance penalty is being incurred.
    Possibly I'm mistaken in thinking it was the array access, but I didn't expect the raw bit twiddling and bitwise operators to be slow.
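    One more thing worth checking before blaming the bounds checks wholesale: the loop shape. Here is a tiny, hypothetical sketch (the class, sizes, and timing are made up, not the poster's cipher) of the canonical pattern HotSpot's bounds-check elimination is reported to handle best: a simple induction variable bounded directly by array.length:
    <pre>
    public class BoundsCheckSketch {
        static void mix(byte[] state, byte[] key) {
            // Canonical form: the JIT can prove 0 <= i < state.length here,
            // letting it hoist or drop the per-access bounds checks.
            for (int i = 0; i < state.length; i++) {
                state[i] ^= key[i & (key.length - 1)]; // assumes key.length is a power of two
            }
        }

        public static void main(String[] args) {
            byte[] state = new byte[1 << 20];
            byte[] key = new byte[16];
            long t0 = System.nanoTime();
            for (int rep = 0; rep < 1000; rep++) {
                mix(state, key);
            }
            long bytes = 1000L * state.length;
            // MB/s = (bytes / 1e6) / (elapsed ns / 1e9) = bytes * 1000 / ns
            System.out.printf("~%.1f MB/s%n", bytes * 1000.0 / (System.nanoTime() - t0));
        }
    }
    </pre>
    If a cipher indexes tables with values computed from the data (as Twofish's S-box lookups do), the JIT generally cannot prove those accesses in-bounds unless the index is masked to the table size, so each access keeps its check.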

  • Batch fetch optimization for lazy collections

    Hi,
    I feel like this question must have been asked by somebody already, but I couldn't find any answers in the forum.
    We're trying to evaluate migrating from Hibernate to Kodo JDO. One feature we use extensively in Hibernate is the "batch fetch optimization for lazy collections".
    For instance, there's a class User with the collection "Set<String> permissions". In the DB, there are tables USER(id, name, etc.) and USER_PERMISSIONS(user_id, permission) with a one-to-many relationship between them.
    Suppose the code is the following:
    query = ....;
    List<User> users = (List<User>) query.execute();
    for (User user : users) {
        System.out.println("User: " + user.getName());
        System.out.println("Permissions: " + user.getPermissions());
    }
    I've set EagerFetchMode to parallel. For the field "permissions" I had to specify default-fetch-group="true" because I wanted this to happen lazily. When I look through the logs, I see that the permissions SQL query is executed for each user.
    With Hibernate we were able to set up the mapping so that permissions are not fetched at first, but when they are requested for the first user (user.getPermissions()) they are automatically selected for several users in one query using a SQL IN clause (similar to the parallel mode).
    Is it possible to recreate the same behavior in Kodo JDO (or JPA)? If it is, this would greatly simplify migration. Please note that we can't use explicit fetch groups for this, because we don't know ahead of time which collection will be navigated (in real life the User class has many relationships).
    thanks in advance,
    Dimitry

    Kodo doesn't have a direct analog for that behavior. The typical way to solve that problem is to keep the field out of the default fetch group, and then explicitly include the desired field in the current fetch group at runtime. In pure JDO, you can do this by creating a separate fetch group that includes the relationship field, and designating that the query should use that fetch group:
    <pre>
    query = ...;
    query.getFetchPlan().addGroup("relationshipGroup");
    List<User> users = (List<User>) query.execute();
    </pre>
    Kodo has JDO extensions that allow you to do this a bit more easily:
    <pre>
    query = ...;
    ((kodo.jdo.KodoFetchPlan) query.getFetchPlan()).addField("com.example.User.permissions");
    List<User> users = (List<User>) query.execute();
    </pre>
    Finally, you can do this with Kodo's JPA extensions like so:
    <pre>
    import org.apache.openjpa.persistence.OpenJPAQuery;
    query = (OpenJPAQuery) ...;
    query.getFetchPlan().addField("com.example.User.permissions");
    List<User> users = (List<User>) query.getResultList();
    </pre>
    Note that in all cases, you could also make this change to the current PersistenceManager / EntityManager's fetch plan instead, to make the change happen for the duration of the use of that manager. In that environment, the change would need to happen before the query was created.
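    For instance, a manager-level version of the plain-JDO variant might look like this (a minimal sketch; pm stands for an already-open PersistenceManager, and the fetch group is the same hypothetical "relationshipGroup" as above):
    <pre>
    javax.jdo.PersistenceManager pm = ...; // an open PersistenceManager
    pm.getFetchPlan().addGroup("relationshipGroup"); // applies to all subsequent queries from this pm
    javax.jdo.Query query = pm.newQuery(User.class);
    List<User> users = (List<User>) query.execute();
    </pre>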
    (And no, I have no idea why the edit box has a 'pre' button but does not seem to do anything with the resulting tags.)
    -Patrick

  • Email configuration for Batch bursting

    Hi All,
    Can anybody please let me know how to configure the email server or email settings for batch bursting the financial reports?
    Thanks
    Yashwant

    Hi,
    But we had not chosen this option while configuring the EPM product, so I think the email server doesn't require authentication, as it might not be protected.
    Can you please let me know what properties should be enabled on the email server for the EPM system? I have heard that anonymous authentication should be enabled on the email server.
    Thanks
    Yashwant

  • How can I convert a large batch of NEFs to JPGs in one step or action?

    How can I convert a large batch of NEFs to JPGs in one step or action?  I'm very new to Lightroom and know almost nothing about it, so I need step-by-step instructions.  I've seen online advice about exporting, but I don't know what that means - export from where to where?  Can anyone help me?  Thanks!

    I suggest you take a little time to watch some tutorials. If you are that new to Lightroom that you don't understand what export means then it would probably be a good idea for you to get some other information as well. If you don't I think you'll find Lightroom can be very confusing. You haven't indicated what version of Lightroom you are using. The link I'm providing you is a series of getting started videos for Lightroom 5. And just about everything in this series will apply to other versions. You will note that there is one video devoted exclusively to exporting images. But I think you really need to watch the whole series.
    Getting Started with Adobe Photoshop Lightroom 5 | Adobe TV

  • Is there a way to restore photos from Dropbox to my desktop iPhoto in a large batch instead of one at a time? I tried and a zip file was downloaded but won't open. Says file format not recognized.

    Is there a way to restore photos from Dropbox to my desktop iPhoto in a large batch instead of one at a time? I tried, and a zip file was downloaded but it won't open; it says the file format is not recognized. I see how to do it one at a time with the "download" button in Dropbox, but that's so cumbersome for lots of photos.

    Have you tried these avenues?
    Contact us - Dropbox
    Dropbox Help Center
    Dropbox Forums
    Submit a help request - Dropbox
    OT

  • Attempted to mail an email with a large attachment file.  One of the addresses was bad.  When my Outlook is running, the Mac tries to send it and shows the progress.  However, when I look in my Outbox the files are not there.  It does show up in the Outbox

    I attempted to send an email with a large attachment file.  One of the addresses was bad.  When my Outlook is running, the Mac tries to send it and shows the progress.  However, when I look in my Outbox the files are not there.  It does show up in the Outbox progress section, but I cannot delete it while it is there.
    Where do these files reside?
    Is there a hidden Outbox??
    MacBook Pro, Mac OS X (10.7.1)

    If you think getting your web pages to appear OK in all the major browsers is tricky, then dealing with email clients is way worse. There are so many of them.
    If you want to bulk email yourself, there are apps for it, and their templates will work in most cases...
    http://www.iwebformusicians.com/Website-Email-Marketing/EBlast.html
    This one will create the form, database and send out the emails...
    http://www.iwebformusicians.com/Website-Email-Marketing/MailShoot.html
    The alternative is to use a marketing service if your business can justify the cost. Their templates are tested in all the common email clients...
    http://www.iwebformusicians.com/Website-Email-Marketing/Email-Marketing-Service.html
    "I may receive some form of compensation, financial or otherwise, from my recommendation or link."

  • Oracle 10g Discoverer Reports & export to xls fails for large reports

    Hi,
    We have the following configuration:
    1: RHEL 5.4
    2: Discoverer: Version 10.1.2.48.18
    3: Oracle 10g Apps Version: 10.1.2.0.2
    Issue:
    Most small reports work fine, but when large Discoverer reports are executed the page
    keeps refreshing for 15-20 hours with no output. The same happens for export to XLS.
    The same reports work fine in Oracle 9i with the same data volume.
    Observations:
    When the processes are monitored on Linux with the top command, we observe that the Discoverer process
    dis51ws dies for large reports after 1-2 minutes, and the page keeps refreshing with no output.
    For 1-2 minutes it consumes 50-80% CPU utilisation; then the process disappears and the CPU is 80% idle.
    It seems that because 10g Apps is installed on RHEL 5.4, a non-certified OS, this may be causing the issue.
    Can anyone add more input in this regard?
    We have checked the logs; below are the details.
    The logs give "Logkeys: exceptions discoiv.servlet_exceptions" for this report:
    1:
    OC4J~OC4J_BI_Forms~default_island~1:
    10/04/11 12:11:29 Oracle Application Server Containers for J2EE 10g (10.1.2.0.2) initialized
    10/04/11 12:11:59 Using oracle.reports.util.EnvironmentGlobal class
    10/04/11 14:23:37 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:23:38 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114240115278
    10/04/11 14:25:42 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:25:42 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114254315439
    10/04/11 14:28:53 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:28:54 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114285615691
    10/04/11 14:29:13 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:29:13 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114291315728
    10/04/11 14:32:35 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:32:35 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114323615949
    10/04/11 14:32:48 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:32:48 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114324815982
    10/04/11 14:34:44 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:34:44 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114344616128
    10/04/11 14:34:55 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:34:55 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114345516155
    10/04/11 14:36:25 Tutalii: /oracle10gas/app/oracle10g/discoverer/lib/discoverer5.jar archive
    2:
    Discoverer~SessionServer~12
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Active Eul: EULADMIN
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    DCSCORBAInterface::Delete called
    DCSCORBAInterface destructor called
    DCSCORBAInterface::Delete called
    DCSCORBAInterface destructor called
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    3:
    application.log
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Finding async request action forward
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Calling externalize
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_TOOL: dvtb xk-1ml versionzw1.0w kyxdvtbyxbisltyxbicho vzwwjyxjbisltyxjdvtby
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_VIEW: dv xk-1ml versionzw1.0w kyxdv bazw0w cszw25wyxpc vzw1wjyxjdvy
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_VIEW: lc
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_HS: dvhs
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_DS: dv_ds &qls_z!2=New GL Report.Clndr Id&arq=false&qls_x!14=Sheet 1.Closing Balance (Credit)&qls_x!3=New GL Report.Voucher No&qls_z!3=New GL Report.Frm Prd&fm=xml&qls_z!4=New GL Report.To Prd&qls_x!2=New GL Report.Prd Desc&qls_x!11=Sheet 1.Debit Amount&qls_x!4=New GL Report.Gl Voucher No&qls_x!9=Sheet 1.Opening Balance Debit&tss_s!0=New GL Report.Acct Id,lh,group,false&qls_x!1=New GL Report.Acct Sdesc&qls_z!1=New GL Report.Loc Desc&qls_x!10=Sheet 1.Opening Balance (Credit)&qls_x!12=Sheet 1.Credit Amount&aow=false&qls_x!7=New GL Report.Shrt Code&qls_x!13=Sheet 1.Closing Balance (Debit)&qls_x!6=New GL Report.Pmt Rct Dt&qls_x!8=New GL Report.Dtl Nrtn&sss=true&qls_x!5=New GL Report.Voucher Dt&qls_x!0=New GL Report.Acct Id&qls_z!0=New GL Report.Loc Id&ddsver=1
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.ViewerModelImpl.externalize [EXTERN_STATE]: $wksht$%24dv_ds%24%26qls_z%212%3DNew+GL+Report.Clndr+Id%26arq%3Dfalse%26qls_x%2114%3DSheet+1.Closing+Balance+%28Credit%29%26qls_x%213%3DNew+GL+Report.Voucher+No%26qls_z%213%3DNew+GL+Report.Frm+Prd%26fm%3Dxml%26qls_z%214%3DNew+GL+Report.To+Prd%26qls_x%212%3DNew+GL+Report.Prd+Desc%26qls_x%2111%3DSheet+1.Debit+Amount%26qls_x%214%3DNew+GL+Report.Gl+Voucher+No%26qls_x%219%3DSheet+1.Opening+Balance+Debit%26tss_s%210%3DNew+GL+Report.Acct+Id%2Clh%2Cgroup%2Cfalse%26qls_x%211%3DNew+GL+Report.Acct+Sdesc%26qls_z%211%3DNew+GL+Report.Loc+Desc%26qls_x%2110%3DSheet+1.Opening+Balance+%28Credit%29%26qls_x%2112%3DSheet+1.Credit+Amount%26aow%3Dfalse%26qls_x%217%3DNew+GL+Report.Shrt+Code%26qls_x%2113%3DSheet+1.Closing+Balance+%28Debit%29%26qls_x%216%3DNew+GL+Report.Pmt+Rct+Dt%26qls_x%218%3DNew+GL+Report.Dtl+Nrtn%26sss%3Dtrue%26qls_x%215%3DNew+GL+Report.Voucher+Dt%26qls_x%210%3DNew+GL+Report.Acct+Id%26qls_z%210%3DNew+GL+Report.Loc+Id%26ddsver%3D1%24dv%24xk-1ml+versionzw1.0w+kyxdv+bazw0w+cszw25wyxpc+vzw1wjyxjdvy%24wd%24false%24lc%24%24dvtb%24xk-1ml+versionzw1.0w+kyxdvtbyxbisltyxbicho+vzwwjyxjbisltyxjdvtby%24wvs%241101%24dvhs%24$cn$&vct=svd&cnk=cf_a101$ap$%26df%3D%26l%3D%26s%3D%26nc%3D%26dl%3D$expl$&sp=&node=&event=focus&state=(117)&root=63&wbt=2$prid$NEW_GL_REPORT%2F31$ctyp$viewer
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Storing state
    10/04/11 14:37:25 discoverer: [INFO] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Externalize Perf: 8ms
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Saving Attributes
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Saving ApplicationRequest
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Checking for SSO mode
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Saving errors, messages
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Returning final forward: "/ExportProgress.uix" redirect: "false"
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute ----------------------- End Request --------------------------
    10/04/11 14:37:25 discoverer: [INFO] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Total Request Time in AppCtrl: 18
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] org.apache.struts.action.RequestProcessor.processForwardConfig processForwardConfig(ForwardConfig[name=long running operation,path=/ExportProgress.uix,redirect=false,contextRelative=false])
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.view.DiscovererPageBroker.isCacheable Setting cacheable to: true
    4: XML log:
    log2010041114285615691.xml
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>184</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:29:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Timer started.]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>162</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:29:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="TRACE"></MSG_TYPE><MSG_LEVEL>5</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Return DCSModelInterface::SendReceiveData(kScheduleInterface, inTable, outTable)
    outTable = DCITable
         Length=0
    ]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcsmdli.cpp</FILE_NAME> <LINE_NUMBER>259</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:29:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[DCSModelInterface::SendReceiveData(kScheduleInterface)]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS </MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcsmdli.cpp</FILE_NAME> <LINE_NUMBER>226</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> <LOG_SIZE>0</LOG_SIZE> <EXTRA_INFO><MethodEnd duration="0.1" sizeChange="0" >
    real 0m0.1s
    user 0m0.900s
    sys 0m0.109s
    </MethodEnd></EXTRA_INFO> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:37:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Timer stopped.]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>184</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:37:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:37:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Timer started.]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>162</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:37:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    Regards,

    Hi,
    As per Note 466697.1, this is a memory error, and the fix is to increase MaxVirtualDiskMem and MaxVirtualHeapMem.
    But we have already gradually increased MaxVirtualDiskMem and MaxVirtualHeapMem to the very high values below, yet the issue remains the same.
    As per the note, we are getting the "Logkeys: exceptions discoiv.servlet_exceptions" error, but
    we are not getting the error that should follow it:
    Unexpected error in state machine: java.lang.OutOfMemoryError
    Hence I presume that this is a different issue rather than memory.
    Below are the pref.txt values:
    CacheFlushPercentage = 25          # Percent of cache flushed if the cache is full. Valid values 0 - 100%.
    MaxVirtualDiskMem    = 9294967296  # Maximum amount of disk memory allowed for the data cache. Should be greater than or equal to MaxVirtualHeapMem.
    MaxVirtualHeapMem    = 4294967296  # Maximum amount of heap memory allowed for the data cache.
    QueryBehavior        = 0           # Action to take after opening a workbook (0 = Run Query Automatically, 1 = Don't Run Query, 2 = Ask for Confirmation)
    Also, we have raised an SR, and all of the settings below have been tried as per the SR, except for applying a recent patch.
    ====================================================================
    Discoverer performance is largely determined by how well the database
    has been designed and tuned for queries.
    A well-designed database will perform significantly better than a poorly
    designed database.
    Workbook design can also affect query performance.
    1. Apply latest Discoverer patch as documented in <<Note:237607.1>>
    'ALERT: Required and Recommended Patch Levels For All Discoverer Version'.
    2. Increase the maximum JVM heap memory:
    In general, the default values for the minimum heap (-Xms) memory and
    maximum heap (-Xmx) memory are sufficient.
    However, if your organization consistently runs large Discoverer queries
    then you may benefit from increasing the maximum heap memory from the
    default values.
    This is recommended if your users are typically running large queries via
    Discoverer Viewer.
    Increasing the JVM memory can help to avoid "java.lang.OutOfMemoryError"
    in Discoverer Viewer:
    Please see <<Note 563960.1>>, Best Practice: Configuring The
    OC4J_BI_Forms JVM For
    Discoverer Viewer/Portlet 10g Performance And Stability for specific
    details.
    3. Disable Query Prediction:
    Query Prediction provides an estimate of the time required to retrieve
    the information in a query.
    The Query Prediction appears before the query and consumes time.
    Edit the <oracle_home>/discoverer/util/pref.txt on the middle-tier
    server and set:
    QPPEnable=0
    Also set:
    QPPObtainCostMethod = 0
    4. Uncheck the 'Enable fantrap detection' checkbox.
    When the box is checked, every query generated by Discoverer is
    interrogated. Discoverer will detect a fan trap and rewrite the query to
    ensure that the aggregation is done at the correct level.
    Please refer to <<Note:210553.1>>, Oracle BI Discoverer: Fan Trap
    Resolution - Correct Results Everytime, for more information on fan traps.
    To disable, in Plus go to Tools -> Options -> Advanced -> Fan Trap
    settings.
    In Viewer go to Preferences and uncheck the box.
    5. Disable Materialized Views/Summaries
    In pref.txt add parameter:
    MaterializedViewRedirectionBehavior = 0
    Value equal to 0 ensures that Materialized View (MV) Redirection is
    always performed when MVs are available.
    6. Improve query performance by optimizing the SQL.
    In pref.txt modify/add following parameters:
    SQLFlatten = 0
    SQLItemTrim = 0
    SQLJoinTrim = 0
    UseOptimizerHints = 0
    If the value of SQLFlatten is 1, then Discoverer will merge inline views into
    the query SQL wherever possible.
    In the case of SQLItemTrim, Discoverer will remove unused folder items from
    the query SQL where possible, and for SQLJoinTrim Discoverer will remove
    unnecessary joins from the query where possible.
    UseOptimizerHints will add optimizer hints to the SQL if set > 0.
    Unnecessarily making Discoverer perform these checks consumes resources and,
    rather than increasing performance, may reduce it. So unless you feel these
    checks have to be performed for your requirements, assign zero to these parameters.
    7. When Discoverer builds a query, Discoverer makes a database security
    check to confirm that the user has access to the tables referenced in
    the folders. Avoiding this check can save time.
    So in pref.txt, under the Database section, add:
    ObjectsAlwaysAccessible = 1
    8. Whenever you are creating conditions, ensure that you match the Case.
    This in turn can reduce the time Discoverer spends on changing the Case
    and matching.
    For example:
    Upper(Department) IN (Upper('VIDEO SALES'))
    9. Ensure that summaries are refreshed periodically in Discoverer
    Administrator.
    10. Increase the amount of memory available for the Discoverer data cache. Please refer to <<Note 245752.1>>, Explaining Oracle BI Discoverer
    Session Memory Management
    And Server Cache Settings.
    11. Performance may be enhanced by enabling OracleAS Web Cache.
    ==================================================================
    Thanks for your reply.
    Regards,

  • IMAP message - Email is too large

    I've been on Windows 8 and Outlook 2013 for the past 6 months without issue. Recently I started receiving the following pop-up message every 2-3 minutes: Your IMAP server wants to alert you to the following: Email is too large, limit size to less than 20MB.
    I've made multiple attempts to fix this, but without any luck. Can someone please offer insight on how they removed this pop-up? Outbox is cleared as is trash - please advise.

    It sounds like one folder contains an item that is larger than 20MB and therefore doesn't sync, due to the limit set on your mail server. You can do a search based on size across your mailbox to find the item.
    Robert Sparnaaij
    [MVP-Outlook]
    Outlook guides and more: HowTo-Outlook.com
    Outlook Quick Tips: MSOutlook.info

  • What design aids for large-scale application (using LV6i) exist?

    I have a large-scale application that includes analog and digital I/O, motion control, multiple temperature readings, Ethernet communication, RS-232, DDE and ActiveX controls for communicating with other commercial software. We have to improve the system performance and ease the pain of maintaining and upgrading. What aids are there for large-scale application design and development?

    This doesn't exactly count as development "tools" but I can send you copies of three papers that I found when I was just getting started with LV. They can show you how to think about your problem--and that is really the hardest (and most important) part.
    Contact me directly and I'll email them to you. The archive is too large to post.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • SharePoint Library for Large Amounts of Engineering Data

    We are currently using traditional project directory folders for large projects, sometimes with tens of thousands of documents.
    We are planning on migrating the data to SharePoint, and the path forward is unclear.
    Initially it was recommended to use a library, not numerous folders, to contain the data, so that searching of the data is improved.
    That sounded great. The 1st project used to pilot this for other projects is divided into 20 different modification packages.
    A library category was created for MODS, with selectable options of the 20 mod package names and “No Defined” (the default value).
    Some data items are shared between more than one MOD, so this category can have more than one assignment.
    When we looked at the directory structure in place, we found no consistency in folder names and no consistency in directory structure.
    Many folders have 5 or 6 (or more) levels of subdirectories.
    Ideally we want no more than 4 or 5 categories of metadata to define all data.
    Mapping from chaos into a comparatively small number of categories is daunting.
    When searching this forum I find that libraries should be limited to 2,000 items.
    There are tens of thousands of items in our pilot project.
    Surely someone somewhere has encountered this organizational problem.
    I could use some advice from someone who has been there before.

    John,
    The limit of 2,000 is not a hard limit; the actual number of items you can store in a list is 30,000,000. However, more items will have an impact on rendering performance and locks on the SQL table.
    Also, the limit that you have mentioned (2,000) is the list view threshold limit, and it is actually 5,000.
    One important aspect: boundaries are hard limits, which you cannot exceed, while supported limits are based on tests and can be exceeded but may cause issues.
    That being said, I would suggest you check out this link on
    SharePoint Server 2010 capacity management: Software boundaries and limits
    http://technet.microsoft.com/en-us/library/cc262787(v=office.14).aspx
    and explore other ways of optimizing your list
    Here are some references that will help you optimize:
    http://office.microsoft.com/en-us/sharepoint-foundation-help/manage-lists-and-libraries-with-many-items-HA010377496.aspx
    http://technet.microsoft.com/en-us/library/cc262813(v=office.14).aspx
    http://office.microsoft.com/en-us/sharepoint-server-help/sharepoint-lists-v-techniques-for-managing-large-lists-RZ101874361.aspx
    Hope this helps!
    Ram - SharePoint Architect
    Blog - http://www.SharePointDeveloper.in
    Please vote or mark your question answered, if my reply helps you

  • HS connection to MySQL fails for large table

    Hello,
    I have set up an HS connection to a MySQL database using an ODBC 3.51 DSN. My Oracle box runs version 10.2.0.1 on Windows 2003 R2. The MySQL version is 4.1.22, running on a different machine with the same OS.
    I completed the connection through a database link, which works fine in SQL*Plus when selecting small MySQL tables. However, I keep getting an out-of-memory error when selecting certain large tables from the MySQL database. Previously, I had tested the DSN and run the same SELECT in Access, and it didn't give any error. This is the error thrown by SQL*Plus:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-00942: table or view does not exist
    [Generic Connectivity Using ODBC][MySQL][ODBC 3.51
        Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL server during query
    (SQL State: S1T00; SQL Code: 2013)
    ORA-02063: preceding 2 lines from MYSQL_RMG
    I traced the HS connection and here is the result from the .trc file:
    Oracle Corporation --- THURSDAY JUN 12 2008 11:19:51.809
    Heterogeneous Agent Release
    10.2.0.1.0
    (0) [Generic Connectivity Using ODBC] version: 4.6.1.0.0070
    (0) connect string is: defTdpName=MYSQL_RMG;SYNTAX=(ORACLE8_HOA, BASED_ON=ORACLE8,
    (0) IDENTIFIER_QUOTE_CHAR="",
    (0) CASE_SENSITIVE=CASE_SENSITIVE_QUOTE);BINDING=<navobj><binding><datasources><da-
    (0) tasource name='MYSQL_RMG' type='ODBC'
    (0) connect='MYSQL_RMG'><driverProperties/></datasource></datasources><remoteMachi-
    (0) nes/><environment><optimizer noFlattener='true'/><misc year2000Policy='-1'
    (0) consumerApi='1' sessionBehavior='4'/><queryProcessor parserDepth='2000'
    (0) tokenSize='1000' noInsertParameterization='true'
    noThreadedReadAhead='true'
    (0) noCommandReuse='true'/></environment></binding></navobj>
    (0) ORACLE GENERIC GATEWAY Log File Started at 2008-06-12T11:19:51
    (0) hoadtab(26); Entered.
    (0) Table 1 - PROGRESSNOTES
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]MySQL client ran out of
    (0) memory (SQL State: S1T00; SQL Code: 2008)
    (0) (Last message occurred 2 times)
    (0)
    (0) hoapars(15); Entered.
    (0) Sql Text is:
    (0) SELECT * FROM "PROGRESSNOTES"
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL
    (0) server during query (SQL State: S1T00; SQL Code: 2013)
    (0) (Last message occurred 2 times)
    (0)
    (0) [A00D] Failed to open table MYSQL_RMG:PROGRESSNOTES
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Commit - rc = -1. Please refer to the
    (0) log file for details.
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Rollback - rc = -1. Please refer to
    (0) the log file for details.
    (0) Closing log file at THU JUN 12 11:20:38 2008.
    I have read the MySQL documentation, and apparently there's a "Don't Cache Result (forward only cursors)" parameter in the ODBC DSN that needs to be checked in order to cache the results on the MySQL server side instead of the driver side, but checking that parameter doesn't work for the HS connection. Instead, the SQL*Plus session throws the following message when selecting the same large table:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-02068: following severe error from MYSQL_RMG
    ORA-28511: lost RPC connection to heterogeneous remote agent using
    SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.120)(PORT=1521))(CONNECT_DATA=(SID=MYSQL_RMG)))
    Curiously enough, after checking the parameter, the Access connection through the ODBC DSN seems to improve!
    Is there an additional parameter that needs to be set up in inithsodbc.ora, perhaps? These are the current HS parameters:
    # HS init parameters
    HS_FDS_CONNECT_INFO = MYSQL_RMG
    HS_FDS_TRACE_LEVEL = ON
    My SID_LIST_LISTENER entry is:
    (SID_DESC =
    (PROGRAM = HSODBC)
    (SID_NAME = MYSQL_RMG)
    (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
    Finally, here is my TNSNAMES.ORA entry for the HS connection:
    MYSQL_RMG =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.120)(PORT = 1521))
    (CONNECT_DATA =
    (SID = MYSQL_RMG)
    (HS = OK)
    Your advice will be greatly appreciated,
    Thanks,
    Luis
    Message was edited by:
    lmconsite

    First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
    The root cause of the problem you describe could be related to a timeout in the ODBC driver (especially considering your comment that it happens only for larger tables):
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    indicates that the driver or the DB abends the connection due to a timeout.
    Check out the wait_timeout MySQL variable on the server and increase it; a sketch of one way to do that follows.
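    If it helps, here is a minimal sketch of checking and raising that variable over JDBC (the host, credentials, and the 28800-second value are placeholders; SET GLOBAL requires the SUPER privilege, and existing sessions keep their old value):
    <pre>
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class WaitTimeoutCheck {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://mysql-host:3306/mysql", "root", "secret");
                 Statement st = con.createStatement()) {
                // Show the current server-side idle timeout (in seconds).
                try (ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'wait_timeout'")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " = " + rs.getString(2));
                    }
                }
                // Raise it globally; new sessions pick up the new value.
                st.execute("SET GLOBAL wait_timeout = 28800");
            }
        }
    }
    </pre>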

  • Optimize for ad hoc workloads.

    Hello all,
       I was referring to the BOL documentation on the 'optimize for ad hoc workloads' option: when this is enabled, the first run of an ad hoc query gets a 'compiled plan stub' (the plan is not cached), and on the second run it gets a 'compiled plan' (the plan is cached).
    I am wondering if there is any option to tell SQL Server to make it a 'compiled plan' on the fifth run (instead of on the second run).
    Thanks.
    Hope it Helps!!

    If you feel that this is important, submit a request on
    http://connect.microsoft.com/SqlServer/Feedback.
    You should make an effort to clearly spell out why you think this would be a benefit.
    I guess that one reason they did it the way they did is that this is simpler. For instance, they don't need to maintain a counter for the shell query.
    Erland Sommarskog, SQL Server MVP, [email protected]
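    For anyone who wants to watch the documented stub-to-plan transition, here is a minimal sketch over JDBC (the connection string and the probe query are placeholders; sp_configure requires ALTER SETTINGS permission, and 'optimize for ad hoc workloads' is an advanced option):
    <pre>
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AdhocPlanDemo {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://sqlhost;databaseName=master;user=sa;password=secret");
                 Statement st = con.createStatement()) {
                st.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;");
                st.execute("EXEC sp_configure 'optimize for ad hoc workloads', 1; RECONFIGURE;");
                for (int run = 1; run <= 2; run++) {
                    // The ad hoc query under observation.
                    st.executeQuery("SELECT name FROM sys.objects WHERE object_id = 42").close();
                    // Expect 'Compiled Plan Stub' after run 1, 'Compiled Plan' after run 2.
                    try (ResultSet rs = st.executeQuery(
                            "SELECT cp.cacheobjtype, cp.usecounts" +
                            " FROM sys.dm_exec_cached_plans cp" +
                            " CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) t" +
                            " WHERE t.text LIKE '%object_id = 42%'" +
                            " AND t.text NOT LIKE '%dm_exec_cached_plans%'")) {
                        while (rs.next()) {
                            System.out.println("run " + run + ": " + rs.getString(1)
                                    + " (usecounts=" + rs.getInt(2) + ")");
                        }
                    }
                }
            }
        }
    }
    </pre>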
