IMAP alerts not being processed?

I recently came close to the storage quota on my IMAP account (the provider runs Cyrus server software), and the server sent notifications via the IMAP ALERT facility. These were not processed by the iPhone or by Mavericks Mail. As a result the account went over quota, which caused emails to bounce, and the mail clients apparently cannot expunge messages while in this state (nor display the trash folder). The vendor's webmail can, but that's worthless to us day to day.
We only noticed the issue on a day when no new emails arrived.
Anyway, that's what the email vendor is telling me. Are there any vendors still providing an IMAP service that has fewer issues with Apple mail apps? Aside from this issue, the current one seems to work fine. Are there any that will also send warning emails in addition to the IMAP alerts?
Or perhaps there is a mail configuration item that isn't set properly?
Thanks

I guess I should be careful what I ask for. It turns out numerous clients now block or simply do not process IMAP alerts, because they are extremely annoying and worse than not having them at all. That may violate the standard, but it seems a much more user-friendly approach when coupled with email status messages:
"The warnings are displayed obtrusively per the standard. This is most harmful when doing a search of the entire mail account: the warning is issued every time the mail client enters a new folder, so one has to babysit the computer as it goes through the dozens of folders, being there to OK moving into each one. The standard explicitly specifies that the application must wait for the user to confirm they have understood the warning before the client can do anything."
Those using TB have been complaining for years:
"This is insane. The issue has been discussed for 5+ years now and STILL the popups appear and make it impossible to use TB after an arbitrary level of use of the IMAP server. I read the same IMAP mail from Apple Mail and I don't get spammed by these idiotic warnings."
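Since the Apple clients never surface these ALERT responses, one workaround is to poll the quota yourself over IMAP using the GETQUOTAROOT command from RFC 2087, which Cyrus supports. Below is a minimal sketch using Python's standard imaplib; the server name and warning threshold are illustrative, and the parsing helper assumes the common `(STORAGE used limit)` response form:

```python
import re

def parse_quota_response(line):
    """Parse an IMAP QUOTA response such as b'"" (STORAGE 512 1024)'.
    Returns (used_kib, limit_kib), or None if no STORAGE resource is present."""
    if isinstance(line, bytes):
        line = line.decode("ascii", "replace")
    m = re.search(r"STORAGE (\d+) (\d+)", line)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))

def quota_warning(used, limit, threshold=0.85):
    """Return True when usage crosses the warning threshold (default 85%)."""
    return limit > 0 and used / limit >= threshold

# Usage against a live server (untested sketch; host/credentials are placeholders):
#   import imaplib
#   conn = imaplib.IMAP4_SSL("imap.example.com")
#   conn.login(user, password)
#   typ, data = conn.getquotaroot("INBOX")   # requires the QUOTA capability (RFC 2087)
#   used, limit = parse_quota_response(data[1][0])
#   if quota_warning(used, limit):
#       print("near quota: %d of %d KiB used" % (used, limit))
```

Run from cron or launchd, this would give the warning email the server itself never sends.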

Similar Messages

  • Popularity trend/usage report is not working in SP2013. Data was not being processed to the EVENT STORE folder under the Analytics_GUID folder.

    Hi,
    I am working on a SharePoint migration project. We have migrated a SharePoint project from MOSS 2007 to SP2013. The issue is that when we click Popularity Trends > Usage Report, it throws an error.
    Issue: the data was not being processed to the EVENT STORE folder under the Analytics_GUID folder. Data was also not present in the analytics store database.
    In the log viewer I found the errors below.
    HIGH -
    SearchServiceApplicationProxy::GetAnalyticsEventTypeDefinitions--Error occured: System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly
    secured fault was received from the other party.
    UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail,
    System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
    HIGH - Getting Error Message for Exception System.Web.HttpUnhandledException
    (0x80004005): Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly secured fault was received from the other party.
    CRITICAL - A failure was reported when trying to invoke a service application:
    EndpointFailure Process Name: w3wp Process ID: 13960 AppDomain Name: /LM/W3SVC/767692721/ROOT-1-130480636828071139 AppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-
    UNEXPECTED - Could not retrieve analytics event definitions for
    https://XXX System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
    UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail,
    System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
    I have verified a few things on the server, mentioned below:
    Two timer jobs (Microsoft SharePoint Foundation Usage Data Processing, Microsoft SharePoint Foundation Usage Data Import) are running fine.
    APPFabric Caching service has been started.
    Analytics_GUID folder has been
    shared with
    WSS_ADMIN_WPG and WSS_WPG and Read/Write access was granted
    .usage files are getting created, and the temporary (.tmp) file has been created as well.
    The usage logging database, to which usage data is transported, does contain the data.
    Please provide pointers on what needs to be done.

    Hi Nabhendu,
    According to your description, my understanding is that you could not use popularity trend after you migrated SharePoint 2007 to SharePoint 2013.
    In SharePoint 2013, the analytics functionality is a part of the search component. There is an article for troubleshooting SharePoint 2013 Web Analytics, please take a look at:
    Troubleshooting SharePoint 2013 Web Analytics
    http://blog.fpweb.net/troubleshooting-sharepoint-2013-web-analytics/#.U8NyA_kabp4
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

  • Release notes for 2.16 state that there was a fix for alerts not being modal. We are using 3.0.6 and are experiencing the same issue; was there a regression of the modal fix? What version needs to be used to make sure that alert messages are modal?


    We are trying to determine why alert boxes are not modal.
    The fix states it is for Firefox 2.0 - 3.7a1pre.
    We are using 3.0.6, not the current version, 3.6.13.
    Add-on release notes 2.16.1
    https://addons.mozilla.org/en-US/firefox/addon/foxyproxy-standard/versions/
    Repeating the 2.16 release notes here, since 2.16 was available for only a couple of hours:
    Fixed bug whereby some alert boxes weren't properly parented/owned. This led to some alerts not being properly modal
    with respect to the window/dialog that issued the alert.

  • Change to rules provisioning not being processed

    I updated an import rules extension, but the change is not being processed when the MA runs.
    Example: mventry["attribute"].Value = "A";
    Changed to: mventry["attribute"].Value = "B";
    I run a full sync and the value remains "A". New accounts also continue to get "A". Disabling/enabling metaverse rules extensions under Options makes no difference.
    Any ideas?

    The rules extension DLL has to be replaced in the extension folder. Typically the FIM Sync Service picks up the new DLL on the fly, but if you have replaced it with the new DLL and see no effect, running [PS] Restart-Service FIMSynchronizationService makes sure the new DLL is loaded.
    Note that a metaverse rules extension has nothing to do with advanced import rules.

  • Issue: Runtime Alerts not being received in Alert Framework

    Hello Experts,
    I have configured the required settings in ALRTCATDEF and SCOT. I have also set up and activated the alert rules in Alert Configuration, with the "suppress multiple alerts" parameter checked.
    All the required services for receiving alerts are active in SICF. I am able to receive the standard test alerts in my alert inbox and at my mail ID by running the standard alert-triggering programs such as RSALERTTEST. The alert rules have been set up correctly too. However, I am not able to receive the actual Integration Engine or Adapter Engine errors that occur at runtime while an actual message is being processed.
    I am using SAP PI 7.3.
    Kindly advise.
    Thanks in advance,
    Elizabeth.

    Hi Rajendra,
    I am using single-stack PI 7.3. Can you guide me on where to check whether the adapter cache alert category is active or not? Also, in SXI_CACHE I cannot see my alert category listed. Is this a problem?
    I checked all the other settings you indicated, and everything else looks correct.
    Please advise.
    regards,
    Elizabeth.

  • BPM alert not being generated

    Hi guys,
    I'm having a little trouble with something that has been done many times before but is giving me a hard time.
    I've created a BPM process which, on an error condition, should trigger a simple alert.
    I've tested the process and everything goes all right except the alert: it is not being sent to my inbox, presumably because it is not being generated.
    I went to SXMB_MONI_BPE and everything is OK; I can see the message for the alert being fired. After that I went to SLG1 to see if my alert is being generated, and the answer is no. It's not being generated.
    I've gone through Michal's blogs and checked everything. The thing is, if I execute RSALERTTEST with my alert category, the alert is generated and sent to my inbox. Do you have any ideas?
    Thanks a lot for your attention

    Hi,
    Yes, the job scheduling is needed only for SP14 and below.
    Can you recheck whether the alert is actually getting triggered in the control step of the BPM? Make sure that you have given the correct alert category in your control step, and also that the recipient in your alert matches the Alert Inbox entry you are looking at.
    Finally, refresh the Alert Inbox.
    Regards,
    Bhavesh

  • Alerts not being sent

    Alerts are currently only being sent if the person who created the alert is a recipient. This is a huge problem. Alerts are a big selling feature of Business One, yet we just learned that alerts are not sent unless the person who created the alert is also a recipient. We have clients who might send 75 to 100 alerts per day; this is especially useful for sending email alerts to sales personnel or technicians in the field who do not have access to SAP Business One. Saying that the person who created the alerts (normally a manager) must be a recipient is not practical; most clients end up with a user who must stay on the manager login just so the alerts go out.
    If this is the only solution, SAP should provide a free user that can only create and set up alerts; then this would no longer be a problem.

    Yes, I am sure, and the following is the information received from SAP escalation, pasted into this post:
    Case 1
    Manager is logged in ONLY
    Manager configures an alert and Sends the alert to Sophie( tick Email box )
    Results => alert will not be received via email
    Case 2
    Manager is logged in ONLY
    Manager configures an alert and Sends the alert to Sophie and Manager
    Results => alert will be received via email  by Sophie and manager as manager is one of the recipients.
    Case 3
    Manager is logged in ONLY
    Sophie Logs in
    Results => alert from Case 1+ 2 are received via email, as Sophie is a recipient
    => It seems that one of the recipients needs to be logged in to B1 in order to trigger the email sending process.

  • SQL alerts not being sent

    I have set up an alert in SQL Server 2008 R2 to send an email if tempdb encounters error 1105 (primary filegroup full) or 9002 (log full). I specified the database as "tempdb". Even though the condition is occurring, the alert is not triggered unless I change the database to "all databases". When it is set to "tempdb", the email is not sent, and the History tab shows 0 occurrences even though I can see the errors in the SQL Server logs.
    How do I get these alerts to work with the database set to "tempdb"?
    TIA
    Chuck

    I see the error message in the error log showing "tempdb" as the database whose primary filegroup filled:
    Error: 1105, Severity: 17, State: 2.
    Could not allocate space for object 'dbo.#mytable' in database 'tempdb' because the 'PRIMARY' filegroup is full.
    I think I've also figured out what's happening. Whether or not the alert fires is based on the session's database context,
    not the database whose primary filegroup filled. :( On a test server I ran the following...
    -- prior to running this, I set tempdb's primary filegroup
    -- to 10 MB and disabled autogrowth
    use master
    set nocount on
    create table #mytable (x char(8000))
    while 1 = 1
    insert into #mytable values ('x')
    If I set the alert's database to "tempdb" and run the code, the alert does not fire. If I set the alert's database to "master", it does fire.
    This is not the behavior described in BOL :(.
    Chuck
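Chuck's finding can be stated as a small rule: a database-scoped SQL Agent alert matches against the session's database context, not the database named in the error. The toy model below (plain Python, not SQL Server code; the function and parameter names are illustrative) simply encodes that observed behavior:

```python
def alert_fires(alert_db, session_db):
    """Model of the observed behavior: a database-scoped alert fires only
    when the session raising the error runs in that database. An alert_db
    of None stands for "all databases". Illustrative only."""
    return alert_db is None or alert_db == session_db

# The repro above: tempdb's filegroup fills while the session runs in master,
# so only an alert scoped to "master" (or to all databases) fires.
```

This is why switching the alert to "all databases" works: the match against the session context always succeeds.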

  • Messages not being processed

    Hi,
    I am experiencing an issue with 10g SOA: some messages sent to receive activities of my process are not being processed.
    The process is deployed, I can initiate an instance of it, and I can move the flow along by calling some receive activities, but sometimes it seems to get stuck on one of these receives. It happens randomly on various processes and various receive activities: one day one receive operation doesn't proceed at all, another day it is a different receive operation, all without any change to the process or any redeployment.
    I can see that the message has been received by the server, because I can query it from the DLV_MESSAGE table in the BPEL data source, and I see its STATE = 0, which means unprocessed. I am also sure the message has valid content and that the correlation bits on the message are correct; there is really no reason why such a message should not be processed. It always worked, until the moment it decided not to, for an unknown reason.
    My questions:
    1. What causes messages to not be processed?
    2. What are the known options for dealing with this situation?
    Currently I can spot unprocessed messages, which is fine, so I am aware of the problem, but I am not able to deal with it. Even when I tried to resubmit a message via the BPEL Console, it was not processed either.
    Thank you for any suggestions

    Some blog information to support this case
    http://soacrux.blogspot.com/2010/08/automatic-recovery-program-for-pending.html
    Content repasted below:
    Automatic recovery program for pending BPEL call-back messages
    The BPEL engine maintains all async call-back messages in a database table called dlv_message. You can see all such messages in the BPEL Console's call-back manual recovery area. The query used by the BPEL Console is a join on the dlv_message and work_item tables; it simply picks up all call-back messages which are undelivered and have not been modified within a certain threshold time.
    Call-back messages are processed in the following steps:
    1. The BPEL engine assigns the call-back message to the delivery service.
    2. The delivery service saves the message into the dlv_message table with state 'UNDELIVERED' (0).
    3. The delivery service schedules a dispatcher thread to process the message asynchronously.
    4. The dispatcher thread enqueues the message into a JMS queue.
    5. The message is picked up by an MDB.
    6. The MDB delivers the message to the BPEL process waiting for the call-back and changes the state to 'HANDLED' (2).
    Given the above steps, there is always the possibility that a message is present in the dlv_message table but the MDB failed to deliver it to the BPEL process, which leaves the message stuck in state = 0.
    The following program can be tailored to suit one's own requirements and recover such state-0 messages.
    Note: this program contains logic to recover both invocation and call-back messages; comment out whichever part you do not need.
    package bpelrecovery;

    import com.oracle.bpel.client.*;
    import com.oracle.bpel.client.util.SQLDefs;
    import com.oracle.bpel.client.util.WhereCondition;
    import java.util.ArrayList;
    import java.util.Hashtable;
    import java.util.List;
    import javax.naming.Context;

    public class bpelrecovery {

        public bpelrecovery() {
        }

        public static void main(String[] args) {
            bpelrecovery recover = new bpelrecovery();
            String rtc = "";
            try {
                rtc = recover.doRecover();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private void recoverCallbackMessages(List messages) throws Exception {
            String[] messageGuids = new String[messages.size()];
            for (int i = 0; i < messages.size(); i++) {
                ICallbackMetaData callbackMetadata = (ICallbackMetaData) messages.get(i);
                String messageGuid = callbackMetadata.getMessageGUID();
                messageGuids[i] = messageGuid;
                System.err.println("recovering callback message = " + messageGuid
                        + " process [" + callbackMetadata.getProcessId()
                        + "(" + callbackMetadata.getRevisionTag() + ")] domain ["
                        + callbackMetadata.getDomainId() + "]");
            }
            Locator locator = getLocator();
            IBPELDomainHandle domainHandle = locator.lookupDomain();
            domainHandle.recoverCallbackMessages(messageGuids);
        }

        public String doRecover() throws Exception {
            // Connect to domain "default"
            try {
                System.out.println("doRecover() instantiating locator...");
                Locator locator = getLocator();
                System.out.println("doRecover() instantiated locator for domain "
                        + locator.lookupDomain().getDomainId());
                // Look for invoke messages in need of recovery
                StringBuffer buf1 = new StringBuffer();
                WhereCondition where = new WhereCondition(buf1.append(SQLDefs.IM_state)
                        .append(" = ").append(IDeliveryConstants.STATE_UNRESOLVED).toString());
                System.out.println("doRecover() instantiating IInvokeMetaData... with where = " + where.getClause());
                IInvokeMetaData[] imd1 = locator.listInvokeMessages(where);
                System.out.println("doRecover() instantiated IInvokeMetaData");
                // Iterate through the list
                List l1 = new ArrayList();
                for (Object o : imd1) {
                    l1.add(o);
                }
                // See how many invokes are in the recovery zone
                System.out.println("doRecover() instantiated IInvokeMetaData size = " + l1.size());
                // Look for callback messages in need of recovery
                StringBuffer buf = new StringBuffer();
                where = new WhereCondition(buf.append(SQLDefs.DM_state)
                        .append(" = ").append(IDeliveryConstants.TYPE_callback_soap).toString());
                System.out.println("doRecover() instantiating ICallbackMetaData... with where = " + where.getClause());
                ICallbackMetaData[] imd = locator.listCallbackMessages(where);
                System.out.println("doRecover() instantiated ICallbackMetaData");
                // Recover
                List l = new ArrayList();
                for (Object o : imd) {
                    l.add(o);
                }
                recoverCallbackMessages(l);
            } catch (Exception e) {
                e.printStackTrace();
            }
            return "done";
        }

        public Locator getLocator() {
            System.out.println("getLocator() start");
            Locator locator = null;
            // Set JNDI properties for BPEL lookup
            String jndiProviderUrl = "opmn:ormi://localhost:6003:oc4j_soa/orabpel";
            String jndiFactory = "com.evermind.server.rmi.RMIInitialContextFactory";
            String jndiUsername = "oc4jadmin";
            String jndiPassword = "welcome1";
            Hashtable jndi = new Hashtable();
            jndi.put(Context.PROVIDER_URL, jndiProviderUrl);
            jndi.put(Context.INITIAL_CONTEXT_FACTORY, jndiFactory);
            jndi.put(Context.SECURITY_PRINCIPAL, jndiUsername);
            jndi.put(Context.SECURITY_CREDENTIALS, jndiPassword);
            jndi.put("dedicated.connection", "true");
            try {
                System.out.println("getLocator() instantiating locator...");
                locator = new Locator("default", "welcome1", jndi);
                System.out.println("getLocator() instantiated locator");
            } catch (Exception e) {
                System.out.println("getLocator() error");
                e.printStackTrace();
            }
            return locator;
        }
    }
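Before running the recovery program, it can help to confirm which call-backs are actually stuck. The sketch below (Python, illustrative only; it assumes rows already fetched from dlv_message, with lower-cased column names) applies the same filter the console's manual-recovery view uses: state still 0 past a modification threshold.

```python
from datetime import datetime, timedelta

UNDELIVERED = 0  # dlv_message.state value for undelivered call-backs

def stuck_callbacks(rows, threshold_minutes=10, now=None):
    """Given rows fetched from dlv_message (dicts with 'message_guid',
    'state', 'modify_date'; names illustrative), return the GUIDs still
    UNDELIVERED after the threshold, i.e. the candidates for recovery."""
    now = now or datetime.now()
    cutoff = now - timedelta(minutes=threshold_minutes)
    return [r["message_guid"] for r in rows
            if r["state"] == UNDELIVERED and r["modify_date"] <= cutoff]
```

The GUIDs returned are what you would feed to recoverCallbackMessages in the program above.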

  • Delivery note Being processed

    Dear All,
    Could you please advise why the status of some of my delivery notes is showing "Being processed"?
    The status "Completed" is showing in the invoice, accounting document and sales order.
    Please help me.
    Tks and B.Rgds
    Bishnu

    Hi Bishnu,
    Please check the status of each item in the delivery note item data and note which item has the status "Being processed". If that item is not billed, the answer is straightforward: you need to bill that item.
    If all the items are billed and one of the items still has the status "Being processed", the reason could be that one of the dimensions (weight, volume, etc.) of the material was changed in master data after delivery and before invoicing. If that is your case, please refer to the SAP notes or write back to me.
    Thanks
    Murali

  • Applications not being processed in CustomSettings.ini or missing from ZTIApplications

    Using MDT 2012 Update 1, I started having a weird issue where MandatoryApplications properties are being skipped during the deployment process.
    CustomSettings.ini is configured to process [Default] first, followed by model-specific subsections that fix up minor things specific to each model. There are MandatoryApplications settings under each subsection, and it ends up looking similar to this...
    [Settings]
    Priority=Default, Model
    [Default]
    MandatoryApplications001={f3389f14-6071-414b-b989-08e35cf82f27}
    [HP EliteBook 840 G1]
    MandatoryApplications002={d13e2dc4-3b27-4173-b143-30b5f57d7e53}
    Applications defined under [Default] install just fine, but nothing under the model subsections installs. There are no errors, either.
    BDD.log shows...
    ------ Processing the [DEFAULT] section ------ ZTIGather 4/17/2014 9:48:41 AM 0 (0x0000)
    Property MANDATORYAPPLICATIONS001 is now = {f3389f14-6071-414b-b989-08e35cf82f27} ZTIGather 4/17/2014 9:48:42 AM 0 (0x0000)
    Added value from [DEFAULT]: MANDATORYAPPLICATIONS = {f3389f14-6071-414b-b989-08e35cf82f27} ZTIGather 4/17/2014 9:48:42 AM 0 (0x0000)
    ------ Processing the [HP EliteBook 840 G1] section ------ ZTIGather 4/17/2014 9:48:42 AM 0 (0x0000)
    ------ Done processing \\Server\ProductionShare$\Control\CustomSettings.ini ------ ZTIGather 4/17/2014 9:48:43 AM 0 (0x0000)
    ZTIApplications.log shows...
    Microsoft Deployment Toolkit version: 6.1.2373.0 ZTIApplications 4/17/2014 10:06:35 AM 0 (0x0000)
    The task sequencer log is located at C:\Users\ADMINI~1\AppData\Local\Temp\SMSTSLog\SMSTS.LOG. For task sequence failures, please consult this log. ZTIApplications 4/17/2014 10:06:35 AM 0 (0x0000)
    Write all logging text to \\Server\productionshare$\logs ZTIApplications 4/17/2014 10:06:35 AM 0 (0x0000)
    Validating connection to \\Server\productionshare$\logs ZTIApplications 4/17/2014 10:06:35 AM 0 (0x0000)
    Already connected to server Server as that is where this script is running from. ZTIApplications 4/17/2014 10:06:35 AM 0 (0x0000)
    Language/Locale Identified (in order of precedence): 1033,0409,0x0409,9,0009,0x0009 ZTIApplications 4/17/2014 10:06:35 AM 0 (0x0000)
    Processing Application Type: MandatoryApplications ZTIApplications 4/17/2014 10:06:35 AM 0 (0x0000)
    Ready to install applications: ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    ################ ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Entry: {f3389f14-6071-414b-b989-08e35cf82f27} ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Name: SysConfigs ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    ################ ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Using a local or mapped drive, no connection is required. ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Change directory: Z:\Applications\SysConfigs ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Run Command: \\Server\ProductionShare$\Tools\X64\bddrun.exe sysconfigs.bat ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    ZTI installing application ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Event 41031 sent: ZTI installing application ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    About to run command: \\Server\ProductionShare$\Tools\X64\bddrun.exe sysconfigs.bat ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Command has been started (process ID 3624) ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Return code from command = 0 ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Application SysConfigs installed successfully ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Event 41033 sent: Application SysConfigs installed successfully ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Property InstalledApplications001 is now = {f3389f14-6071-414b-b989-08e35cf82f27} ZTIApplications 4/17/2014 10:06:36 AM 0 (0x0000)
    Processing Application Type: Applications ZTIApplications 4/17/2014 10:06:37 AM 0 (0x0000)
    Application List is empty, exiting ZTIApplications.wsf ZTIApplications 4/17/2014 10:06:37 AM 0 (0x0000)
    ZTIApplications processing completed successfully. ZTIApplications 4/17/2014 10:06:37 AM 0 (0x0000)
    Event 41001 sent: ZTIApplications processing completed successfully. ZTIApplications 4/17/2014 10:06:37 AM 0 (0x0000)
    So it looks like the application list is not being properly populated, or something is not processing CustomSettings.ini correctly. I've double-checked that the applications are hidden and enabled on the share, and I have deleted and recreated the applications on the share.
    Anyone have any ideas?

    For each section in your CS.ini, start your applications list at 001, like this:
    [Settings]
    Priority=Default, Model
    [Default]
    MandatoryApplications001={f3389f14-6071-414b-b989-08e35cf82f27}
    [HP EliteBook 840 G1]
    MandatoryApplications001={d13e2dc4-3b27-4173-b143-30b5f57d7e53}
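The reason this fixes it: ZTIGather enumerates a list property within each section as prop001, prop002, ... and stops at the first missing number, so a section whose first entry is prop002 contributes nothing. A simplified Python model of that enumeration (not the real script; the property handling is reduced to the essentials):

```python
def gather_list_property(sections, priority, prop):
    """Mimic how the gather step accumulates a list property: for each
    section named in Priority order, read prop001, prop002, ... and stop
    at the first missing number. A section that starts numbering at 002
    is therefore skipped entirely. Simplified, illustrative model."""
    values = []
    for name in priority:
        section = sections.get(name, {})
        n = 1
        while True:
            key = "%s%03d" % (prop, n)   # e.g. MandatoryApplications001
            if key not in section:
                break                    # gap in numbering ends this section
            values.append(section[key])
            n += 1
    return values
```

With the corrected CS.ini, both the [Default] GUID and the model GUID end up in the merged list.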

  • Email alerts not being sent

    I have configured some alerts in Grid Control, but even though I can see the metric thresholds being broken, no emails are being sent.
    Has anyone got any ideas why this might be happening?
    thanks

    Hi,
    Try the following procedure. If Grid Control alerts are not being sent out to email etc., run the following scripts as the SYSMAN user on the primary database:
    exec emd_maintenance.remove_em_dbms_jobs
    exec emd_maintenance.submit_em_dbms_jobs
    Regards,
    Alan

  • Online Archive Warning Alerts not being sent to Users

    We have Exchange 2010 SP3 Rollup 7 and we use the Online Archive mailbox feature. Each user has a 2 GB archive quota and a 1.5 GB archive warning quota. Users don't receive any warning alert when they exceed the archive warning quota. I suspect this is default behavior in Exchange 2010.
    Is there any change in this behavior in Exchange 2013?
    Regards, Sourabh Kumar Jha

    Does the user receive a warning message via OWA?
    Ans. - No.
    How do you set the archive quota and warning quota?
    Ans. - The same way you have mentioned; we have an ArchiveQuota of 2 GB and an ArchiveWarningQuota of 1.8 GB.
    Generally the warning quota alert works well. I found a blog on mailbox quota troubleshooting for your reference; hope it is helpful.
    Ans. - Have you seen an archive warning quota email? I cannot find any sample email or reference for one. Could you try to reproduce it if you have a testing environment?
    Mailbox Quota in Outlook 2010 - general information and troubleshooting tips
    Ans. - The information in Outlook shows correctly. I'm concerned because users do not receive an alert when they exceed their archive warning quota.
    Regards, Sourabh Kumar Jha

  • IFS Queue Items Not Being Processed After Restarting BPEL PM

    This issue is mentioned in the CS custom workflow development guide, but the solution mentioned there (restarting BPEL PM) does not work.
    When I have to restart the process manager (and I have restarted it several times due to OutOfMemory errors), the queue just continues to build in IFSQUEUE, and none of the custom workflow requests are processed.
    The only workaround I have found is to redeploy each custom workflow, which is obviously unacceptable in a production environment.
    Anyone have any ideas?

    Can you please provide more details of the OutOfMemory errors that you see in BPEL? This is more of an AQ adapter problem than anything else. The log files to check are under $ORACLE_HOME/integration/orabpel/domains/default/logs (where ORACLE_HOME is that of the BPEL PM installation). You can also increase the log level of all the components from the BPEL Console: click Manage BPEL Domain --> Logging and set the logging level to debug.
    Hope this helps,
    Ravikiran.

  • Long hang time, videos and photos not being processed in playback

    I just purchased a new iMac desktop, complete with the 3.06 GHz processor and iLife '09. Expecting to be editing at fast speeds, I am really disappointed: when I import photos and various video clips, then play back what I've compiled (either in the small playback window or full screen), the software does not show the clips being played. Oddly, the software might display a clip from a few sections back or forward that has nothing to do with (for example) the photo that should actually be playing at that point.
    Furthermore, when I export the project to a QuickTime file, all the clips are shown correctly, but only during export. Lastly, there are significant wait times (spinning color wheel) for no apparent reason, even when the project is very short, with few complex edits or fancy effects.
    The iMac has 2 GB of 800 MHz RAM, plenty of room on a 500 GB HD, and all software is up to date.
    Any insight would be much appreciated. I basically purchased this new desktop for the sole reason of editing and compiling projects faster than an iMac from several years ago, and this is really frustrating, to say the least.
    Thank you.
    Thank you.

    It may help to run Software Update. If your Mac is brand new, it may not have the latest updates.
