SunOneDS-5.2 bind authmethod=ssl doesn't appear to work

Having set up DS-5.2 with SSL, user authentication allowed, a user entry containing a (binary) certificate, certmap.conf configured for the user cert's CA cert, and an ACI entry with the bind rule set to
allow (all) (userdn = ....) and (authmethod="ssl")
client connections ARE authenticated by client certificate when one is presented.
However, if the client binds with the bindDN matching the userdn in the ACI, but using neither a certificate nor an SSL connection, the associated search request is still allowed. In other words, it appears that the "authmethod=ssl" bind rule HAS NO EFFECT!? Other ACIs on the entry deny anonymous access and allow other groups without SSL.
The SunOneDS-5.2 Administration Guide states for authmethod="ssl":
SSL - The client must bind to the directory over a Secure Sockets
Layer (SSL) or Transport Layer Security (TLS) connection
but this does not appear to force an SSL bind/connection.
It is also interesting to note that the client certificate is offered at the LDAP connection step, not at the LDAP bind step, and that a subsequent bind appears to be necessary to properly affect a subsequent search. That is, with the above ACI bind rule, connect with a certificate over SSL but bind as anonymous and the search fails. ANY connection with the proper bindDN and bindPW satisfies the ACI and the search succeeds.
The intention is to FORCE SSL connections for certain subtrees where the LDAP server ALSO allows NON-SSL connections for OTHER subtrees. A secondary desire is to allow SSL + certificate authentication to provide the forced SSL connection and user binding without requiring a subsequent bindDN and bindPW.
An example of SSL + certificate connection + full user binding is:
conn=30 op=-1 msgId=-1 - SSL 128-bit RC4; client 0.9.2342.19200300.100.1.4=Personal/User SSL Certificate, CN=......., OU=......, O=...., L=...., ST=...., C=..; issuer 0.9.2342.19200300.100.1.4=localnet10 SSL only CA Certificate, CN=SSLonlyCA, OU=......., O=..., L=...., ST=..., C=..
conn=30 op=-1 msgId=-1 - SSL client bound as uid=NEngineer1,ou=People,o=..,c=..
conn=30 op=0 msgId=1 - BIND dn="uid=NEngineer1,ou=People,o=..,c=.." method=128 version=3
conn=30 op=0 msgId=1 - RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=nengineer1,ou=people,o=..,c=.."
conn=30 op=1 msgId=2 - SRCH base="o=..,c=.." scope=2 filter="(dc=.............)" attrs="dn ........."
conn=30 op=1 msgId=2 - RESULT err=0 tag=101 nentries=1 etime=

Corrected - what was needed was a boolean change in the bind rule from "and" to "or":
(targetattr = "*") (version 3.0;acl "DENY all but Network Administrators access";deny (all)(groupdn != "ldap:///cn=Network Administrators,ou=Groups,o=xx,c=xx")or(authmethod!="ssl");)
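The corrected rule denies access when either condition fails. A quick truth table (an illustrative sketch in Python, not directory server code) shows why "or" in the deny rule is equivalent to requiring BOTH group membership AND an SSL bind, while "and" would let a non-SSL bind by a group member through:

```python
# Models the two deny-rule variants from the ACI above.
# denied_or  mirrors: deny (all) (groupdn != "...") or (authmethod != "ssl")
# denied_and mirrors the broken variant with "and" instead of "or".

def denied_or(in_group, over_ssl):
    # Deny if EITHER condition fails: only an in-group SSL bind gets through.
    return (not in_group) or (not over_ssl)

def denied_and(in_group, over_ssl):
    # Deny only if BOTH fail: an in-group, non-SSL bind slips through.
    return (not in_group) and (not over_ssl)

for in_group in (False, True):
    for over_ssl in (False, True):
        print(in_group, over_ssl,
              denied_or(in_group, over_ssl),
              denied_and(in_group, over_ssl))
```

By De Morgan's law, "deny unless (in group and over SSL)" is exactly "deny if (not in group) or (not over SSL)", which is why the "or" form enforces the SSL requirement.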

Similar Messages

  • The VAT ID we have doesn't appear to work

    Hello,
    I can't enter my VAT number in my Dev Center account.
    I keep getting the message 'The VAT ID we have doesn’t appear to work. To update it, go here. Learn more.'.
    It's also impossible for me to create a new support ticket.
    Dave Claessens

    Hello Dave,
    If you have double-checked your VAT ID to make sure that it is valid, then please contact
    Dev Center Support for assistance with this. 
    -Miles
    Windows and Windows Phone Dev Center Support
    Send us your feedback about the Windows Platform

  • PowerShell script doesn't appear to work as a scheduled task in SharePoint 2013

    A PowerShell script doesn't appear to work as a scheduled task in SharePoint 2013, though it works when executed manually.
    MCTS Sharepoint 2010, MCAD dotnet, MCPDEA, SharePoint Lead

    Hi,
    To run a PowerShell script as a scheduled task in SharePoint 2013, you can try the demo below:
    http://blogs.technet.com/b/meamcs/archive/2013/02/23/sharepoint-2013-backup-with-powershell-and-task-scheduler-for-beginners.aspx
    Thanks
    Patrick Liang
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support,
    contact [email protected]
    Patrick Liang
    TechNet Community Support

  • I just upgraded my Internet service to 12 Mbps. It doesn't appear to work any faster than the 3 Mbps I had before. The AT

    I just upgraded my Internet service to 12 Mbps. It doesn't appear to work any faster than the 3 Mbps I had before. The AT&T guy said it may be the computer rather than the connection. I have an iMac Intel Core 2 Duo, running OS X 10.5.8. It has 1 GB of memory. I have plenty of memory left. Is there something I can check on the computer to see if it's capable of running faster with this new Internet upgrade?

    You certainly are going to see improvements if you download big files (e.g. Apple updates) or watch trailers at higher resolution.
    You won't see much difference if you use peer-to-peer downloads.
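A quick sanity check on expectations: line speed is quoted in megabits per second, while file sizes are in megabytes, so (ignoring protocol overhead) divide by 8. A rough sketch of the arithmetic:

```python
# Convert an advertised line rate (megabits/s) to file-transfer speed
# (megabytes/s). Real-world throughput is typically somewhat lower due
# to protocol overhead.
def mbps_to_mb_per_s(mbps):
    return mbps / 8

print(mbps_to_mb_per_s(12))  # upgraded line: 1.5 MB/s
print(mbps_to_mb_per_s(3))   # old line: 0.375 MB/s
```

So a large download should finish roughly four times faster on the new plan, while everyday browsing, which is dominated by latency rather than bandwidth, will feel much the same.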

  • IMR Doesn't Appear to Work

    All,
              One machine running netscape and proxy. All requests are forwarded to
              the cluster. Two machines, one WL instance per. WL 510 SP4. All URLs
              go through NS/proxy.
              I can see the proxy round-robin the requests to the cluster, but IMR
              doesn't appear to be working. I've set the following cluster props:
              weblogic.httpd.clustering.enable=true
              weblogic.httpd.session.persistence=true
              weblogic.httpd.session.persistentStoreType=replicated
              When a user logs in on the first server, a session is created. The
              next request seems to be proxied to the second server and, since the
              session doesn't exist on the second server, my application returns a
              UserNotAuthenticatedException. If IMR was working it seems the
              original log in request that caused the session to be created would be
              replicated to the secondary server, but it's apparently not working.
              I'm obviously missing something. I'm suspicious of the NS/proxy config
              for some reason, but I haven't verified that setup yet. Any ideas are
              appreciated. Thanks!
              Jason
              

    Jason,
              This is a doc bug.
              I will ask our docs team to correct it.
              Currently all WLS instances in a cluster should be listening on
              the same port number.
              We are planning to add this feature (WLS instances listening on
              different ports in a cluster) in the next major release.
              Kumar
              Jason Jonas wrote:
              > After reading the documentation, I'm confused...
              >
              > In a WL cluster all instances must listen on the same port. But in the
              > NSAPI-plugin documentation one example from the obj.conf file is:
              >
              > <Object name="si" ppath="*/servletimages/*">
              > Service fn=wl-proxy WebLogicCluster="myweblogic.com:7001,
              > yourweblogic.com:6999,theirweblogic.com:6001"
              > </Object>
              >
              > What's up? Are we supposed to list all servers in the cluster in the
              > above prop? If so, why are the ports different in the above example?
              > If not, do we just list a single member of the cluster and rely on the
              > plug-in to dynamically determine the other instances in cluster to
              > route to?
              >
              > Jason
              >
              > On Wed, 09 Aug 2000 23:16:05 GMT,
              > [email protected] (Jason Jonas) wrote:
              

  • iPhone 3G power switch doesn't appear to work

    I have an older 3G.  The power switch doesn't seem to work, so I can't turn it off.  Nor can I reboot it.
    Neither holding the power switch nor holding the home and power switches together seems to have any effect.
    Any ideas what to do next?
    Thanks

    Thanks for this.  I tried it but it didn't work .  I guess it's off to the menders.

  • Check box in matrix column binds correctly but check sign doesn't appear

    Dear Sirs,
    I have a check box in a matrix column (the matrix is placed i an extra folder in the item master data form).
    The column is bound to a DBDataSource related to the user defined table @IIT_ITM1 as reported in the following code. The table field bound to the check box column is alphanumeric of size 1.
    I use the following code:
                    oCln = oMtx.Columns.Add("Per_coll", SAPbouiCOM.BoFormItemTypes.it_CHECK_BOX);
                    oCln.DisplayDesc = true;
                    oCln.Description = "For test";
                    oCln.TitleObject.Caption = "For test";
                    oCln.ValOn = "Y";
                    oCln.ValOff = "N";
                    oCln.Width = 60;
                    oCln.DataBind.SetBound(true, "@IIT_ITM1", "U_IIT_PerColl");
                    oCln.Editable = true;
    The problem is: the binding to the database works (if I click on the check box and save the data, the database content changes accordingly) but I CANNOT GET THE CHECK SIGN TO APPEAR on the check box control!
    Does anyone have the solution?
    Thank you for help
    Massimo

    No response from the forum

  • My numlock key doesn't appear to work.

    i brought home a microsoft comfort curve 2000 keyboard from work; i thought i'd try it out with my macbook. some of the special keys (volume, mute) work fine but the numlock key doesn't toggle the numeric keypad. it's stuck in numeric mode and the light isn't on.
    any ideas?
    macbook   Mac OS X (10.4.8)  

    contrary to expectations, i found intellitype 6.0 on microsoft's web site and i have installed it.
    now all of the non-standard keys work but numlock still doesn't work. it makes the unhappy key sound.
    macbook   Mac OS X (10.4.6)  

  • HT5037 I have downloaded the iPhoto upgrader but it doesn't appear to work

    When I launch iPhoto 11 on my new iMac it still says I need to use the upgrader - even though it appears to have executed.  Any ideas? 
    Many thanks, Don

    Option 1
    Back up and try rebuilding the library: hold down the Command and Option (or Alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose Repair Database. If that doesn't help, then try again, this time using Rebuild Database.
    If that fails:
    Option 2
    Download iPhoto Library Manager and use its rebuild function. (In early versions of Library Manager it's the File -> Rebuild command. In later versions it's under the Library menu.)
    This will create an entirely new library. It will then copy (or try to) your photos and all the associated metadata and versions to this new Library, and arrange it as close as it can to what you had in the damaged Library. It does this based on information it finds in the iPhoto sharing mechanism - but that means that things not shared won't be there, so no slideshows, books or calendars, for instance - but it should get all your events, albums and keywords, faces and places back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one.  
    Regards
    TD

  • Email notifications within Adobe Forms don't appear to work

    I have created a form and set up a notification for it to be emailed to my email address whenever the form is completed, but responses are not coming through. They are not going into a spam folder, so they don't appear to be sending at all.
    Can you advise what is causing this?

    I think this might be an attempt by IE9 to be more 'standards compliant'.
    The original proposal (/standard?) back in 1996 can be found at:
    http://web.archive.org/web/20061218002753/wp.netscape.com/eng/mozilla/2.0/relnotes/demo/proxy-live.html
    And includes:
    >> which will be called by the Navigator in the following way
    for every URL that is retrieved by it:
    More info here:
    http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/web-browser-auto-proxy-configuration.html
    So, theoretically, the proxy list should be checked for every single object fetch attempted by the browser.
    In practice, some level of caching has historically been employed - As discussed in articles above.
    It looks to me like someone in the IE team said "Lets follow the letter of the original design, and check every time - Heck, CPUs are fast enough these days to cope with a few lines of Javascript..."
    But 'probing' a dead proxy every time seems to be the problem. The above articles mention a 30 minute 'dead' proxy cache, which would help a bit, but still - every 30 minutes you would get one terrible page load time... A proxy that doesn't respond within a few seconds should probably be considered 'dead', at least for a short time...
    Interestingly, Chrome instantly detects a dead proxy in the blink of an eye and explicitly tells you the problem (unlike IE9, which gives a typically cryptic message). Is it any wonder Chrome is trouncing IE...
    So is it an IE fault, or a fault in the original design and IE9 is just being "very standards compliant"? I'll let the reader decide!
    (By the way - I was just 'passing through' trying to figure out my own proxy auto-config problems...)
    I wonder if you could build your own 'dead proxy detector' (With necessary caching) into the FindProxyForURL() code...
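A PAC file itself is sandboxed JavaScript with no persistent state, so a detector like that would have to live in the surrounding logic. Purely as a sketch of the caching idea (the class name and TTL are hypothetical, and Python is used here just for illustration):

```python
import time

class DeadProxyCache:
    """Remember proxies that recently failed so they are skipped for a while."""

    def __init__(self, ttl_seconds=30 * 60, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock    # injectable clock, handy for testing
        self._dead = {}       # proxy -> time it was marked dead

    def mark_dead(self, proxy):
        self._dead[proxy] = self.clock()

    def is_dead(self, proxy):
        marked = self._dead.get(proxy)
        if marked is None:
            return False
        if self.clock() - marked > self.ttl:
            del self._dead[proxy]   # cache entry expired; probe it again
            return False
        return True

    def pick(self, proxies):
        """Return the first proxy not known to be dead, else go DIRECT."""
        for proxy in proxies:
            if not self.is_dead(proxy):
                return proxy
        return "DIRECT"
```

With a 30 minute TTL this reproduces the behaviour the articles describe (one slow probe per expiry); a shorter TTL trades a few extra probes for faster recovery when the proxy comes back.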

  • Help...Mass storage doesn't appear to work

    Hi,
    Just recently updated my 8900 last night; it asked to do a back-up and then did what it needed to do. There didn't appear to be a problem... until I hooked it up today. 
    To make a long story short: now when I hook it up via USB I am no longer prompted to go into Mass Storage Mode, which is what I want, because now it doesn't let me transfer files from my PC using file explorer like it used to.
    My mass storage settings appear to be alright; I tried disabling and enabling it a couple of times with no change.
    Any advice out there on how to fix this?
    Thank you.

    Hello,
    I am having something of the same problem.  About two months ago my phone started to mess up (changing call profiles on its own, going crazy re-notifying (?) Facebook and email notifications, etc.), so I took it into Vodafone (my carrier) and they tried to just update the software, but it wasn't connecting properly, so it was sent away for repairs. It came back and all things seemed to be a go - a few changes, like the way the icons look on the main screen and that the lock button on top of the phone only locks the keyboard and not the whole phone anymore, are not a problem, but the biggest is that I really can't connect to mass storage. 
    I have media card support: on
    Mass Storage Mode Support: On
    Auto Enable Mass Storage Mode when connected: prompt
    I followed all the advice from this post: http://supportforums.blackberry.com/t5/BlackBerry-Desktop-Software/HOWTO-use-your-blackberry-as-a-US...
    And still nothing is working. I had just had the phone connected to the computer for a while, charging, when all of a sudden I got a message saying that because of a problem with the USB hub, my settings on the computer, etc., it couldn't charge. 
    Could this be a problem related to the software updates that were done when I sent the phone away for repairs? Any ideas for how to get it to work again?
    Thanks, 

  • DRCP with cx_Oracle doesn't appear to work as expected

    I'm trying to use DRCP in Oracle 11.2 with a Python client using cx_Oracle 5.1.2.
    If I create the connection by calling cx_Oracle.connect and adding a cclass argument and a purity argument, then the record in sys.v_$cpool_cc_stats with cclass_name set to my cclass will show an increase in num_requests and num_misses corresponding to the number of calls I make, with num_hits staying at 0.
    connection = cx_Oracle.connect(user=db['USER'], password=db['PASSWORD'], dsn=db['NAME'], cclass=db['OPTIONS']['CCLASS'], purity=cx_Oracle.ATTR_PURITY_SELF)
    If however I create an instance of a cx_Oracle.SessionPool, then pass that instance into the same cx_Oracle.connect call as an extra 'pool' argument, then num_misses goes up by 1, and num_hits goes up by num_requests - 1 (I assume this means the first request is a new connection, all the rest are using that connection).
    pool = cx_Oracle.SessionPool(user=db['USER'], password=db['PASSWORD'], dsn=db['NAME'], min=1, max=2, increment=1)
    connection = cx_Oracle.connect(user=db['USER'], password=db['PASSWORD'], dsn=db['NAME'], pool=pool, cclass=db['OPTIONS']['CCLASS'], purity=cx_Oracle.ATTR_PURITY_SELF)
    Is this correct?  Do I need to be creating a SessionPool client side, then using that to acquire and release connections?
    This article doesn't mention SessionPool at all.  I came across SessionPool in this post, but that isn't official documentation.
    FWIW, when I run select * from dba_cpool_info, I get the following:
    CONNECTION_POOL:         SYS_DEFAULT_CONNECTION_POOL
    STATUS:                  ACTIVE
    MINSIZE:                 4
    MAXSIZE:                 40
    INCRSIZE:                2
    SESSION_CACHED_CURSORS:  20
    INACTIVITY_TIMEOUT:      300
    MAX_THINK_TIME:          120
    MAX_USE_SESSION:         500000
    MAX_LIFETIME_SESSION:    86400

    I've written the following script to help demonstrate.  It requires Python 3.  If you only have Python 2, it must be at least 2.5 (as it uses contextlib.contextmanager) and you will have to change all the print function calls to print statements (on Python 2.6 you can instead add from __future__ import print_function, although I haven't tested this).
    When running the script, once with the SessionPool, and once without, I have the following records in sys.v_$cpool_cc_stats:
    CCLASS_NAME            NUM_REQUESTS  NUM_HITS  NUM_MISSES  NUM_WAITS  WAIT_TIME  CLIENT_REQ_TIMEOUTS  NUM_AUTHENTICATIONS
    DEV_DRCP.WITHOUT_POOL  100           0         100         62         0          0                    100
    DEV_DRCP.WITH_POOL     100           88        12          0          0          0                    100
    The script requires 2 arguments, the DSN and USER, and has an optional argument --pool which, if included, will use a SessionPool.  Example usage (assuming the code is saved to file test_drcp.py, my_drcp_db is the TNS entry referring to an Oracle database with DRCP started, and some_user is a user with read access in that database):
    ./test_drcp.py my_drcp_db some_user
    ./test_drcp.py --pool my_drcp_db some_user
    Copy the following code to a file, modify the #! to point to the python interpreter in an environment with cx_Oracle, and make sure it's executable.
    #! /home/john/envs/drcptest/bin/python
    import os
    import time
    import argparse
    from getpass import getpass
    from contextlib import contextmanager
    import cx_Oracle
    @contextmanager
    def oracle_db(use_pool, dsn, user, password):
        if use_pool:
            pool = cx_Oracle.SessionPool(
                user=user, password=password, dsn=dsn, min=1, max=2, increment=1)
            connection = cx_Oracle.connect(
                user=user, password=password, dsn=dsn, pool=pool,
                cclass="WITH_POOL", purity=cx_Oracle.ATTR_PURITY_SELF)
        else:
            connection = cx_Oracle.connect(
                user=user, password=password, dsn=dsn, cclass="WITHOUT_POOL",
                purity=cx_Oracle.ATTR_PURITY_SELF)
        cursor = connection.cursor()
        try:
            yield cursor
            connection.commit()
        except cx_Oracle.OracleError:
            connection.rollback()
        finally:
            cursor.close()
            if use_pool:
                pool.release(connection)
            else:
                connection.close()
    def run_query_and_sleep(use_pool, dsn, user, password):
        print('> Starting {}'.format(os.getpid()))
        with oracle_db(use_pool, dsn=dsn, user=user, password=password) as cursor:
            print('> Querying {}'.format(os.getpid()))
            cursor.execute("select to_char(systimestamp) from dual")
            print(cursor.fetchall())
        print('> Sleeping {}'.format(os.getpid()))
        time.sleep(10)
        print('> Finished {}'.format(os.getpid()))
    def main(*args, **kwargs):
        for x in range(100):
            pid = os.fork()
            if not pid:
                run_query_and_sleep(**kwargs)
                os._exit(0)
    if __name__ == '__main__':
        parser = argparse.ArgumentParser('Test connection pooling with Oracle DRCP')
        parser.add_argument('dsn', help='TNS entry to use')
        parser.add_argument('user', help='Username to use for the connection')
        parser.add_argument('--pool', action='store_true', help='Use session pool')
        args = parser.parse_args()
        password = getpass('Enter password for {}> '.format(args.user))
        main(use_pool=args.pool, dsn=args.dsn, user=args.user, password=password)

  • SBS2011 (Exchange 2010 SP2) - limiting cache size doesn't appear to work

    Hi All,
    Hoping for some clarification here, or extra input at least.  I know there are other posts about this topic such as
    http://social.technet.microsoft.com/Forums/en-US/smallbusinessserver/thread/5acb6e29-13b3-4e70-95d9-1a62fc9304ac but these have been
    incorrectly marked as answer in my opinion.
    To recap the issue: the Exchange 2010 store.exe process uses a lot of memory. So much, in fact, that it has a negative performance impact on the server (sluggish access to the desktop, etc.). You can argue about this all day - it's by design and shouldn't be messed with, etc. - but the bottom line is that it does use too much memory and it does need tweaking. I know this because if you simply restart the Information Store process (or reboot the server) it frees up the memory and the performance returns (until its cache is fully rebuilt, that is). I have verified this on 4 different fresh builds of SBS2011 over the last 6 months (all on servers with 16GB RAM).
    I have scoured the internet for information on limiting how much memory exchange uses to cache the information store and most articles point back to the same two articles (http://eightwone.com/2011/04/06/limiting-exchange-2010-sp1-database-cache/
    and
    http://eightwone.com/2010/03/25/limiting-exchange-2010-database-cache) that deal with exchange 2010 and exchange 2010 SP1, notably not exchange 2010 SP2.  Ergo most articles are out of date since exchange 2010 SP2 has been released since these articles
    were posted.
    When testing with our own in house SBS2011 server (with exchange 2010 SP2) I have found that specifying the min, max and cache sizes in ADSIEDIT has varying results that are not in line with the results documented in the articles I mentioned above. 
    I suspect the behaviour of these settings has changed with the release of exchange 2010 SP2 (as it did between the initial release and SP1).
    Specifically here's what I have found using ADSIEDIT;
    If you set the msExchESEParamCacheSize to a value - it doesn't have any effect.
    If you set the msExchESEParamCacheSizeMax to a value - it doesn't have any effect.
    If you set the msExchESEParamCacheSizeMin to a value - it always locks the store.exe process to using exactly this value.
    I have also tested using combinations of these settings with the result that the size and max size values are always ignored (and the store.exe process uses the maximum available amount of memory - thus causing the performance degradation) but as soon as
    you specify the min value it locks it to this value and it doesn't change.
    As a temporary solution on our in-house SBS2011 I have set the min value to 4GB and it appears to be running fine (only 15 mailboxes though).
    Anyone got some input on this ? thank you for your time.

    I concur with Erin. I'm seeing the same behaviour across all SBS2011 boxes, whether running SP1, SP2 or SP3.
    If a minimum value is set, the store cache size barely rises above the minimum. I have one server with 32GB RAM. Store.exe was using 20GB of RAM, plus all the other Exchange services, which total 4GB+. That left virtually no free RAM, and trying to do anything else on the server was sluggish at best.
    All the advice is that setting a maximum alone has no effect and a minimum must be set too. But when set, the store cache size barely rises above the minimum. I have set a 4GB minimum and 16GB max, but 5 days later it's still using only slightly more than 4GB and there's 8GB free. Now the server as a whole is responsive, but doing anything with Exchange is sluggish.
    Just saying leave Exchange to manage itself is not an answer. The clue is in the name - Small Business Server. It's not Exchange Only Server - there are other tasks an SBS must handle, so leaving Exchange to run rampant is not an option. Besides, there are allegedly means to manage the Exchange cache size - they just don't appear to work!
    I'm guessing nobody has an answer to this so the only solution is to effectively fix the cache size to a sensible value by setting min and max to the same value.
    Adam@Regis IT
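One detail worth double-checking when setting these attributes: the msExchESEParamCacheSize* values are specified in database pages, not bytes, and Exchange 2010 uses a 32 KB ESE page size (verify this for your build; the conversion below assumes it). A small sketch of the arithmetic:

```python
# Convert a desired store cache limit in GB to the page count that the
# msExchESEParamCacheSizeMin/Max attributes expect, assuming Exchange
# 2010's 32 KB ESE database page size.
PAGE_SIZE = 32 * 1024  # bytes per ESE database page

def cache_size_pages(gigabytes):
    return gigabytes * 1024 ** 3 // PAGE_SIZE

print(cache_size_pages(4))   # 4 GB minimum -> 131072 pages
print(cache_size_pages(16))  # 16 GB maximum -> 524288 pages
```

If a byte count is entered where a page count is expected (or vice versa), the resulting limit is wildly off, which could look exactly like the setting being ignored.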

  • [SOLVED] Roundcube doesn't appear to work with PHP 5.6

    Edit: I got some help from someone on the Roundcube list.  It turns out my problem involved some Roundcube configuration options that don't seem to be documented.  See the last comment in this thread for the solution.  I will update the Roundcube wiki page to prevent others from having to go through this.
    My setup is postfix for SMTP and cyrus for IMAP.  The cyrus IMAP server is set up for plain text authentication over STARTTLS (using a self-signed SSL certificate)
    I'm pretty sure I've configured Roundcube correctly, however, I can't get it to authenticate.  Looking in the roundcube error log, I see
    [31-Jan-2015 10:27:14 America/Chicago] PHP Warning: stream_socket_enable_crypto(): SSL operation failed with code 1. OpenSSL Error messages:
    error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed in /usr/share/webapps/roundcubemail/program/lib/Roundcube/rcube_imap_generic.php on line 915
    [31-Jan-2015 10:27:14 -0600]: IMAP Error: Login failed for [email protected] from 67.198.113.124. Unable to negotiate TLS in /usr/share/webapps/roundcubemail/program/lib/Roundcube/rcube_imap.php on line 184 (POST /?_task=login?_task=login&_action=login)
    Googling for a solution, I found this stackoverflow thread suggesting that this is a problem with PHP 5.6 not being able to find self-signed SSL certificates:
    http://stackoverflow.com/questions/2682 … ify-failed
    The roundcube wiki page tells me nothing (in fact, is quite incomplete; I've already made several changes to bring it up to speed a bit).
    Is anyone else successfully using Roundcube with PHP 5.6.5 and an IMAP server that only allows STARTTLS connections using a self-signed certificate?
    Last edited by pgoetz (2015-02-03 16:30:58)

    So, in order to get Roundcube to use TLS authentication with a self-signed certificate, you must configure $config['imap_conn_options'] in /etc/webapps/roundcubemail/config/config.inc.php.  You must also make sure to set $config['default_host'] using a tls:// prefix, as illustrated below.
    $config['default_host'] = 'tls://mail.my_domain.org';
    $config['imap_conn_options'] = array(
        'ssl' => array(
            'verify_peer' => true,
            'allow_self_signed' => true,
            'peer_name' => 'mail.my_domain.org',
            'ciphers' => 'TLSv1+HIGH:!aNull:@STRENGTH',
            'cafile' => '/etc/ssl/certs/ssl-cert-cyrus.my_domain.org.pem',
        ),
    );
    I'm not sure that the ciphers entry is necessary (I have the same ciphers set in /etc/cyrus/imapd.conf), but this configuration works and I've already spent too much time fiddling with this configuration.
    The complete list of PHP SSL options can be found here: http://php.net/manual/en/context.ssl.php
    Last edited by pgoetz (2015-02-03 16:42:02)

  • Always on doesn't appear to work properly after downgrade from Standard to Basic

    I was running a Standard tier setup for my webapps, with the always on option set to "on" without issue - sites would respond almost immediately even after not having been accessed for more than the 20 minute sleep period of the lower subs.
    However, since I downgraded to basic (due to simply not making use of any of the extra features) I've found that my sites seem to take a while to load again after a period of inactivity. I've double checked the always-on option and it is on, and I've also
    tried turning it off and on again.
    Is this a known issue and does anyone have any suggested solutions?
    Thanks, Daniel.

    Hi Daniel,
    Azure unloads your site if it is idle for the standard 20 minute timeout, which can cause slow responses for the initial site users after it is unloaded.  Enabling Always On essentially causes Azure to automatically ping your site periodically to ensure that it remains in a running state.  Always On is not available on the lower-end plans; in Basic or Standard mode, however, you can enable it to keep the app loaded all the time.
    It is not a known issue; I suppose in theory it could consume more CPU cycles and thus could require the deployment of more infrastructure.
    A couple of things you might want to check:
    - Increase the instance count
    - Check the Monitor tab in the Azure Website to see how much CPU, Data In and Data Out time is taken for a request to the site.
    I recommend the thread discussion on Failed Request Tracing for troubleshooting slow requests, and the video Troubleshooting Slow Requests with Failed Request Tracing, which might help troubleshoot the slowness of the website.
    Regards,
    Shirisha Paderu
