LDAP/SSL performance degradation with JDK 1.6.0_29/1.6.0_30

Hi,
we are running an application within a Tomcat 6.0.35 server on RHEL 5.7/i386 that queries our company's Active Directory using LDAP over SSL. One of the queries expands a large distribution list. Since upgrading from JDK 1.6.0_27 to 1.6.0_29 (or 1.6.0_30), the performance of this LDAP query has degraded dramatically, from about 8 seconds to more than 300 seconds. This only happens when the LDAP connection is encrypted.
We are not sure how to debug this further. What information would we need to provide to get to the root of this? Would the Tomcat output with -Djavax.net.debug=ssl,handshake set, captured under 1.6.0_27 and under 1.6.0_29/30, be sufficient?
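For context, the lookup is done through plain JNDI over ldaps://, with the javax.net.debug property passed to Tomcat via CATALINA_OPTS. A rough sketch of the kind of code involved is below (host, credentials, base DN and filter are placeholders, not our real values):

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class AdGroupLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://ad.example.com:636");                      // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "CN=svc-app,OU=Services,DC=example,DC=com");  // placeholder account
        env.put(Context.SECURITY_CREDENTIALS, "secret");                                  // placeholder password

        InitialDirContext ctx = new InitialDirContext(env);
        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        sc.setReturningAttributes(new String[] { "member" });

        long start = System.currentTimeMillis();
        NamingEnumeration<SearchResult> results =
                ctx.search("DC=example,DC=com", "(cn=Some-Large-Distribution-List)", sc); // placeholder filter
        while (results.hasMore()) {
            results.next();   // expanding the large distribution list is the slow part
        }
        System.out.println("elapsed ms: " + (System.currentTimeMillis() - start));
        ctx.close();
    }
}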
With Java 1.6.0_29/30, the basic request/response exchange between Tomcat and the AD server looks like this:
TP-Processor11, WRITE: TLSv1 Application Data, length = 32
TP-Processor11, WRITE: TLSv1 Application Data, length = 160
Thread-270, READ: TLSv1 Application Data, length = 16368
Thread-270, READ: TLSv1 Application Data, length = 16368
Thread-270, READ: TLSv1 Application Data, length = 11920
TP-Processor11, WRITE: TLSv1 Application Data, length = 32
TP-Processor11, WRITE: TLSv1 Application Data, length = 160
Thread-270, READ: TLSv1 Application Data, length = 16368
Thread-270, READ: TLSv1 Application Data, length = 16368
Thread-270, READ: TLSv1 Application Data, length = 11920
With Java 1.6.0_27, we see:
TP-Processor12, WRITE: TLSv1 Application Data, length = 208
Thread-42, READ: TLSv1 Application Data, length = 16368
Thread-42, READ: TLSv1 Application Data, length = 16368
Thread-42, READ: TLSv1 Application Data, length = 5696
TP-Processor12, WRITE: TLSv1 Application Data, length = 208
Thread-42, READ: TLSv1 Application Data, length = 16368
Thread-42, READ: TLSv1 Application Data, length = 16368
Thread-42, READ: TLSv1 Application Data, length = 5696
Looking at the 32-byte requests (with javax.net.debug=all set), we see:
Padded plaintext before ENCRYPTION: len = 32
0000: 30 0C C2 32 83 6E 9F D8 8F 5E E8 47 7A 0B 9A F1 0..2.n...^.Gz...
0010: 7D 44 78 0B 9E 0A 0A 0A 0A 0A 0A 0A 0A 0A 0A 0A .Dx.............
TP-Processor1, WRITE: TLSv1 Application Data, length = 32
Which doesn't make a whole lot of sense to us...
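The only pattern we can make out is that each request is now sent as one very small record followed by a larger one; from the dump above, the 32-byte record seems to carry a single byte of application data plus the 20-byte MAC and padding. If we understand correctly, that matches the 1/n-1 CBC record splitting (BEAST countermeasure) introduced around 6u29, so one test we are considering - purely a hypothesis on our side - is to disable that countermeasure for a single run and re-time the query:

// One-off test idea, not our production code: the property has to be read
// before the first TLS handshake, so either pass it to Tomcat, e.g.
//   CATALINA_OPTS="-Djsse.enableCBCProtection=false"
// or set it programmatically before any SSL connection is opened.
public class CbcSplitCheck {
    public static void main(String[] args) {
        System.setProperty("jsse.enableCBCProtection", "false");
        System.out.println("jsse.enableCBCProtection = "
                + System.getProperty("jsse.enableCBCProtection"));
        // ...then run the same LDAPS search as in the sketch above and compare timings
    }
}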
Any help debugging this further would be most welcome.
Cheers
Stefan

Since you've determined that your problem is related to the use of TLS, your posting is likely to get a quicker response on the Java Secure Socket Extension (JSSE) forum. When you do get a resolution, please post a link to it on this thread to close the loop. Thanks.
Arshad Noor
StrongAuth, Inc.

Similar Messages

  • Performance degraded with VirtualListView control

    Hi,
    We are using the VirtualListView control for retrieving LDAP entries from SunOne Directory Server. We observed that with the VirtualListView control, search performance degrades considerably (almost down by 95%) compared to retrieving the same results without any paging mechanism.
    We have configured the directory server for better performance, and have also added indexes on the attributes we retrieve in the search operation, but performance is still very poor. Has anyone faced this issue before? Are there any settings we can use to improve the performance?
    We do not want to retrieve all records without paging, to avoid memory issues.
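    For reference, this is roughly how we would switch to simple paged results instead of VLV (a sketch only: host, base DN and filter are placeholders, and the directory would have to support the paged-results control):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;
    import javax.naming.ldap.Control;
    import javax.naming.ldap.InitialLdapContext;
    import javax.naming.ldap.LdapContext;
    import javax.naming.ldap.PagedResultsControl;
    import javax.naming.ldap.PagedResultsResponseControl;

    public class PagedSearch {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ds.example.com:389");   // placeholder host
            LdapContext ctx = new InitialLdapContext(env, null);

            int pageSize = 500;
            byte[] cookie = null;
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
            do {
                ctx.setRequestControls(new Control[] {
                        new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });
                NamingEnumeration<SearchResult> results =
                        ctx.search("ou=people,dc=example,dc=com", "(objectClass=person)", sc); // placeholders
                while (results.hasMore()) {
                    results.next();   // process one entry at a time, so memory stays bounded
                }
                cookie = null;
                Control[] responseControls = ctx.getResponseControls();
                if (responseControls != null) {
                    for (Control c : responseControls) {
                        if (c instanceof PagedResultsResponseControl) {
                            cookie = ((PagedResultsResponseControl) c).getCookie();
                        }
                    }
                }
            } while (cookie != null && cookie.length > 0);
            ctx.close();
        }
    }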
    Thanks,
    Kiran

    "Do i need to some setting adjustments ?"Probably not.
    "The performace degraded drastically."Could you elaborate a bit more please? Could you give an example please?
    /r

  • Performance degradation with addition of unicasting option

    We have been using the multicast protocol to set up the data grid between the application nodes, with these VM arguments:
    -Dtangosol.coherence.clusteraddress=${Broadcast Address} -Dtangosol.coherence.clusterport=${Broadcast port}
    As a certain node in the application was expected to run in a different subnet and multicast was not feasible, we opted for well-known addressing (WKA), with the following additional VM arguments on the server nodes (all in the same subnet):
    -Dtangosol.coherence.machine=${server_name} -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.localport=${server_port}
    and the following on the remote client node, pointing at one of the server nodes:
    -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.wka.port=${server_port}
    But this deteriorated performance drastically, both when pushing data into the cache and when getting events via a map listener.
    From the Coherence logging statements it does not seem that multicast is being used, at least within the server nodes (which are in the same subnet).
    Is it feasible for unicast and multicast to coexist? How can we verify whether that is already set up?
    Is the performance degradation with well-known addressing a known limitation and to be expected?

    Hi Mahesh,
    From your description it sounds as if you've configured each node with a WKA list that includes only itself. This would result in N clusters of one node rather than one cluster of N nodes. Your client would then be serviced by the resources of just a single cache server rather than an entire cluster. If this is the case you will see that all nodes are identified as member 1. To set up WKA I would suggest using the override file rather than system properties, and placing perhaps 10% of your nodes on that list; then use this exact same file for all nodes. If I've misinterpreted your configuration, please provide additional details.
    Thanks,
    Mark
    Oracle Coherence

  • Performance degradation with -g compiler option

    Hello
    Our measurements of a simple program compiled with and without the -g option show a big performance difference.
    Machine:
    SunOS xxxxx 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-V250
    Compiler:
    CC: Sun C++ 5.9 SunOS_sparc Patch 124863-08 2008/10/16
    #include "time.h"
    #include <iostream>
    int main(int  argc, char ** argv)
       for (int i = 0 ; i < 60000; i++)
           int *mass = new int[60000];
           for (int j=0; j < 10000; j++) {
               mass[j] = j;
           delete []mass;
       return 0;
    }Compilation and execution with -g:
    CC -g -o test_malloc_deb.x test_malloc.c
    ptime test_malloc_deb.x
    real 10.682
    user 10.388
    sys 0.023
    Without -g:
    CC -o test_malloc.x test_malloc.c
    ptime test_malloc.x
    real 2.446
    user 2.378
    sys 0.018
    As you can see, the performance degradation with "-g" is about 4x.
    Our product is compiled with the -g option, and before shipment it is stripped using the 'strip' utility.
    This gives us the possibility of opening customer core files with the non-stripped executable.
    But our tests show that stripping does not restore the performance of an executable compiled without '-g'.
    So we are losing performance by using this compilation method.
    Is this expected compiler behavior?
    Is there any way to have the -g option "on" and not lose performance?

    In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g to this requests that you want maximal debug. So the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
    If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
    If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
    If you are using C++, then in SS12 -g will switch off front-end inlining, so again you'll get some performance hit. So use -g0 to get both inlining and debug information.
    HTH,
    Darryl.

  • Performance Degradation with EJBs

    I have a small J2EE application that consists of a Session EJB calling 3 Entity EJBs that access the database. It is a simple Order capture application. The 3 Entity beans are called Orders, OrderItems and Inventory.
    A transaction consists of inserting a record into the order table, inserting 5 records into the orderitems table and updating the quantity field in the inventory table for each order item in the order. With this transaction I observe performance degradation: the transactions per second decrease dramatically within 5 minutes of running.
    When I modify the transaction to insert a single record into the orderitems table I do not observe performance degradation. The only difference in this transaction is we go through the for loop 1 time as opposed to 5 times. The code is exactly the same as in the previous case with 5 items per order.
    Therefore I believe the problem is performance degradation in Entity EJBs that are invoked in a loop.
    I am using OC4J 10.1.3.3.
    I am using CMP (Container Managed Persistence) and CMT (Container Managed Transactions). The Entity EJBs were all generated by Oracle JDeveloper.
    EJB version being used is 2.1.

    One thing to consider is downloading and using the Oracle AD4J utility to see if it can help you identify any possible bottlenecks, on the application server or in the database.
    AD4J can be used to monitor/profile/trace applications in real time with no instrumentation required on the application. Just install it into the container and go. It can even trace a request from the app server down into the database and show you what the situation is down there (it needs a DB agent installed to do that).
    Overview:
    http://www.oracle.com/technology/products/oem/pdf/wp_productionappdiagnostics.pdf
    Download:
    http://www.oracle.com/technology/software/products/oem/htdocs/jade.html
    Install/Config Guide:
    http://download.oracle.com/docs/cd/B16240_01/doc/install.102/e11085/toc.htm
    Usage Scenarios:
    http://www.oracle.com/technology/products/oem/pdf/oraclead4j_usagescenarios.pdf

  • Performance degradation with Oracle EJB

    I wonder if someone has done any benchmarking of the performance degradation as the number of connections into an EJB-based application increases. We are experiencing rather severe degradation in one such implementation. We would appreciate it if you could share your experience in this regard.

    Check whether there is any contention in the MTS configuration. Try increasing the number of MTS servers if the number of users is very high.

  • Performance degradation with 11.0.2 CS5 update

    Hi!
    Has anyone else run into a problem with performance reduction/degradation in GPU mode after updating to 11.0.2 of Flash Pro CS5? I've been working on a breakout-style game (unBrix), which I carefully built up to run at very close to 60 fps, and it had been fine - until I updated Flash to 11.0.2 (to get Android publishing to work and fix a few other issues).
    I have confirmed that downgrading back to the release version of Flash CS5 fixes the performance stuttering that I have noticed since updating to 11.0.2.
    I'd love to know if anyone else has noticed a similar problem - or, even better, has isolated the cause - or if anyone has tried downgrading to see if there is an improvement (I tested this by installing the CS5 trial on a VMware image, if that helps).
    Thanks,
    Kevin N.

    I found the same problem with the new update and already wrote about it in this forum.
    You can find my post here: http://forums.adobe.com/message/3214594#3214594
    But, same as you, I don't have any answers yet.
    I also tried some benchmark tests and found that the FPS result is the same for the previous and the updated packager.
    So I think the problem is only visual: the new packager drops a lot of frames and looks very slow (((

  • Performance degradation with COGNOS and BW

    Hello,
    Do you know how to improve performance when using Cognos to query BW? Cognos seems to need a lot of RAM.
    Thanks for your help
    Catherine Bellec


  • Performance degradation with Airport Express

    I have an AirPort Extreme connected to my cable modem, supplying wireless internet to the whole house. I get around 20 Mbps download speed from the farthest computer in the house. I bought an AirPort Express and installed it near that farthest computer, in order to get iTunes music to my stereo system. When the Express is on, my download speeds drop to 6 Mbps or lower. Is there anything I can do to avoid this performance degradation?
    Thanks in advance,
    Carlos

    I have the same problem, except I am using mine as the wireless pass-through for a Mac Pro with no AirPort.

  • When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
    Below I provide complete code to reproduce the behavior I am observing.  You could run it in tempdb or any other database; which one is not important.  The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here.  Note that I also included approximate run times in the script comments (obviously based on what I observed on my machine).  Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, elapsed time = 392 ms.
    As you can see the query is clearly faster.  Yay for columnstore indexes!.. But let's continue.
    5. Run the script from #10 to #12 (note that this might take some time to execute).  This will move about 80% of the data in both tables to a different partition.  You should be able to see that the data has been moved when running step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, elapsed time = 3119 ms.
    And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
    I am not going to paste the execution plans or the detailed properties of each operator here.  They show up as expected -- columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows are less than during the second run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to reproduce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% of the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable PartitionScheme IdxName index_id partition_number rows
    Main PS_Scheme CDX_Main 1 1 7997443
    Main PS_Scheme CDX_Main 1 2 32002557
    Main PS_Scheme CDX_Main 1 3 0
    Main PS_Scheme CDX_Main 1 4 0
    Txns PS_Scheme PK_Txns 1 1 2000001
    Txns PS_Scheme PK_Txns 1 2 7999999
    Txns PS_Scheme PK_Txns 1 3 0
    Txns PS_Scheme PK_Txns 1 4 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE statement against a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with all the original row groups of the index having almost all of their data deleted, and almost the same amount of new row groups holding the new data (coming from the update). I suppose scanning the deleted bitmap caused the additional slowness on your end, or something related to that "fragmentation".
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • How bad is the performance hit with RTMPT?

    In a conversation with engineers at a CDN recently, it was suggested to me that streaming all video over RTMPT only was a viable solution to the barriers posed by firewalls and proxies blocking port 1935.  They indicated that they had seen no significant performance degradation with tunneling and that many of their clients were switching from "rollover"-type connection models to connecting via RTMPT only.
    This ran counter to my notion of the process.  I have always thought the packet overhead was significant.
    Is it?  How bad is the performance hit for streaming live H.264 video?

    Hmmm...  That's quite a hit.
    I'm trying to determine the best strategy for reaching the most people with the least performance hit.  A guy lays out his strategy here:
    http://www.kensodev.com/2010/02/19/rtmp-being-blocked-by-firewalls-flash-media-server/
    He basically says ditch 1935.  Never use it.  Always use 80.  Like this:
    rtmp://your_ip_address:80/app_name
    If that fails, do this:
    rtmpt://your_ip_address:80/app_name
    Does that seem valid?  Does that first option avoid the performance hit of tunneling while getting you more connections?  If so, it makes me think there is no benefit at all to connecting via 1935.

  • Metadata Service : Performance degradation: unfetched field caused extra roundtrip

    I am having a problem with a SharePoint managed metadata field. It gives me the error "Performance degradation: unfetched field [Products] caused extra roundtrip", and the page just crashes when I try to open the list. Can anybody help me?

    Hi,
    Could you find out what product caused the extra roundtrip? Are you using SharePoint 2010 Server or Foundation? Meanwhile, you may want to check the logs first.
    SharePoint 2010 log files are located in "c:\Program Files\Common Files\Microsoft Shared\web server extensions\14\LOGS" folder
    Here is one article can be referred to.
    SharePoint performance degradation with a large number of unique security scopes in lists
    http://support.microsoft.com/kb/2420771
    Hope that helps.
    Ivan-Liu
    TechNet Community Support

  • Performance Degradation from SSL

    I have read articles showing that performance could go down to 1/10 with certain servers (reference: http://isglabs.rainbow.com/isglabs/shperformance/SHPerformance.html) when using SSL.
    I am currently using WebLogic 4.5. Can anybody tell me what kind of performance degradation I would see if I switched all my transactions from normal unsecured HTTP transactions to secure ones (SSL v3)?
    Any help appreciated.
    best regards, Andreas

    Andreas,
    Internal (unofficial) benchmarks have shown SSL to be 65-80% slower than typical connections - so anywhere between 3 and 5 times slower. This is the same across all HTTP servers.
    In Denali (the next release), we're adding a performance pack enhancement that includes native-code implementations of some of our crypto code. This should show large speedups when it's released in March.
    Thanks!
    Michael Girdley
    Sr. Product Manager
    WebLogic Server
    BEA Systems
    ph. 415.364.4556
    [email protected]
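    As a rough way to quantify the overhead on a given setup, one can time a plain TCP connect against a full SSL connect plus handshake, along these lines (host name and run count are placeholders; this measures only connection setup, not bulk throughput):

    import java.net.Socket;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class SslOverheadCheck {
        public static void main(String[] args) throws Exception {
            String host = "www.example.com";   // placeholder endpoint
            int runs = 20;

            long plainMs = 0;
            for (int i = 0; i < runs; i++) {
                long t0 = System.currentTimeMillis();
                Socket s = new Socket(host, 80);             // plain TCP connect only
                plainMs += System.currentTimeMillis() - t0;
                s.close();
            }

            long sslMs = 0;
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            for (int i = 0; i < runs; i++) {
                long t0 = System.currentTimeMillis();
                SSLSocket s = (SSLSocket) factory.createSocket(host, 443);
                s.startHandshake();                          // TCP connect + full SSL/TLS handshake
                sslMs += System.currentTimeMillis() - t0;
                s.close();
            }

            System.out.println("avg plain connect ms: " + (plainMs / runs)
                    + ", avg SSL connect+handshake ms: " + (sslMs / runs));
        }
    }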

  • EDSPermissionError(-14120) problems with LDAP, SSL and Directory Utility

    Hello everyone,
    Apologies for the repost but I think I may have made a mistake by posting this originally in the Installation, Setup and Migration forum instead of the Open Directory forum. At least I think that may be why I didn't receive any responses.
    Anyway, I've been trying to get my head around Open Directory and SSL as they are implemented in Mac OS X Server 10.5 Leopard, and have been having a few issues. I would like to set up a secure internal infrastructure based around a local Certificate Authority that signs certificates for other internal services like LDAP, email, websites, etc.
    I only have one Mac OS X Server and it is kind of a small office so I have gone against best practice and simply made it a CA (through Keychain Utility). I then generated a self-signed SSL certificate through Server Admin, and used the "Generate CSR" option to create a Certificate Signing Request. This went fine, but I did have some problems signing it with the CA, because the server documentation suggested that once I signed it it would pop open a Mail message containing the ASCII version of the signed certificate - it did not, and it took me a loooong time to realize that I could simply export the copy of the signed certificate it put in my local Keychain on the server as a PEM file and paste this back into the "Add Signed or Renewed Certificate from Certificate Authority" dialog box in Server Admin. Hopefully this can be fixed in a forthcoming patch, but I thought I would mention it here in case anyone else is stuck on this issue.
    Once I did this I was able to use this certificate in the web server on the same machine and sure enough I was able to connect to it with with clients who had installed the CA certificate in their system Keychains without getting any error messages - very cool.
    However, I haven't had quite as much luck getting it going with LDAP/Open Directory. I installed the certificate there as well, but have run into a number of problems. At first I could not get clients (also running 10.5.2) to talk to the server at all over SSL, receiving an error in Directory Utility that the server did not support SSL. I eventually discovered that the problem seemed to lie in the fact that the OpenLDAP implementation on Leopard is not tied in with the system Keychain, necessitating some command-line voodoo to install a copy of the CA cert in a local directory and point /etc/openldap/ldap.conf at it, as documented here: http://www.afp548.com/article.php?story=20071203011158936
    This allowed me to do an ldapsearch command over SSL, and seemingly turn SSL on on clients that were previously bound to the directory, and additionally allowed me to run Directory Utility on new clients and put in the server name with the SSL box checked and begin to go through the process of binding. Once this seemed to work, I turned off all plaintext LDAP communication and locked down the service by checking the "Enable authenticated directory binding," "Require authenticated binding," "Disable clear text passwords," and "Encrypt all packets" options in Server Admin. However, I am now running into a new problem, specifically that I cannot successfully bind a local account to a directory account over SSL.
    Here's what happens:
    1) I run Directory Utility, (or it auto-runs) and add a server, typing in the DNS name and clicking the SSL box.
    2) I get asked to authenticate, and type in user credentials, including computer name (incidentally, should this be a FQDN or just a hostname?)
    3) Provided I put admin credentials in here and not user-level credentials, I get taken to the "Do you want to set up Mail, VPN, etc.?" box that normally appears when you autodiscover or connect to an Open Directory server.
    4) I click through, and am asked for a username and password on the server, as well as the password for my local account.
    5) When I put this information in, I get a popup with the dreaded "eDSPermissionError(-14120)" and it fails.
    Checking the logs in Server Admin reveals nothing special, and while I have seen a couple other threads on this error and various other binding problems:
    http://discussions.apple.com/thread.jspa?messageID=5967023
    http://discussions.apple.com/message.jspa?messageID=5982070
    these have not solved the problem. In the Open Directory user name field I am putting the short username. I have tried putting [email protected] and the user's longname but this fails by saying the account does not exist. For some reason it does seem to work if I bind it to the initial admin account I created, but no other user accounts.
    If I turn all the encryption stuff off I am able to join just fine, so I am suspecting that the error may lie in some other "under the hood" piece of software that doesn't get the CA trust settings from the Keychain or the ldap.conf file, but I'm stymied as to which piece of software this might be. Does anyone have any clues on what I might be able to do here?
    Thanks,
    Andrew

    Hard to tell what is happening without looking at the application
    source, knowing what OS & hardware you're using etc. You might want to
    try running with different JVM versions to see if it's actually the VM
    that is the problem. If you have a support contract with BEA you could
    ask support to help you diagnose this.
    Regards,
    /Helena
    Ayub Khan wrote:
    I have an application running on WebLogic 8.1 (with JRockit as the JVM). This application in turn talks to an iPlanet Directory Server via LDAP/SSL. The problem seems to happen when the machine comes under load... the performance progressively gets worse and after a couple of seconds all the threads stop responding. I checked the heap, CPU and the idle threads in the execute queue and there is nothing there to trigger alarms... there are quite a few idle threads still, and the heap and CPU utilization seem OK. On taking a thread dump, I see that all the other threads seem to be in a state where they are waiting for data from LDAP, and it is basically read-only data that they are waiting on.
    Does anyone know what is going on, and can you help point me in the right direction?
    -Ayub

  • Convergence with LDAP SSL Failure

    Hello,
    I'm now having a problem securing connections between Convergence and my LDAP server.
    Once I set ugldap.enablessl to true via iwcadmin and change the port to 636, the following error occurs and Convergence just can't authenticate.
    server.log in Glassfish 2.1.1, enterprise profile using NSS keystore
    [#|2010-11-12T20:17:15.208+0000|SEVERE|sun-appserver2.1|com.sun.comms.shared.ldap|_ThreadID=19;_ThreadName=Thread-114;_RequestID=f4814afe-c0b0-4245-b21b-64be2d4a39e3;|LDAPS:Error occured during SSL handshake java.lang.RuntimeException: Could not parse key values|#]
    [#|2010-11-12T20:17:15.209+0000|SEVERE|sun-appserver2.1|com.sun.comms.shared.ldap.LDAPSingleHostPool|_ThreadID=19;_ThreadName=Thread-114;_RequestID=f4814afe-c0b0-4245-b21b-64be2d4a39e3;|buildConnection: got LDAPException while connecting to Pool number:0. Host=<ldaphost> :netscape.ldap.LDAPException: Error occured during SSL handshake java.lang.RuntimeException: Could not parse key values (91)|#]
    HTTP SSL connections to the Webmail and calendar servers are fine. I tried deploying the same configuration using the developer profile with a JKS keystore, and SSL authentication goes through then, but I need clustering for high availability.
    Does anyone have any ideas?
    Thanks so much in advance!
    Mathew

