Performance degradation with -g compiler option

Hello,
Our measurement of a simple program compiled with and without the -g option shows a big performance difference.
Machine:
SunOS xxxxx 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-V250
Compiler:
CC: Sun C++ 5.9 SunOS_sparc Patch 124863-08 2008/10/16
#include "time.h"
#include <iostream>
int main(int  argc, char ** argv)
   for (int i = 0 ; i < 60000; i++)
       int *mass = new int[60000];
       for (int j=0; j < 10000; j++) {
           mass[j] = j;
       delete []mass;
   return 0;
}Compilation and execution with -g:
CC -g -o test_malloc_deb.x test_malloc.c
ptime test_malloc_deb.x
real 10.682
user 10.388
sys 0.023
Without -g:
CC -o test_malloc.x test_malloc.c
ptime test_malloc.x
real 2.446
user 2.378
sys 0.018
As you can see, the performance degradation with "-g" is about 4x.
Our product is compiled with the -g option, and before shipment it is stripped using the 'strip' utility.
This gives us the ability to open customer core files using the non-stripped executable.
But our tests show that stripping does not recover the performance of an executable compiled without '-g'.
So we are losing performance by using this compilation method.
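For reference, the build flow is essentially this (file names are illustrative):
CC -g -o product.x product.c      # build with debug info
cp product.x product_debug.x      # keep an unstripped copy for opening customer cores
strip product.x                   # ship the stripped binary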
Is this expected compiler behavior?
Is there any way to have the -g option "on" and not lose performance?

In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g to this requests maximal debug, so the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
If you are using C++, then in SS12 -g will switch off front-end inlining, so again you'll get some performance hit. So use -g0 to get inlining and debug, as in the example below.
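For example (same pattern as your compiles above; exact timings will vary):
CC -O -g -o test_malloc_opt.x test_malloc.c    # optimised and still debuggable
CC -g0 -o test_malloc_g0.x test_malloc.c       # debug info without losing front-end inlining
ptime test_malloc_opt.x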
HTH,
Darryl.

Similar Messages

  • Performance degradation with addition of unicasting option

We have been using the multicast protocol for setting up the data grid between the application nodes, with the VM arguments
*-Dtangosol.coherence.clusteraddress=${Broadcast Address} -Dtangosol.coherence.clusterport=${Broadcast port}*
As a certain node in the application was expected to be in a different subnet and multicasting was not feasible, we opted for well-known addressing, with the following additional VM arguments set up on the server nodes (all in the same subnet)
*-Dtangosol.coherence.machine=${server_name} -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.localport=${server_port}*
and the following on the remote client node, pointing to one of the server nodes:
*-Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.wka.port=${server_port}*
But this deteriorated performance drastically, both in pushing data into the cache and in getting events via a map listener.
From the Coherence logging statements it doesn't seem that multicasting is being used, at least within the server nodes (which are in the same subnet).
Is it feasible to have unicast and multicast coexist? How can we verify whether this is already set up?
Is the performance degradation with well-known addressing a limitation, and expected?

    Hi Mahesh,
From your description it sounds as if you've configured each node with a WKA list including just itself. This would result in N clusters rather than 1. Your client would then be serviced by the resources of just a single cache server rather than an entire cluster. If this is the case you will see that all nodes are identified as member 1. To set up WKA I would suggest using the override file rather than system properties, and placing perhaps 10% of your nodes on that list, as sketched below. Then use this exact same file for all nodes. If I've misinterpreted your configuration please provide additional details.
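A minimal tangosol-coherence-override.xml might look like this (a sketch of the 3.x override format; addresses and ports are placeholders, not your actual values):
<?xml version="1.0"?>
<coherence>
  <cluster-config>
    <unicast-listener>
      <well-known-addresses>
        <!-- list roughly 10% of the cluster nodes; ship the identical file to every node -->
        <socket-address id="1">
          <address>192.168.1.10</address>
          <port>8088</port>
        </socket-address>
        <socket-address id="2">
          <address>192.168.1.11</address>
          <port>8088</port>
        </socket-address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>
</coherence>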
    Thanks,
    Mark
    Oracle Coherence

  • Performance Degradation with EJBs

    I have a small J2EE application that consists of a Session EJB calling 3 Entity EJBs that access the database. It is a simple Order capture application. The 3 Entity beans are called Orders, OrderItems and Inventory.
    A transaction consists of inserting a record into the order table, inserting 5 records into the orderitems table and updating the quantity field in the inventory table for each order item in an order. With this transaction I observe performance degradation as the transactions per second decreases dramatically within 5 minutes of running.
    When I modify the transaction to insert a single record into the orderitems table I do not observe performance degradation. The only difference in this transaction is we go through the for loop 1 time as opposed to 5 times. The code is exactly the same as in the previous case with 5 items per order.
Therefore I believe the problem is a performance degradation on Entity EJBs that get invoked in a loop.
    I am using OC4J 10.1.3.3.
    I am using CMP (Container Managed Persistence) and CMT (Container Managed Transactions). The Entity EJBs were all generated by Oracle JDeveloper.
    EJB version being used is 2.1.

One thing to consider is downloading and using the Oracle AD4J utility to see if it can help you identify any possible bottlenecks, on the application server or the database.
AD4J can be used to monitor/profile/trace applications in real time with no instrumentation required on the application. Just install it into the container and go. It can even trace a request from the app server down into the database and show you what the situation is down there (it needs a DB agent installed to do that).
    Overview:
    http://www.oracle.com/technology/products/oem/pdf/wp_productionappdiagnostics.pdf
    Download:
    http://www.oracle.com/technology/software/products/oem/htdocs/jade.html
    Install/Config Guide:
    http://download.oracle.com/docs/cd/B16240_01/doc/install.102/e11085/toc.htm
    Usage Scenarios:
    http://www.oracle.com/technology/products/oem/pdf/oraclead4j_usagescenarios.pdf

  • Performance degradation with Oracle EJB

Wonder if someone has done any benchmarking of the performance degradation as the number of connections into an EJB-based application increases. We are experiencing rather severe degradation in one such implementation. We would appreciate it if you could share your experience in this regard.

Check whether there is any contention in the MTS (Multi-Threaded Server) configuration. Try increasing the number of MTS servers if the user count is very high, as sketched below.
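For example (8i-era parameter and view names; MTS was later renamed "shared server", so adjust for your release):
SQL> SHOW PARAMETER mts_servers
SQL> SELECT * FROM V$MTS;                  -- servers started and high-water marks
SQL> ALTER SYSTEM SET mts_servers = 10;    -- raise the minimum number of shared servers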

  • Performance degraded with VirtualListView control

    Hi,
We are using the VirtualListView control for retrieving LDAP entries from SunOne Directory Server. We observed that with the VirtualListView control, search performance degraded considerably (almost down by 95%) compared to retrieving the same result without using the paging mechanism.
We have configured the directory server for better performance, and have also added indexes on the attributes we retrieve with the search operation. But performance is still very bad. Has anyone faced this issue before? Are there any settings we can use to improve the performance?
    We do not want to retrieve all records without using paging to avoid any memory issue.
    Thanks,
    Kiran

    "Do i need to some setting adjustments ?"Probably not.
    "The performace degraded drastically."Could you elaborate a bit more please? Could you give an example please?
    /r

  • Performance degradation with 11.0.2 CS5 update

    Hi!
Has anyone else run into a problem with performance reduction/degradation in GPU mode after updating to 11.0.2 of Flash Pro CS5? I've been working on a breakout-style game (unBrix), which I carefully built up to run at very close to 60 fps, and it had been fine - until I updated Flash to 11.0.2 (to get Android publishing to work, and to fix a few other issues).
I have confirmed that downgrading back to the release version of Flash CS5 fixes the performance stuttering I have noticed since updating to 11.0.2.
I'd love to know if anyone else has noticed a similar problem - or, even better, has isolated the cause - or if anyone has tried downgrading back to see if there is an improvement (I test this by installing the CS5 trial on a VMware image, if that helps).
    Thanks,
    Kevin N.

I found the same problem with the new update and already wrote about it in this forum.
You can find my post here: http://forums.adobe.com/message/3214594#3214594
But like you, I don't have any answers yet.
I also tried some benchmark tests and found that the FPS result is the same for the previous and the updated packager.
So I think the problem is only visual: the new packager drops a lot of frames and looks very slow (((

  • LDAP/SSL performance degradation with 1.6.29/1.6.30

    Hi,
    we are running an application within a Tomcat 6.0.35 server on RHEL 5.7/i386 that queries our company's Active Directory using LDAP over SSL. One of the queries involves expanding a large distribution list. Since the upgrade from JDK 1.6.27 to 1.6.29 (or 1.6.30) the performance of this LDAP query has degraded dramatically, from about 8 seconds to more than 300 seconds. This only happens when encrypting the LDAP connection.
    We are not sure how to debug this further. Which information would we need to provide to get to the root of this? I was thinking that perhaps the Tomcat output with the javax.net.debug=ssl,handshake property set for 1.6.27 and 1.6.29/30 would be sufficient?
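For reference, one conventional way to set that property for Tomcat is via bin/setenv.sh (adjust for your install):
CATALINA_OPTS="$CATALINA_OPTS -Djavax.net.debug=ssl,handshake"
export CATALINA_OPTS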
    With Java 1.6.29/30, the basic response/reply between the Tomcat and the AD server looks like:
    TP-Processor11, WRITE: TLSv1 Application Data, length = 32
    TP-Processor11, WRITE: TLSv1 Application Data, length = 160
    Thread-270, READ: TLSv1 Application Data, length = 16368
    Thread-270, READ: TLSv1 Application Data, length = 16368
    Thread-270, READ: TLSv1 Application Data, length = 11920
    TP-Processor11, WRITE: TLSv1 Application Data, length = 32
    TP-Processor11, WRITE: TLSv1 Application Data, length = 160
    Thread-270, READ: TLSv1 Application Data, length = 16368
    Thread-270, READ: TLSv1 Application Data, length = 16368
    Thread-270, READ: TLSv1 Application Data, length = 11920
    When using Java 1.6.27, we see:
    TP-Processor12, WRITE: TLSv1 Application Data, length = 208
    Thread-42, READ: TLSv1 Application Data, length = 16368
    Thread-42, READ: TLSv1 Application Data, length = 16368
    Thread-42, READ: TLSv1 Application Data, length = 5696
    TP-Processor12, WRITE: TLSv1 Application Data, length = 208
    Thread-42, READ: TLSv1 Application Data, length = 16368
    Thread-42, READ: TLSv1 Application Data, length = 16368
    Thread-42, READ: TLSv1 Application Data, length = 5696
    Looking at the 32 bytes long requests (with javax.net.debug=all set), we see:
    Padded plaintext before ENCRYPTION: len = 32
    0000: 30 0C C2 32 83 6E 9F D8 8F 5E E8 47 7A 0B 9A F1 0..2.n...^.Gz...
    0010: 7D 44 78 0B 9E 0A 0A 0A 0A 0A 0A 0A 0A 0A 0A 0A .Dx.............
    TP-Processor1, WRITE: TLSv1 Application Data, length = 32
    Which doesn't make a whole lot of sense to us...
    Any help debugging this further would be most welcome.
    Cheers
    Stefan

    Since you've determined that your problem is related to the use of TLS, your posting is likely to get a quicker response on the Java Secure Socket Extension (JSSE) forum. When you do get a resolution, please post a link to it on this thread to close the loop. Thanks.
    Arshad Noor
    StrongAuth, Inc.

  • Performance degradation with COGNOS and BW

    Hello,
Do you know how to increase performance when using Cognos to query BW? Cognos seems to need a lot of RAM.
    Thanks for your help
    Catherine Bellec


  • Performance degradation with Airport Express

I have an AirPort Extreme connected to my cable modem, supplying wireless internet to the whole house. I get around 20Mbps download speed from the farthest computer in the house. I bought an AirPort Express that I installed near that farthest computer, in order to get iTunes music to my stereo installation. When the Express is on, my download speeds go down to 6Mbps or lower. Is there anything I can do to avoid this performance degradation?
    Thanks in advance,
    Carlos

I have the same problem, except I am using mine as the wireless pass-through for a Mac Pro with no AirPort.

When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
Below I provide complete code to reproduce the behavior I am observing. You can run it in tempdb or any other database; that part is not important. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 5514 ms, elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 828 ms, elapsed time = 392 ms.
    As you can see the query is clearly faster.  Yay for columnstore indexes!.. But let's continue.
5. Run script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 8172 ms, elapsed time = 3119 ms.
    And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
I am not going to paste here the execution plans or the detailed properties for each of the operators. They show up as expected -- columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows is less than during the second run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
Here is the code to reproduce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
--2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
-- Move about 80% of the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable PartitionScheme IdxName index_id partition_number rows
    Main PS_Scheme CDX_Main 1 1 7997443
    Main PS_Scheme CDX_Main 1 2 32002557
    Main PS_Scheme CDX_Main 1 3 0
    Main PS_Scheme CDX_Main 1 4 0
    Txns PS_Scheme PK_Txns 1 1 2000001
    Txns PS_Scheme PK_Txns 1 2 7999999
    Txns PS_Scheme PK_Txns 1 3 0
    Txns PS_Scheme PK_Txns 1 4 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and the test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
As an explanation of the behavior: because an UPDATE statement against a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with the original row groups of the index having almost all of their rows marked as deleted, plus almost the same number of new row groups holding the new data (coming from the update). I suppose scanning the deleted bitmap caused the additional slowness at your end, or something related to that "fragmentation".
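The rebuild itself was simply (index and table names as in your script):
ALTER INDEX PK_Txns ON Txns REBUILD;
ALTER INDEX CDX_Main ON Main REBUILD;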
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • How bad is the performance hit with RTMPT?

In a conversation with engineers at a CDN recently, it was suggested to me that streaming all video over RTMPT only was a viable solution to the barriers posed by firewalls and proxies blocking port 1935. They indicated that they had seen no significant performance degradation with tunneling, and that many of their clients were making the switch from "rollover" type connection models to connecting via RTMPT only.
This ran counter to my notion of the process. I have always thought the packet overhead was significant.
Is it? How bad is the performance hit for streaming live h.264 video?

    Hmmm...  That's quite a hit.
    I'm trying to determine the best strategy for reaching the most people with the least performance hit.  A guy lays out his strategy here:
    http://www.kensodev.com/2010/02/19/rtmp-being-blocked-by-firewalls-flash-media-server/
    He basically says ditch 1935.  Never use it.  Always use 80.  Like this:
    rtmp://your_ip_address:80/app_name
    If that fails, do this:
    rtmpt://your_ip_address:80/app_name
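In ActionScript 3 that rollover would look roughly like this (a minimal sketch using the same placeholder URL, not code from the article):
import flash.net.NetConnection;
import flash.events.NetStatusEvent;

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
nc.connect("rtmp://your_ip_address:80/app_name");   // try plain RTMP on port 80 first

function onStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Failed") {
        // fall back to tunnelled RTMPT only if the direct connection fails
        nc.connect("rtmpt://your_ip_address:80/app_name");
    }
}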
Does that seem valid? Does that first option avoid the performance hit of tunnelling while getting you more connections? If so, it makes me think there is no benefit at all to connecting via 1935.

  • Metadata Service : Performance degradation: unfetched field caused extra roundtrip

I am having a problem with a SharePoint Metadata field. It gives me the error "Performance degradation: unfetched field [Products] caused extra roundtrip", and the page just crashes when I try to open the list. Can anybody help me?

    Hi,
Could you find out what in the [Products] field caused the extra roundtrip? Are you using SharePoint 2010 Server or Foundation? Meanwhile, you may check the logs first.
SharePoint 2010 log files are located in the "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS" folder.
    Here is one article can be referred to.
    SharePoint performance degradation with a large number of unique security scopes in lists
    http://support.microsoft.com/kb/2420771
    Hope that helps.
    Ivan-Liu
    TechNet Community Support

Bad performance of PARI when compiled with GCC 4.2.0 for SPARC

    Hi,
    I've compiled pari (2.3.3) with gcc 4.2.0 and with an old gcc 3.3.2 on a Sun Fire V240 and Solaris 10.
The performance of PARI compiled with gcc 4.2.0 is terrible. I've tried it with '-O3 -fast' and without these flags, with the same result.
First, compiled with gcc 3.3.2 without any special CFLAGS, running make bench:
    * Testing objets for gp-sta..TIME=4 for gp-dyn..TIME=4
    * Testing analyz for gp-sta..TIME=82 for gp-dyn..TIME=80
    * Testing number for gp-sta..TIME=63 for gp-dyn..TIME=62
    * Testing polyser for gp-sta..TIME=19 for gp-dyn..TIME=19
    * Testing linear for gp-sta..TIME=30 for gp-dyn..TIME=29
    * Testing elliptic for gp-sta..TIME=50 for gp-dyn..TIME=51
    * Testing sumiter for gp-sta..TIME=45 for gp-dyn..TIME=47
    * Testing graph for gp-sta..TIME=25 for gp-dyn..TIME=25
    * Testing program for gp-sta..TIME=94 for gp-dyn..TIME=94
    * Testing trans for gp-sta..TIME=228 for gp-dyn..TIME=225
    * Testing nfields for gp-sta..TIME=434 for gp-dyn..TIME=433
    +++ Total bench for gp-sta is 726
    +++ Total bench for gp-dyn is 722
Then, compiled with gcc 4.2.0, again without any special flags, running make bench:
    * Testing objets for gp-sta..TIME=4 for gp-dyn..TIME=4
    * Testing analyz for gp-sta..TIME=82 for gp-dyn..TIME=83
    * Testing number for gp-sta..TIME=62 for gp-dyn..TIME=65
    * Testing polyser for gp-sta..TIME=19 for gp-dyn..TIME=19
    * Testing linear for gp-sta..BUG [1686975] for gp-dyn..BUG [1608146]
    * Testing elliptic for gp-sta..TIME=50 for gp-dyn..TIME=52
    * Testing sumiter for gp-sta..TIME=51 for gp-dyn..TIME=50
    * Testing graph for gp-sta..TIME=28 for gp-dyn..TIME=28
    * Testing program for gp-sta..TIME=98 for gp-dyn..TIME=97
    * Testing trans for gp-sta..BUG [613000] for gp-dyn..BUG [606558]
    * Testing nfields for gp-sta..BUG [3648091] for gp-dyn..BUG [3716170]
    +++ [BUG] Total bench for gp-sta is 3029987
    +++ [BUG] Total bench for gp-dyn is 2958336
Does anybody know this problem, or can anyone tell me what the cause could be?
    Thanks!!
    Rainer W.

    Here are the compiler options I used to compile it with gcc4.
    C compiler is /usr/local/gcc4/gcc/bin/gcc -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer
    Executable linker is /usr/local/gcc4/gcc/bin/gcc -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer
    Dynamic Lib linker is /usr/local/gcc4/gcc/bin/gcc -shared -mimpure-text $(CFLAGS) $(DLCFLAGS) -Wl,-G,-h,$(LIBPARI_SONAME)
If I use the -fast option I get the same result, maybe a little bit worse.
    Here are some compilation lines:
    Making gp in Osolaris-sparcv9
    make[2]: Entering directory `/no_backup/pari/pari-2.3.3/Osolaris-sparcv9'
    File ../src/funclist not changed.
    ../config/genkernel ../src/kernel/sparcv8_micro/asm0-common.h ../src/kernel/sparcv8_micro/asm0.h > parilvl0.h
    cat ../src/kernel/none/tune.h ../src/kernel/none/int.h ../src/kernel/none/level1.h > parilvl1.h
    cat parilvl0.h parilvl1.h > pariinl.h
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -I../src/language -I/usr/local/include -o gp.o ../src/gp/gp.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -I../src/graph -o gp_init.o ../src/gp/gp_init.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -I../src/language -I/usr/local/include -o gp_rl.o ../src/gp/gp_rl.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -DDL_DFLT_NAME=NULL -o highlvl.o ../src/gp/highlvl.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o whatnow.o ../src/gp/whatnow.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -I/usr/openwin/include -o plotX.o ../src/graph/plotX.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o anal.o ../src/language/anal.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o compat.o ../src/language/compat.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o default.o ../src/language/default.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o errmsg.o ../src/language/errmsg.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o es.o ../src/language/es.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o init.o ../src/language/init.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o intnum.o ../src/language/intnum.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o members.o ../src/language/members.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o sumiter.o ../src/language/sumiter.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o aprcl.o ../src/modules/aprcl.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o elldata.o ../src/modules/elldata.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o elliptic.o ../src/modules/elliptic.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o galois.o ../src/modules/galois.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o groupid.o ../src/modules/groupid.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o kummer.o ../src/modules/kummer.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o mpqs.o ../src/modules/mpqs.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o nffactor.o ../src/modules/nffactor.c
    /usr/local/gcc4/gcc/bin/gcc -c -O3 -Wall -fno-strict-aliasing -fomit-frame-pointer -I. -I../src/headers -o part.o ../src/modules/part.c

  • Performance degradation of Weblogic 5.1 sp 6 bundled with Peoplesoft 8.1.2:

Recently we have upgraded from Peoplesoft 7 to Peoplesoft 8.1.2.
Peoplesoft 8.1.2 is bundled with Peopletools (a web-based front end) for the first time, and with Weblogic 5.1 sp6.
There is performance degradation of Weblogic 5.1 sp 6 (on Windows 2000) when the number of users increases to 80. Weblogic becomes 100% CPU bound. Besides, Weblogic won't even shut down completely when we try to shut it down.
Peoplesoft customer support advised upgrading to Weblogic 5.1 sp 9, but sp 9 won't support the 128-bit encryption which the Peoplesoft 8.1.2 application needs. Peoplesoft 8.1.3 will support 128-bit encryption in some 3 months. We have to get along with the above-mentioned configuration (Peoplesoft 8.1.2 with Weblogic 5.1 sp 9) in the meantime.
Has any of you had such an experience? Please let me know if there is a solution or workaround.
Thanks in advance.
Mani

    There shouldn't be any reason that 5.1 SP9 wouldn't support 128 bit
    encryption. If that's the issue, you should post in the security
    newsgroup or contact [email protected]
    -- Rob
    Mani Ayyalas wrote:
Recently we have upgraded from Peoplesoft 7 to Peoplesoft 8.1.2.
Peoplesoft 8.1.2 is bundled with Peopletools (a web-based front end) for the first time, and with Weblogic 5.1 sp6.
There is performance degradation of Weblogic 5.1 sp 6 (on Windows 2000) when the number of users increases to 80. Weblogic becomes 100% CPU bound. Besides, Weblogic won't even shut down completely when we try to shut it down.
Peoplesoft customer support advised upgrading to Weblogic 5.1 sp 9, but sp 9 won't support the 128-bit encryption which the Peoplesoft 8.1.2 application needs. Peoplesoft 8.1.3 will support 128-bit encryption in some 3 months. We have to get along with the above-mentioned configuration (Peoplesoft 8.1.2 with Weblogic 5.1 sp 9) in the meantime.
Has any of you had such an experience? Please let me know if there is a solution or workaround.
Thanks in advance.
Mani

  • Performance degradation of Weblogic 5.1 sp 6 when used with Peoplesoft 8

Recently we have upgraded from Peoplesoft 7 to Peoplesoft 8.
There is performance degradation of Weblogic 5.1 sp 6 (on Windows 2000) when the number of users increases to 2000. Besides, Weblogic won't even shut down completely when we try to shut it down.
Weblogic customer support advised upgrading to sp 8, but sp 8 won't support the 128-bit encryption which Peoplesoft 8 needs.
Has any of you had such an experience? Please let me know if there is a solution or workaround.
Thanks in advance.
Mani

    There shouldn't be any reason that 5.1 SP9 wouldn't support 128 bit
    encryption. If that's the issue, you should post in the security
    newsgroup or contact [email protected]
    -- Rob
    Mani Ayyalas wrote:
Recently we have upgraded from Peoplesoft 7 to Peoplesoft 8.1.2.
Peoplesoft 8.1.2 is bundled with Peopletools (a web-based front end) for the first time, and with Weblogic 5.1 sp6.
There is performance degradation of Weblogic 5.1 sp 6 (on Windows 2000) when the number of users increases to 80. Weblogic becomes 100% CPU bound. Besides, Weblogic won't even shut down completely when we try to shut it down.
Peoplesoft customer support advised upgrading to Weblogic 5.1 sp 9, but sp 9 won't support the 128-bit encryption which the Peoplesoft 8.1.2 application needs. Peoplesoft 8.1.3 will support 128-bit encryption in some 3 months. We have to get along with the above-mentioned configuration (Peoplesoft 8.1.2 with Weblogic 5.1 sp 9) in the meantime.
Has any of you had such an experience? Please let me know if there is a solution or workaround.
Thanks in advance.
Mani

Maybe you are looking for

  • Why are the open windows not listed in any kind of order?

    It's bad enough there's no menu showing all the open tabs, but why does the dropdown menu for open Windows not be in ANY order whatsoever? It would certainly stop having to look up and down the list every time I want to go to a different window if th

  • PO vendor and parked document number

    hi, I am new to FI and i need to fetch the following from the Purchase Order Number. PO vendor Parked Doc Number/ Invoice Number. Can you plz let me know the table and field where i can get these details? regards, Balaji

  • Email attachment with didnt used EXPORTING TO MEMORY

    Hi Expert, For my development I cant using below abap code because I getting short dump when trying to execute the SAP Standard Program "RM06WCD1". When trying other program that is working fine. SUBMIT (report_name)          USING SELECTION-SET p_va

  • How to create an odbc connections to LMS 4.0.1 database

    Does anyone know how to connect to the LMS database (preferably from a machine other than the actual LMS server) to access the database - ani, rmeng etc.  I was able to do this on my LMS 2.2 version, but can't seem to figure out how to do it with thi

  • Requestors end users will not acces a site directly and add list items

    Hi i created a  site for IT services  and this site includes some lists for various services http://Intranet.com/Units/IT services  main site is http://Intranet.com and also workflow are configured  for these lists for create new item and only 3 type