Performance Degradation when server added to Cluster

          Hi,
          I am having some performance issues with my WebLogic cluster.
          I am running two WLS 5.1 SP8 instances on Solaris 7, with four
          Apache 1.3.12 web servers using the Apache/WebLogic proxy.
          Performance seems fine when only one server is running,
          but when both servers are running the application slows to a crawl.
          It is VERY slow when hitting the database (only with
          both servers running).
          I also have the exact same application running in clustered mode
          in my staging environment, and it has NO performance issues when
          both servers are running.
          My thought is that something is configured incorrectly and
          is causing the two servers in the cluster to have problems communicating.
          Any ideas or thoughts would be greatly appreciated.
          Thank you.
          

Hi,
According to your description, my understanding is that you want to run your custom code in the feature event receiver automatically, without re-activating
the feature.
In a feature event receiver, the event needs to be triggered by activating or deactivating the feature, so there is no easy way to run the code directly
without re-activating the feature.
As a workaround, I suggest creating a scheduled task that re-activates the feature using a PowerShell command, so that your custom code runs.
More information:
Activating and Deactivating Features with PowerShell:
http://sharepointgroup.wordpress.com/2012/05/04/activating-and-deactivating-features-with-powershell/
Running a SharePoint PowerShell script from Task Scheduler:
http://get-spscripts.com/2011/01/running-sharepoint-powershell-script.html
Best regards,
Zhengyu Guo
TechNet Community Support

Similar Messages

  • Performance degradation when using foreign keys

    Hi,
    I face drastic performance degradation when I add foreign keys to a table and then perform inserts / updates on that table.
    I have a row store table into which I need to insert around 150,000 records.
    If the table has no foreign key references, this takes at most 5 seconds; but if the same table has references to other tables (in my case there are 3 references), the processing time increases drastically, to 2 minutes.
    Is there any solution / best practice that can help me in gaining performance (processing speed) in this situation?
    Thanks
    S.Srivatsan

    Hi Sri,
    When you perform an insert into any database table that has foreign key relationships, the database checks the corresponding parent tables to see whether the master data is available. If your table has 2 foreign key relationships, this happens twice per insert, so performance degrades. This is one of the reasons why ECC doesn't establish foreign key relationships in the back-end database. The same applies not just to INSERT but also to UPDATE and DELETE.
    Sreehari
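    A common mitigation, if the incoming data is already trusted, is to bulk-load without the constraints and re-create them afterwards, so the parent lookups happen once per constraint rather than once per row. A minimal sketch in generic SQL, with hypothetical table and constraint names (verify the exact ALTER TABLE syntax against your HANA version):
    -- Drop the FK check for the duration of the bulk load (hypothetical names).
    ALTER TABLE child_tab DROP CONSTRAINT fk_child_parent1;
    -- ... run the ~150,000-row INSERT here ...
    -- Re-create the constraint once; all rows are validated in a single pass.
    ALTER TABLE child_tab ADD CONSTRAINT fk_child_parent1
        FOREIGN KEY (parent1_id) REFERENCES parent1 (id);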

  • System performance degrades after server migration ???

    Hi Friends,
    System performance has degraded since we migrated our BW 3.5 server (production) from the UK to Germany.
    Details:
    1. Data comes into the Informatica server; from Informatica it goes to the POSDM server (Point of Sale Data Management); from POSDM we run pipes and the data arrives in BW (delta queue).
    Before the server migration it took 2 hours to load 4 million records.
    After the server migration it takes 4 hours.
    Please help us find the reason for this.
    Note: server RAM, hard disk and speed are the same on both servers.
    Thanks
    Asim

    Note : Server Ram , Hard Disk , Speed is same on both Servers
    Did you change any application or database parameters (are the OS and all patches the same)?
    How did you do the migration? Very little information... Check that the network configuration is the same (for example, check whether the network speed is 1 Gb or 100 Mb).
    Have you tried analysing the ST03N and ST04 transaction codes?
    Regards.

  • Performance degradation when using proxy.pac file with FF ESR 31

    With Bug 923458 many people complained about a performance issue, compared to other browsers, when a proxy.pac file is used.
    The issue initially reported with the bug was resolved for ESR 25 according to the statistics, but the general performance issue remained.
    I had the same issue with ESR 24 and ESR 31.3.
    I was testing with www.bild.de.
    It took about 40 seconds to load the content completely. Without the proxy.pac file it took about 10 seconds.
    I added a few alerts to the pac file in order to get logs in the console for analysis.
    I found the following:
    1. The pac file is executed for every request, no matter whether the host has changed or not.
    In our case the pac file checks IP addresses and host names only.
    It is not necessary to execute the pac file for each and every request to the same remote host.
    So the question is whether we can disable this behaviour via about:config.
    2. The content referenced by www.bild.de seems to be loaded sequentially and with a delay.
    The overall time consumed by the proxy.pac executions was about 4 seconds, compared to the 40 seconds of overall load time.
    So I checked the delay between executions of the pac file and found an overall delay of 40 seconds. I expect that the delay between the calls to the pac file is caused by the retrieval of content from the remote host.
    So why are the requests executed sequentially?
    Hint: given the times needed to execute the pac file and download the content from the remote host, I would expect the logs generated by my alerts to be interleaved (especially since myIpAddress took 1 second). But the log is cleanly ordered by URL. (see attachment)

    Hi guigs2,
    thanks for your response. As we only use myIpAddress once within our pac file, otherwise relying on dnsDomainIs(), == comparisons and shExpMatch(), and as the sum of all pac executions was about 4 seconds compared to 40 seconds of overall load time, I do not think that DNS resolving is our issue.
    I checked the setting of the configuration you mentioned above. It is set to "false", so the client would try to resolve the DNS names. Our admin says that we do not use SOCKS proxies, only HTTP proxies.
    Regarding the sequential load of the content included on www.bild.de from other web sites, I attached a screenshot.
    Please note the red highlights. These show the start time in milliseconds of the pac execution. I added this as a kind of ID which, together with the URL, forms a unique identifier if the log items are interleaved. But they are not; instead they are cleanly ordered by URL (for all 360 pac-file calls).
    Moreover, in the picture you can see the delay between the end of one pac-file execution and the start of the next (the blue timestamp in milliseconds compared to the red timestamp of the next row saying "entered proxy.pac"). The delays sum up to exactly the 40 seconds FF took to load the page completely.
    The fragment shown alone represents a delay of 630 ms between the pac-file executions. If the content were loaded in parallel, there should be no such delay.

  • Quality and performance degradation when a standalone Flash Lite app has been sent to the background

    Hi forum,
    I'm new to Flash Lite 4 on Symbian 3. I want to start working on a new app and am currently deciding whether Flash Lite 4 is a good choice. I have just run into my first problem: when I send my native Flash Lite 4 app to the background (either running from an SWF or started from a C++ stub -- which obviously does the same, but I thought I should try it --), for instance because a call comes in, and it then comes back to the foreground, the screen quality has degraded (as seen in text fields; it looks as if stage quality has gone to MEDIUM), the app runs more choppily and, even worse, things drawn with the drawing API are drawn in a different way! (???) Line caps & miter settings appear to be lost. This is on a Nokia C7-00.
    This probably has to do with the new performance optimizations in mobile Flash, but is there any way to avoid this, or to reset my Flash Lite app to its "normal" state? Is there an event I can listen to?
    Thanks in advance,
    Bas Horsting

    Here are the before and after screenshots.

  • Performance Degradation When Increasing Queues in Same Queue Space

    Hi,
    We are running tuxedo 8.1, 32 bit with patch level 258 in our windows server 2003 based production environment.
    We've had to add quite a few more queues (approximately 10) to our existing queue space. However, we have noticed a degradation in enqueuing: it takes about 10% longer to enqueue and process a message than before these new queues were introduced (even though most enqueues do not go to these new queues).
    How can we improve this performance in Tuxedo?
    Kind Regards,
    Asim

    Hi Asim,
    Without knowing more about your configuration it's a little hard to say what's going on. Do you have more traffic in the queue space now that you have more queues? Are you using XA transactions, and do you now perhaps need to process a higher rate of XA transactions? For the latter, you might try increasing the number of TMS processes configured for the group the TMQUEUE server is running in. You can also run multiple TMQUEUE servers if you aren't already. How many enqueue/dequeue operations are you trying to perform on the queue space per second?
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect

  • How to minimise performance degradation when querying a growing table during processing...?

    Hi everyone
    Let's say you have a PL/SQL routine that is processing data from a source table and for each record, it checks to see whether a matching record exists in a header table (TableA); if one does, it uses it otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
    1. Read a record from the source table.
    2. Check whether a matching header record exists in TableA (using an indexed field).
    3. If a match is found, store TXH_ID (the PK of TableA).
    4. If no match is found, create a new header record in TableA with a new TXH_ID.
    5. Create a detail record in TableB where TXD_TXH_ID (the FK on TableB) = TXH_ID.
    If the header table (TableA) starts getting big (i.e. the process adds a few million records to it), presumably the statistics on TableA will start to go stale and the query in step 2 will therefore become more time-consuming?
    If so, is there any way to rectify this? Would updating the statistics at certain points during the process be effective?
    Would it be any different if a MERGE were used to (conditionally) insert the header records into TableA? (i.e. would the statistics still go stale?)
    The DB is 11gR2 and the OS is Windows Server 2008.
    Thanks
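    For illustration only, a rough sketch (Oracle syntax) of steps 2-4 collapsed into a single MERGE, plus the mid-run statistics refresh asked about above. TableA and TXH_ID come from the question; the source row, match_col and txh_seq are invented placeholders:
    -- Conditional header insert as one statement instead of check-then-insert.
    MERGE INTO TableA a
    USING (SELECT :src_key AS src_key FROM dual) s
    ON (a.match_col = s.src_key)
    WHEN NOT MATCHED THEN
        INSERT (txh_id, match_col)
        VALUES (txh_seq.NEXTVAL, s.src_key);
    -- Refresh optimizer statistics on the growing table at chosen checkpoints.
    BEGIN
        DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                      tabname => 'TABLEA',
                                      cascade => TRUE);
    END;
    /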

    Let's say you have a PL/SQL routine that is processing data from a source table and for each record, it checks to see whether a matching record exists in a header table (TableA); if one does, it uses it otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
    1. Read a record from the source table.
    2. Check whether a matching header record exists in TableA (using an indexed field).
    3. If a match is found, store TXH_ID (the PK of TableA).
    4. If no match is found, create a new header record in TableA with a new TXH_ID.
    5. Create a detail record in TableB where TXD_TXH_ID (the FK on TableB) = TXH_ID.
    If the header table (TableA) starts getting big (i.e. the process adds a few million records to it), presumably the statistics on TableA will start to go stale and the query in step 2 will therefore become more time-consuming?
    What do you mean, 'presumably the stats . .'?
    In item #3 you said that TXH_ID is the primary key. That means only ONE value will EVER be found in the index, so there should be NO degradation when looking up that primary key value.
    The plan you posted shows an index range scan. A range scan is NOT used to look up primary key values, since they must be unique (meaning there is NO RANGE).
    So there should be NO impact from the header table 'getting big'.
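    One way to confirm which access path the step-2 lookup really uses is to display its plan (a sketch assuming Oracle's DBMS_XPLAN; the query and bind name are placeholders):
    EXPLAIN PLAN FOR
        SELECT txh_id FROM TableA WHERE txh_id = :some_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- Expect an INDEX UNIQUE SCAN on the PK index; an INDEX RANGE SCAN
    -- suggests the predicate is not actually on the unique key.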

  • Performance degradation when changing MAXTRANSOPS

    How much will it affect performance to change:
    GROUPTRANSOPS
    MAXTRANSOPS
    from the default values to 1?
    Have any of you experts tested this?

    Only GroupTransOps is a performance tuning option. You typically do not want to decrease this below the default of 1000. It groups smaller transactions into larger ones to optimize 'replicat'. You can try increasing it to a larger value to raise throughput under high-volume loads -- but it could increase latency under low volumes.
    The MaxTransOps parameter is not for performance tuning; do not set it unless you have a reason to. It breaks up large transactions into smaller ones -- for example, if you 'capture' a large tx on a 'source' database and the 'target' doesn't support tx's that large (such as in a heterogeneous replication scenario, e.g., SQL Server to Oracle). Or sometimes you may need to break up large tx's for some other reason. Note that this does break transactionality on the target; i.e., 1 tx on the source may be applied as 2 or more separate tx's on the target. If you set this to 1, then every operation becomes a single transaction (definitely not what you want to do).
    Review the documentation in the reference guide for these two options => http://download.oracle.com/docs/cd/E18101_01/index.htm

  • KNOWN ISSUES 3513544: Performance degradation...

    I see that version 3.1 of the Microsoft Drivers for PHP for SQL Server was published on 12/12/2014 and is available on
    http://www.microsoft.com/en-us/download/details.aspx?id=20098. Thank you Microsoft...
    One thing that bothered me though is the "Known issue" described at the end of the release.txt file included in the SQLSRV31.EXE package:
    KNOWN ISSUES
    "3513544: Performance degradation when using Microsoft Drivers 3.1 for PHP for SQL Server with Windows 7/Windows Server 2008 R2 and previous versions. Clients connecting to supported versions of Microsoft SQL Server may notice decreased performance when
    opening and closing connections in a Windows 7/Windows Server 2008 R2 environment. The recommended course of action is to upgrade to Windows 8/Windows Server 2012 or later."
    Has anybody experienced that "decreased performance when opening and closing connections" problem on Windows Server 2008 R2? If you have, how bad is it?
    And are there any solutions - other than the "recommended course of action" ("upgrade to Windows 8/Windows Server 2012 or later")? In a corporate environment, upgrading an OS isn't always a simple thing that you do in a few minutes. It can
    take weeks or months of planning and testing...

    As I googled, there appear to be few articles mentioning this issue. At worst, MS may just be driving you to upgrade :P
    I can't find a specific document on this; it may be a better question for a dedicated PHP forum.

  • When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
    Below I provide complete code to reproduce the behavior I am observing. You can run it in tempdb or any other database; that is not important. The test query provided at the top of the script is pretty silly, but I have observed the same
    performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (these are obviously based on what I
    observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run the script from #1 to #7. This will create the two test tables, populate them with records (40 mln and 10 mln) and build regular clustered indexes.
    2. Run the test query (at the top of the script). Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, 
    elapsed time = 1389 ms.
    3. Run the script from #8 to #9. This will replace the regular clustered indexes with clustered columnstore indexes.
    4. Run the test query (at the top of the script). Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, 
    elapsed time = 392 ms.
    As you can see, the query is clearly faster. Yay for columnstore indexes!.. But let's continue.
    5. Run the script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
    6. Run the test query (at the top of the script). Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, 
    elapsed time = 3119 ms.
    And now look: the I/O stats are the same as before, but the performance is the slowest of all our tries!
    I am not going to paste the execution plans or the detailed properties of each operator here. They show up as expected -- columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows are less than during the second
    run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to re-produce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% of the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable PartitionScheme IdxName index_id partition_number rows
    Main PS_Scheme CDX_Main 1 1 7997443
    Main PS_Scheme CDX_Main 1 2 32002557
    Main PS_Scheme CDX_Main 1 3 0
    Main PS_Scheme CDX_Main 1 4 0
    Txns PS_Scheme PK_Txns 1 1 2000001
    Txns PS_Scheme PK_Txns 1 2 7999999
    Txns PS_Scheme PK_Txns 1 3 0
    Txns PS_Scheme PK_Txns 1 4 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE statement against a clustered columnstore index is executed as a DELETE plus an INSERT, you had all the original row groups of the index with almost all of their data deleted, and almost the same amount of new row groups with the new data
    (coming from the update). I suppose scanning the deleted bitmap, or something related to that "fragmentation", caused the additional slowness at your end.
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer
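    For reference, a minimal sketch of the rebuild described above, reusing the index and table names from the script (PARTITION = ALL rebuilds every partition; confirm the option against your SQL Server version):
    ALTER INDEX PK_Txns ON Txns REBUILD PARTITION = ALL;
    ALTER INDEX CDX_Main ON Main REBUILD PARTITION = ALL;
    -- Rebuilding compresses the surviving rows into fresh row groups and
    -- drops the deleted-bitmap entries left behind by the UPDATE.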

  • Error in adding new virtual machine server to HA cluster ???

    Hi,
    when I add a new server to my cluster I see this error:
    Check prerequisites to add server (192.168.20.253) to server pool (karkas) succeed
    2009-11-29 12:16:33     Check prerequisites to add server (192.168.20.253) to server pool (karkas) succeed
    2009-11-29 12:16:46     During adding servers ([192.168.20.253]) to server pool (karkas), Cluster setup failed: (OVM-1011 OVM Manager communication with 192.168.20.254 for operation HA Setup for Oracle VM Agent 2.2.0 failed: errcode=50006, errmsg=Do 'clusterm_init_root_sr' on servers ('192.168.20.253') failed. )
    When I edit the server again, the problem appears solved, but the HA feature does not work correctly (when I shut down a server in the pool, its virtual machines go down).
    I tested with an NFS server but it did not work; I am using an iSCSI target on RHEL5 as the iSCSI server.
    I can see HA working correctly, as here:
    2009-11-29 12:25:44     Check prerequisites to add server (192.168.20.253) to server pool (karkas) succeed
    2009-11-29 12:25:50     Check prerequisites to add server (192.168.20.253) to server pool (karkas) succeed
    Select     Server Pool Name     Status     High Availability Status     Servers     Users     Logs
    Select     karkas     Active     Enabled     Total: 2     Total: 1     View Logs
    and there are no errors in /var/log/ovs/*.log.
    But when I add a new server its status shows "error in adding server", and when I edit it again the server is added and in the active state.
    1. I have no OCFS2 partition on my machine; I use the iSCSI initiator.
    2. All servers use the root cluster ID /var/ovs/mount/disk_id.
    3. All permissions are set for everybody (777).
    Any ideas for solving this problem?
    thanks

    hi,
    many thanks for your reply.
    Everything works well:
    [root@OVS2-253-32 ~]# service o2cb online
    Starting O2CB cluster ocfs2: OK
    [root@OVS2-253-32 ~]# service o2cb start
    Starting O2CB cluster ocfs2: OK
    [root@OVS2-253-32 ~]#
    but I still see:
         During adding servers ([192.168.20.253]) to server pool (jojo), Cluster setup failed: (OVM-1011 OVM Manager communication with 192.168.20.254 for operation HA Setup for Oracle VM Agent 2.2.0 failed: errcode=50006, errmsg=Do 'clusterm_init_root_sr' on servers ('192.168.20.253') failed. )
    Select     Server Host/IP     Server Name     Server Type     Status     Server Location     Server Pool Name     Logs
    Select     192.168.20.254     254     Server Pool Master,Utility Server,Virtual Machine Server     Active          jojo     View Logs
    Select     192.168.20.253     253     Virtual Machine Server     Error          jojo     View Logs
    Editing the server afterwards works fine. I followed the documentation step by step and no errors occur during configuration, but I still see this error when adding the new server, and HA does not work correctly (when I power off a virtual guest machine, it only comes back up after about 15 seconds, and live migration does not work correctly); when I shut down one virtual machine server, the guest still waits for that server to power up.
    The ocfs2 kernel module loads correctly.
    cat /var/log/ovs-agent/ovs_root.log
    StackTrace:
    File "/opt/ovs-agent-2.3/OVSXCluster.py", line 115, in clusterm_init_root_sr
    sr.initialize()
    File "/opt/ovs-agent-2.3/_storage/OVSFileSR.py", line 127, in initialize
    self.sp.mount(mp)
    File "/opt/ovs-agent-2.3/_storage/plugins/OVSFileSP.py", line 209, in mount
    fs_spec = self.get_fs_spec()
    File "/opt/ovs-agent-2.3/_storage/plugins/OVSFileSP.py", line 184, in get_fs_spec
    tgt_dev = get_dev_spec(self.fs_uuid, self.fs_spec)
    File "/opt/ovs-agent-2.3/_storage/plugins/OVSFileSP.py", line 82, in get_dev_spec
    raise Exception("No device found: dev_uuid=%s" % dev_uuid)
    Do you have any idea how to solve this problem ???
    Many thanks

  • Very low performance level when the server has 2 active network interfaces

    On a server (Pentium IV 2.0 GHz, 1 GB RAM, Windows 2000 Advanced Server) with two network interfaces, we observe much lower [really critical] performance when the two network interfaces are active. If we disable one network interface, performance rises to a much higher level. Could this be solved by configuration alone? At the beginning [still with both interfaces enabled] we had only one listener, bound to IP 0.0.0.0. To test this situation, we changed that listener's IP to the IP of one of the network interfaces and added another listener with the IP of the second network interface. This test did not show any increase in performance. In conclusion, we understand that it is not a trivial problem, and we would appreciate some help in avoiding this poor performance on the server.
    Thanks in advance, Ricardo Gomes e Bruno Guimarães
    Network Engineer
    Link Data Informática
    www.linkdata.com.br
    +55 61 9219238
    [email protected]

    Running with two NICs will not necessarily give better performance. First, if the two adapters have been configured as a team, then the network switch that the server connects through must support teaming of ports. Second, if the adapters are not teamed but go through different networks, then make sure the routing priority has been set.

  • Very slow JClient performance when running with a remote server

    We have performance problems when running a JClient application if the application server is on a different machine in the same 100 Mbit network. In our application we open 6 panels, with about 15 TextFieldBindings each, on a tabbed pane. Each panel has its own view object on the server. It takes the panel almost two minutes to start up. Our own code seems to perform reasonably, but between the last line of code and the actual visibility of the panel there is a long period of low-intensity network traffic between the client and the server machine, while both machines have low CPU usage. We tried setting the sync mode of the ApplicationModule to SYNC_LAZY and SYNC_IMMEDIATE, but this does not seem to make any difference.
    It seems as if the server starts throwing a lot of events after our code is executed, which are caught by the BC4J control-binding listeners. Performance is a lot better if we have the server and the client on the same machine and the database on a different one.
    This kind of performance is not acceptable for this application. Are we doing something that should not be done with BC4J, or are we missing something?

    You must be hitting a performance issue regarding the download of all property metadata for setting labels etc. on the UI (in the case of remote-tier deployment).
    This issue has been resolved for our next release of JDeveloper. Basically, a new API has been added that allows 3-tier apps to "download" the set of "used" VO definitions, attribute definitions etc. on the client, so that the UI comes up quickly.
    Also, the application/UI/binding load code generation has been modified to allow for "lazy" loading of controls, lazy binding etc., much like what's done in the JClient control-bindings sample on OTN.
    For 9.0.2, you may shorten the "load" time by loading only the UI that's first displayed and pre-loading the ViewObject definition. However, it will still be slower than what the above-mentioned method would do in one round trip.

  • Performance degradation of Weblogic 5.1 sp 6 when used with Peoplesoft 8

    Recently we upgraded from PeopleSoft 7 to PeopleSoft 8.
    There is performance degradation in WebLogic 5.1 SP6 (on Windows 2000) when
    the number of users increases to 2000. Besides, WebLogic won't even shut down completely
    when we try to shut it down.
    WebLogic customer support advised upgrading to SP8, but SP8 won't support the 128-bit
    encryption which PeopleSoft 8 needs.
    Have any of you had such an experience? Please let me know if there is a solution
    or workaround.
    Thanks in advance.
    Mani

    There shouldn't be any reason that 5.1 SP9 wouldn't support 128-bit
    encryption. If that's the issue, you should post in the security
    newsgroup or contact [email protected]
    -- Rob
    Mani Ayyalas wrote:
    Recently we upgraded from PeopleSoft 7 to PeopleSoft 8.1.2.
    PeopleSoft 8.1.2 bundles PeopleTools (a web-based front end) for the first
    time, together with WebLogic 5.1 SP6.
    There is performance degradation in WebLogic 5.1 SP6 (on Windows 2000) when
    the number of users increases to 80; WebLogic becomes 100% CPU-bound. Besides,
    WebLogic won't even shut down completely when we try to shut it down.
    PeopleSoft customer support advised upgrading to WebLogic 5.1 SP9, but SP9 won't
    support the 128-bit encryption which the PeopleSoft 8.1.2 application needs. PeopleSoft
    8.1.3 will support 128-bit encryption in some 3 months. We have to get
    along with the above-mentioned configuration (PeopleSoft 8.1.2 with WebLogic 5.1
    SP9) in the meantime.
    Have any of you had such an experience? Please let me know if there is a solution
    or workaround.
    Thanks in advance.
    Mani

  • My internet continually disconnects with "Could not find a PPPoE server" after adding an AirPort Express

    My internet continually disconnects. When I added my AirPort Express, I had to call Verizon (my provider) and have them "bridge" my modem. Ever since, my connection constantly disconnects and puts up the message "Could not find a PPPoE server".
    Any ideas? I probably have something wrong in my computer settings.

    You should restore your modem to its router configuration and set the AirPort Express to 'Bridge Mode'. A full explanation and the method for doing this are here:
    http://www.wilmut.webspace.virginmedia.com/notes/airport.html
