Performance degradation when changing MAXTRANSOPS

How much will it affect performance to change:
GROUPTRANSOPS
MAXTRANSOPS
from the default values to 1?
Have any of you experts tested this?

Only GroupTransOps is a performance tuning option. You typically do not want to decrease it below the default of 1000. It groups smaller transactions into larger ones to optimize 'replicat'. You can try increasing it to a larger value for more throughput under high-volume loads -- but it could increase latency under low volumes.
The MaxTransOps parameter is not for performance tuning; do not set it unless you have a reason to. It breaks up large transactions into smaller ones -- for example, if you 'capture' a large tx on a 'source' database and the 'target' doesn't support tx's that large (such as in a heterogeneous replication scenario, e.g., SQL Server to Oracle). Or sometimes you may need to break up large tx's for some other reason. Note that this does break transactionality on the target; i.e., 1 tx on the source may be applied as 2 or more separate tx's on the target. If you set it to 1, then every operation becomes its own transaction (definitely not what you want to do).
Review the documentation in the reference guide for these two options => http://download.oracle.com/docs/cd/E18101_01/index.htm
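For concreteness, here is a minimal Replicat parameter file sketch (the group name, login, and schema names are all hypothetical) showing where these two parameters go; only GROUPTRANSOPS is tuned, and MAXTRANSOPS is left commented out since you normally should not set it:

REPLICAT rtgt1
USERID ogguser, PASSWORD oggpw
ASSUMETARGETDEFS
-- Raise above the default of 1000 to batch more source ops per target tx:
GROUPTRANSOPS 2000
-- MAXTRANSOPS 10000   (leave unset unless the target cannot absorb very large tx's)
MAP src.*, TARGET tgt.*;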

Similar Messages

  • Performance degradation when using foreign keys

    Hi,
    I'm seeing drastic performance degradation when I add foreign keys to a table and then perform inserts/updates on that table.
    I have a row store table into which I need to insert around 150,000 records.
    If the table has no foreign key references, the load takes at most 5 seconds; but when the same table references other tables (in my case there are 3 references), processing slows drastically to 2 minutes.
    Is there any solution or best practice that can help me regain performance (processing speed) in this situation?
    Thanks
    S.Srivatsan

    Hi Sri,
    When you insert into a database table that has foreign key relationships, the database checks the corresponding parent tables to verify that the master data exists. If your table has 2 foreign key relationships, this happens twice per insert, so performance degrades. This is one of the reasons ECC doesn't establish foreign key relationships in the back-end database. The same applies not just to INSERT but to UPDATE and DELETE as well.
    Sreehari
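    To illustrate the mechanics in generic SQL (the table and column names here are made up): every row inserted into the child table below costs one extra parent-index lookup per foreign key.
    CREATE TABLE parent_a (id INTEGER PRIMARY KEY);
    CREATE TABLE parent_b (id INTEGER PRIMARY KEY);
    CREATE TABLE child (
        id   INTEGER PRIMARY KEY,
        a_id INTEGER REFERENCES parent_a(id), -- validated against parent_a on every insert
        b_id INTEGER REFERENCES parent_b(id)  -- validated against parent_b on every insert
    );
    -- One row inserted, two hidden parent lookups:
    INSERT INTO child (id, a_id, b_id) VALUES (1, 10, 20);
    Where the database supports it, a common bulk-load workaround is to disable or drop the constraints, load, and re-validate them afterwards.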

  • [svn] 3363: Fix performance problem when changing multiple DisplayObject-dependent properties.

    Revision: 3363
    Author: [email protected]
    Date: 2008-09-25 11:58:56 -0700 (Thu, 25 Sep 2008)
    Log Message:
    Fix performance problem when changing multiple DisplayObject-dependent properties. The call to assignDisplayObjects() is now batched up into the next commitProperties() call.
    Bugs: SDK-17033
    Reviewer: Deepa
    Ticket Links:
    http://bugs.adobe.com/jira/browse/SDK-17033
    Modified Paths:
    flex/sdk/trunk/frameworks/projects/flex4/src/flex/core/Group.as

    hello sir,
    I need your help.
    I installed a fresh copy of Windows 7 from CD and then installed all the software.
    A day later the customer complained that the CD-ROM drive does not read any CD. I checked it myself: when I insert a CD it is not read, and when I double-click the CD-ROM icon the tray just ejects. What can I do about this? Please reply to my email address.
    [text removed for privacy]
    VIMAL

  • Performance Degradation when server added to Cluster

    Hi,
    I am having some performance issues with my WebLogic cluster.
    I am running 2 WLS 5.1 sp8 instances on Solaris 7, and 4 Apache 1.3.12 web servers using the Apache/WebLogic proxy.
    Performance seems fine when only one server is running, but when both servers are running the application slows to a crawl. It is VERY slow when hitting the database (only with both servers running).
    I also have the exact same application running in clustered mode in my staging environment, and it has NO performance issues when both servers are running.
    My thought is that something is configured incorrectly and is causing the 2 servers in the cluster to have problems communicating.
    Any ideas or thoughts would be greatly appreciated.
    Thank you.

    Hi,
    According to your description, my understanding is that you want to run your custom code in the feature event receiver automatically, without re-activating the feature.
    In the feature event receiver, the event needs to be triggered by activating or deactivating the feature, so there is no easy way to run the code directly without re-activating the feature.
    As a workaround, I suggest you create a scheduled task that re-activates the feature using a PowerShell command, so that your custom code runs.
    More information:
    Activating and Deactivating Features with PowerShell:
    http://sharepointgroup.wordpress.com/2012/05/04/activating-and-deactivating-features-with-powershell/
    Running a SharePoint PowerShell script from Task Scheduler:
    http://get-spscripts.com/2011/01/running-sharepoint-powershell-script.html
    Best regards,
    Zhengyu Guo
    TechNet Community Support

  • Performance degradation when using proxy.pac file with FF ESR 31

    With Bug 923458 many people complained about a performance issue compared to other browsers when a proxy.pac file is used.
    The issue initially reported with the bug was resolved for ESR25 according to the statistics, but the general performance issue remained.
    I had the same issue with ESR24 and ESR31.3.
    I was testing with www.bild.de.
    It took about 40 seconds to load the content completely. Without the proxy.pac file it took about 10 seconds.
    I added a few alerts to the pac file in order to get logs in the console for some analysis.
    I found the following:
    1. The pac file is executed for every request, no matter whether the host changed or not.
    Our pac file checks only IP addresses and host names, so it is not necessary to execute the pac file for each and every request to the same remote host.
    So the question is whether we are able to disable this behaviour via about:config.
    2. The content referenced by www.bild.de seems to be loaded sequentially and with a delay.
    The overall time consumed by the proxy.pac executions was about 4 seconds, compared to the 40 seconds of overall load time.
    So I checked the delay between executions of the pac file and found an overall delay of 40 seconds. I expect that the delay between the calls to the pac file is caused by the retrieval of content from the remote host.
    So why are the requests executed sequentially?
    Hint: Given the times needed for executing the pac file and downloading content from the remote host, I would expect the logs generated by my alerts to be interleaved (especially if myIpAddress took 1 second). But the log is cleanly ordered by URL. (see attachment)

    Hi guigs2,
    thanks for your response. As we only use myIpAddress once within our pac file, otherwise rely only on dnsDomainIs(), == comparisons, and shExpMatch(), and the sum of all pac executions was about 4 seconds compared to 40 seconds of overall load time, I do not think DNS resolving is our issue.
    I checked the setting of the configuration you mentioned above. It is set to "false", so the client will try to resolve the DNS names. Our admin told me that we do not use SOCKS proxies, only HTTP proxies.
    Regarding the sequential load of the content included on www.bild.de from other web sites, I attached a screenshot.
    Please note the red highlights. These show the start time in milliseconds of the pac execution. I added this as a kind of ID which, together with the URL, forms a unique identifier if the log items are interleaved. But they are not; instead they are cleanly ordered by URL (for all 360 pac-file calls).
    Moreover, in the picture you can see the delay between the end of the last pac-file execution and the next one (blue timestamp in milliseconds, compared to the red timestamp of the next row saying "entered proxy.pac"). The delays sum up to exactly the 40 seconds FF took to load the page completely.
    The fragment shown alone represents a delay of 630 ms between pac-file executions. If the contents were loaded in parallel, there should be no such delay.

  • Quality and performance degradation when standalone Flash Lite has been sent to back

    Hi forum,
    I'm new to Flash Lite 4 on Symbian^3. I want to start working on a new app and am currently deciding whether Flash Lite 4 is a good choice. I have just run into my first problem: when I send my native Flash Lite 4 app (either running from an swf or started from a C++ stub -- which obviously does the same, but I thought I should try it) to the background (because, for instance, a call comes in), and it comes back, the screen quality has degraded (as seen in textfields; it looks as if stage quality has gone to MEDIUM), the app runs more choppily, and, even worse, things drawn with the drawing API are drawn differently! (???) Line caps & miter settings appear to be lost. This is on a Nokia C7-00.
    This probably has to do with the new performance optimizations in mobile Flash, but is there any way to avoid this, or to reset my Flash Lite app to its "normal" state? Is there an event I can listen to?
    Thanks in advance,
    Bas Horsting

    Here are the before and after screenshots.

  • Performance Degradation When Increasing Queues in Same Queue Space

    Hi,
    We are running Tuxedo 8.1, 32-bit, with patch level 258 in our Windows Server 2003 based production environment.
    We've had to add quite a few more queues (approximately 10) to our existing queue space. What we have noticed is a degradation in enqueuing: it takes about 10% longer to enqueue and process a message than before these new queues were introduced (despite the majority of these new queues receiving no enqueues).
    How can we improve this performance in Tuxedo?
    Kind Regards,
    Asim

    Hi Asim,
    Without knowing more about your configuration it's a little hard to say what's going on. Do you have more traffic in the queue space now that you have more queues? Are you using XA transactions, and do you now maybe need to process a higher rate of XA transactions? For the latter, you might try increasing the number of TMS processes configured for the group the TMQUEUE server is running in. You can also run multiple TMQUEUE servers if you aren't already. How many enqueue/dequeue operations are you trying to perform on the queue space per second?
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect
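    To make those two suggestions concrete, here is a rough UBBCONFIG fragment (the machine, group, device path, and queue space names are all hypothetical) that raises the TMS count for the /Q group and runs a second TMQUEUE instance:
    *GROUPS
    QGRP LMID=SITE1 GRPNO=10
        TMSNAME=TMS_QM TMSCOUNT=3
        OPENINFO="TUXEDO/QM:C:\tuxapp\qdevice:MYQSPACE"
    *SERVERS
    TMQUEUE SRVGRP=QGRP SRVID=100 RESTART=Y
        CLOPT="-s MYQSPACE:TMQUEUE"
    TMQUEUE SRVGRP=QGRP SRVID=101 RESTART=Y
        CLOPT="-s MYQSPACE:TMQUEUE"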

  • How to minimise performance degradation when querying a growing table during processing...?

    Hi everyone
    Let's say you have a PL/SQL routine that is processing data from a source table and for each record, it checks to see whether a matching record exists in a header table (TableA); if one does, it uses it otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
    1. Read a record from the source table
    2. Check whether a matching header record exists in TableA (using an indexed field)
    3. If a match is found, store TXH_ID (PK in TableA)
    4. If no match is found, create a new header record in TableA with a new TXH_ID
    5. Create a detail record in TableB where TXD_TXH_ID (FK on TableB) = TXH_ID
    If the header table (Table A) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale and therefore the query in step 2 will become more time consuming?
    If so, is there any way to rectify this? Would updating the stats at certain points in the process be effective?
    Would it be any different if a MERGE was used to (conditionally) insert the header records into TableA? (i.e. would the stats still get stale?)
    DB is 11GR2 and OS is Windows Server 2008
    Thanks
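    For reference, refreshing TableA's statistics mid-process, as asked about above, would look something like this (assuming the table is literally named TABLEA and owned by the current schema):
    BEGIN
        DBMS_STATS.GATHER_TABLE_STATS(
            ownname => USER,
            tabname => 'TABLEA',
            cascade => TRUE);  -- refresh the index statistics as well
    END;
    /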

    Let's say you have a PL/SQL routine that is processing data from a source table and for each record, it checks to see whether a matching record exists in a header table (TableA); if one does, it uses it otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
    1. Read a record from the source table
    2. Check whether a matching header record exists in TableA (using an indexed field)
    3. If a match is found, store TXH_ID (PK in TableA)
    4. If no match is found, create a new header record in TableA with a new TXH_ID
    5. Create a detail record in TableB where TXD_TXH_ID (FK on TableB) = TXH_ID
    If the header table (Table A) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale and therefore the query in step 2 will become more time consuming?
    What do you mean, 'presumably the stats...'?
    In item #3 you said that TXH_ID is the primary key. That means only ONE value will EVER be found in the index so there should be NO degradation for looking up that primary key value.
    The plan you posted shows an index range scan. A range scan is NOT used to lookup primary key values since they must be unique (meaning there is NO RANGE).
    So there should be NO impact due to the header table 'getting big'.
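    As for the MERGE question above, a minimal Oracle sketch of the conditional header insert looks like this (the column, sequence, and bind names are hypothetical); using MERGE changes how the insert is phrased, not how the statistics age:
    MERGE INTO tablea a
    USING (SELECT :src_key AS src_key FROM dual) s
    ON (a.match_key = s.src_key)
    WHEN NOT MATCHED THEN
        INSERT (txh_id, match_key)
        VALUES (tablea_seq.NEXTVAL, s.src_key);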

  • KNOWN ISSUES 3513544: Performance degradation...

    I see that version 3.1 of the Microsoft Drivers for PHP for SQL Server was published on 12/12/2014 and is available on
    http://www.microsoft.com/en-us/download/details.aspx?id=20098. Thank you Microsoft...
    One thing that bothered me though is the "Known issue" described at the end of the release.txt file included in the SQLSRV31.EXE package:
    KNOWN ISSUES
    "3513544: Performance degradation when using Microsoft Drivers 3.1 for PHP for SQL Server with Windows 7/Windows Server 2008 R2 and previous versions. Clients connecting to supported versions of Microsoft SQL Server may notice decreased performance when
    opening and closing connections in a Windows 7/Windows Server 2008 R2 environment. The recommended course of action is to upgrade to Windows 8/Windows Server 2012 or later."
    Has anybody experienced that "decreased performance when opening and closing connections" problem on Windows Server 2008 R2? If you have, how bad is it?
    And are there any solutions - other than the "recommended course of action" ("upgrade to Windows 8/Windows Server 2012 or later")? In a corporate environment, upgrading an OS isn't always a simple thing you can do in a few minutes. It can take weeks or months of planning and testing...

    From what I could find by googling, apparently few articles mention this issue. At worst, MS may just be driving you to upgrade :P
    I can't find a specific document on this; it may be a better question for a dedicated PHP forum.

  • ASR1002 throughput degradation when wccp redirect-list is changed

    We have two ASR 1002s going to 2 different WAN service providers, and two 7371 WAEs load-balanced by mask assignment. When we change the ACL (adding or removing lines) in our WCCP redirect-list, throughput on the interfaces where the WCCP service groups are applied degrades to almost no traffic passing, until we completely remove the WCCP service group from the global configuration and then reapply it. Then traffic throughput on the interface returns to normal.
    Our ACL defined in the redirect list specifies our specific networks on our WAN that have WAE's and need the redirection. All other networks are denied implicitly. We need to regularly change this ACL, and this service interruption is a major issue. This was not an issue before moving to the ASR platform from 7206's.
    At TAC's request we have upgraded our IOS version to 15.1(3)S4 and that did not make any difference. Does anyone know why this occurs and if there is a way to work around this other than removing wccp configuration and adding back, every time the ACL needs to be modified?
    As a side note to this... We have recently added riverbed appliances, and created separate service groups with separate redirect-lists. The exact same behavior occurs on the ASR 1002 when the ACL for the riverbed's redirect list is altered.

    Thank you very much for sharing that information. It is great to hear verification that the mask assignment change did resolve your problem. That is the latest resolution TAC has recommended, but we have to restart the WCCP service on all redundant edge routers to implement it, so planning the outage window is taking some time. We've been told by our Cisco SE that TAC will set this up in a lab and test it for us. We're hoping to get verification that this actually resolves the problem before we take the outage.
    If you could, can you tell me whether this resolved the issue 100%, or do you still have any performance issues when making a change to the WCCP ACL going to your Blue Coat equipment? We may also need to implement this in our redirects to Blue Coat from our Nexus. Do you happen to have a link to how to make this change on the Blue Coat side? Thanks again!

  • Performance degradation of Weblogic 5.1 sp 6 when used with Peoplesoft 8

    Recently we upgraded from PeopleSoft 7 to PeopleSoft 8.
    There is performance degradation in WebLogic 5.1 sp6 (on Windows 2000) when the number of users increases to 2000. Besides that, WebLogic won't even shut down completely when we try to shut it down.
    WebLogic customer support advised upgrading to sp8, but sp8 won't support the 128-bit encryption that PeopleSoft 8 needs.
    Have any of you had such an experience? Please let me know if there is a solution or workaround.
    Thanks in advance.
    Mani

    There shouldn't be any reason that 5.1 SP9 wouldn't support 128 bit
    encryption. If that's the issue, you should post in the security
    newsgroup or contact [email protected]
    -- Rob
    Mani Ayyalas wrote:
    Recently we upgraded from PeopleSoft 7 to PeopleSoft 8.1.2.
    PeopleSoft 8.1.2 is bundled with PeopleTools (web-based front end) for the first time, and with WebLogic 5.1 sp6.
    There is performance degradation in WebLogic 5.1 sp6 (on Windows 2000) when the number of users increases to 80. WebLogic becomes 100% CPU bound. Besides that, WebLogic won't even shut down completely when we try to shut it down.
    PeopleSoft customer support advised upgrading to WebLogic 5.1 sp9, but sp9 won't support the 128-bit encryption that the PeopleSoft 8.1.2 application needs. PeopleSoft 8.1.3 will support 128-bit encryption in some 3 months. We have to get along with the above-mentioned configuration (PeopleSoft 8.1.2 with WebLogic 5.1 sp9) in the meantime.
    Have any of you had such an experience? Please let me know if there is a solution or workaround.
    Thanks in advance.
    Mani

  • When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
    Below I provide complete code to reproduce the behavior I am observing. You could run it in tempdb or any other database; it is not important. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, 
    elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, 
    elapsed time = 392 ms.
    As you can see the query is clearly faster.  Yay for columnstore indexes!.. But let's continue.
    5. Run script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, 
    elapsed time = 3119 ms.
    And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
    I am not going to paste the execution plans or the detailed properties of each operator here. They show up as expected -- columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows is lower than during the second run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to re-produce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable PartitionScheme IdxName index_id partition_number rows
    Main PS_Scheme CDX_Main 1 1 7997443
    Main PS_Scheme CDX_Main 1 2 32002557
    Main PS_Scheme CDX_Main 1 3 0
    Main PS_Scheme CDX_Main 1 4 0
    Txns PS_Scheme PK_Txns 1 1 2000001
    Txns PS_Scheme PK_Txns 1 2 7999999
    Txns PS_Scheme PK_Txns 1 3 0
    Txns PS_Scheme PK_Txns 1 4 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE statement against a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with all the original row groups having almost all of their data flagged as deleted, plus almost the same number of new row groups holding the new data (coming from the update). I suspect that scanning the deleted bitmap, or something related to that "fragmentation", caused the additional slowness on your end.
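    A minimal sketch of the rebuild step described here, using the index and table names from the script above; sys.column_store_row_groups makes the deleted-bitmap overhead visible before and after:
    ALTER INDEX CDX_Main ON Main REBUILD;
    ALTER INDEX PK_Txns ON Txns REBUILD;
    -- Check deleted_rows per row group before/after the rebuild:
    SELECT OBJECT_NAME(object_id) AS table_name, partition_number,
        row_group_id, state_description, total_rows, deleted_rows
    FROM sys.column_store_row_groups
    WHERE object_id IN (OBJECT_ID('Main'), OBJECT_ID('Txns'));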
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • Performance degradation after upgrading to yosemite

    I'm experiencing performance degradation on a MacBook Pro 15" after upgrading to Yosemite, including:
    a number of spinning wheels,
    instances of a dark screen,
    overheating,
    diminished battery life.
    Is Yosemite the cause of these and other issues?

    I can't believe Apple have made this software available to the masses when it's clearly not ready. Since "upgrading", my Mac mini has been pretty useless. Programs take substantially longer to open than on Mavericks, and performance is poor at best. Opening files in Photoshop, for example, is painfully slow, and I get the colour wheel almost all of the time - something that definitely didn't happen with Mavericks.
    Aside from this, I've experienced a number of frustrating bugs today. When I open a new program on my left screen the right screen is changed to a fresh desktop. Why? When I turn my Mac on I get the login screen on the left monitor (as it always used to) but then the primary desktop is set to the right and I've been unable to keep the correct setting so far (it forgets that I've changed it following a reboot). The background of the top bar keeps disappearing so all of the icons, time, etc. just sit on top of the desktop background.
    I hope Apple can release a fix, and quickly.

  • Repair Tool Performance Problem when Zoom on Image

    Aperture Friends,
    I experience a huge performance hit when editing images with the Repair/Clone tool, but only when "zoomed" into an image. The curious part is that repairing while "zoomed in" works fine for about 3 images; then, from the 4th image on, the software grinds almost to a complete stop for several seconds after selecting a source point and trying to use the repair. The mouse cursor even changes to a spinning rainbow DVD look during this time.
    These are 15-20MB RAW files from a Canon 5D, and we are using the most recent version of Aperture on a nearly brand-new iMac 27", of course with Snow Leopard and all patches applied. The library is less than 20GB and we are not using referenced files as of yet.
    I realize that these may be large files, but it is curious that it works with blazing speed to start with, then degrades.
    Is this a common problem, or an identified issue? Might you more experienced folks know a resolution?

    hi!
    i'm experiencing a similar problem on my imac i5 with the difference it is slow all the time. even after a reboot it's quite impossible to use the clone/repair tool. my library is about 40GB with referenced files (all on external HD fw800). i work with a 5D Mk II (which is still not supported for tethered shooting, by the way!).
    (sorry if i nicked your thread )
    best regards,
    günther

  • Severe performance degradation after update/clean install of 10.10 Yosemite

    Hello, Apple Community.
    I am submitting this general complaint on behalf of myself and a friend. We both have fairly new MacBook Pros (a 15" Early 2014 with a 2.5 GHz Core i7 and a Late 2013 with a 2.3 GHz Core i7, both with a GeForce 750M as the dedicated GPU).
    I decided to upgrade my (flawlessly running) Mavericks while my friend performed the clean install.
    Now we both experience serious performance degradation (in comparison to 10.9), visible especially in the very low framerate of most system UI animations (the worst are showing/hiding Mission Control, switching Spaces, and displaying/hiding folder stacks - but simple ones like magnifying the Dock tend to stutter as well).
    Besides that, there are many fades (like the one between the login screen and the desktop, or the one between the main System Preferences overview and particular settings panes) that often get distorted - it seems to be an issue connected with Retina scaling (I am using the 1920x1200 mode), since both the login screen and settings show some parts of the UI improperly magnified or zoomed before they suddenly "skip" to their correct proportions.
    Summing up - I am really unhappy with my initial 10.10 experiences, especially since I used to consider my hardware rather hi-end - the worst thing being that Yosemite runs on my friend's 2012 Mac Mini flawlessly.
    Do any of you guys experience the same issues with 10.10 and your rMBPs? Maybe just those of us with dedicated graphics?

    Hello Richard:
    As a side note, there is no such thing as a "clean install." Apple changed that terminology to "erase and install" several years ago.
    I am not sure if what you describe would work. Unless the new owner does not have broadband service, there is no reason why they cannot install your version of OS X 10.5 and then run Software Update when they get the system. I vaguely recall that when one does an erase and install, the system prompts for passwords, etc., at that time.
    I think I would erase the HD and then let the new owner go from there.
    Barry
