Problem: Queries on tables with multiple domain indexes

I recently came across an issue in which the development team added a second context index to a table; that is, there was a context index on "title", and a second one was created on "summary".
Text queries on the new index do not always return results -- in one example, I get 77 rows querying the title, and 5 querying the summary (which, for the purposes of this explanation, always includes the title).
e.g. contains (title,'matha',1)>0 --> 77 rows
contains (summary,'martha',1)>0 --> 5 rows
Inspection of the tokenlists shows that the correct pkeys are associated with the expected tokens, and the index is up to date.
So -- when I was asked to take a look at this, I vaguely recalled some issues with this functionality -- but I can't find any reference to this. The closest thing I could find was reports of problems when multiple contains clauses exist within the same query -- and that is not what is happening here.
What I would like to do is recommend moving to a single index with a multi-column datastore (see the sketch below), and some reference to a bug # would make the process much faster.
Does anyone recall this issue?
DB Version 9.2.0.7
Sun Solaris 10
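For reference, a minimal sketch of the multi-column datastore approach (the preference, index, and table names here are illustrative, not taken from the actual system):
BEGIN
  ctx_ddl.create_preference('doc_mcds', 'MULTI_COLUMN_DATASTORE');
  ctx_ddl.set_attribute('doc_mcds', 'COLUMNS', 'title, summary');
END;
/
-- Index a single column; the datastore feeds the text of both columns to the indexer.
CREATE INDEX doc_text_idx ON docs(summary)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('datastore doc_mcds');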

e.g. contains (title,'matha',1)>0 --> 77 rows
contains (summary,'martha',1)>0 --> 5 rows

You have searched for 'matha' without an 'r' in your first example and 'martha' with an 'r' in your second example. I don't know if this was just a typo in your post or if it could be the problem. If it is not the problem, can you post a reproducible test case, or at least a copy and paste of an actual run of the two queries using the two contains clauses, plus the results of count(token_text) for the search value from the dr$...$i domain index tables? I recall problems with two contains clauses in one query using an 'or' condition, but not with just one contains query where two context indexes exist.
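If it helps, a sketch of the token check being asked for (the dr$...$i table names below are placeholders for your actual index names; with the default lexer, tokens are stored in uppercase):
SELECT COUNT(token_text) FROM dr$title_idx$i   WHERE token_text = 'MARTHA';
SELECT COUNT(token_text) FROM dr$summary_idx$i WHERE token_text = 'MARTHA';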

Similar Messages

  • Select max date from a table with multiple records

    I need help writing an SQL to select max date from a table with multiple records.
    Here's the scenario. There are multiple SA_IDs repeated with various EFFDT (dates). I want to retrieve the most recent effective date so that the SA_ID is unique. Looks simple, but I can't figure this out. Please help.
    SA_ID CHAR_TYPE_CD EFFDT CHAR_VAL
    0000651005 BASE 15-AUG-07 YES
    0000651005 BASE 13-NOV-09 NO
    0010973671 BASE 20-MAR-08 YES
    0010973671 BASE 18-JUN-10 NO

    Hi,
    Welcome to the forum!
    Whenever you have a question, post a little sample data in a form that people can use to re-create the problem and test their ideas.
    For example:
    CREATE TABLE table_x
    (     sa_id        NUMBER (10)
    ,     char_type    VARCHAR2 (10)
    ,     effdt        DATE
    ,     char_val     VARCHAR2 (10)
    );
    INSERT INTO table_x (sa_id, char_type, effdt, char_val)
         VALUES (0000651005, 'BASE', TO_DATE ('15-AUG-2007', 'DD-MON-YYYY'), 'YES');
    INSERT INTO table_x (sa_id, char_type, effdt, char_val)
         VALUES (0000651005, 'BASE', TO_DATE ('13-NOV-2009', 'DD-MON-YYYY'), 'NO');
    INSERT INTO table_x (sa_id, char_type, effdt, char_val)
         VALUES (0010973671, 'BASE', TO_DATE ('20-MAR-2008', 'DD-MON-YYYY'), 'YES');
    INSERT INTO table_x (sa_id, char_type, effdt, char_val)
         VALUES (0010973671, 'BASE', TO_DATE ('18-JUN-2010', 'DD-MON-YYYY'), 'NO');
    COMMIT;

    Also, post the results that you want from that data. I'm not certain, but I think you want these results:
         SA_ID LAST_EFFD
        651005 13-NOV-09
      10973671 18-JUN-10

    That is, the latest effdt for each distinct sa_id.
    Here's how to get those results:
    SELECT    sa_id
    ,         MAX (effdt)    AS last_effdt
    FROM      table_x
    GROUP BY  sa_id
    ;
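    If you also need the other columns from that most recent row, one option (a sketch against the same table_x; KEEP (DENSE_RANK LAST) picks the value from the row with the latest effdt) is:
    SELECT    sa_id
    ,         MAX (effdt)    AS last_effdt
    ,         MAX (char_val) KEEP (DENSE_RANK LAST ORDER BY effdt)
                             AS last_char_val
    FROM      table_x
    GROUP BY  sa_id
    ;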

  • ORA-01461 Error when mapping table with multiple varchar2(4000) fields

    (Note: I think this was an earlier problem, supposedly fixed in 11.0, but we are experiencing it in 11.7.)
    If I map an Oracle 9i table with multiple varchar2(4000) columns, targeting another Oracle 9i database, I get the ORA-01461 error (can bind a LONG value only for insert into a LONG column).
    I have tried changing the target columns to varchar2(1000), as suggested as a workaround in earlier versions, all to no avail.
    I can have just one varchar2(4000) map correctly and execute flawlessly - the problem occurs when I add a second one.
    I have tried making the target column a LONG, but that does not solve the problem.
    Then, I made the target database SQL Server, and it had no problem at all, so the issue seems to be Oracle-related.

    Hi Jon,
    Thanks for the feedback. I'm unable to reproduce the problem you describe at the moment - if I try to migrate a TEXT(5), OMWB creates a VARCHAR(5) and the data migrates correctly!! However, I note from your description that even though the problematic source column datatype is TEXT(5), there are actually 20 lines of text in this field (and not 5 variable-length characters as the definition might suggest).
    Having read through some of the MySQL reference guide, I note that, in certain circumstances, MySQL actually changes the column datatype specified either at table creation time or when interfacing with other databases (ref. 14.2.5.1 "Silent Column Specification Changes" and 12.7 "Using Column Types from Other Database Engines" in the MySQL reference guide). Since your TEXT(5) actually contains 20 lines of text, MySQL (the database or the JDBC driver... or both) may be trying to automatically map the specified datatype of the column to a datatype more appropriate to storing 20 lines of text... that is, to a LONG value in this case. Then, when Oracle is presented with this LONG value to store in a VARCHAR(5) field, it throws the ORA-01461 error. I need to investigate this further, but this may be the case - it's the first time I've seen this problem encountered.
    To work around this, you could change the datatype of the column to a LONG from within the Oracle Model before migrating. Any application code that accesses this column and expects a TEXT(5) value may need to be adjusted to cope with a LONG value. Is this a viable workaround for you?
    I will investigate further and notify you of any details I uncover. We will need to track this issue for possible inclusion in future development plans.
    I hope this helps,
    Regards,
    Tom.

  • Editable table with multiple rows

    Hi All!
    We're trying to develop an application in VC 7.0. The application should read data from some R/3 tables (via standard and custom function modules), then display the data to the user and allow him/her to modify it (or add data into table rows), and then save it back to R/3.
    What is the best way to do that?
    There's no problem with displaying data.
    But when I try to add something to the table (on the portal interface), I'm able to use only the first row of the table... Even if I fill all the fields of that row, I'm not able to add data to the second row, etc.
    Second question: is it possible to display in one table the output of several BAPIs? For example, we have three BAPIs: one displays a user, the second displays that user's subordinates, and the third that user's manager. And we want one resulting table...
    And last: what is the best way to submit data from a table view in VC (portal) to an R/3 table? I understand that we should write some function module that puts the data into R/3. I'm asking about what should be done in VC itself - some button, or action...
    Any help will be appreciated and rewarded :o)
    Regards,
    DK

    Here are some former postings:
    Editable table with multiple rows
    and
    Editable table with multiple rows
    Are you on the right SP-level?
    Can you also split up your posting: one question, one posting? This way you get faster answers, as people might just browse the headers.

  • How to create a table with multiple select on???

    Hi all,
    I am new to Web Dynpro and my requirement is to create a table with multiple selection on. I have to add about 10 rows to the table, but only 5 rows should be visible, and a vertical scroll bar should be available to view the other rows. Can anybody explain to me in detail how to do that? Please reply as if you are explaining to a newcomer. Reply ASAP as I have to do it today.
    Thanks

    Hi,
    1. Create a value node in your context named Table and set its cardinality to 0:n
    2. Create 2 value attributes within the Table node, named value1 and value2
    3. Go to the Outline view > right-click the TransparentUIContainer > Apply Template > select Table > mark the Table node and its attributes.
    You have now created a table and bound its values to the context.
    Table UI properties:
    4. Set Selection Mode to Multi
    5. Set Visible Row Count to 5
    6. Set ScrollableColCount to 5
    In your implementation, you can add values to the table as follows:
    IPrivate<viewname>.ITableElement ele = wdContext.nodeTable().createTableElement();
    ele.setValue1(<value>);
    ele.setValue2(<value>);
    wdContext.nodeTable().addElement(ele);
    The above code will allow you to add elements to your table node.
    Regards,
    Murtuza

  • ORA-00942 error on truncating a table with a XML Index

    Oracle Version: 11.2.0.1.0
    The truncate command fails with the error "ORA-00942: table or view does not exist" when run against a table with an XMLIndex defined:
    SQL> CREATE TABLE XML_TEST
    2 (
    3 ID INTEGER,
    4 TESTXML SYS.XMLTYPE
    5 );
    Table created.
    SQL> truncate table XML_TEST;
    Table truncated.
    SQL> CREATE INDEX xmlindex ON XML_TEST(TESTXML)
    2 indextype IS xdb.xmlindex
    3 parameters ('PATH TABLE MY_PATH_TABLE');
    Index created.
    SQL> truncate table XML_TEST;
    truncate table XML_TEST
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> Drop Index xmlindex;
    Index dropped.
    SQL> truncate table XML_TEST;
    Table truncated.

    No, I don't think that explanation is correct. I don't think it has to do with user privileges. Besides, we don't adjust rowids on an import -- we recreate the index, just like a b-tree index import would.
    This should be working. It's most likely a bug in our (i.e. Text) import code -- SYS.XMLTYPE is a little strange because under the covers it's actually a function-based index.
    I will test it out and file a bug if I can reproduce the behavior on Solaris.
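    Based on the transcript above, the practical workaround until a fix is available appears to be dropping the XMLIndex around the truncate (object names taken from the example; a sketch, not an official fix):
    DROP INDEX xmlindex;
    TRUNCATE TABLE xml_test;
    CREATE INDEX xmlindex ON xml_test(testxml)
      INDEXTYPE IS xdb.xmlindex
      PARAMETERS ('PATH TABLE my_path_table');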

  • TO DRAW A TABLE WITH MULTIPLE ROWS AND MULTIPLE COLUMNS IN A FORM

    Hi,
    How do I draw a table with multiple rows and columns, separated by lines, in form printing?

    check this
    http://sap-img.com/ts003.htm
    Regards
    Prabhu

  • Can we bind a single external table with multiple files in OWB 11g?

    Hi,
    I wanted to ask if it is possible to bind an external table to multiple source files at the same or different locations, or whether an external table has to be bound to a single source file and a single location.
    Thanks in advance,
    Ann.
    Edited by: Ann on Oct 8, 2010 9:38 AM

    Hi Ann,
    To accomplish this: right-click the external table in the project tree and choose Configure from the menu;
    then, in the Configuration Properties dialog window, right-click the Data Files node and choose Create from the menu -
    you will get a new record for the file - specify its Data File Name property.
    Also, a link from the OWB user guide:
    http://download.oracle.com/docs/cd/B28359_01/owb.111/b31278/ref_def_flatfiles.htm#i1126304
    Regards,
    Oleg
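    For completeness, a minimal sketch of the resulting DDL: an external table can list several files in its LOCATION clause, as long as they share one access-parameter definition (all names below are made up):
    CREATE TABLE ext_sales (
      sale_id  NUMBER,
      amount   NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('sales1.csv', 'sales2.csv')
    );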

  • Will there be a performance improvement with separate tables vs. a single table with multiple partitions?

    Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually bear semantics - read: if data is stored in one table it means something different than the same data stored in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looked at on the storage technology level, table partitions are practically the same as tables. Each partition has got its own delta store and can be loaded and displaced to/from memory independently of the others.
    Generally speaking, there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share of your total runtime (which is unlikely), then partitioned tables could have a negative performance impact.
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars
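    To make the comparison concrete, a sketch of the partitioned alternative in HANA SQL (table and column names are illustrative):
    -- One table, hash-partitioned 4 ways; the alternative would be four separate
    -- tables plus application logic to route rows between them.
    CREATE COLUMN TABLE sales_data (
      id      BIGINT,
      region  NVARCHAR(10),
      amount  DECIMAL(15,2)
    ) PARTITION BY HASH (id) PARTITIONS 4;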

  • Deadlock when updating different rows on a single table with one clustered index

    Deadlock when updating different rows on a single table with one clustered index. Can anyone explain why?
    <event name="xml_deadlock_report" package="sqlserver" timestamp="2014-07-30T06:12:17.839Z">
      <data name="xml_report">
        <value>
          <deadlock>
            <victim-list>
              <victimProcess id="process1209f498" />
            </victim-list>
            <process-list>
              <process id="process1209f498" taskpriority="0" logused="1260" waitresource="KEY: 8:72057654588604416 (8ceb12026762)" waittime="1396" ownerId="1145783115" transactionname="implicit_transaction"
    lasttranstarted="2014-07-30T02:12:16.430" XDES="0x3a2daa538" lockMode="X" schedulerid="46" kpid="7868" status="suspended" spid="262" sbid="0" ecid="0" priority="0"
    trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
    hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783115" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
               <inputbuf>
    UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales1', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'ABCD' AND cam_window= 2   </inputbuf>
              </process>
              <process id="process9cba188" taskpriority="0" logused="2084" waitresource="KEY: 8:72057654588604416 (2280b457674a)" waittime="1397" ownerId="1145783104" transactionname="implicit_transaction"
    lasttranstarted="2014-07-30T02:12:16.427" XDES="0x909616d28" lockMode="X" schedulerid="23" kpid="8704" status="suspended" spid="155" sbid="0" ecid="0" priority="0"
    trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
    hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783104" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
                <inputbuf>
    UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales2', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'12345' AND cam_window= 1   </inputbuf>
              </process>
            </process-list>
            <resource-list>
              <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
                <owner-list>
                  <owner id="process9cba188" mode="X" />
                </owner-list>
                <waiter-list>
                  <waiter id="process1209f498" mode="X" requestType="wait" />
                </waiter-list>
              </keylock>
              <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock18ee1ab00" mode="X" associatedObjectId="72057654588604416">
                <owner-list>
                  <owner id="process1209f498" mode="X" />
                </owner-list>
                <waiter-list>
                  <waiter id="process9cba188" mode="X" requestType="wait" />
                </waiter-list>
              </keylock>
            </resource-list>
          </deadlock>
        </value>
      </data>
    </event>

    To be honest, I don't think the transaction is necessary, but the developers put it there anyway. The select statement puts the result cam_status into a variable, and then, depending on its value, it decides whether to execute the second update statement or not. I still can't upload the screenshot, because it says it needs to verify my account first. No clue at all. But it is very simple, just like:
    Clustered Index Update
    [analyst_monitor].[IX_Clust_scam_an_name_window]
    cost: 100%
    By the way, for some reason, I can't find the object based on the associatedObjectId listed in the XML
    <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor"
    indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
    For example: 
    SELECT * FROM sys.partition WHERE hobt_id = 72057654588604416
    This returns nothing. Not sure why.
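    The catalog view is sys.partitions (plural); a sketch of the lookup (note that if the table or index was rebuilt after the deadlock, the old hobt_id may simply no longer exist):
    SELECT OBJECT_NAME(p.object_id) AS table_name,
           p.index_id, p.partition_number, p.rows
    FROM sys.partitions AS p
    WHERE p.hobt_id = 72057654588604416;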

  • Web Analysis : populate the same table with multiple data sources

    Hi folks,
    I would like to know if it is possible to populate a table with multiple data sources.
    For instance, I'd like to create a table with 3 columns : Entity, Customer and AvgCostPerCust.
    Entity and Customer come from one Essbase, AvgCostPerCust comes from HFM.
    The objective is to get a calculated member which is Customer * AvgCostPerCust.
    Any ideas ?
    Once again, thanks for your help.

    I would like to have the following output:
    File 1 - Store 2 - Query A + Store 2 - Query B
    File 2 - Store 4 - Query A + Store 4 - Query B
    File 3 - Store 5 - Query A + Store 5 - Query B
    the bursting level should be given at
    File 1 - Store 2 - Query A + Store 2 - Query B
    so the tag in the xml has to be split by what is common to these three rows.
    Since the data is coming from different queries, it is not going to be under a single tag,
    so you cannot burst it using a concatenated data source.
    But you can do it using a data template: link the queries and get the data for each file under a single query,
    select distinct store_name from all_stores
    select * from query1 where store_name = :store_name === 1st query
    select * from query2 where store_name = :store_name === 2nd query
    define the data structure the way you want;
    the xml will contain something like this
    <stores>
    <store> </store> - for store 2
    <store> </store> - for store 3
    <store> </store> - for store 4
    <store> </store> - for store 5
    </stores>
    now you can burst it at store level.

  • Why Segment shrink is not supported for tables with function-based indexes

    As we all know, segment shrink is not supported for tables with function-based indexes.
    But I'm very confused.
    Why is segment shrink not supported for tables with function-based indexes? What is the essential reason?

    Creating a function-based index creates a hidden virtual column (you'll see it if you query user_tab_cols), and once you index a virtual column you can no longer shrink the table:
    orcl> create table t1(c1 number, c2 as (c1 * 2)) segment creation immediate;
    Table created.
    orcl> alter table t1 enable row movement;
    Table altered.
    orcl>
    orcl> alter table t1 shrink space;
    Table altered.
    orcl> create index i2 on t1(c2);
    Index created.
    orcl> alter table t1 shrink space;
    alter table t1 shrink space
    ERROR at line 1:
    ORA-10631: SHRINK clause should not be specified for this object
    orcl>
    So the issue is not with function-based indexes per se; it is a level beneath that. Perhaps because the virtual column has no physical existence, when the row is moved there is no reason for Oracle to realize that an index needs updating? I haven't attempted to reverse engineer this; I would be interested to know if anyone else has.
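    Which suggests the obvious, if clumsy, workaround for the demo above: drop the index, shrink, and recreate it (a sketch using the same objects):
    drop index i2;
    alter table t1 shrink space;
    create index i2 on t1(c2);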

  • When table with clustered columnstore indexe is partitioned the performance degrades if data is located in multiple partitions

    Hello,
    Below I provide complete code to reproduce the behavior I am observing. You could run it in tempdb or any other database; the choice is not important. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, 
    elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, 
    elapsed time = 392 ms.
    As you can see, the query is clearly faster. Yay for columnstore indexes! But let's continue.
    5. Run script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, 
    elapsed time = 3119 ms.
    And now look: the I/O stats are the same as before, but the performance is the slowest of all our tries!
    I am not going to paste here the execution plans or the detailed properties for each of the operators. They show up as expected -- columnstore index scan, parallel/partitioned = true, both estimated and actual number of rows less than during the second run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to re-produce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable  PartitionScheme  IdxName   index_id  partition_number      rows
    Main              PS_Scheme        CDX_Main  1         1                  7997443
    Main              PS_Scheme        CDX_Main  1         2                 32002557
    Main              PS_Scheme        CDX_Main  1         3                        0
    Main              PS_Scheme        CDX_Main  1         4                        0
    Txns              PS_Scheme        PK_Txns   1         1                  2000001
    Txns              PS_Scheme        PK_Txns   1         2                  7999999
    Txns              PS_Scheme        PK_Txns   1         3                        0
    Txns              PS_Scheme        PK_Txns   1         4                        0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE statement on a CCI is executed as a DELETE plus INSERT operation, you ended up with all the original row groups of the index having almost all of their data deleted, and almost the same amount of new row groups holding the new data (coming from the update). I suppose scanning the deleted bitmap caused the additional slowness at your end, or something related to that "fragmentation".
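    A sketch of the rebuild step described above, using the index and table names from the script (ALTER INDEX ... REORGANIZE would be the lighter-weight online alternative):
    ALTER INDEX PK_Txns ON Txns REBUILD;
    ALTER INDEX CDX_Main ON Main REBUILD;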
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • Messed with multiple Domains... Big Problems

    I have posted on this topic because I was experiencing very slow save times and publishing times.
    I had 20+ sites created in iWeb, many of which have large photo galleries.
    Wanting to solve this problem, I tried a couple of techniques to start with a fresh Domain.
    I tried duplicating the existing Domain and deleting the sites I didn't want in Domain 2.
    Problem was, this retained the massive "Albums" folder within the iWeb package content.
    The sites didn't show in the iWeb interface, but the new Domain was giant (1.9 GB) and saving and publishing still took forever.
    So I went into the Domain 2 package contents and deleted the Albums for the sites I didn't want in my new Domain.
    Well, then the Domain saved and published nice and fast, as I expected it to.
    Thing is...
    Now all of my previous sites from the old Domain are broken when you attempt to view them on the web.
    They all look fine when I look at them in iWeb by opening their files in iWeb.
    My original Domain is still intact where I have stored it on my computer.
    When I check my iDisk, I can see that my old websites are there but their Media folders are completely empty.
    Sooo... how do I fix this situation?
    I realize I brought this upon myself, but I was trying to do what I have got to believe is doable.
    I want to have multiple Domains so I can work more efficiently.
    Surely I don't have to be saddled with updating a 1.9 GB Domain every time I publish.

    You're only putting the domain file(s) in the trash to prevent iWeb from opening them, so that it will be forced to create a new blank domain file. Then drag it out and store it in a folder.
    Individual domain files are opened in iWeb by double-clicking them.
    Splitting domain files with multiple sites is not recommended; you're only leaving yourself open to file corruption and other problems somewhere down the line.
    Start each new site in a blank domain file and store it in its own folder.

  • Problems with multiple Domains open in same web browser?

    Here is the problem.
    I login to one domain and then open a new webpage (same browser) and login
    to a 2nd domain. I can then navigate in both domains in each web page.
    However, when I logout of one, it logs me out of both and which ever one
    that I logged out of is the logout/login page that I see for both web pages.
    The same thing happens if I try to login to 2 different domains at the same
    time. One will override the other and I end up with the same domain open in
    both browsers.
    I am pretty sure that this has to do with the cookie, since it is probably
    using the same one for both and then getting confused. Does anyone know how
    to fix this?
    Portal Server: IPS 3.0 SP2
    Web Server: IWS 4.1 SP8

    The way the cookies are set for the current authentication is based on the portal domain/DNS domains and the client IP address.
    When a user logs in, a random session id is generated; once he authenticates, the session id is made valid and is added into the session hash table.
    So to answer your question: you can do what you're trying to do if you use two browsers, but with the current architecture and the way sessions are handled it is not possible using just one browser.
    iWS 4.1 SP8 is not certified to run with Portal, and the web server that is shipped with it should not be updated separately; in the past, doing that has broken Portal.
