Indexes vs. views for performance

Hi,
   Can creating a view help to improve performance? I have to read from table RBKP but I don't have the index fields. Can I create a view to get faster access? Will it help more than a secondary index?
Regards,
Karthik.k

Hi,
If you are comparing views vs. secondary indexes, then secondary indexes are better. A view improves performance only a little, and that gain stays constant; with a secondary index the overall performance will be better (see the sketch below).
The disadvantage of a secondary index is that it occupies space and slows down insert and delete operations.
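
For illustration only, this is roughly the database-level DDL that an SE11 secondary index corresponds to. The index name and field list here are assumptions for the example; in practice you define the index in SE11 and let SAP generate it rather than issuing raw DDL.

-- Hedged sketch: a secondary index on RBKP by reference document number
-- and document date (illustrative fields; create via SE11 in SAP).
CREATE INDEX "RBKP~Z01" ON rbkp (mandt, xblnr, bldat);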
Jogdand M B

Similar Messages

  • Need to find total number of tables/indexes/m.views in my database

    Hello everyone,
    How can I find the total number of tables/indexes/m.views in my database?
    When I googled I saw the following command:
    SQL> Select count(1) from user_tables where table_name not like '%$%' /
      COUNT(1)
    But I don't understand what '%$%' indicates.
    Thanks all,

    Consider simply reading the fine manual yourself!
    Oracle Database Search Results: like
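
    For the record, '%$%' matches any name containing a dollar sign; '$' typically marks Oracle-internal or system-generated objects, so the filter excludes them from the count. A single query over the user_objects dictionary view counts all three object types at once (a sketch using the same '$' convention):

    -- Count tables, indexes, and materialized views in the current schema,
    -- skipping names that contain '$' (system-generated objects).
    SELECT object_type, COUNT(*) AS cnt
    FROM   user_objects
    WHERE  object_type IN ('TABLE', 'INDEX', 'MATERIALIZED VIEW')
    AND    object_name NOT LIKE '%$%'
    GROUP  BY object_type;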

  • Can we put INDEXES on Views? If not, is there any way to make them index-based?

    Hi
    I am running a query in which several views are involved in the join condition. Can we put indexes on views in Oracle 9i? If not, is there any way to make them index-based views? The result I am getting is very slow and is eating about 10 GB of the temp tablespace.
    Thanks

    No, you cannot put indexes on a view. Think about what a view is: a stored SQL statement. Oracle has no way of knowing what rows are in a view until it actually runs the view. So, even if you could, Oracle would need to run the view to get the rows, build the index, then run your query a second time using the index. That seems counterproductive to me. A view can use indexes on the underlying tables where appropriate.
    Generally speaking, I would say that a query that makes use of multiple views is probably really inefficient. Often you are only looking for one or two columns from a view, which may require joining several tables whose columns are of no use in the main query. I would start by rewriting the query against the base tables, using only those tables that are actually required to answer the question. (One indexable alternative is sketched below.)
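
    Worth adding (not part of the original reply): Oracle does let you index a materialized view, since its result set is stored as a real table. A minimal sketch, assuming hypothetical table and column names:

    -- A regular view cannot carry an index, but a materialized view can,
    -- because its rows are physically stored.
    CREATE MATERIALIZED VIEW emp_dept_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
      SELECT e.emp_id, e.emp_name, d.dept_name
      FROM   emp e JOIN dept d ON d.dept_id = e.dept_id;

    CREATE INDEX emp_dept_mv_ix ON emp_dept_mv (dept_name);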
    TTFN
    John

  • Authorization object to view Maintain Performance Documents on MSS

    Hi Experts,
    Would like to know which authorization object is required to view Maintain Performance Documents on MSS. Currently we have removed SAP_ALL access from the MSS user and are no longer able to perform Maintain Performance Documents. We are on EP 7 and ECC 6.
    It gives the following error:
    java.lang.NullPointerException
         at com.sap.xss.hr.mbo.blc.BMboStatusComp.resetGlobalMboR3Data(BMboStatusComp.java:260)
         at com.sap.xss.hr.mbo.blc.wdp.InternalBMboStatusComp.resetGlobalMboR3Data(InternalBMboStatusComp.java:195)
         at com.sap.xss.hr.mbo.blc.BMboStatusCompInterface.resetGlobalMboR3Data(BMboStatusCompInterface.java:150)
         at com.sap.xss.hr.mbo.blc.wdp.InternalBMboStatusCompInterface.resetGlobalMboR3Data(InternalBMboStatusCompInterface.java:168)
         at com.sap.xss.hr.mbo.blc.wdp.InternalBMboStatusCompInterface$External.resetGlobalMboR3Data(InternalBMboStatusCompInterface.java:224)
         at com.sap.xss.hr.mbo.vac.VMboStatusComp.onBeforeOutput(VMboStatusComp.java:227)
         at com.sap.xss.hr.mbo.vac.wdp.InternalVMboStatusComp.onBeforeOutput(InternalVMboStatusComp.java:185)
         at com.sap.xss.hr.mbo.vac.VMboStatusCompInterface.onBeforeOutput(VMboStatusCompInterface.java:143)
         at com.sap.xss.hr.mbo.vac.wdp.InternalVMboStatusCompInterface.onBeforeOutput(InternalVMboStatusCompInterface.java:136)
         at com.sap.xss.hr.mbo.vac.wdp.InternalVMboStatusCompInterface$External.onBeforeOutput(InternalVMboStatusCompInterface.java:212)
         at com.sap.pcuigp.xssfpm.wd.FPMComponent.callOnBeforeOutput(FPMComponent.java:603)
         at com.sap.pcuigp.xssfpm.wd.FPMComponent.doProcessEvent(FPMComponent.java:569)
         at com.sap.pcuigp.xssfpm.wd.FPMComponent.doEventLoop(FPMComponent.java:438)
         at com.sap.pcuigp.xssfpm.wd.FPMComponent.wdDoInit(FPMComponent.java:196)
         at com.sap.pcuigp.xssfpm.wd.wdp.InternalFPMComponent.wdDoInit(InternalFPMComponent.java:110)
         at com.sap.tc.webdynpro.progmodel.generation.DelegatingComponent.doInit(DelegatingComponent.java:108)
         at com.sap.tc.webdynpro.progmodel.controller.Controller.initController(Controller.java:215)
         at com.sap.tc.webdynpro.progmodel.controller.Controller.init(Controller.java:200)
         at com.sap.tc.webdynpro.clientserver.cal.ClientComponent.init(ClientComponent.java:430)
         at com.sap.tc.webdynpro.clientserver.cal.ClientApplication.init(ClientApplication.java:362)
         at com.sap.tc.webdynpro.clientserver.session.ApplicationSession.initApplication(ApplicationSession.java:754)
         at com.sap.tc.webdynpro.clientserver.session.ApplicationSession.doProcessing(ApplicationSession.java:289)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessingPortal(ClientSession.java:733)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessing(ClientSession.java:668)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doProcessing(ClientSession.java:250)
         at com.sap.tc.webdynpro.clientserver.session.RequestManager.doProcessing(RequestManager.java:149)
         at com.sap.tc.webdynpro.clientserver.session.core.ApplicationHandle.doProcessing(ApplicationHandle.java:73)
         at com.sap.tc.webdynpro.portal.pb.impl.AbstractApplicationProxy.sendDataAndProcessActionInternal(AbstractApplicationProxy.java:860)
         at com.sap.tc.webdynpro.portal.pb.impl.AbstractApplicationProxy.create(AbstractApplicationProxy.java:220)
         at com.sap.portal.pb.PageBuilder.updateApplications(PageBuilder.java:1288)
         at com.sap.portal.pb.PageBuilder.createPage(PageBuilder.java:355)
         at com.sap.portal.pb.PageBuilder.init(PageBuilder.java:548)
         at com.sap.portal.pb.PageBuilder.wdDoInit(PageBuilder.java:192)
         at com.sap.portal.pb.wdp.InternalPageBuilder.wdDoInit(InternalPageBuilder.java:150)
         at com.sap.tc.webdynpro.progmodel.generation.DelegatingComponent.doInit(DelegatingComponent.java:108)
         at com.sap.tc.webdynpro.progmodel.controller.Controller.initController(Controller.java:215)
         at com.sap.tc.webdynpro.progmodel.controller.Controller.init(Controller.java:200)
         at com.sap.tc.webdynpro.clientserver.cal.ClientComponent.init(ClientComponent.java:430)
         at com.sap.tc.webdynpro.clientserver.cal.ClientApplication.init(ClientApplication.java:362)
         at com.sap.tc.webdynpro.clientserver.session.ApplicationSession.initApplication(ApplicationSession.java:754)
         at com.sap.tc.webdynpro.clientserver.session.ApplicationSession.doProcessing(ApplicationSession.java:289)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessingStandalone(ClientSession.java:713)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessing(ClientSession.java:666)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doProcessing(ClientSession.java:250)
         at com.sap.tc.webdynpro.clientserver.session.RequestManager.doProcessing(RequestManager.java:149)
         at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doContent(DispatcherServlet.java:62)
         at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doPost(DispatcherServlet.java:53)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:401)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:266)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:386)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:364)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:1039)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:265)
         at com.sap.engine.services.httpserver.server.Client.handle(Client.java:95)
         at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:175)
         at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
         at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
         at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:102)
         at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:172)
    I would appreciate kind guidance to resolve this issue.
    Thanks in advance.
    Aashish

    I am closing this thread as it was opened in the wrong place.
    Thanks,
    Aashish

  • SharePoint 2013 Datasheet View navigation performance issues on large lists

    We were recently upgraded to SharePoint 2013.
    In the SharePoint 2010 Datasheet View, one could scroll through and bulk select hundreds of line items very easily. Navigating the Datasheet View was just like navigating a worksheet in Excel, and one could use quick-select keys like Ctrl+Shift+Right Arrow / Down Arrow to bulk select items.
    After the upgrade to SharePoint 2013, using the "Quick Edit" tab in Datasheet View and changing the item limit to 5000 items, it takes an inordinately long time to load the list. None of the quick-select keys like Ctrl+Shift+Right Arrow work anymore, and the browser keeps warning about a slow-running script when attempting to select multiple items while scrolling through the list.
    The functionality I need is for users to be able to bulk delete all line items in a SP list and copy (from Excel) and paste new line items into the SP list through their browser.

    I am running SP Server 2013 with the Dec CU on my internal farm. This past week I migrated one database containing one site collection (and also ran the site upgrade to 2013). Prior to the migration, the users were using Datasheet View in 2010 to bulk edit and also to add attachments through the pop-up window while in Datasheet View. After the migration, performance on their list using IE8 with fewer than 300 items was horrible for the end user and also for me as a farm admin (I'm running IE11).
    I created a new view and performance seemed better for me, and slightly improved for the end user, but still not satisfactory. The end user is using Chrome in the interim until I can test with IE9 to see if that makes a difference.
    My main concern is the Quick Edit view. This user needs to be able to use the 2010 Datasheet View to manage attachments while in DS view. I created a new DS view on the affected list, but it still defaults back to the Quick Edit view. This list is on 1 of 3 web apps in my internal farm. The interesting thing is that I can create a 2010 Datasheet View in a custom list on a site created in 2013 from scratch in another web app on my internal farm; however, I can only do this on 1 of the 3 web apps. On my 2013 external farm, I can create a DS view in a 2013 site created from scratch in 1 of 2 web apps; creating a list in DS view on the 2nd web app defaults the list back to Quick Edit. I checked in SharePoint Manager thinking there was a web app feature that wasn't getting activated. Only one web app feature jumped out: Academic Library Site Safe Controls was activated on the 2 web apps where I could create the 2010 Datasheet View, but not on the other 3 web apps, where I could only get the Quick Edit view when creating the DS view.
    Is there a feature that needs to be activated, or one that might need to be deactivated/reactivated? My internal farm with the 3 web apps is running the Dec 2013 CU and the external farm with the 2 web apps is running the March 2013 PU.
    Aside from that, I've received complaints about the list performance in IE and about not being able to use the 2010 datasheet functionality as noted above. I haven't received any complaints about only being able to copy 100 items at a time in Quick Edit view, but I have noticed that issue too. In my case the retry doesn't work and I have to delete anything over 100 items before it will save. I would like to find a resolution for this as well. Every bit of help I can get with this issue is very much appreciated. Thanks in advance!

  • How to view database performance in OEM for the past 8 hours

    Hello All,
    I have recently installed OEM 11g. Can I know how to view the performance of the database for the past 8 hours? I can only see the performance for the last hour on the Performance page.
    Thanks.

    Yes, you can.
    For instance, you can use the ADDM feature (via the Database Homepage --> Advisor Central --> ADDM --> Run ADDM to analyze past performance).
    Check out:
    Oracle® Enterprise Manager Concepts, 11g Release 11.1.0.1
    http://download.oracle.com/docs/cd/E11857_01/em.111/e11982/database_management.htm#i1006971
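
    The same analysis can also be run from SQL*Plus with DBMS_ADDM. A minimal sketch; the snapshot IDs 100 and 108 are placeholders for the AWR snapshots that bracket the 8-hour window you care about:

    -- Run ADDM over a past pair of AWR snapshots and print the report.
    VAR tname VARCHAR2(60)
    BEGIN
      :tname := 'ADDM_PAST_8H';
      DBMS_ADDM.ANALYZE_DB(:tname, 100, 108);  -- begin/end snapshot IDs
    END;
    /
    SELECT DBMS_ADDM.GET_REPORT(:tname) FROM dual;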
    Regards
    Rob
    http://oemgc.wordpress.com

  • Indexes For Views

    Hi gurus,
    Is it possible to assign indexes to views? If yes, how? I can create indexes for views through database management software, but in SAP, in SE11, I found nothing for this.
    Thx,

    An index cannot be defined on a view, because a view is a virtual table consisting of a subset of columns from one or more tables.
    The restrictions imposed on views are as follows:
    << Cut and paste without attribution from http://www.allinterview.com/showanswers/10530.html removed >>
    Thanks
    Rahul
    Edited by: Rob Burbank on Jan 20, 2010 10:42 AM

  • Index to Improve SQL Performance

    Please provide a link to information about how to use an index to enhance SQL performance with an Oracle 10g database.

    user8973820 wrote: I would like information regarding the use of clustered versus non-clustered indexes.
    The documentation is your friend :)
    [Overview of Indexed Clusters|http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/tablecls.htm#CFABHBAG]
    [Indexes and Index-Organized Tables|http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/indexiot.htm#BABHJAJF]
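
    As a concrete illustration of the basic case (a sketch with hypothetical table and column names, not taken from the linked docs):

    -- A B-tree index on a selective column lets the optimizer replace a
    -- full table scan with an index range scan for this predicate.
    CREATE INDEX orders_cust_ix ON orders (customer_id);

    SELECT order_id, order_date
    FROM   orders
    WHERE  customer_id = 42;   -- eligible for ORDERS_CUST_IX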
    HTH!

  • INDEX on VIEW or TABLE when using CONTAINS function on the VIEW

    Hi,
    I'm querying a view with a contains function and I'm getting an error:
    query: select * from view where contains(name,'jack OR jill')>0
    ORA-20000: Oracle text error
    DRG-10599 column is not indexed.
    From what I gathered you can't create an index on the view, or is that possible after all (using 10g)?
    Would it be sufficient to create an index on the column in the table that the view query pulls the data from? Or can you use the CONTAINS function on views at all?
    Or are there better ways of doing this?
    Thanks,

    That particular CONTAINS clause requires a domain index on the column, NAME in your case (see the sketch below).
    You could also do: select * from view where name in ('Jack', 'Jill')
    Depending on your needs, the number of records, and other variables, it may be "fastest" to build the text index on the NAME column. If it is a rather small subset of data and performance is not critical, then the IN clause should suffice.
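
    Concretely, the index goes on the base table's column, and the view then picks it up. A minimal sketch; base_table and my_view are placeholder names for the objects in the question:

    -- Oracle Text domain index on the underlying table's column; CONTAINS
    -- against the view resolves to this index.
    CREATE INDEX name_txt_ix ON base_table (name)
      INDEXTYPE IS CTXSYS.CONTEXT;

    SELECT * FROM my_view WHERE CONTAINS (name, 'jack OR jill') > 0;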

  • Slow report viewer/rdlc performance in local mode with Single Sign On

    Hi Team,
    We have recently enabled Single Sign On in our application, and since then our rdlc reports have become extremely slow to load.
    Please find the below configuration that we are using.
    1. Report Viewer 11.0.0.0
    2. running rdlc file in local mode (not using Report Server)
    3. System.IdentityModel.Services 4.0.0.0
    The query behind the reports returns results in 5-10 seconds, but the report takes 1-4 minutes to load, and sometimes times out, depending on the complexity of the report.
    We have tried a lot of workarounds but nothing worked.
    I saw a performance improvement in the reports by adding <trust legacyCasModel="true" level="Full" /> to the config file, but with this we get a "Dynamic operations can only be performed in homogenous AppDomain" error in many pages of our application.
    Without SSO the reports run completely fine.
    We are stuck here and unable to proceed. Is there a known issue with SSO and rdlc in local mode? Is there a hotfix available for this?
    Please help!
    Regards,
    Pranav Sharma

    This problem is probably related to :
    [http://blogs.oracle.com/stevenChan/2010/03/ebs_jre_issues_16018.html]
    Oracle problem ID : 1054293.1
    Loginpage / Error in Browser for Export and Attachments after upgrading to Sun JRE 1.6.0_18 [ID 1054293.1]
    Sun bug : 6927268
    ShowDocument calls results in new iexplorer process

  • How to create a complex ORGANIZATION INDEX materialized view (example)

    Hi
    I have an 11g database and I'm trying to create a complex materialized view that I would like to make ORGANIZATION INDEX. How do I specify what I want for a primary key?
    CREATE MATERIALIZED VIEW RCS_STG.MV_NEXT_HOP_iot
    ORGANIZATION INDEX
    AS
    SELECT r2.resource_key, r1.resource_key resource_key2, r2.resource_full_path_name, device_name, device_model,
    service_telephone_number, service_package_name, telephone_number.telephone_number_key, c1.created_on
    FROM network_resource PARTITION (network_resource_subinterface) r1,
    connection c1,
    network_resource PARTITION (network_resource_subinterface) r2,
    device d1,
    tn_network_resource,
    telephone_number
    WHERE r1.resource_key = c1.resource1_key
    AND c1.resource2_key = r2.resource_key
    AND d1.device_key = r2.device_key
    AND tn_network_resource.resource_key(+) = r2.resource_key
    AND telephone_number.telephone_number_key(+) = tn_network_resource.telephone_number_key
    UNION ALL
    SELECT r1.resource_key, r2.resource_key resource_key2, r1.resource_full_path_name, device_name, device_model,
    service_telephone_number, service_package_name, telephone_number.telephone_number_key, c1.created_on
    FROM network_resource PARTITION (network_resource_subinterface) r1,
    connection c1,
    network_resource PARTITION (network_resource_subinterface) r2,
    device d1,
    tn_network_resource,
    telephone_number
    WHERE r1.resource_key = c1.resource1_key
    AND c1.resource2_key = r2.resource_key
    AND d1.device_key = r1.device_key
    AND tn_network_resource.resource_key(+) = r1.resource_key
    AND telephone_number.telephone_number_key(+) = tn_network_resource.telephone_number_key
    I get the error message ORA-25175: no PRIMARY KEY constraint found.
    I would like to specify resource_key, resource_key2, and service_telephone_number as my primary key.

    Ah,
    I get it now. This is what I did.
    CREATE TABLE mv_next_hop_iot (
    resource_key NUMBER (38),
    resource_key2 NUMBER (38),
    resource_full_path_name VARCHAR2 (256 BYTE),
    device_name VARCHAR2 (64 BYTE),
    device_model VARCHAR2 (64 BYTE),
    service_telephone_number VARCHAR2 (20 BYTE),
    service_package_name VARCHAR2 (64 BYTE),
    telephone_number_key NUMBER (38),
    created_on DATE,
    CONSTRAINT mv_next_hop_pk PRIMARY KEY (resource_key, resource_key2, service_telephone_number)
    )
    ORGANIZATION INDEX;
    CREATE MATERIALIZED VIEW rcs_stg.mv_next_hop_iot
    ON PREBUILT TABLE
    AS
    /* Formatted on 2010/06/10 1:39:04 PM (QP5 v5.149.1003.31008) */
    SELECT resource_key, resource_key2, resource_full_path_name, device_name, device_model, service_telephone_number,
    service_package_name, telephone_number_key, created_on
    FROM (SELECT r2.resource_key, r1.resource_key resource_key2, r2.resource_full_path_name, device_name, device_model,
    NVL (service_telephone_number, ' ') AS service_telephone_number, service_package_name,
    telephone_number.telephone_number_key, c1.created_on
    FROM network_resource PARTITION (network_resource_subinterface) r1,
    connection c1,
    network_resource PARTITION (network_resource_subinterface) r2,
    device d1,
    tn_network_resource,
    telephone_number
    WHERE r1.resource_key = c1.resource1_key
    AND c1.resource2_key = r2.resource_key
    AND d1.device_key = r2.device_key
    AND tn_network_resource.resource_key(+) = r2.resource_key
    AND telephone_number.telephone_number_key(+) = tn_network_resource.telephone_number_key
    UNION ALL
    SELECT r1.resource_key, r2.resource_key resource_key2, r1.resource_full_path_name, device_name, device_model,
    NVL (service_telephone_number, ' ') AS service_telephone_number, service_package_name,
    telephone_number.telephone_number_key, c1.created_on
    FROM network_resource PARTITION (network_resource_subinterface) r1,
    connection c1,
    network_resource PARTITION (network_resource_subinterface) r2,
    device d1,
    tn_network_resource,
    telephone_number
    WHERE r1.resource_key = c1.resource1_key
    AND c1.resource2_key = r2.resource_key
    AND d1.device_key = r1.device_key
    AND tn_network_resource.resource_key(+) = r1.resource_key
    AND telephone_number.telephone_number_key(+) = tn_network_resource.telephone_number_key);
    Many thanks. The PREBUILT TABLE clause is the secret.

  • When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
    Below I provide complete code to reproduce the behavior I am observing. You can run it in tempdb or any other database; it does not matter which. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example. Note that I also included approximate run times in the script comments (based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, 
    elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, 
    elapsed time = 392 ms.
    As you can see, the query is clearly faster. Yay for columnstore indexes! But let's continue.
    5. Run the script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, 
    elapsed time = 3119 ms.
    And now look: the I/O stats are the same as before, but the performance is the slowest of all our tries!
    I am not going to paste the execution plans or the detailed properties of each operator here. They show up as expected: columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows are lower than during the second run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to re-produce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% of the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable PartitionScheme IdxName index_id partition_number rows
    Main PS_Scheme CDX_Main 1 1 7997443
    Main PS_Scheme CDX_Main 1 2 32002557
    Main PS_Scheme CDX_Main 1 3 0
    Main PS_Scheme CDX_Main 1 4 0
    Txns PS_Scheme PK_Txns 1 1 2000001
    Txns PS_Scheme PK_Txns 1 2 7999999
    Txns PS_Scheme PK_Txns 1 3 0
    Txns PS_Scheme PK_Txns 1 4 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE statement against a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with all the original row groups of the index having almost all of their rows flagged as deleted, plus almost the same amount of new row groups holding the new data coming from the update. I suppose scanning the deleted bitmap, or something related to that "fragmentation", caused the additional slowness at your end. The rebuild I ran is sketched below.
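
    For completeness, the rebuild that produced the fast numbers above would look like this (a sketch; ALTER INDEX ... REBUILD recreates the row groups and discards the deleted-row bitmaps):

    -- Rebuild both clustered columnstore indexes after the mass UPDATE.
    ALTER INDEX CDX_Main ON Main REBUILD;
    ALTER INDEX PK_Txns ON Txns REBUILD;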
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • TREX indexing on content server performance

    Hi guys,
    Our Portal is integrated with SAP CRM (using WebDAV), which manages documents stored in SAP Content Server. We use TREX to index these documents so that Portal users can search for them. We are currently evaluating the performance of indexing and searching: if we have a heavy load of documents to index, would it affect the SAP CRM/Content Server that hosts the document repository (memory consumption, performance, etc.)?
    Thanks,
    ZM

    Hi Chris,
    do you use the Content Server in the DMS application? If yes, you need to index documents stored in the DMS_PCD1 document category.
    Regards,
    Mikhail

  • Custom function in a database view makes performance slow in OBI?

    Hi,
    I am facing a major performance problem.
    I have an Oracle view which calls a database function (which I created):
    CREATE OR REPLACE VIEW ISRM_ECOX_NAK_REPLAY (ret)
    AS
    SELECT APP.ISRM_ECOX_NAK_REPLAY (CASHFLOW_MESSAGE.EXTERNAL_DEAL_NUMBER) ret FROM CASHFLOW_MESSAGE
    It runs in the database in a couple of seconds. But when I import the view into the OBI repository, the query keeps running for hours without returning any data.
    If there are very few records in the database table used in the view, then it runs in OBI after a couple of minutes, but not otherwise.
    When I pick the query from Manage Sessions and run it in the database, it again runs fast.
    The OBIEE version is 10.1.3.4.1
    thanks and regards,
    Gaurav
    Edited by: Gaurav on 22-Sep-2011 02:43

    Hi Gaurav,
    Maybe an idea: why not create a materialized view in the database and use that in OBI?
    Couldn't that solve your performance problem?
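
    A minimal sketch based on the view from the question (the refresh strategy is an assumption; pick one that matches how often CASHFLOW_MESSAGE changes):

    -- Precompute the function result so OBIEE scans a stored table instead
    -- of invoking the PL/SQL function once per row at query time.
    CREATE MATERIALIZED VIEW isrm_ecox_nak_replay_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT app.isrm_ecox_nak_replay (external_deal_number) AS ret
    FROM   cashflow_message;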
    Kr,
    A

  • OBIEE 11g - View selector performance with YTD, MTD, Dept ...

    We are implementing BI Apps 7963 / OBIEE 11.1.5 Financial Analytics. We have a financial report with multiple compound layout views: MTD, YTD, by company, by dept. We are experiencing some memory-leak issues in Sawserver when we run the report. How does the view selector work? Does it run the query in the database when the report is first accessed and then build all the different views on the BI server, or is the query run against the database for the specific view when it is accessed? Are there any known performance issues with view selectors? Has anyone used a view selector for different versions of reports (MTD, YTD, ...)?
    Your feedback is greatly appreciated.
    Thanks in advance

    I am experiencing performance issues with the view selector. It repeats the SQL for each view, resulting in duplicate SQL runs and thus poor report performance. Is this the expected behavior of the view selector?
    Thanks.

Maybe you are looking for

  • Using an external hard drive with iTunes...

    My library has gotten so large that it really no longer fits on my computer. I'd like to move my music to an external hard drive. Unfortunately, I didnt realize the whole, "your music can only be moved 5 times before its frozen" deal until recently,

  • How to copy and pasting of the web catlog in OBISE1?

    hi, The usage tracking dashboard is created and while enabling it, it means copying the usage catlog folder from offline mode to the online mode -shared folder, its not copying...and throws an error msg: as "destination and source are same". plz find

  • Deletion is not working

    Dear all I have summary page in that delete submit button is there when i click on delete button i am calliing dialoug page in that i click on ok button i am deleting the record i wrote the code like this /*If user will click on delete button it will

  • Credit management information.

    Hello gurus, I am new to credit management with BW integration,please any one give me detail information about credit management Data Sources and info objects and cubes and ods. I want to impliment credit management data sources right now and i want

  • Vertical repeating background slice from image

    Hi. I'm trying to slice a portion of a Photoshop CS4 image and use it as a vertical repeating background at the top of a web page. Then I'm overlaying the full image in the horizontal center of the web page. No matter how much I try, the background s