Use multiple partitions of a table in a query

Hi All,
Overview:-
I have a table - TRACK - which is partitioned on a weekly basis. I'm using this table in one of my SQL queries, in which I need to find a monthly count of some column data. The query looks like:-
Select count(*)
from Barcode B
inner join Track partition (P99) T
    on B.item_barcode = T.item_barcode
where B.create_date between 20120202 and 20120209;
In the above query I am fetching the count for one week using the partition created on the table for that week.
Desired output:-
I want to fetch data between 01-Feb and 01-Mar, using the rest of the table's partitions for that period in the above query. The weekly partitions currently present for the Track table are -
P(99) - 20120202
P(100) - 20120209
P(101) - 20120216
P(102) - 20120223
P(103) - 20120301
My question is: above I've used one partition successfully; now how can I use the other 4 partitions in the same query if I am finding the count for one month (i.e. from 20120201 to 20120301)?
Environment:-
Oracle version - Oracle 10g R2 (10.2.0.4)
Operating System - AIX version 5
Thanks.
Edited by: Sandyboy036 on Mar 12, 2012 10:47 AM

I'm with damorgan on this one, though I was lazy and only read it twice.
You've got a mix of everything in this one and none of it is correct.
1. If B.create_date is VARCHAR2, this is the wrong way to store dates - dates belong in DATE columns.
2. All Track partitions are needed for one month if you only have the 5 partitions you listed, so there is no point in mentioning any of them by name. So the answer to 'how can I use the other 4 partitions' is: don't; Oracle will use them anyway.
3. BETWEEN 01-Feb and 01-Mar - be aware that the BETWEEN operator includes both endpoints, so if you actually used it in your query the data would include March 1.
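To illustrate point 2, here is a minimal sketch reusing the table and column names from the original query (numeric YYYYMMDD values assumed): drop the PARTITION clause, state the full date range, and let Oracle decide which partitions to read. The half-open range also avoids the BETWEEN endpoint issue from point 3.
Select count(*)
from Barcode B
inner join Track T  -- no PARTITION clause; Oracle reads every partition the query needs
    on B.item_barcode = T.item_barcode
where B.create_date >= 20120201  -- from 01-Feb inclusive
  and B.create_date <  20120301; -- up to, but not including, 01-Mar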

Similar Messages

  • Using multiple 'and' conditions in a SQL query

    Is it possible to reduce the SQL required to query using multiple 'and' conditions, e.g. I have a query like the following:
    select stat.personal_id, appt.username, appt.password, apps.rgn_apt_id, apps.apy_apn_id
    from apy_ast_application_status stat, rgn_usr_user appt, rgn_aps_applications apps
    where stat.apy_apn_id = apps.rgn_apt_id
    and apps.rgn_apt_id = appt.rgn_apt_id
    and stat.application_completed is null
    and stat.application_started_date > '01-MAY-11'
    and stat.amount_paid is null
    and stat.personal_details = 'C'
    and stat.further_details = 'C'
    and stat.education = 'C'
    and stat.employment = 'C'
    and stat.personal_statement = 'C'
    and stat.choices = 'C'
    and stat.reference = 'C'
    and stat.student_finance = 'C'
Is there a way to reduce all the multiple 'and' conditions so they can be read from, say, one line? If you know what I mean...

Ah, OK, this looks nice, thanks very much. It doesn't quite run as is, because the stat.amount_paid query value is 'is null' while the others are 'C'. I tried amending the relevant line to various versions of the following:-
    in (select 'is null' 'C','C','C','C','C','C','C','C' from dual)
    which doesn't work.
I can get the following to work, so I am assuming it is not possible to use different query values within the brackets of the 'in (select...' statement?
    select stat.personal_id, appt.username, appt.password, apps.rgn_apt_id, apps.apy_apn_id
    from apy_ast_application_status stat, rgn_usr_user appt, rgn_aps_applications apps
    where stat.apy_apn_id = apps.rgn_apt_id
    and apps.rgn_apt_id = appt.rgn_apt_id
    and stat.application_completed is null
    and stat.application_started_date > '01-MAY-11'
    and stat.amount_paid is null
    and (stat.personal_details, stat.further_details, stat.education,
    stat.employment, stat.personal_statement, stat.choices, stat.reference, stat.student_finance)
    in (select 'C','C','C','C','C','C','C','C' from dual)
Thanks for everybody's help - the suggested alternatives seem so much more elegant.
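For what it's worth, the NULL test can be folded into the same tuple comparison by wrapping the nullable column in NVL on both sides - a sketch, assuming stat.amount_paid is numeric and -1 can never occur as a real amount. These lines would replace the separate 'and stat.amount_paid is null' predicate:
and (nvl(stat.amount_paid, -1), stat.personal_details, stat.further_details, stat.education,
     stat.employment, stat.personal_statement, stat.choices, stat.reference, stat.student_finance)
in (select -1, 'C','C','C','C','C','C','C','C' from dual)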

  • Possible to swap multiple partitions into a table?

    Hi,
We are using partition exchange to swap individual partitions into a table which is then backed up.
This is being done one partition at a time.
Is it possible to swap several partitions of a table in one go?
Using Oracle 11.2.0.3.
Partitioned by date, one partition for each day.
Is it possible, say, to move the last 7 days' partitions into the other table for backup using partition exchange?
    Thanks

    >
We are using partition exchange to swap individual partitions into a table which is then backed up.
This is being done one partition at a time.
Is it possible to swap several partitions of a table in one go?
    >
    No.
If the goal is to back up the data, why not just use expdp to export the data for all seven partitions at once? Then drop the partitions.
    If you only use one regular table for the exchange you would have to start with an empty table, swap one partition, backup the table, truncate the table, swap the next partition and so on.
    Or you could create a table with seven empty partitions and swap the 7 partitions one at a time and then backup the new partitioned table.
    Or you could create seven tables and swap each one with a partition and then backup all seven tables.
    Too many choices.
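A sketch of the expdp route (directory, file and table names hypothetical; check the TABLES parameter rules for your Data Pump version, which accepts table:partition entries). Put the parameters in a file, run expdp system PARFILE=last7days.par, and drop the exported partitions afterwards:
# last7days.par - Data Pump parameter file (sketch; one entry per partition)
DIRECTORY=dpump_dir
DUMPFILE=last7days.dmp
LOGFILE=last7days.log
TABLES=app_owner.daily_data:P20140101,app_owner.daily_data:P20140102,app_owner.daily_data:P20140103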

  • Sort functionality using MULTIPLE columns in a table control

    Hi all,
I have a custom screen with a table control. Now I need to provide SORT functionality in this screen for the columns in the table control.
My questions:
1. Is it possible to select MULTIPLE columns in a table control for SORTING? If yes, what explicit settings do I need to make while creating the TABLE CONTROL in the screen? Do I need to set "Column selection" to MULTIPLE?
2. How do I write the code for SORT functionality for multiple columns?
I know how to write the code for SORTING on the basis of a single column.
    Thanks!

    Hi Rob,
    Thanks for the reply.
However, I was thinking to apply the same logic as for a single column, as follows:
TYPES: BEGIN OF ty_fields,
         c_fieldname(20),
       END OF ty_fields.
DATA: t_fields  TYPE TABLE OF ty_fields,
      wa_fields LIKE LINE OF t_fields.
WHEN 'SORTUP'. "Ascending
  LOOP AT tc01-cols INTO wa_tc01 WHERE selected = 'X'.
    SPLIT wa_tc01-screen-name AT '-' INTO g_help g_fieldname.
    wa_fields-c_fieldname = g_fieldname.
    APPEND wa_fields TO t_fields.
  ENDLOOP.
  DESCRIBE TABLE t_fields LINES l_index.
  c_count = 1.
  WHILE c_count <= l_index.
    READ TABLE t_fields INTO wa_fields INDEX c_count.
    CASE c_count.
      WHEN 1.
        l_field1 = wa_fields-c_fieldname.
      WHEN 2.
        l_field2 = wa_fields-c_fieldname.
      "and so on, depending on the number of columns in the table control
    ENDCASE.
    c_count = c_count + 1.
  ENDWHILE.
  SORT t_tvbdpl_scr BY (l_field1) (l_field2). "dynamic sort: one (l_fieldN) per selected column
Let me know if the above method will work! Also, for the above method to work, will the type of the fields (the columns on which the sort function will be applied) matter?
    Thanks again for your time.

  • Using OID's schema base tables for querying

    Is it possible to use the underlying tables of OID's schema in your PL/SQL code? One way of doing that is through the dbms_ldap package, but in my case I don't want to connect to OID; I want to use the base tables in which OID stores the data. We tried such an exercise but could not get it to work. The only option we were left with was creating our own tables with a structure similar to OID's and then creating views on them; these views were used instead of the dbms_ldap package.

    Edited by: user11244575 on Oct 26, 2012 12:34 PM

  • Using multiple sub indexes in a catsearch query

    Hi,
    I have a ctxcat index on a table with orgname as the indexed column and city and postal code as sub indexes.
    I could do a
    select *
    from xxx_org_search_v
    where 1 =1
    and catsearch(org_name,'green*','postal_code=''38016''') > 0;
    and
    select *
    from xxx_org_search_v
    where 1 =1
    and catsearch(org_name,'green*','city=''CORDOVA''') > 0;
    but how do I do this one?
    select *
    from xxx_org_search_v
    where 1 =1
    and catsearch(org_name,'green*','postal_code=''38016'' city=''CORDOVA''') > 0;
    What's the syntax for achieving the above?
    Thanks
    Guru

    SCOTT@orcl_11g> CREATE TABLE xxx_org_search_v
      2    (org_name     VARCHAR2(15),
      3       city          VARCHAR2(15),
      4       postal_code  VARCHAR2(15))
      5  /
    Table created.
    SCOTT@orcl_11g> INSERT ALL
      2  INTO xxx_org_search_v VALUES ('GREEN1', 'CITY1', '38016')
      3  INTO xxx_org_search_v VALUES ('GREEN2', 'CORDOVA', 'POST2')
      4  INTO xxx_org_search_v VALUES ('GREEN3', 'CITY3', 'POST3')
      5  INTO xxx_org_search_v VALUES ('GREEN4', 'CORDOVA', '38016')
      6  SELECT * FROM DUAL
      7  /
    4 rows created.
    SCOTT@orcl_11g> BEGIN
      2    CTX_DDL.CREATE_INDEX_SET ('iset');
      3    CTX_DDL.ADD_INDEX ('iset', 'city');
      4    CTX_DDL.ADD_INDEX ('iset', 'postal_code');
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11g> CREATE INDEX ctxcat_index ON xxx_org_search_v (org_name)
      2  INDEXTYPE IS CTXSYS.CTXCAT
      3  PARAMETERS ('INDEX SET iset')
      4  /
    Index created.
    SCOTT@orcl_11g> select *
      2  from   xxx_org_search_v
      3  where  catsearch (org_name, 'green*',
      4           'postal_code=''38016''') > 0
      5  /
    ORG_NAME        CITY            POSTAL_CODE
    GREEN1          CITY1           38016
    GREEN4          CORDOVA         38016
    SCOTT@orcl_11g> select *
      2  from   xxx_org_search_v
      3  where  catsearch (org_name, 'green*',
      4           'city=''CORDOVA''') > 0
      5  /
    ORG_NAME        CITY            POSTAL_CODE
    GREEN2          CORDOVA         POST2
    GREEN4          CORDOVA         38016
    SCOTT@orcl_11g> select *
      2  from   xxx_org_search_v
      3  where  catsearch (org_name, 'green*',
      4           'postal_code=''38016'' AND city=''CORDOVA''') > 0
      5  /
    ORG_NAME        CITY            POSTAL_CODE
    GREEN4          CORDOVA         38016
    SCOTT@orcl_11g>

  • Use of CDHDR and CDPOS Table in Query

    Hi all,
    I have a requirement where I need to fetch data from CDHDR and CDPOS by passing the release indicator from EKKO.
    I would like to know whether performance will be an issue with this, as CDHDR and CDPOS are tables which hold huge volumes of data.
    Please let me know what the impact on performance and other things will be.
    Regards
    KK

    Hi,
    Please check these links; they might be helpful:
    https://forums.sdn.sap.com/click.jspa?searchID=5749799&messageID=1456945
    Re: Delta - Records changed directly in table
    Generic delta queue for CDHDR
    LBWE, delta extraction without BW
    http://www.sap123.com/showthread.php?t=47
    Regards
    CSM Reddy
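    As a general rule of thumb (my addition, not from the links above): reads against CDHDR stay fast only when the leading key columns are supplied. A plain-SQL sketch, assuming EINKBELEG as the change-document class for purchasing documents and a hypothetical document number:
    -- Restrict CDHDR by its leading key columns (OBJECTCLAS, OBJECTID)
    -- so the primary key index can be used; an unkeyed scan over a
    -- large CDHDR is what causes the performance trouble.
    SELECT changenr, username, udate, utime
    FROM   cdhdr
    WHERE  objectclas = 'EINKBELEG'    -- change-document class for purchasing documents
    AND    objectid   = '4500000001';  -- the EKKO document number (EBELN)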

  • How to Query Multiple Fields from different Tables using Toplink Expression

    Hi,
    I am trying to prepare an Oracle TopLink Expression to query multiple columns of different tables. The query is as follows. Can anyone help?
    SELECT CYCLE.CYCLE_ID,
    CYCLE.ASPCUSTOMER_ID,
    CYCLE.FACILITYHEADER_ID,
    CYCLE.ADDUSER,
    ASP.FIRSTNAME || ' ' || ASP.LASTNAME ADDUSERNAME,
    CYCLE.ADDDATE,
    CYCLE.LASTUPDATEUSER,
    ASP.FIRSTNAME || ' ' || ASP.LASTNAME LASTUPDATEUSERNAME,
    CYCLE.LASTUPDATEDATE,
    CYCLE.CYCLENAME,
    CYCLE.CYCLENUMBER,
    CYCLE.DESCRIPTION
    FROM CYCLE,ASPUSER ASP
    WHERE CYCLE.ADDUSER = ASP.ASPUSER_ID
    and then I want to send that expression to the readAllObjects method as a parameter:
    Expression exp = (..............this is the required query expression...................)
    Vector employees = session.readAllObjects(getClass(), exp);
    thanks,

    You haven't given any information on the mapping between Cycle and Asp. I presume there is a one-to-one mapping between them. Also, it appears there is no WHERE clause to limit the number of cycles being retrieved. If that is the case, then I presume you want to load all cycles in the system.
    That's just a clientSession.readAllObjects(Cycle.class). If you have indirection turned on, the Asp should get loaded when you do a cycle.getAsp().
    I presume the SQL you posted loads all the columns of CYCLE and ASP. If you are interested in a subset of CYCLE or ASP, then you should use a ReportQuery or a partial object read.

  • Partitioning a fact table

    I am curious to hear techniques for partitioning a fact table with OWB. I know HOW to set up the partitioning for the table; what I am curious about is what type of partitioning everyone suggests. Take the following example: let's say we have a sales transaction fact table with dimensions of Date, Product, and Store. An immediate idea is to partition the table by month. But my curiosity arises in the method used to partition the fact table. There is no longer a true date field in the fact table to do range partitioning on, and hash partitioning will not distribute the records by month.
    One example I found was to "code" the surrogate key in the date dimension so that it was created in the following manner "YYYYMMDD". Then you could use the range partitioning based on values of the key in the fact table less than 20040200 for Jan. 2004, less than 20040300 for Feb. 2004, and so on.
    Is this a good idea?

    Jason,
    In general, obviously, query performance and scalability benefit from partitioning. Rather than hitting the entire table upon retrieving data, you would only hit a part of the table. There are two main strategies for choosing a partitioning scheme:
    1) Users always query specific parts of the data (e.g. data from a particular month), in which case it makes sense for that part to be the size of the partition. If your end users often query by month or compare data on a month-by-month basis, then partitioning by month may well be the right strategy.
    2) Improve data loading speed by creating partitions. The database supports partition exchange loading, supported by Warehouse Builder as well, which enables you to swap a temporary table and a partition at once. In general, your load frequency then decides your partitioning strategy: if you load on a daily basis, perhaps you want daily partitions. Beware that for Warehouse Builder to use the partition exchange loading feature you will have to have a date field in the fact table, so you would change the time dimension.
    In general, your suggestion for the generated surrogate key would work.
    Thanks,
    Mark.
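    For illustration, the coded surrogate key approach from the question might look like this as straight DDL (table, column and partition names hypothetical):
    CREATE TABLE sales_fact (
      date_key    NUMBER(8) NOT NULL,  -- surrogate key coded as YYYYMMDD
      product_key NUMBER    NOT NULL,
      store_key   NUMBER    NOT NULL,
      sales_amt   NUMBER
    )
    PARTITION BY RANGE (date_key) (
      PARTITION p200401 VALUES LESS THAN (20040200),  -- Jan. 2004
      PARTITION p200402 VALUES LESS THAN (20040300),  -- Feb. 2004
      PARTITION p_max   VALUES LESS THAN (MAXVALUE)   -- catch-all
    );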

  • Multiple instances of a table & join supported in OBIEE SQLQuery report

    Hello All,
    I am creating a report in BIP based on the RPD created in OBIEE.
    I have to use multiple instances of the same table in this case. But when I do that, I am getting a "The query contains a self join/This is a non-supported operation" error.
    Has anybody got this error before? Could anybody help me solve this?
    Thanks
    Narasimha Rao


  • Datapump Export - multiple EXCLUDE patterns for TABLE

    I'm performing an export and I have two classes of tables (as in LIKE filters) that I wish to exclude. I've tried using multiple LIKE statements:
    EXCLUDE=TABLE:"LIKE 'FILTER1%'"
    EXCLUDE=TABLE:"LIKE 'FILTER2%'"
    However, this way it appears the second EXCLUDE overwrites the first and only tables matching FILTER2% are excluded.
    Doing it like this has the same behavior; only tables matching FILTER2% are excluded:
    EXCLUDE=TABLE:"LIKE 'FILTER1%'",TABLE:"LIKE 'FILTER2%'"
    The following are not syntactically correct but seemed worth trying
    EXCLUDE=TABLE:"LIKE 'FILTER1%' OR 'FILTER2%'"
    EXCLUDE=TABLE:"LIKE 'FILTER1%' OR LIKE 'FILTER2%'"
    Is there any way to accomplish what I'm trying to do here? This is 10.2.0.2.
    Thanks

    Hi,
    I can figure out a way for export, but not for import. If this is a user exporting its own tables, then you could use this:
    exclude=table:'IN (select table_name from user_tables where table_name like ''TAB1%'' or table_name like ''TAB2%'')'
    If you are doing this for multiple schemas, then you need to use something like:
    exclude=table:'IN (select table_name from dba_tables where table_name like ''TAB1%'' or table_name like ''TAB2%'')'
    This does not work for import since, chances are, the tables don't exist, so the query will return no rows.
    Dean
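    For what it's worth, putting the filter in a parameter file sidesteps the shell-escaping issues with the quotes - a sketch using the FILTER names from the question (directory, dump file and schema names hypothetical), run as expdp scott PARFILE=exclude_filters.par:
    # exclude_filters.par - Data Pump parameter file (sketch)
    DIRECTORY=dpump_dir
    DUMPFILE=no_filter_tables.dmp
    SCHEMAS=scott
    EXCLUDE=TABLE:"IN (SELECT table_name FROM user_tables WHERE table_name LIKE 'FILTER1%' OR table_name LIKE 'FILTER2%')"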

  • Best practice for multiple instances of the same BEX query

    Hi there,
    I'm wondering what's the best way to use multiple instances of the same BEX query. Let me explain what I mean:
    I have a dashboard with different queries feeding different periods of time, such as week to date, month to date and so on - one query for each, since each is based on a user exit.
    For each query I want to show different data in different sections of my dashboard, for example: sales per director, sales per customer group, sales per day, sales per week and the like. I tried to connect a simple bar chart via a direct connection, but with no success due to the multiple lines generated by the addition of the sales director, customer group, week number and so on.
    My question is about the way to connect the different queries efficiently in order to show the different data while avoiding multiple useless lines.
    The image above shows the query browser where, for example, for a Month to date query there will be multiple lines for each week as well as one line for each director. If, for two different components, I want to show data per week and data per director or some other representation, what is the best practice:
    Add another instance of the same query and only put the week information in one, and another with only the director info?
    Should I bind those to the Excel file and use formulas to make the final calculations?
    Will there be performance issues from adding different instances of the same query?
    I have 6 different queries (read: 6 user exits that filter time).
    Depending on the best practice there might be 4 instances of each, for a total of 24 instances in the query browser.
    I hope my question is clear enough, if not please do not hesitate I'll clarify as much as possible.
    Regards,
    Steve

    Hi Steve,
    You might have been trying to find a solution for a long time. If I understood your question correctly, let me clarify a few points.
    You are accessing a BEx query that is designed with exits in the background, pulling all the dimensions and key figures through a single connection, and then trying to map that data onto the charts.
    Steve, try to make more connections based upon the logic and split them: use the same query, but split it by sales per customer group, sales per day, and sales per week, making three different connections. You can merge the prompts from all connections.
    Hope this helps!
    Sorry if I misunderstood your question.
    --SumanT

  • Installing Lion clean on hard drive with multiple partitions

    I have a spring 2008 24" iMac running Snow Leopard.
    I am about to put a new 2TB hard drive in it and after I do that I want to do a clean install of Lion on it.
    I do not want to upgrade my Snow Leopard install to Lion. I will keep it on my backup drive as a fallback in case of serious workflow incompatibilities with the new OS.
    For my workflow I create and use multiple partitions (Mac OS, Windows and multiple HFS+ for data) on my hard drive, and I have seen that Lion creates its own hidden recovery partition as well for the recovery functionality.
    My questions are:
    1) Will I have issues running Lion on a partition on a hard drive with multiple partitions that have different file systems?
    2) If I install Lion into one of these partitions, will it create its recovery partition within the space of the partition it is being installed into?
    3) I will be creating a clean install by downloading Lion using the App Store and then burning an installer DVD using instructions I found elsewhere and then using that to do the install on the new drive. Is that the best route to take?
    All my current data I will have on a backup external hard drive and after I complete the Lion install on the new larger drive I will manually reinstall all my software and move my data back from my backup drive to the new drive one partition at a time except of course for OS partition. I keep all my real user data outside of that partition anyway.

    I believe this article answers most of your questions.
    http://support.apple.com/kb/HT4718
    or possibly
    http://support.apple.com/kb/HT4649
    You will most likely run into the error message that "Some features of Mac OS X Lion are not supported for the disk" if you have multiple partitions set up, especially if they were not set up using Boot Camp and/or have several different file systems.
    You can confirm that the Recovery Partition will not be installed by checking Disk Utility for your current partition map scheme.
    This is most definitely not the end of the world as it is quite easy to create an external Recovery disk.
    1) No, you shouldn't have issues running Lion, but Recovery HD will not be created.
    2) No, and in your case it doesn't sound like it will be installed on your internal drive at all.
    3) Yes. If you begin with an empty partition, then install Lion that would be considered a clean install.
    Hope that helps.
    Autumn

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

    There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6,
    the script below (SQLCMD mode; set the DataDrive & LogDrive variables for the runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across
    4 file groups; an empty partition, file group and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical,
    location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE RIGHT FOR VALUES
    (
    0,
    15,
    30,
    45,
    60
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    ) ON $(TableName)_scheme(Partition_PK)
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (
    DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0)
    )
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
    If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
    The file in File Group 2 appears to contain data, but it can be dropped. Although the system views report the data as being in File Group 2, it still physically resides in File Group 3 and isn't moved until the index is rebuilt. The RANGE RIGHT function means the left file group (File Group 2) is retained when splitting ranges.
    RANGE LEFT would have retained the data in File Group 3, where it already resided; no INDEX REBUILD is necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions) on the test table, but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE LEFT FOR VALUES
    (
    -1,
    14,
    29,
    44,
    59
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    ) ON $(TableName)_scheme(Partition_PK)
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (
    DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0)
    )
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
    RANGE RIGHT would not be a problem in a 'Sliding Window' if the same file group were used for all partitions; with a separate file group per partition, creating and dropping partitions introduces a dependency on full index rebuilds. Larger tables are typically partitioned, and a full index rebuild
    might be an expensive operation. I'm not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (multiple files) for all partitions
    within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
    If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE (assuming a typical ascending partitioning key), and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help
    investigating this.
    NOTE 10/03/2014 - The solution
    The solution is so easy it's embarrassing, I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
    -- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
    The empty, switched-out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range, and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT Sliding Windows using multiple
    file groups, and apologize :-)

    Hi Paul Brewer,
    Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply to share your solution. That way, other community members can benefit from your sharing.
    Regards.
    Sofiya Li
    TechNet Community Support

  • In a SQL query which has a join, how to reduce multiple instances of a table

    In a SQL query which has a join, how do I reduce multiple instances of a table?
    Here is an example. I am using Oracle 9i.
    Is there a way to reduce the number of PERSON instances in the following query? Or can I optimize this query further?
    TABLES:
    mail_table
    mail_id, from_person_id, to_person_id, cc_person_id, subject, body
    person_table
    person_id, name, email
    QUERY:
    SELECT p_from.name from_name, p_to.name to_name, p_cc.name cc_name, subject
    FROM mail, person p_from, person p_to, person p_cc
    WHERE from_person_id = p_from.person_id
    AND to_person_id = p_to.person_id
    AND cc_person_id = p_cc.person_id
    Thanks in advance,
    Babu.

    SQL> select * from mail;
            ID          F          T         CC
             1          1          2          3
    SQL> select * from person;
           PID NAME
             1 a
             2 b
             3 c
    --Query with only ne Instance of PERSON Table
    SQL> select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
      2         max(decode(m.t,p.pid,p.name)) to_name,
      3         max(decode(m.cc,p.pid,p.name)) cc_name
      4  from mail m,person p
      5  where m.f = p.pid
      6  or m.t = p.pid
      7  or m.cc = p.pid
      8  group by m.id;
            ID FRM_NAME   TO_NAME    CC_NAME
             1 a          b          c
    --Expalin plan for "One instance" Query
    SQL> explain plan for
      2  select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
      3         max(decode(m.t,p.pid,p.name)) to_name,
      4         max(decode(m.cc,p.pid,p.name)) cc_name
      5  from mail m,person p
      6  where m.f = p.pid
      7  or m.t = p.pid
      8  or m.cc = p.pid
      9  group by m.id;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 902563036
    | Id  | Operation           | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |        |     3 |   216 |     7  (15)| 00:00:01 |
    |   1 |  HASH GROUP BY      |        |     3 |   216 |     7  (15)| 00:00:01 |
    |   2 |   NESTED LOOPS      |        |     3 |   216 |     6   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL| MAIL   |     1 |    52 |     3   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL| PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       4 - filter("M"."F"="P"."PID" OR "M"."T"="P"."PID" OR
                  "M"."CC"="P"."PID")
    Note
       - dynamic sampling used for this statement
    --Explain plan for "Normal" query
    SQL> explain plan for
      2  select m.id,pf.name fname,pt.name tname,pcc.name ccname
      3  from mail m,person pf,person pt,person pcc
      4  where m.f = pf.pid
      5  and m.t = pt.pid
      6  and m.cc = pcc.pid;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4145845855
    | Id  | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |        |     1 |   112 |    14  (15)| 00:00:01 |
    |*  1 |  HASH JOIN           |        |     1 |   112 |    14  (15)| 00:00:01 |
    |*  2 |   HASH JOIN          |        |     1 |    92 |    10  (10)| 00:00:01 |
    |*  3 |    HASH JOIN         |        |     1 |    72 |     7  (15)| 00:00:01 |
    |   4 |     TABLE ACCESS FULL| MAIL   |     1 |    52 |     3   (0)| 00:00:01 |
    |   5 |     TABLE ACCESS FULL| PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |   6 |    TABLE ACCESS FULL | PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    |   7 |   TABLE ACCESS FULL  | PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("M"."CC"="PCC"."PID")
       2 - access("M"."T"="PT"."PID")
       3 - access("M"."F"="PF"."PID")
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement
    25 rows selected.
    Message was edited by:
            jeneesh
    No indexes created...
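    For completeness, another way to avoid repeating PERSON in the FROM list is a scalar subquery per role - a sketch against the same example tables (it assumes pid is unique; and like the DECODE version, a mail whose id has no matching person shows a NULL name instead of dropping out of the result):
    select m.id,
           (select p.name from person p where p.pid = m.f)  frm_name,
           (select p.name from person p where p.pid = m.t)  to_name,
           (select p.name from person p where p.pid = m.cc) cc_name
    from   mail m;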
