Partition by, rank over ....  ?

Hi, my dear SQL/PL/SQL expert gurus. It's a real pleasure to learn from you.
I doubt my problem will be difficult for you, so I can't wait to see your advice. :)
I have this query:
SQL> SELECT   COUNT (*), states
  2      FROM my_case
  3     WHERE states IN (1, 2)
  4  GROUP BY states
  5  ORDER BY 1 DESC;
  COUNT(*)      STATES
   1624307          1
     12547          2

Is there some way to include the SUM function and get output similar to this:
SQL> SELECT   COUNT (*), states
  2      FROM my_case
  3     WHERE states IN (1, 2)
  4  GROUP BY states
  5  ORDER BY 1 DESC;
  COUNT(*)      STATES
   1624307          1
     12547          2
    SUM
  1636854

Hi,
You can use ROLLUP to get totals, like this:
SELECT       COUNT (*)
,       NVL ( TO_CHAR (deptno)
           , 'Total'
           )          AS dept
FROM       scott.emp
WHERE       deptno     IN (10, 20)
GROUP BY  ROLLUP (deptno)
;
Output:
  COUNT(*) DEPT
         3 10
         5 20
         8 Total
Is output like:
    SUM
  1636854
an important part of this problem?
If so, does the above need to be 4 separate rows in the result set, or would a single row (with some embedded CHR(13) characters to start new lines) be good enough?
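If the separate total row is what you need, the same ROLLUP pattern applied to the query in the question would look something like this (just a sketch, reusing the my_case table and states column from the original post; the NVL/TO_CHAR is only there to label the total row, which comes back with a NULL states value):
SELECT    COUNT (*)
,         NVL ( TO_CHAR (states)
              , 'SUM'
              )                 AS states
FROM      my_case
WHERE     states IN (1, 2)
GROUP BY  ROLLUP (states)
ORDER BY  GROUPING (states)          -- detail rows first, total row last
,         COUNT (*) DESC
;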

Similar Messages

  • MDX Rank Over Partition

    Are there any MDX gurus who can help me?
    I am trying to produce an MDX query that generates a ranked result set. Within it I need two levels of ranking based on Net Sales: first the ranking within the overall set, and second a ranking partitioned by an attribute dimension (the equivalent of RANK() OVER (PARTITION BY ...) in SQL Server), with the final result set sorted alphabetically by the attribute name and then by Net Sales. So far I have the sorting and the overall ranking working, but not the partitioned rank. Any solution will need to be fast, as the base dimension has 100K members.
    My current MDX looks like this:
    WITH
    SET [Divisions] AS '[AttributeContract].[Att_CC01].Children'
    SET [ContractsByDiv] AS
    'GENERATE(
      ORDER(
        [AttributeContract].[Att_CC01].Children,
        AttributeContract.CurrentMember.[MEMBER_NAME],
        BASC
      ),
      CROSSJOIN(
        {AttributeContract.CurrentMember},
        ORDER(
          NonEmptySubset(
            UDA([Entity].[Entity Contract Reporting], "Contract")
          ),
          [Net Sales],
          BDESC
        )
      )
    )'
    MEMBER [Account].[Overall Rank] AS 'Rank([ContractsByDiv].CurrentTuple,[ContractsByDiv],[Net Sales])'
    MEMBER [Account].[Rank In Div] AS '1'
    SELECT
    {
      [Net Sales]
      ,[Overall Rank]
      ,[Rank In Div]
    } ON COLUMNS,
    {
      [ContractsByDiv]
    } ON ROWS
    FROM HCREPRT2.Analysis
    WHERE
    (
      [Year].[FY13],
      [Period].[BegBalance],
      [ISBN Type].[Total ISBN Type],
      [Lifecycle].[Front List],
      [Scenario].[DPG_Budget],
      [Market].[Total Market],
      [Version].[Working],
      [Sales Channel].[Total Sales Channel]
    )
    Any suggestions as to how to do this or whether it is possible?
    Regards,
    Gavin
    Edited by: GavinH on 07-Mar-2012 02:57

    This was the solution I came up with:
    The following query returns a result set with the data ranked across the overall set and with a ranking partitioned by division:
    WITH
    SET [Divisions] AS 'ORDER([AttributeContract].[Att_CC01].Children,AttributeContract.CurrentMember.[MEMBER_NAME],BASC)'
    SET [EntitySet] AS 'ORDER(NonEmptySubset(UDA([Entity].[Entity Contract Reporting], "Contract")),[Net Sales],BDESC)'
    SET [ContractsByDiv] AS
    'GENERATE(
      [Divisions],
      CROSSJOIN(
        {AttributeContract.CurrentMember},
        NonEmptySubset([EntitySet])
      )
    )'
    -- Rank in whole data set
    MEMBER [Account].[Overall Rank] AS 'Rank([ContractsByDiv].CurrentTuple,[ContractsByDiv],[Net Sales])'
    -- Ranking in division
    MEMBER [Account].[Rank In Div] AS
    'Rank(
      ([AttributeContract].CurrentMember,[Entity].[Entity Contract Reporting].CurrentMember),
      CROSSJOIN(
        {AttributeContract.CurrentMember},
        NonEmptySubset([EntitySet])
      ),
      [Net Sales]
    )'
    -- Rownumber
    MEMBER [Account].[RowNumber] AS 'RANK([ContractsByDiv].CurrentTuple,[ContractsByDiv],1,ORDINALRANK)'
    SELECT
    {
      [Net Sales]
      ,[Overall Rank]
      ,[Rank In Div]
      ,[RowNumber]
    } ON COLUMNS,
    {
      [ContractsByDiv]
    } ON ROWS
    FROM HCREPRT2.Analysis
    WHERE
    (
      [Year].[FY13],
      [Period].[BegBalance],
      [ISBN Type].[Total ISBN Type],
      [Lifecycle].[Front List],
      [Scenario].[DPG_Budget],
      [Market].[Total Market],
      [Version].[Working],
      [Sales Channel].[Total Sales Channel]
    )
    The key was to use the cross join portion of the generate statement used to create the overall set as the set for the intra divisional ranking.
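    For comparison, the two levels of ranking this thread is after map onto plain analytic functions in relational SQL; a minimal sketch (table and column names here are hypothetical, not from the thread):
    SELECT   division
    ,        contract
    ,        net_sales
    ,        RANK() OVER (ORDER BY net_sales DESC)                       AS overall_rank
    ,        RANK() OVER (PARTITION BY division ORDER BY net_sales DESC) AS rank_in_div
    FROM     contract_sales
    ORDER BY division, net_sales DESC;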

  • Dear SQL-Experts:  RANK() OVER (...) - Function

    Hello,
    I'm looking for a solution to the following SQL SELECT problem.
    Imagine a table or a view with:
    many items with a composite key (5 columns),
    some item-specific data columns, and...
    ...at least an integer column which counts duplicate entries of items with the same key (call it U_EntryCnt).
    What I need is a SELECT statement which selects all items BUT only one of the duplicate entries (if there are duplicates).
    It should select the one of the duplicate entries which has the maximum value in the U_EntryCnt column.
    My idea is to create a VIEW like (three key-columns in this example):
    SELECT
         RANK() OVER (PARTITION BY U_KeyCol1, U_KeyCol2, U_KeyCol3 ORDER BY U_EntryCnt DESC) AS Rank,
         U_KeyCol1, U_KeyCol2, U_KeyCol3,
         U_DatCol1, U_DatCol2,..........U_DatColN,
            U_EntryCnt
    FROM
         [@TST_EXAMPLE]
    And afterwards SELECT FROM that VIEW WHERE Rank=1.
    A test on a little example table seems to work. But could somebody tell me if this is the right way?
    Thanks,
    Roland
    PS.:
    GROUP BY does not work, in my opinion. Whenever I want to SELECT a column, that column must be added to the GROUP BY clause. Once a selected data column differs within the rows sharing the same key, I get two results for the same key, and that is not wanted.

    Hi Robin,
    thanks for your answer. (I hope I've understood everything correctly - this problem also seems to push me to the boundary of my English ;-/ )
    Within the duplicate-key rows the MAX aggregate does not give me the correct result: it does not return the data from the row with the MAX(U_EntryCnt), but rather the maximum of the data values within the duplicate-key rows.
    The best would be a complete example.
    Here is the example table with (unsorted) keys, data and the entry counter (only one example DataCol for clarity):
    KeyCol1 |KeyCol2 |KeyCol3 |DataCol1     |EntryCnt
    ================================================
    A     |A     |1       |AA1 XXX     |1
    B     |B     |1       |BB1 Wanted     |2
    A     |A     |1       |AA1 Wanted     |2
    B     |B     |1       |BB1 XXX     |1
    C     |C     |1       |CC1 Wanted     |1
    The wanted rows are marked with "Wanted" in the data column. Technically they're wanted because these rows contain the maximum EntryCnt value within their group of duplicate-key rows.
    Robin:
    When you talk about a sub-select I think you mean something like this:
    SELECT
         T0.U_KeyCol1, T0.U_KeyCol2, T0.U_KeyCol3, MAX(T0.U_EntryCnt),
         (SELECT T1.U_DatCol1
         FROM [@TST_EXAMPLE] T1
         WHERE
         T0.U_KeyCol1=T1.U_KeyCol1 AND
         T0.U_KeyCol2=T1.U_KeyCol2 AND
         T0.U_KeyCol3=T1.U_KeyCol3 AND
         T1.U_EntryCnt=MAX(T0.U_EntryCnt)) AS DatCol1,
         (SELECT T1.U_DatColN
         FROM [@TST_EXAMPLE] T1
         WHERE
         T0.U_KeyCol1=T1.U_KeyCol1 AND
         T0.U_KeyCol2=T1.U_KeyCol2 AND
         T0.U_KeyCol3=T1.U_KeyCol3 AND
         T1.U_EntryCnt=MAX(T0.U_EntryCnt)) AS DatColN
    FROM
         [@TST_EXAMPLE] T0
    GROUP BY
         T0.U_KeyCol1, T0.U_KeyCol2, T0.U_KeyCol3
    Yes: this also works.
    As far as I know, every column needs its own sub-select - very extensive when we talk about 20 to 40 columns.
    If the RANK function really gives the same result under all circumstances (in this example it does), it's much easier:
    First create a VIEW which contains a Rank-column:
    CREATE VIEW [dbo].[@TST_EXAMPLE_VIEW] AS
    SELECT
         RANK() OVER (PARTITION BY U_KeyCol1, U_KeyCol2, U_KeyCol3 ORDER BY U_EntryCnt DESC) AS Rank,
         Code, Name,
         U_KeyCol1, U_KeyCol2, U_KeyCol3,
         U_DatCol1, U_DatCol2, U_DatCol3, U_DatCol4,
         U_EntryCnt
    FROM
         [@TST_EXAMPLE]
    And now the condition WHERE Rank=1 seems to give the wanted rows (in the example it does :-):
    SELECT * FROM [@TST_EXAMPLE_VIEW] WHERE Rank=1
    Because this is a much more clearly arranged query, it would be nice if somebody could confirm that this is also a correct way.
    Another question is which of the two query examples has the better SQL Server performance (which is faster).
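    (A minimal sketch of the same idea without a separate view, using a derived table instead; the column list is borrowed from the view definition above. If ties in U_EntryCnt are possible within a key, ROW_NUMBER() instead of RANK() would guarantee exactly one row per key.)
    SELECT  Code, Name,
            U_KeyCol1, U_KeyCol2, U_KeyCol3,
            U_DatCol1, U_DatCol2, U_DatCol3, U_DatCol4,
            U_EntryCnt
    FROM   (SELECT T0.*,
                   RANK() OVER (PARTITION BY U_KeyCol1, U_KeyCol2, U_KeyCol3
                                ORDER BY U_EntryCnt DESC) AS rnk
            FROM   [@TST_EXAMPLE] T0) AS ranked
    WHERE  rnk = 1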

  • How to use the rank() over() function in block coding

    Hi,
    I am having a problem using the RANK() OVER function in block coding: I can't use it in the declaration section with a SELECT statement.
    How can I use it in the executable section of PL/SQL?
    --Sujan
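    (A minimal sketch of one workaround, assuming the issue is an older PL/SQL parser that rejects analytic syntax in static SQL: open a ref cursor over dynamic SQL in the executable section, so only the SQL engine ever sees the OVER clause. The table and columns below are just the scott.emp demo schema, used for illustration.)
    DECLARE
        rc       SYS_REFCURSOR;
        v_ename  VARCHAR2(30);
        v_rank   NUMBER;
    BEGIN
        -- dynamic SQL keeps the analytic clause out of the PL/SQL parser
        OPEN rc FOR
            'SELECT ename,
                    RANK() OVER (PARTITION BY deptno ORDER BY sal DESC)
             FROM   scott.emp';
        LOOP
            FETCH rc INTO v_ename, v_rank;
            EXIT WHEN rc%NOTFOUND;
            DBMS_OUTPUT.PUT_LINE (v_ename || ': ' || v_rank);
        END LOOP;
        CLOSE rc;
    END;
    /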

    thanks

  • How to use rank() over(...) in a map?

    I use rank() over(...) in a Filter operator.
    Validate gives the error ORA-30483.

    Currently OWB does not directly support analytic functions. The (not very elegant) workaround could be to implement this feature in a custom transformation.
    Regards:
    Igor
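    (For context, ORA-30483 means "window functions are not allowed here" - typically because the analytic appears in a WHERE/filter predicate. Outside OWB the usual pattern is to compute the rank in an inline view and filter on the result; a minimal sketch with a hypothetical orders table:)
    SELECT *
    FROM  (SELECT o.*,
                  RANK() OVER (ORDER BY amount DESC) AS rnk
           FROM   orders o)
    WHERE  rnk <= 10;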

  • No possible debug for mapping containing "rank() over"

    It looks like mappings using "rank() over" functions (most of mine) are not candidates for the debug functionality :-/
    Analyzing map for debug...
    Retrieving runtime connection info...
    Connecting to runtime schema...
    Checking character set of runtime schema...
    Configuring sources and targets...
    Analyzing map for debug...
    Configuring sources and targets...
    Validating map...
    Correlated Commit is OFF.
    Generating debug package...
    Deploying temp debug tables...
    Deploying debug package...
    Debug code deployment messages:
    LINE 5964 ,COLUMN 66:
    PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
    . ( * % & = - + ; < / > at in is mod not rem
    <an exponent (**)> <> or != or ~= >= <= <> and or like
    between ||
    End debug code deployment messages
    DBG1012: Debug deployment errors, can't run debug code.

    That's right.
    Though, using the deduplicator in order to nest the rank() function (so it can be used in the outer WHERE clause) is totally prohibitive, and creating dummy tables is messy in its own way.
    I chose to create a dummy constant vector that matches the shape of the rest of the query, filter it down to an empty set, and then UNION ALL it with the main query, which is afterwards filtered on the rank column (and it works pretty well).
    It's just a pity that these mappings (my most complex ones) can't be run in the debugger.
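    (Roughly what that workaround looks like in plain SQL - a sketch with hypothetical table and column names: a constant row of the right shape, filtered to an empty set, UNION ALLed with the real query, and the rank column filtered outside.)
    SELECT *
    FROM  (
            -- constant vector with the same shape as the main query, reduced to no rows
            SELECT CAST(NULL AS NUMBER)       AS id,
                   CAST(NULL AS VARCHAR2(30)) AS name,
                   CAST(NULL AS NUMBER)       AS rnk
            FROM   dual
            WHERE  1 = 0
            UNION ALL
            -- the real query carrying the analytic rank
            SELECT id,
                   name,
                   RANK() OVER (PARTITION BY grp ORDER BY amount DESC) AS rnk
            FROM   source_table
          )
    WHERE  rnk = 1;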

  • How does the analytical function RANK() OVER work in Oracle?

    Hi,
    Can I get information about how Oracle internally handles RANK() OVER?

    http://tahiti.oracle.com
    http://asktom.oracle.com

  • Ranking over consecutive dates

    Hi guys,
    I have been struggling with this for some days already. I have tried to implement different solutions found by searching the internet and to apply some approaches learned from this forum on similar questions, but maybe if I put the question here someone will know exactly the best way to handle it.
    So I have a table like this:
    with payments as
    (
    select '1' as ID, '20130331' as DateR, 'Not_paid' as Status from dual
    union
    select '1' as ID, '20130430' as DateR, 'Paid' as Status from dual
    union
    select '1' as ID, '20130531' as DateR, 'Not_paid' as Status from dual
    union
    select '2' as ID, '20130331' as DateR, 'Not_paid' as Status from dual
    union
    select '2' as ID, '20130430' as DateR, 'Not_paid' as Status from dual
    union
    select '3' as ID, '20130331' as DateR, 'Paid' as Status from dual
    union
    select '3' as ID, '20130430' as DateR, 'Paid' as Status from dual
    union
    select '3' as ID, '20130531' as DateR, 'Paid' as Status from dual
    )
    And I am trying to get the number of current consecutive non-payments for a user. The count restarts every time the previous payment was 'Paid', or when it is the first payment.
    I have tried to get there with DENSE_RANK, using this query:
    select ID, dater, status, dense_rank() over (partition by ID, status order by dater asc) rnk from payments
    But I need to get something like this:
    ID DATER STATUS RNK
    1   20130331  Not_paid  1
    1   20130430  Paid  1
    1   20130531  Not_Paid  1
    2   20130331  Not_paid  1
    2   20130430  Not_paid  2
    3   20130331  Paid  1
    3   20130430  Paid  2
    3   20130531  Paid  3
    Such that if I then take MAX(rnk) to check how many non-payments a user currently has, I get that ID 1 has 1 non-payment, ID 2 has two consecutive non-payments, and ID 3 has 0 non-payments. This matters because on the fourth consecutive non-payment I have to consider the user as churned.

    Hi,
    Here's one way:
    WITH got_grp_num AS
    (
        SELECT  p.*
        ,         ROW_NUMBER () OVER ( PARTITION BY  id
                                       ORDER BY      dater
                                     )
                - ROW_NUMBER () OVER ( PARTITION BY  id, status
                                       ORDER BY      dater
                                     )   AS grp_num
        FROM    payments  p
    )
    SELECT    id, dater, status
    ,         ROW_NUMBER () OVER ( PARTITION BY  id, status, grp_num
                                   ORDER BY      dater
                                 )   AS rnk
    FROM      got_grp_num
    ORDER BY  id, dater
    ;
    For an explanation of the Fixed Difference technique used here, see
    https://forums.oracle.com/forums/thread.jspa?threadID=2302781 and/or
    https://forums.oracle.com/forums/thread.jspa?threadID=2303591
    By the way, storing date information in a string column is simply asking for trouble. Use a DATE column instead.
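    (To see why the difference of the two ROW_NUMBERs identifies a streak: within a run of consecutive rows having the same status, both counters advance together, so their difference stays constant. Hand-computed from the sample data above for ID 1:)
    ID  DATER     STATUS    rn_by_id  rn_by_id_status  grp_num
    1   20130331  Not_paid  1         1                0
    1   20130430  Paid      2         1                1
    1   20130531  Not_paid  3         2                1
    Ranking within (id, status, grp_num) then restarts at 1 for each streak, which gives the requested output.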

  • Replicating router partitions for Fail/Over ?

    Hi everyone,
    Is it really worth it to replicate a router partition on the same node?
    In other words, has anyone ever seen a router partition fail on its own (e.g. due to running out of memory), i.e. not because of a node failure?
    What I am looking for is to avoid wasting system resources on unneeded replicates.
    Thanks.
    Vincent Figari

    You will use Active/Standby failover method to keep your fail-over configuration in secondary firewall (PIX).
    Active/Standby Failover lets you use a standby security appliance to take over the functionality of a failed unit. When the active unit fails, it changes to the standby state while the standby unit changes to the active state. The unit that becomes active assumes the IP addresses (or, for a transparent firewall, the management IP address) and MAC addresses of the failed unit and begins to pass traffic. The unit that is now in standby state takes over the standby IP addresses and MAC addresses. Because network devices see no change in the MAC to IP address pairing, no ARP entries change or time out anywhere on the network. PIX Security Appliance with 7.x version and above supports failover.
    For further information click this link.
    http://www.cisco.com/en/US/products/hw/vpndevc/ps2030/products_configuration_example09186a00807dac5f.shtml#regu

  • Cannot resize my main partition, random left over space?

    So, as you can understand from the topic, I can't resize my Mac's main partition. There is a grayed-out area of the partition, and when I try to resize it, it jumps back to the original size with the grayed-out area.

    No, you are starting up using your Recovery partition, so that you can use Disk Utility from there without the limitations that happen when you are booted from the main HD. And in fact, the unrecoverable section is probably your Recovery Partition.

  • Root partition filling up over time

    I am running Sol 10 11/06 on a Sun Blade. I installed Sol 10 from DVD a few months ago, making my / partition 12GB in size. /var is not a separate partition. Sun Studio 11 was also installed. I have installed all available patches using Sun Update Manager. Since installation, my / partition has filled up with more than 860 MB of data, none of it from me. I assume that all that space is being used up by superseded versions of installed patches. Does this seem normal? Am I supposed to put up with this until the disk fills up with old, unused patches? /var/adm is ok in terms of the wtmpx file being small (678k). /tmp is clean. I shutdown the computer whenever I am not using it, so MANY reboots have occurred. Here is my df -h:
    df -h
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c0t0d0s0 12G 6.3G 5.4G 55% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 5.2G 1016K 5.2G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    fd 0K 0K 0K 0% /dev/fd
    swap 5.2G 1.6M 5.2G 1% /tmp
    swap 5.2G 48K 5.2G 1% /var/run
    /dev/dsk/c0t2d0s7 37G 14G 22G 40% /storage1
    /dev/dsk/c0t0d0s3 19G 1000M 18G 6% /storage2
    /dev/dsk/c0t0d0s7 1.9G 687M 1.2G 36% /export/home
    Is there somewhere else I should look for large log/error files, or is the space usage on / normal? Should I back out all the unused patches (I am very, very reluctant to waste time on that)?
    Thank you.......

    Thank you, Paul. I agree with your thought that we will have to live with the space taken up by superseded patches. I ran the du command listed above:
    46817 /var/sadm/pkg/SUNWj5rt/save/pspool/SUNWj5rt/save/118666-10
    46818 /var/sadm/pkg/SUNWj5rt/save/118666-10
    46905 /var/sadm/pkg/SUNWj5rt/save/118666-11
    46905 /var/sadm/pkg/SUNWj5rt/save/pspool/SUNWj5rt/save/118666-11
    47399 /var/sadm/pkg/SUNWj5rtx/save
    47406 /var/sadm/pkg/SUNWj5rtx
    47485 /usr/sadm/lib
    47767 /opt/SUNWspro/prod/lib/v8
    47874 /usr/appserver/lib
    48525 /usr/openwin/lib/sparcv9
    48717 /usr/lib/AdobeReader/Resource
    50997 /usr/sadm
    51545 /opt/SUNWspro/prod/lib/v9/libp
    51916 /var/sadm/pkg/SPROscl/save/pspool/SPROscl/save
    51964 /var/sadm/pkg/SPROscl/save/pspool/SPROscl
    51965 /var/sadm/pkg/SPROscl/save/pspool
    54932 /var/sadm/pkg/SPROsclx/save
    54939 /var/sadm/pkg/SPROsclx
    55191 /var/sadm/pkg/SUNWj5dmo/save
    55198 /var/sadm/pkg/SUNWj5dmo
    55474 /opt/SUNWvts
    55707 /var/sadm/pkg/SUNWj5dev/save/pspool/SUNWj5dev/save
    55721 /var/sadm/pkg/SUNWj5dev/save/pspool/SUNWj5dev
    55722 /var/sadm/pkg/SUNWj5dev/save/pspool
    56645 /opt/SUNWspro/prod/lib/stlport4
    56746 /usr/lib/iconv
    56767 /etc
    57941 /opt/SUNWspro/prod/lib/v9b
    58317 /opt/SUNWspro/prod/lib/v8plusa
    58991 /usr/perl5
    59079 /usr/openwin/lib/X11/fonts
    60046 /usr/sfw/bin
    60659 /usr/openwin/platform/sun4u/lib/sparcv9/GL
    60660 /usr/openwin/platform/sun4u/lib/sparcv9
    60817 /usr/openwin/lib/X11
    61012 /usr/jdk/packages
    61274 /usr/openwin/platform/sun4u/lib/GL
    63586 /opt/SUNWspro/contrib/xemacs-21.4.12/lib/xemacs/xemacs-packages/lisp
    63738 /usr/sfw/lib/mozilla
    64776 /opt/SUNWspro/prod/lib/cpu/sparcv9+vis2
    70582 /usr/sfw/include
    71427 /usr/lib/AdobeReader/Reader/sparcsolaris/plug_ins
    73032 /usr/j2se/jre/lib
    74250 /usr/staroffice7/program/resource
    76277 /usr/j2se/jre
    76762 /usr/lib/cpu/sparcv9+vis2
    78514 /var/sadm/pkg/SPROf90/save
    78521 /var/sadm/pkg/SPROf90
    79585 /usr/bin/idl_6.3/bin/bin.solaris2.sparc64
    79705 /usr/bin/idl_6.3/bin
    80078 /usr/sfw/share
    80719 /opt/SUNWspro/prod/lib/v9a
    81777 /usr/dt
    83075 /opt/SUNWspro/prod/bin
    84217 /var/sadm/pkg/SUNWglrt/save/120812-15
    84418 /var/sadm/pkg/SUNWglrt/save/120812-14
    86632 /opt/SUNWspro/contrib/xemacs-21.4.12/lib/xemacs/xemacs-packages
    91933 /usr/lib/sparcv9
    93723 /var/sadm/pkg/SUNWj5rt/save/pspool/SUNWj5rt/save
    93848 /var/sadm/pkg/SUNWj5rt/save/pspool/SUNWj5rt
    93849 /var/sadm/pkg/SUNWj5rt/save/pspool
    103054 /usr/bin/idl_6.3
    103883 /var/sadm/pkg/SPROscl/save
    103890 /var/sadm/pkg/SPROscl
    108572 /usr/appserver
    109127 /usr/j2se
    110373 /opt/SUNWspro/contrib/xemacs-21.4.12/lib/xemacs
    111430 /var/sadm/pkg/SUNWj5dev/save
    111438 /var/sadm/pkg/SUNWj5dev
    114717 /opt/SUNWspro/prod/lib/cpu
    115284 /usr/jdk/instances/jdk1.5.0/jre/lib
    117539 /usr/jdk/instances/jdk1.5.0/jre
    121935 /usr/openwin/platform/sun4u/lib
    122130 /usr/openwin/platform/sun4u
    122170 /usr/openwin/platform
    129566 /opt/SUNWspro/contrib/xemacs-21.4.12/lib
    130404 /opt/SUNWspro/prod/lib/v9
    134488 /usr/lib/AdobeReader/Reader/sparcsolaris
    144166 /usr/staroffice7/help
    146171 /opt/sfw/lib
    146175 /opt/sfw
    149240 /usr/lib/AdobeReader/Reader
    151465 /opt/SUNWspro/contrib/xemacs-21.4.12/xemacs_sources
    155896 /usr/staroffice7/share
    165537 /usr/jdk/instances/jdk1.5.0
    165538 /usr/jdk/instances
    168657 /var/sadm/pkg/SUNWglrt/save
    168665 /var/sadm/pkg/SUNWglrt
    173612 /usr/bin
    185511 /usr/lib/cpu
    187573 /var/sadm/pkg/SUNWj5rt/save
    187603 /var/sadm/pkg/SUNWj5rt
    199081 /usr/lib/AdobeReader
    204370 /usr/sfw/lib
    204935 /usr/openwin/lib
    226554 /usr/jdk
    256823 /usr/share
    283253 /usr/staroffice7/program
    291529 /opt/SUNWspro/contrib/xemacs-21.4.12
    305314 /opt/SUNWspro/contrib
    387548 /usr/openwin
    460719 /usr/sfw
    584586 /usr/staroffice7
    810087 /opt/SUNWspro/prod/lib
    827759 /usr/lib
    945127 /opt/SUNWspro/prod
    1253930 /opt/SUNWspro
    1429852 /var/sadm/pkg
    1480266 /var/sadm
    1488099 /var
    1500665 /opt
    3450973 /usr
    6587050
    The values are in 1024-byte blocks. This isn't the whole listing, but what sticks out is all the space for the Java-related j2se/jre/jdk/j5dev patches. I'm sure those take up hundreds of megabytes. The /var/sadm/pkg directory looks awfully big. Can you see anything else in there that looks out of whack?

  • Recovery CD - clean install and new partitions or install over old OS?

    I have a T400s running Windows 7. While using the factory recovery tool, the restore was interrupted. The OS wouldn't boot and the F11 function was disabled.
    I ordered the recovery CDs, which will arrive today. There are now 3 partitions on my ThinkPad: a boot partition, the OS partition, and the recovery partition. My question is this: will the Recovery CD do a clean install, wiping all 3 partitions and reinstalling the entire OS and recovery partitions to the factory state? Or will it simply reinstall the OS in the OS partition, leaving everything else alone? Thanks.
    P

    Since you'll have the Recovery Media, you could elect to simply remove the recovery partition.  This can be done through Disk Management (right-click Computer in Start Menu and choose Manage).  The drive should be set up with the Service Partition, the C: partition, and then the Q: partition.  If you delete the Q: partition you can then extend the C: partition.

  • InfoCube design - partitioning of characteristics over InfoCubes - compression rate etc.

    Hello,
    a prerequisite for aggregation in an InfoCube is that the technical key is identical. The technical key is formed from all DIM IDs, and those in turn are formed from the characteristics in the respective dimension. That means all characteristics of all dimensions together form the technical key. So I have to organize an appropriate set of characteristics in an InfoCube to get the maximum compression rate. That partitioning may result in too many InfoCubes if I consider semantic or project aspects. On the other hand, aggregates are also a means of achieving compression. Could you give me guidelines for this, or links on the subject?
    Thanks in advance
    Oskar

    Arun,
    thanks for your answer, but I think I must rephrase my misleading question to get the answer I am looking for.
    Assumption:
    1) InfoCube A has dimension d1 with characteristics ch1 and ch2, and dimension d2 with characteristics ch3 and ch4.
    2) If ch1, ch2, ch3 and ch4 are identical, standard aggregation is done, e.g. SUM.
    Let's say that is the aggregation level I want to have, and which I get if I compress table F into table E.
    Until now I used an InfoCube with 4 characteristics in 2 dimensions. Now assume I have an additional 20 characteristics which I did not organize in cubes, but which semantically belong to the characteristics used in InfoCube A. If I pack them into Cube A, I may destroy the aggregation level I want to have (because now all 24 characteristics must be identical for aggregation to happen). But if I organize them in separate InfoCubes, I may end up with too many InfoCubes.
    How would you solve that modelling? Would you, for example, pack all characteristics into one InfoCube and achieve the necessary aggregation level with aggregates, because they allow using a subset of the characteristics of the underlying cube?
    Thanks in advance
    Regards,
    Oskar

  • Using lag and rank in the same query

    Hi
    I am trying to find out the difference in time between people's memberships and also the order in which these memberships were taken out. So far I have added a RANK statement to work out the order the memberships were created in, but now I want to look at the difference between the dates returned. The SQL I used is:
    SELECT owner_party_id,
    mem_number,
    support_id,
    mem_start_date,
    RANK() OVER (PARTITION BY owner_party_id ORDER BY mem_start_date ASC) MEMBERSHIP_SEQUENCE
    FROM membership_all
    WHERE version_type = 'CUR'
    AND owner_party_id IN ('65051', '65051', '65348', '65348', '65607', '65607', '65607')
    to get:
    "OWNER_PARTY_ID"|"MEM_NUMBER"|"SUPPORT_ID"|"MEM_START_DATE"|"MEMBERSHIP_SEQUENCE"
    65051|318874751|8014747|01-MAR-10|1
    65051|412311060|21502883|15-AUG-12|2
    65348|308672459|3526913|01-MAY-10|1
    65348|409951130|20950524|18-JUN-12|2
    65607|315830192|7510133|17-MAY-10|1
    65607|406448110|20024246|16-MAR-12|2
    65607|409738130|20903556|14-JUN-12|3
    Now I would like to calculate the difference between the start dates within each owner_party_id group, to get something like this:
    OWNER_PARTY_ID|MEM_NUMBER     |SUPPORT_ID|MEM_START_DATE     |MEMBERSHIP_SEQUENCE|Diff
    65051|318874751|8014747|01-Mar-10|1|     
    65051|412311060|21502883|15-Aug-12|2|898
    65348|308672459|3526913|01-May-10|1     
    65348|409951130|20950524|18-Jun-12|2|779
    65607|315830192|7510133|17-May-10|1     
    65607|406448110|20024246|16-Mar-12|2|669
    65607|409738130|20903556|14-Jun-12|3|90
    I think I need to use the LAG function, but I am not sure whether it can be limited to look at the data within a grouping of owner_party_id, as it would make no sense to calculate the difference in dates for two different owner_party_ids.
    Any advice much appreciated.
    Thanks
    Edited by: 992871 on 09-Mar-2013 23:34

    A couple of notes:
    1. You wrote that you want the order these memberships were taken out in; however, both your query and Etbin's calculate the order within each owner_party_id and not across all members. If you want the rank and time difference regardless of the member's owner_party_id, remove the PARTITION BY clause.
    2. You might want to use DENSE_RANK rather than RANK, depending on how you want to display the rank. If two people joined at the same time and were second in rank, analytic RANK will be:
    RANK
    1
    2
    2
    4
    5
    while DENSE_RANK gives:
    DENSE_RANK
    1
    2
    2
    3
    4
    SY.
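    (For the date difference itself, LAG partitioned the same way as the RANK should do it; a minimal sketch based on the query in the question - subtracting two DATEs gives the gap in days, and the first row of each owner_party_id gets NULL:)
    SELECT owner_party_id,
           mem_number,
           support_id,
           mem_start_date,
           RANK() OVER (PARTITION BY owner_party_id
                        ORDER BY     mem_start_date) AS membership_sequence,
           mem_start_date
         - LAG (mem_start_date) OVER (PARTITION BY owner_party_id
                                      ORDER BY     mem_start_date) AS diff
    FROM   membership_all
    WHERE  version_type = 'CUR'
    AND    owner_party_id IN ('65051', '65348', '65607');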

  • Need help with RANK() on NULL data

    Hi All
    I am using Oracle 10g and running a query with RANK(), but it is not returning the desired output. Please help!!
    I have a STATUS table that shows the history of order statuses. The requirement is to display each order and its last status date (the max). If there is any NULL date for an order, then show NULL.
    STATUS
    ORD_NO | STAT | DT
    1 | Open |
    1 | Pending |
    2 | Open |
    2 | Pending |
    3 | Open |1/1/2009
    3 | Pending |1/6/2009
    3 | Close |
    4 | Open |3/2/2009
    4 | Close |3/4/2009
    Result should be (max date for each ORD_NO otherwise NULL):
    ORD_NO |DT
    1 |
    2 |
    3 |
    4 |3/4/2009
    CREATE TABLE Status (ORD_NO NUMBER, STAT VARCHAR2(10), DT DATE);
    INSERT INTO Status VALUES(1, 'Open', NULL);
    INSERT INTO Status VALUES(1, 'Pending', NULL);
    INSERT INTO Status VALUES(2, 'Open', NULL);
    INSERT INTO Status VALUES(2, 'Pending',NULL);
    INSERT INTO Status VALUES(3, 'Open', '1 JAN 2009');
    INSERT INTO Status VALUES(3,'Pending', '6 JAN 2009');
    INSERT INTO Status VALUES(3, 'Close', NULL);
    INSERT INTO Status VALUES(4, 'Open', '2 MAR 2009');
    INSERT INTO Status VALUES(4, 'Close', '4 MAR 2009');
    COMMIT;
    I tried using the RANK function to rank all the orders by date. I used an ORDER BY clause on the date in descending order, thinking that the NULL dates would be on top and would be grouped together for each ORD_NO.
    SELECT ORD_NO, DT, RANK() OVER (PARTITION BY ORD_NO ORDER BY DT DESC)
    FROM Status;
    ...but the result was something like this:
    ORD_NO |DT |RANKING
    *1 | | 1*
    *1 | | 1*
    *2 | | 1*
    *2 | | 1*
    3 | | 1
    3 |1/6/2009 | 2
    3 |1/1/2009 | 3
    4 |3/4/2009 | 1
    4 |3/2/2009 | 2
    I am not sure why the first two ORD_NOs didn't group together and why a ranking of 1 was assigned to both rows. I was expecting something like:
    ORD_NO |DT |RANKING
    *1 | | 1*
    *1 | | 2*
    *2 | | 1*
    *2 | | 1*
    3 | | 1
    3 |1/6/2009 | 2
    3 |1/1/2009 | 3
    4 |3/4/2009 | 1
    4 |3/2/2009 | 2
    Please guide me if I am missing something here?
    Regards
    Sri

    Hi,
    If I understood correctly, you don't need RANK:
    SELECT   ord_no, MAX (dt)KEEP (DENSE_RANK LAST ORDER BY dt) dt
        FROM status
    GROUP BY ord_no
    SQL> select * from status;
        ORD_NO STAT       DT
             1 Open
             1 Pending
             2 Open
             2 Pending
             3 Open       2009-01-01
             3 Pending    2009-01-06
             3 Close
             4 Open       2009-03-02
             4 Close      2009-03-04
    9 rows selected.
    SQL> SELECT   ord_no, MAX (dt)KEEP (DENSE_RANK LAST ORDER BY dt) dt
      2      FROM status
      3  GROUP BY ord_no;
        ORD_NO DT
             1
             2
             3
             4 2009-03-04
    SQL>
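    (On the original question of why the NULL rows got rank 1: with ORDER BY dt DESC, Oracle puts NULLs first by default, and RANK assigns tied rows the same value, so both NULL rows of an order tie at rank 1. An explicit NULLS clause changes where they sort; a minimal sketch on the same Status table:)
    SELECT ord_no, dt,
           RANK() OVER (PARTITION BY ord_no
                        ORDER BY dt DESC NULLS LAST) AS ranking
    FROM   status;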
