Can Analytical function give me this result?

Hi Friends
I have the following query:
SELECT DISTINCT
      DATE_FIELD
      ,ACCT_ID
      ,CITY
      ,STATE
      ,ZIP
      ,UNIQUE_ID
      ,OPEN_DT
      ,TO_CHAR (ROW_NUMBER () OVER (PARTITION BY ACCT_ID ORDER BY ACCT_ID,OPEN_DT ) ) rn
FROM TABLE1
WHERE 1=1
AND OPEN_DT IS NOT NULL
This query gives me the following result:
DATE_FIELD|ACCT_ID|CITY|STATE|ZIP|UNIQUE_ID|OPEN_DT|RN
1/02/2010|111|'CITY1'|'STATE1'|3333|2325|9/01/1987|1
1/02/2010|111|'CITY2'|'STATE1'|3333|2090|19/01/1996|2
1/02/2010|111|'CITY2'|'STATE1'|3333|2090|20/06/2002|3
1/02/2010|111|'CITY2'|'STATE1'|3333|2090|20/06/2002|4
1/02/2010|111|'CITY1'|'STATE1'|3333|2325|20/06/2002|5
1/02/2010|222|'CITY3'|'STATE1'|3350|1270|9/08/1974|1
1/02/2010|222|'CITY3'|'STATE1'|3350|1270|11/12/1980|2
1/02/2010|222|'CITY3'|'STATE1'|3350|1270|5/12/1989|3
1/02/2010|222|'CITY4'|'STATE1'|3350|8308|5/12/1989|4
1/02/2010|222|'CITY4'|'STATE1'|3353|2278|5/12/1989|5
1/02/2010|222|'CITY4'|'STATE1'|3353|2278|5/12/1989|6
1/02/2010|222|'CITY4'|'STATE1'|3353|2278|4/08/1994|7
1/02/2010|222|'CITY4'|'STATE1'|3353|2278|4/08/1994|8
1/02/2010|222|'CITY4'|'STATE1'|3353|2278|4/08/1994|9
1/02/2010|222|'CITY3'|'STATE1'|3350|1270|30/06/2000|10
1/02/2010|222|'CITY4'|'STATE1'|3353|2278|4/12/2003|11
1/02/2010|222|'CITY3'|'STATE1'|3350|9747|20/09/2004|12
1/02/2010|222|'CITY4'|'STATE1'|3350|1794|22/10/2004|13
1/02/2010|222|'CITY3'|'STATE1'|3350|1270|20/08/2009|14
1/02/2010|222|'CITY4'|'STATE1'|3350|2278|17/09/2009|15
1/02/2010|222|'CITY4'|'STATE1'|3350|2278|28/09/2009|16
1/02/2010|222|'CITY3'|'STATE1'|3350|1270|1/10/2009|17
1/02/2010|222|'CITY3'|'STATE1'|3350|1270|1/10/2009|18
1/02/2010|222|'CITY4'|'STATE1'|3350|8308|2/10/2009|19
1/02/2010|222|'CITY4'|'STATE1'|3353|2278|2/10/2009|20
1/02/2010|333|'CITY5'|'STATE2'|5001|9905|17/06/2002|1
1/02/2010|333|'CITY6'|'STATE2'|5016|3948|24/06/2002|2
1/02/2010|333|'CITY5'|'STATE2'|5001|9905|3/09/2009|3
1/02/2010|333|'CITY7'|'STATE2'|5020|6444|3/09/2009|4
All I want is the row having the maximum RN value from the above query.
How do I achieve this? I'd much appreciate a quick response.
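If "maximum RN" means the latest OPEN_DT row per account, one possible direction (a sketch against the posted TABLE1 columns, not a tested answer) is to rank descending and keep only rank 1:

```sql
-- Sketch (assumes the same TABLE1 as the posted query): rank each
-- account's rows by OPEN_DT descending, then keep only the top row.
SELECT date_field, acct_id, city, state, zip, unique_id, open_dt
FROM  (SELECT t.*,
              ROW_NUMBER() OVER (PARTITION BY acct_id
                                 ORDER BY open_dt DESC) AS rn_desc
       FROM   table1 t
       WHERE  open_dt IS NOT NULL)
WHERE  rn_desc = 1;
```

Note that with duplicate rows per (ACCT_ID, OPEN_DT), which row wins the tie is arbitrary unless the ORDER BY is extended with a unique column.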

Hi,
I'm not sure I understand what you mean by maximum rn value. Is it the same value repeated in every row of each partition?
If that's the case, you could use COUNT:
SELECT DISTINCT
      DATE_FIELD
      ,ACCT_ID
      ,CITY
      ,STATE
      ,ZIP
      ,UNIQUE_ID
      ,OPEN_DT
      ,COUNT(*) OVER (PARTITION BY ACCT_ID) cnt
FROM TABLE1
WHERE 1=1
AND OPEN_DT IS NOT NULL
If not, then perhaps you can post sample data showing what you expect from the output? Something along the lines of what you posted for the current output you're obtaining.

Similar Messages

  • Analytical functions approach for this scenario?

    Here is my data:
    SQL*Plus: Release 11.2.0.2.0 Production on Tue Feb 26 17:03:17 2013
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select * from batch_parameters;
           LOW         HI MIN_ORDERS MAX_ORDERS
            51        100          6          8
           121        200          1          5
           201       1000          1          1
    SQL> select * from orders;
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4905        154
            4899        143
            4925        123
            4900        110
            4936        106
            4901        103
            4911        101
            4902         91
            4903         91
            4887         90
            4904         85
            4926         81
            4930         75
            4934         73
            4935         71
            4906         68
            4907         66
            4896         57
            4909         57
            4908         56
            4894         55
            4910         51
            4912         49
            4914         49
            4915         48
            4893         48
            4916         48
            4913         48
            2894         47
            4917         47
            4920         46
            4918         46
            4919         46
            4886         45
            2882         45
            2876         44
            2901         44
            4921         44
            4891         43
            4922         43
            4923         42
            4884         41
            4924         40
            4927         39
            4895         38
            2853         38
            4890         37
            2852         37
            4929         37
            2885         37
            4931         37
            4928         37
            2850         36
            4932         36
            4897         36
            2905         36
            4933         36
            2843         36
            2833         35
            4937         35
            2880         34
            4938         34
            2836         34
            2872         34
            2841         33
            4889         33
            2865         31
            2889         30
            2813         29
            2902         28
            2818         28
            2820         27
            2839         27
            2884         27
            4892         27
            2827         26
            2837         22
            2883         20
            2866         18
            2849         17
            2857         17
            2871         17
            4898         16
            2840         15
            4874         13
            2856          8
            2846          7
            2847          7
            2870          7
            4885          6
            1938          6
            2893          6
            4888          2
            4880          1
            4875          1
            4881          1
            4883          1
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4879          1
            2899          1
            2898          1
            4882          1
            4877          1
            4876          1
            2891          1
            2890          1
            2892          1
            4878          1
    107 rows selected.
SQL>
The batch_parameters columns:
hi - high count of lines in the batch.
low - low count of lines in the batch.
min_orders - minimum number of orders in the batch.
max_orders - maximum number of orders in the batch.
The issue is to create optimally sized batches for picking the orders. Usually you have to stick within the given low - hi line count, but there is a leeway of around, let's say, 5 percent on the batch size (the number of lines in the batch).
But for the number of orders in a batch, the leeway is zero.
So I have to assign these 'orders' into the optimal mix of batches. For any given run, if I don't find the mix I am looking for, the last batch could be as small as one order with one line. But every order HAS to be batched in that run. No exceptions.
I have a procedure that does 'sort of' this, but it leaves non-optimal orders alone. There is a potential for orders not getting batched because they didn't fall into the optimal mix, potentially missing our required dates. (I can write another procedure that cleans up afterwards.)
    I was thinking (maybe just a general direction would be enough), with what analytical functions can do these days, if somebody can come up with the 'sql' that gets us the batch number (think of it as a sequence starting at 1).
    Also, the batch_parameters limits are not hard and fast. Those numbers can change but, give you a general idea.
    Any ideas?
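As a rough illustration of the analytic-function direction only (a greedy sketch that caps a running line total at an assumed limit of 200 and ignores the min/max order-count rules entirely, so it is much weaker than what is being asked for):

```sql
-- Greedy sketch: number batches by dividing a running total of LINE_COUNT
-- by an assumed 200-line cap. This is approximate: a batch straddling a
-- boundary can exceed the cap, and batch_parameters is not consulted.
SELECT order_number,
       line_count,
       CEIL(SUM(line_count)
              OVER (ORDER BY line_count DESC, order_number
                    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
            / 200) AS batch_no
FROM   orders;
```

A real solution honoring both line and order-count limits likely needs MODEL, recursive logic, or PL/SQL rather than a single analytic pass.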

Ok, sorry about that. Those were just guesstimates. I ran the program and here are the results.
    SQL> SELECT SUM(line_count) no_of_lines_in_batch,
      2         COUNT(*) no_of_orders_in_batch,
      3         batch_no
      4    FROM orders o
      5   GROUP BY o.batch_no;
    NO_OF_LINES_IN_BATCH NO_OF_ORDERS_IN_BATCH   BATCH_NO
                     199                     4     241140
                      99                     6     241143
                     199                     5     241150
                     197                     6     241156
                     196                     5     241148
                     199                     6     241152
                     164                     6     241160
                     216                     2     241128
                     194                     6     241159
                     297                     2     241123
                     199                     3     241124
                     192                     2     241132
                     199                     6     241136
                     199                     5     241142
                      94                     7     241161
                     199                     6     241129
                     154                     2     241135
                     193                     6     241154
                     199                     5     241133
                     199                     4     241138
                     199                     6     241146
                     191                     6     241158
    22 rows selected.
    SQL> select * from orders;
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4905        154     241123
            4899        143     241123
            4925        123     241124
            4900        110     241128
            4936        106     241128
            4901        103     241129
            4911        101     241132
            4903         91     241132
            4902         91     241129
            4887         90     241133
            4904         85     241133
            4926         81     241135
            4930         75     241124
            4934         73     241135
            4935         71     241136
            4906         68     241136
            4907         66     241138
            4896         57     241136
            4909         57     241138
            4908         56     241138
            4894         55     241140
            4910         51     241140
            4914         49     241142
            4912         49     241140
            4915         48     241142
            4916         48     241142
            4913         48     241142
            4893         48     241143
            2894         47     241143
            4917         47     241146
            4919         46     241146
            4918         46     241146
            4920         46     241146
            2882         45     241148
            4886         45     241148
            2901         44     241148
            2876         44     241148
            4921         44     241140
            4891         43     241150
            4922         43     241150
            4923         42     241150
            4884         41     241150
            4924         40     241152
            4927         39     241152
            2853         38     241152
            4895         38     241152
            4931         37     241154
            2885         37     241152
            4929         37     241154
            4890         37     241154
            4928         37     241154
            2852         37     241154
            2843         36     241156
            2850         36     241156
            4932         36     241156
            4897         36     241156
            4933         36     241158
            2905         36     241156
            2833         35     241158
            4937         35     241158
            4938         34     241158
            2880         34     241159
            2872         34     241159
            2836         34     241158
            2841         33     241159
            4889         33     241159
            2865         31     241159
            2889         30     241150
            2813         29     241159
            2902         28     241160
            2818         28     241160
            4892         27     241160
            2884         27     241160
            2820         27     241160
            2839         27     241160
            2827         26     241161
            2837         22     241133
            2883         20     241138
            2866         18     241148
            2849         17     241161
            2871         17     241156
            2857         17     241158
            4898         16     241161
            2840         15     241161
            4874         13     241146
            2856          8     241154
            2847          7     241161
            2846          7     241161
            2870          7     241152
            2893          6     241142
            1938          6     241161
            4888          2     241129
            2890          1     241133
            2899          1     241136
            4877          1     241143
            4875          1     241143
            2892          1     241136
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4878          1     241146
            4876          1     241136
            2891          1     241133
            4880          1     241129
            4883          1     241143
            4879          1     241143
            2898          1     241129
            4882          1     241129
            4881          1     241124
106 rows selected.
As you can see, my code is a little buggy in that it may not strictly follow batch_parameters, but being in the general area is acceptable.

  • Can analytic functions be used  in a cursor ?

The following PL/SQL code gives the error shown below it. However, the SELECT statement in the cursor returns results when run on its own. Can someone tell me why? Can't analytic functions be used in cursors?
    declare
    cursor cur1 is
    SELECT
    col1,
    col2,
    REGR_SLOPE(col1, LOG(10,col2))
    OVER(PARTITION BY col1 ORDER BY col2
    ROWS BETWEEN 2 PRECEDING AND UNBOUNDED FOLLOWING)
    from datatab;
    OVER(PARTITION BY col1 ORDER BY col2
    ERROR at line 7:
    ORA-06550: line 7, column 5:
    PLS-00103: Encountered the symbol "(" when expecting one of the following:
    , from

Since it is a cursor, you can put the select statement in quotes, which will execute the statement as dynamic SQL and allow the analytic function reference:
declare
  c      sys_refcursor;
  v_col1 datatab.col1%type;
  v_col2 datatab.col2%type;
  v_col3 number;
begin
  open c for
    'SELECT col1, col2, REGR_SLOPE(col1, LOG(10,col2))
                        OVER(PARTITION BY col1 ORDER BY col2
                        ROWS BETWEEN 2 PRECEDING AND UNBOUNDED FOLLOWING)
       from datatab';
  loop
    fetch c into v_col1, v_col2, v_col3;
    exit when c%notfound;
    -- do something with the values
  end loop;
  close c;
end;
/

  • AVG as an analytic function - gives error ORA-0439

    I'm trying my first implementation of AVG as an analytic function. I took the following query which gave a simple average:
    SELECT
    PERSON.LASTNAME,
    COUNT(TO_NUMBER(RPTOBS.OBSVALUE)) CNT,
    AVG(TO_NUMBER(RPTOBS.OBSVALUE)) AVRG
    FROM
    TUT.PERSON PERSON,
    TUT.RPTOBS RPTOBS
    WHERE
    PERSON.PID = RPTOBS.PID AND
    RPTOBS.HDID = 54
    GROUP BY
    PERSON.LASTNAME;
and rewrote this to give an average of the values of the last 3 dates (I think..)
    SELECT
    PERSON.LASTNAME,
    RPTOBS.OBSDATE,
    AVG(TO_NUMBER(RPTOBS.OBSVALUE))
    OVER
    (PARTITION BY PERSON.LASTNAME
    ORDER BY RPTOBS.OBSDATE
    ROWS BETWEEN UNBOUNDED PRECEDING AND 2 FOLLOWING)
    AS AVRG3
    FROM
    TUT.PERSON PERSON,
    TUT.RPTOBS RPTOBS
    WHERE
    PERSON.PID = RPTOBS.PID AND
    RPTOBS.HDID = 54;
(This seemed to be a direct translation of a similar query in the SQL Reference.)
I am getting an error message of:
ORA-00439: feature not enabled: OLAP Window Functions
Can someone tell me why?
    Thanks,
    Will Salomon
    [email protected]

I haven't done it personally, but I am told that it's really as simple as de-installing the Standard Edition software and then installing the Enterprise Edition. The installer will prompt you for an Oracle SID and you just have to point it at your existing database.
You may wish to test this proposition before risking your actual system. But, in any case, take a backup.
With 9i things are simpler: everything gets installed; you're just not allowed to use the EE features if you haven't paid for them.
    Cheers, APC

  • Analytic function should produce different results

    Hi All
    My question is derived by a usage of the analytic functions with "sliding window". Let's say you have a table as
    GROUP_ID SEQ VALUE
    1 1 1
    1 1 2
    2 2 3
    2 3 4
    Then the query
    select sum( value ) over ( partition by group_id order by group_id, seq ) from a_table
should produce different values on different runs, because rows 1 and 2 have the same value of SEQ. One run may produce 2 then 1; another may produce 1 then 2.
I need to prove whether the statement above is true. Oracle caches data, so if you run it several times you will see the same result.
    Thanks.

    Why are you using group_id twice, in partition and order by? And why would several "runs" on the same data provide different results?
    C.
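One point worth testing here (hedged: this rests on the documented default windowing clause, not on anything stated in the thread): with an ORDER BY in the window, an analytic SUM defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, and RANGE includes peer rows, so rows that tie on the ORDER BY key receive the same running total:

```sql
-- Tied SEQ values share one cumulative sum under the default RANGE
-- window, so repeated runs cannot give rows 1 and 2 different totals.
select group_id, seq, value,
       sum(value) over (partition by group_id order by seq) as running_sum
from   a_table;
```

A ROWS window (or an ORDER BY extended with a unique key) is what would make the tied rows distinguishable.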

  • Can analytical function support this requirement?

    As a result of some Qry, I get the following result set.
    Column1 Column2
    A       100   
    A       200
    A       200
    A       100
    B       200
    B       200
    B       200
    C       100
    C       200
    D       200
    D       200
    E       200
With this as input I have to make a decision: for a particular Column1 value, if all the available values of Column2 are 200 (B, D & E in this case), I need to do one set of operations; if at least one value of Column2 is not 200 (A & C in this case), I need to do another set of operations. How do I frame the IF clause, or is there another approach?
    By using Analytical count(*) function, is it possible to generate something like
    Column1 Column2   Column3(200_count)
    A       100       2
    A       200       2
    A       200       2
    A       100       2
    B       200       3
    B       200       3
    B       200       3
    C       100       1
    C       200       1
    D       200       3
    D       200       3
    E       200       3

    hi,
Something like this?
    with data as (
    select 'A' column1,       100     column2 from dual union all
    select 'A' column1,       200 from dual union all
    select 'A' column1,       200 from dual union all
    select 'A' column1,       100 from dual union all
    select 'B' column1,       200 from dual union all
    select 'B' column1,       200 from dual union all
    select 'B' column1,       200  from dual union all
    select 'C' column1,       100 from dual union all
    select 'C' column1,       200 from dual union all
    select 'D' column1,       200 from dual union all
    select 'D' column1,       200 from dual union all
select 'E' column1,       200 from dual
)
select
column1,column2,count(0) over (partition by column1,column2) cnt
from data
order by column1
The above query will count the occurrences of column2 for every column1.
If you want to count only the 200s, something like this may help you:
    with data as (
    select 'A' column1,       100     column2 from dual union all
    select 'A' column1,       200 from dual union all
    select 'A' column1,       200 from dual union all
    select 'A' column1,       100 from dual union all
    select 'B' column1,       200 from dual union all
    select 'B' column1,       200 from dual union all
    select 'B' column1,       200  from dual union all
    select 'C' column1,       100 from dual union all
    select 'C' column1,       200 from dual union all
    select 'D' column1,       200 from dual union all
    select 'D' column1,       200 from dual union all
select 'E' column1,       200 from dual
)
select
column1,column2,sum(
case when column2=200 then
1
else
0
end
) over (partition by column1) cnt
from data
order by column1
Regards,
Bhushan
    Edited by: Buga added second query

  • Aggregation of analytic functions not allowed

    Hi all, I have a calculated field called Calculation1 with the following calculation:
    AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report #7 COMPL".Resource Name )
    The result of this calculation is correct, but is repeated for all the rows I have in the dataset.
    Group Name      Resourse name    Calculation1
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
    SH Group            Mr. A            10
5112 rows
I tried to create another calculation in order to have only ONE value for the couple (Group Name, Resource Name) as AVG(Calculation1), but I got the error: Aggregation of analytic functions not allowed.
I also saw inside the "Edit worksheet" panel that Calculation1 *is not represented* with the "Sigma" symbol (as, for example, a simple AVG(field_1) is), and inside the SQL code I don't have GROUP BY Group Name, Resource Name...
    I'd like to see ONLY one row as:
    Group Name      Resourse name    Calculation1
SH Group            Mr. A            10
...which means I grouped by Group Name, Resource Name.
    Anyone knows how can I achieve this result or any workarounds ??
    Thanks in advance
    Alex

Hi Rod, unfortunately I can't use AVG(Resolution_time) because my dataset is quite strange. Let me explain better.
I start from this situation:
    !http://www.freeimagehosting.net/uploads/6c7bba26bd.jpg!
    There are 3 calculated fields:
    RANK is the first calculated field:
    ROW_NUMBER() OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name,"Tickets Report Assigned To & Created By COMPL".Incident Id  ORDER BY  "Tickets Report Assigned To & Created By COMPL".Select Flag )
    RT Calc is the 2nd calculation:
    CASE WHEN RANK = 1 THEN Resolution_time END
    and Calculation2 is the 3rd calculation:
    AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY  RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name )
    As you can see, from the initial dataset, I have duplicated incident id and a simple AVG(Resolution Time) counts also all the duplication.
I used the rank (based on the field "flag") to take, for each ticket, ONLY one "resolution time" value (in my case I need the resolution time when rank = 1).
So, with Calculation2 I calculated for each couple (Group Name, Resource Name) the right AVG(Resolution time), but as you can see, this result is duplicated for each incident_id...
    What I need instead is to see *once* for each couple 'Group Name, Resource Name' the AVG(Resolution time).
    In other words I need to calculate the AVG(Resolution time) considering only the values written inside the RT Calc fields (where they are NOT NULL, and so, the total of the tickets it's not 14, but 9).
    I tried to aggregate again using AVG(Calculation2)...but I had the error "Aggregation of analytic functions not allowed"...
    Do you know a way to fix this problem ?
    Thanks
    Alex

  • Analytic Functions in CONNECT BY Queries

    Can analytic functions be used in a CONNECT BY query? Are there limits?
    This problem occurs in Oracle 11.1.0.6.0, 10.2 and 10.1.
    Here is the presenting problem:
    Starting with data like this:
    CREATE TABLE     enrollment
    (      name          VARCHAR2 (10)
,      coursenumber     NUMBER
);
    INSERT INTO enrollment (name, coursenumber) VALUES ('Ted',      101);
    INSERT INTO enrollment (name, coursenumber) VALUES ('Ted',      102);
    INSERT INTO enrollment (name, coursenumber) VALUES ('Ted',      103);
    INSERT INTO enrollment (name, coursenumber) VALUES ('Mary',      102);
    INSERT INTO enrollment (name, coursenumber) VALUES ('Mary',      104);
    INSERT INTO enrollment (name, coursenumber) VALUES ('Hiro',      101);
    INSERT INTO enrollment (name, coursenumber) VALUES ('Hiro',      104);
    INSERT INTO enrollment (name, coursenumber) VALUES ('Hiro',      105);
COMMIT;
I'm trying to get cross-tab output like this:
    NAME       TXT
    Hiro         101            104  105
    Mary              102       104
Ted          101  102  103
without knowing beforehand what course numbers, or even how many course numbers, will be in the results.
    My strategy was to use LPAD to make the course numbers always occupy 5 spaces.
    If n "columns" needed to be left blank before the number, I wanted to add 5n extra spaces.
    I tried this:
WITH     universe     AS
(
     SELECT     name
     ,     coursenumber
     ,     DENSE_RANK () OVER ( ORDER BY        coursenumber)     AS cnum
     ,     ROW_NUMBER () OVER ( PARTITION BY  name
                               ORDER BY          coursenumber
                       )                         AS snum
     FROM     enrollment
)
SELECT     name
,     REPLACE ( SYS_CONNECT_BY_PATH ( LPAD ( TO_CHAR (coursenumber)
                                           , 5 * (cnum - LAG (cnum, 1, 0)
                                                         OVER ( PARTITION BY  name
                                                                ORDER BY     coursenumber ))
                                           )
                                    , ';'
                                    )
              , ';'
              )     AS txt
FROM     universe
WHERE     CONNECT_BY_ISLEAF     = 1
START WITH     snum     = 1
CONNECT BY     snum     = PRIOR snum + 1
AND          name     = PRIOR name
ORDER BY     name
;
but the txt column was always NULL.
    I tried showing some of the intermediate calculations:
WITH     universe     AS
(
     SELECT     name
     ,     coursenumber
     ,     DENSE_RANK () OVER ( ORDER BY        coursenumber)     AS cnum
     ,     ROW_NUMBER () OVER ( PARTITION BY  name
                               ORDER BY          coursenumber
                       )                         AS snum
     FROM     enrollment
)
SELECT     name
,     REPLACE ( SYS_CONNECT_BY_PATH ( LPAD ( TO_CHAR (coursenumber)
                                           , 5 * (cnum - LAG (cnum, 1, 0)
                                                         OVER ( PARTITION BY  name
                                                                ORDER BY     coursenumber ))
                                           )
                                    , ';'
                                    )
              , ';'
              )     AS txt
,     coursenumber
,     cnum
,     LAG (cnum, 1, 0) OVER ( PARTITION BY  name
                              ORDER BY     coursenumber
                     )      AS lag_cnum
FROM     universe
-- WHERE     CONNECT_BY_ISLEAF     = 1
START WITH     snum     = 1
CONNECT BY     snum     = PRIOR snum + 1
AND          name     = PRIOR name
ORDER BY     name
;
and they all seemed reasonable:
    NAME       TXT                            COURSENUMBER       CNUM   LAG_CNUM
    Hiro                                               101          1          0
    Hiro                                               104          4          1
    Hiro                                               105          5          4
    Mary                                               102          2          0
    Mary                                               104          4          2
    Ted                                                101          1          0
    Ted                                                102          2          1
Ted                                                103          3          2
but txt was still NULL.
    I got around the problem by computing the LAG in a sub-query (see [this thread|http://forums.oracle.com/forums/message.jspa?messageID=3875566#3875566]), but I'd like to know why LAG didn't work in the CONNECT BY query, or at least within SYS_CONNECT_BY_PATH.
I've had other problems before when trying to use analytic functions in CONNECT BY queries. Sometimes the presence of an analytic function would stop CONNECT BY from working at all; sometimes it would destroy the order of the output. (Sorry, I don't have those examples handy now.)
    Are there limitations on using analytic functions in a CONNECT BY query?
    is there a work-around other than computing the analytic functions in a sub-query?
    Thanks.

    how about
    SQL> with temp as
      2  (select name
      3       , coursenumber
      4    from enrollment
      5  )
      6  , courses as
      7  (select distinct
      8          coursenumber
      9     from enrollment
    10  )
    11  select name
    12       , replace (sys_connect_by_path (case when t_course is not null
    13         then rpad (t_course, 8, ' ')
    14         else rpad (' ', 8, ' ')
    15         end, ';'), ';', ' ') scbp
    16    from (
    17  select t.name
    18       , t.coursenumber t_course
    19       , c.coursenumber c_course
    20       , row_number() over (partition by t.name
    21                                order by c.coursenumber
    22                           ) rn
    23    from temp  t partition by (name)
    24    right outer
    25    join courses c
    26      on c.coursenumber = t.coursenumber
    27  )
    28   where connect_by_isleaf = 1
    29   start with rn = 1
    30   connect by rn = prior rn + 1
    31   and name = prior name;
    NAME       SCBP
    Hiro        101                        104      105
    Mary                 102               104
    Ted         101      102      103

  • Analytic Functions in PL/SQL

    This procedure won't compile - the word PARTITION seems to be the problem - with this error...
    PLS-00103: Encountered the symbol "(" when expecting one of the following: , from
    The query in the cursor runs correctly as a stand-alone query. Can analytic functions not be used in PL/SQL cursors?
    Thanks.
    CREATE OR REPLACE
    PROCEDURE TestAnalyticFunction IS
    CURSOR GetAllTransTypes_Cursor IS
    select transaction_class.trans_desc,
    transaction_code.trans_type ,
    transaction_code.trans_code,
    transaction_code.trans_code_desc,
    sum(tr_tx_amt) as trans_sum,
    RATIO_TO_REPORT(sum(tr_tx_amt)) OVER
    (PARTITION BY transaction_code.trans_type) AS Percentage
    from transaction_code,
    transaction_class,
    transactions
    where TR_POST_DT IS NOT NULL
    AND TR_POST_DT >= '01-DEC-2000'
    AND TR_POST_DT <= '31-JAN-2001'
    AND TRANSACTION_CODE.TRANS_CLASS = TRANSACTION_CLASS.TRANS_CLASS_ID
    AND TRANSACTION_CODE.TRANS_CODE = TRANSACTIONS.TR_TX_CODE
    AND TRANSACTION_CODE.TRANS_TYPE in (1,2,3,4,5,8)
    group by transaction_code.trans_type,
    trans_class,
    trans_desc,
    trans_code,
    trans_code_desc
    order by transaction_code.trans_type, trans_code;
    TYPE TransClassDescType IS TABLE OF transaction_class.trans_desc%TYPE;
    TYPE TransCodeTypeType IS TABLE OF transaction_code.trans_type%TYPE;
    TYPE TransCodeCodeType IS TABLE OF transaction_code.trans_code%TYPE;
    TYPE TransCodeDescType IS TABLE OF transaction_code.trans_code_desc%TYPE;
    TYPE TotalType IS TABLE OF NUMBER(14,2);
    TYPE TotalPctType IS TABLE OF NUMBER(6, 2);
    TransClassDesc TransClassDescType;
    TransCodeType TransCodeTypeType;
    TransCodeCode TransCodeCodeType;
    TransCodeDesc TransCodeDescType;
    Total TotalType;
    TotalPct TotalPctType;
    BEGIN
    OPEN GetAllTransTypes_Cursor;
    FETCH GetAllTransTypes_Cursor BULK COLLECT INTO TransClassDesc,TransCodeType,TransCodeCode,TransCodeDesc,
    Total, TotalPct;
    CLOSE GetAllTransTypes_Cursor;
    END TestAnalyticFunction;

    Some functions just don't seem to work in PL/SQL even though they work fine in SQL*Plus.
    Two such functions I found were NVL2 and RATIO_TO_REPORT.
    Have no clue why yet.
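If it's any help while the parser issue stands: RATIO_TO_REPORT(x) OVER (PARTITION BY p) is equivalent to x / SUM(x) OVER (PARTITION BY p), so the explicit division can be substituted where the function name itself is rejected. A minimal sketch of that equivalence, run here with SQLite from Python only because SQLite has no RATIO_TO_REPORT (the table and column names are made up for illustration):

```python
import sqlite3

# RATIO_TO_REPORT(amt) OVER (PARTITION BY ttype) is the same computation as
# amt / SUM(amt) OVER (PARTITION BY ttype); shown on made-up data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tx (ttype INTEGER, amt REAL);
INSERT INTO tx VALUES (1, 25.0), (1, 75.0), (2, 40.0), (2, 60.0);
""")
rows = conn.execute("""
SELECT ttype, amt,
       amt * 1.0 / SUM(amt) OVER (PARTITION BY ttype) AS pct
  FROM tx
 ORDER BY ttype, amt
""").fetchall()
for ttype, amt, pct in rows:
    print(ttype, amt, pct)   # each partition's pct values sum to 1
```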
    Originally posted by Dale Johnson ([email protected]):
    "This procedure won't compile - the word PARTITION seems to be the problem - with this error...
    PLS-00103: Encountered the symbol "(" when expecting one of the following: , from
    The query in the cursor runs correctly as a stand-alone query. Can analytic functions not be used in PL/SQL cursors?"

  • Advantages and disadvantages of Analytical function

    Please list the advantages and disadvantages, performance-wise, of normal queries versus queries using analytic functions.

    I'm not sure what comparison you have in mind.
    Analytic functions give you functionality that in many cases cannot easily be achieved any other way. They can introduce some performance degradation to a query, but you have to compare on a query-by-query basis to determine whether an analytic function or some alternative is the best solution for the issue. If it were as simple as saying that analytic functions are always slower than doing without them, Oracle wouldn't have bothered introducing them into the language.

  • Analytic Function - Return 2 values

    I am sure I need to use an analytic function to do this, I just cannot seem to get it right. I appreciate the help.
    Table and insert statements:
    create table TST_CK (
    DOC_ID NUMBER(6)      not null,
    ROW_SEQ_NBR NUMBER(6) not null,
    IND_VALUE VARCHAR2(2) null
    );
    INSERT INTO TST_CK VALUES ('1','6',NULL);
    INSERT INTO TST_CK VALUES ('1','5',NULL);
    INSERT INTO TST_CK VALUES ('1','4','T');
    INSERT INTO TST_CK VALUES ('1','3','R');
    INSERT INTO TST_CK VALUES ('1','9',NULL);
    INSERT INTO TST_CK VALUES ('1','10',NULL);
    INSERT INTO TST_CK VALUES ('1','7','T');
    INSERT INTO TST_CK VALUES ('1','8','R');
    INSERT INTO TST_CK VALUES ('2','1',NULL);
    INSERT INTO TST_CK VALUES ('2','2',NULL);
    INSERT INTO TST_CK VALUES ('2','3','T');
    INSERT INTO TST_CK VALUES ('2','4','R');
    INSERT INTO TST_CK VALUES ('2','5',NULL);
    INSERT INTO TST_CK VALUES ('2','6',NULL);
    INSERT INTO TST_CK VALUES ('2','7','T');
    INSERT INTO TST_CK VALUES ('2','8','R');
    INSERT INTO TST_CK VALUES ('4','1',NULL);
    INSERT INTO TST_CK VALUES ('4','2',NULL);
    INSERT INTO TST_CK VALUES ('4','3','X1');
    INSERT INTO TST_CK VALUES ('4','4',NULL);
    INSERT INTO TST_CK VALUES ('4','5',NULL);
    INSERT INTO TST_CK VALUES ('4','6',NULL);
    INSERT INTO TST_CK VALUES ('4','7','T');
    INSERT INTO TST_CK VALUES ('4','8','R');
    INSERT INTO TST_CK VALUES ('4','9',NULL);
    INSERT INTO TST_CK VALUES ('4','10',NULL);
    INSERT INTO TST_CK VALUES ('4','11',NULL);
    INSERT INTO TST_CK VALUES ('4','12',NULL);
    INSERT INTO TST_CK VALUES ('4','13','T');
    INSERT INTO TST_CK VALUES ('4','14','R');
    INSERT INTO TST_CK VALUES ('4','15',NULL);
    INSERT INTO TST_CK VALUES ('4','16',NULL);
    COMMIT;
    Here is what I have tried that gets me close:
    SELECT MAX (TST_CK.DOC_ID), MAX (TST_CK.ROW_SEQ_NBR), TST_CK.IND_VALUE
      FROM ASAP.TST_CK TST_CK
    WHERE (TST_CK.IND_VALUE IS NOT NULL)
    GROUP BY TST_CK.IND_VALUE
    ORDER BY 2 ASC
    Here is my desired result:
    CV_1      CV_2
    T         R
    Or an even better result would be:
    concat(CV_1,CV_2)
    With result:
    T,R
    Thanks for looking
    G

    Hi,
    I am sure I need to use an analytic function to do this, I just cannot seem to get it right. I appreciate the help.
    Table and insert statements: ...
    Thanks for posting the CREATE TABLE and INSERT statements.
    Don't forget to explain how you get the results you want from that sample data.
    GMoney wrote:
    create table TST_CK (
    DOC_ID NUMBER(6)      not null,
    ROW_SEQ_NBR NUMBER(6) not null,
    IND_VALUE VARCHAR2(2) null
    );
    INSERT INTO TST_CK VALUES ('1','6',NULL);
    If doc_id and row_seq_nbr are NUMBERs, why are you inserting VARCHAR2 values, such as '1' and '6' (in single-quotes)?
    Here is my desired result:
    CV_1      CV_2
    T         R
    Or an even better result would be:
    concat(CV_1,CV_2)
    With result:
    T,R
    The results from the query you posted are:
    MAX(TST_CK.DOC_ID) MAX(TST_CK.ROW_SEQ_NBR) IN
                     4                       3 X1
                     4                      13 T
                     4                      14 R
    What do the desired results represent?
    Why do your desired results include 'R' and 'T', but not 'X1'? Why do you want
    'T,R'     and not
    'X1,T,R'     or
    'X1,T'     or
    'T,X1'     or something else?
    Whatever your reasons are, there's a good chance you'll want to use string aggregation. Your Oracle version is always important, but it's especially important in string-aggregation problems, because some helpful new functions have been added in recent versions. Always say which version of Oracle (e.g., 11.2.0.3.0) you're using.
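    For what it's worth, once the selection rules are settled, the aggregation step itself is mechanical. A hedged sketch in Python of what LISTAGG-style string aggregation would do to the posted rows, assuming (only an assumption, pending the OP's answers) that we keep the non-NULL IND_VALUE flags per DOC_ID in ROW_SEQ_NBR order:

```python
from collections import defaultdict

# Illustration of string aggregation (what Oracle's LISTAGG does),
# on a subset of the posted sample data. Which values actually belong
# in the result is still the OP's call.
rows = [  # (doc_id, row_seq_nbr, ind_value)
    (1, 3, 'R'), (1, 4, 'T'), (1, 5, None), (1, 7, 'T'), (1, 8, 'R'),
    (2, 3, 'T'), (2, 4, 'R'), (2, 7, 'T'), (2, 8, 'R'),
]
agg = defaultdict(list)
for doc_id, seq, val in sorted(rows):   # order by doc_id, row_seq_nbr
    if val is not None:                 # skip NULLs, as the WHERE clause did
        agg[doc_id].append(val)
result = {doc_id: ','.join(vals) for doc_id, vals in agg.items()}
print(result)   # {1: 'R,T,T,R', 2: 'T,R,T,R'}
```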

  • Analytical  Function

    Hi,
    Is there any analytic function to get this result:
    3 different rates over 3 different periods.
    This particular loan completes a period of 10 months, but the interest is computed in the following manner:
    for the first slab (3 months) the rate is 10%, and in the subsequent slabs the interest should be added along with the principal (1000).

    Amount  | Rate | Period in Months | Computation                 | New Product
    1000    | 10%  | 3                | 1000*(10/100)*(3/12)=25     | 1025
    1025    | 10%  | 5                | 1025*(10/100)*(5/12)=42.7   | 1042.7
    1042.7  | 8%   | 2                | 1042.7*(8/100)*(2/12)=13.9  | 1056.6
    Version  11.2.0.2

    SQL> with t
      2  as
      3  (
      4  select 1000 amt, 10 rt, 3 period_in_month from dual
      5  union all
      6  select 1025 amt, 10 rt, 5 period_in_month from dual
      7  union all
      8  select 1042.7 amt, 8 rt, 2 period_in_month from dual
      9  )
    10  select amt
    11       , rt
    12       , period_in_month
    13       , intr_calc
    14       , case when period_t <= 3 then
    15                  (amt - nvl(lag(intr_calc) over(order by amt), 0)) + intr_calc
    16              else
    17                  (amt - nvl(lag(intr_calc) over(order by amt), 0)) +
    18                   sum(case when period_t <= 3 then 0 else intr_calc end) over(order by amt)
    19         end new_prod
    20    from (
    21            select amt
    22                 , rt
    23                 , period_in_month
    24                 , round(amt*(rt/100)*(period_in_month/12), 1) intr_calc
    25                 , sum(period_in_month) over(order by amt) period_t
    26              from t
    27         );
           AMT         RT PERIOD_IN_MONTH  INTR_CALC   NEW_PROD
          1000         10               3         25       1025
          1025         10               5       42.7     1042.7
        1042.7          8               2       13.9     1056.6
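    The per-slab arithmetic in the Computation column can be sanity-checked outside SQL; a minimal sketch, rounding to one decimal as the posted table does:

```python
# Sanity check of the per-slab interest figures from the table above:
# interest = amount * (rate/100) * (months/12), rounded to one decimal.
slabs = [(1000, 10, 3), (1025, 10, 5), (1042.7, 8, 2)]  # (amount, rate %, months)
interest = [round(amt * (rt / 100) * (m / 12), 1) for amt, rt, m in slabs]
print(interest)   # [25.0, 42.7, 13.9]
```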

  • Using analytical function - value with highest count

    Hi
    i have this table below
    CREATE TABLE table1
    ( cust_name VARCHAR2 (10)
    , txn_id NUMBER
    , txn_date DATE
    , country VARCHAR2 (10)
    , flag number
    , CONSTRAINT key1 UNIQUE (cust_name, txn_id)
    );
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9870,TO_DATE ('15-Jan-2011', 'DD-Mon-YYYY'), 'Iran', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9871,TO_DATE ('16-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9872,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9873,TO_DATE ('18-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9874,TO_DATE ('19-Jan-2011', 'DD-Mon-YYYY'), 'Japan', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9875,TO_DATE ('20-Jan-2011', 'DD-Mon-YYYY'), 'Russia', 1);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9877,TO_DATE ('22-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9878,TO_DATE ('26-Jan-2011', 'DD-Mon-YYYY'), 'Korea', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9811,TO_DATE ('17-Jan-2011', 'DD-Mon-YYYY'), 'China', 0);
    INSERT INTO table1 (cust_name, txn_id, txn_date,country,flag) VALUES ('Peter', 9854,TO_DATE ('13-Jan-2011', 'DD-Mon-YYYY'), 'Taiwan', 0);
    The requirement is to create an additional column in the resultset with country name where the customer has done the maximum number of transactions
    (with transaction flag 1). In case we have two or more countries tied with the same count, then we need to select the country (among the tied ones)
    where the customer has done the last transaction (with transaction flag 1)
    e.g. The count is 2 for both 'China' and 'Japan' for transaction flag 1 ,and the latest transaction is for 'Japan'. So the new column should contain 'Japan'
    CUST_NAME TXN_ID TXN_DATE COUNTRY FLAG country_1
    Peter 9811 17-JAN-11 China 0 Japan
    Peter 9854 13-JAN-11 Taiwan 0 Japan
    Peter 9870 15-JAN-11 Iran 1 Japan
    Peter 9871 16-JAN-11 China 1 Japan
    Peter 9872 17-JAN-11 China 1 Japan
    Peter 9873 18-JAN-11 Japan 1 Japan
    Peter 9874 19-JAN-11 Japan 1 Japan
    Peter 9875 20-JAN-11 Russia 1 Japan
    Peter 9877 22-JAN-11 China 0 Japan
    Peter 9878 26-JAN-11 Korea 0 Japan
    Please let me know how to accomplish this using analytical functions
    Thanks
    -Learnsequel

    Does this work (not spent much time checking it)?
    WITH ana AS (
    SELECT cust_name, txn_id, txn_date, country, flag,
            Sum (flag)
                OVER (PARTITION BY cust_name, country)      n_trx,
            Max (CASE WHEN flag = 1 THEN txn_date END)
                OVER (PARTITION BY cust_name, country)      l_trx
      FROM cnt_trx
    )
    SELECT cust_name, txn_id, txn_date, country, flag,
            First_Value (country) OVER (PARTITION BY cust_name ORDER BY n_trx DESC, l_trx DESC) top_cnt
      FROM ana;
    CUST_NAME      TXN_ID TXN_DATE  COUNTRY          FLAG TOP_CNT
    Fred             9875 20-JAN-11 Russia              1 Russia
    Fred             9874 19-JAN-11 Japan               1 Russia
    Peter            9873 18-JAN-11 Japan               1 Japan
    Peter            9874 19-JAN-11 Japan               1 Japan
    Peter            9872 17-JAN-11 China               1 Japan
    Peter            9871 16-JAN-11 China               1 Japan
    Peter            9811 17-JAN-11 China               0 Japan
    Peter            9877 22-JAN-11 China               0 Japan
    Peter            9875 20-JAN-11 Russia              1 Japan
    Peter            9870 15-JAN-11 Iran                1 Japan
    Peter            9878 26-JAN-11 Korea               0 Japan
    Peter            9854 13-JAN-11 Taiwan              0 Japan
    12 rows selected.
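    The two-step shape above runs unchanged on any engine with window functions. For illustration only, here it is against the posted table1 data using SQLite from Python (needs SQLite 3.25 or later; dates stored as ISO strings so MAX sorts them correctly):

```python
import sqlite3

# Step 1: per (customer, country), count flag=1 transactions and find the
# latest flag=1 date. Step 2: FIRST_VALUE picks the country with the highest
# count, ties broken by latest transaction date.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (cust_name TEXT, txn_id INTEGER, txn_date TEXT,
                     country TEXT, flag INTEGER);
INSERT INTO table1 VALUES
 ('Peter', 9870, '2011-01-15', 'Iran',   1),
 ('Peter', 9871, '2011-01-16', 'China',  1),
 ('Peter', 9872, '2011-01-17', 'China',  1),
 ('Peter', 9873, '2011-01-18', 'Japan',  1),
 ('Peter', 9874, '2011-01-19', 'Japan',  1),
 ('Peter', 9875, '2011-01-20', 'Russia', 1),
 ('Peter', 9877, '2011-01-22', 'China',  0),
 ('Peter', 9878, '2011-01-26', 'Korea',  0),
 ('Peter', 9811, '2011-01-17', 'China',  0),
 ('Peter', 9854, '2011-01-13', 'Taiwan', 0);
""")
rows = conn.execute("""
WITH ana AS (
  SELECT cust_name, txn_id, txn_date, country, flag,
         SUM(flag) OVER (PARTITION BY cust_name, country) AS n_trx,
         MAX(CASE WHEN flag = 1 THEN txn_date END)
             OVER (PARTITION BY cust_name, country)       AS l_trx
    FROM table1
)
SELECT cust_name, txn_id, country, flag,
       FIRST_VALUE(country)
           OVER (PARTITION BY cust_name
                 ORDER BY n_trx DESC, l_trx DESC) AS country_1
  FROM ana
""").fetchall()
print(rows[0])   # country_1 is 'Japan' on every row
```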

  • How can we write this in analytical function..

    select a.employee_id, a.last_name, b.count
    from employees a,
         (select manager_id, count(manager_id) as count from employees group by manager_id) b
    where a.employee_id = b.manager_id;
    As per my requirement I need each manager's name and the number of employees reporting to him; the above query works. Could anybody help me write the same using an analytic function? And how can the same be written more efficiently (for quicker performance)?
    Please also share a link to some documentation for a good understanding of analytic functions.
    Thanks in advance.

    Are you trying to do a hierarchical type of query?
    select ename, count(ename) -1 numr_of_emps_under_this_mgr  from  emp
    connect by  empno =prior mgr
    group by ename
    order by count(ename) desc ;
    ENAME     NUMR_OF_EMPS_UNDER_THIS_MGR
    KING     13
    BLAKE     5
    JONES     4
    CLARK     1
    FORD     1
    SCOTT     1
    ADAMS     0
    TURNER     0
    MARTIN     0
    JAMES     0
    SMITH     0
    MILLER     0
    ALLEN     0
    WARD     0
    Here is the table structure I used (I think you can download it from Oracle somewhere):
    CREATE TABLE EMP (
      EMPNO     NUMBER(4)                           NOT NULL,
      ENAME     VARCHAR2(10 BYTE),
      JOB       VARCHAR2(9 BYTE),
      MGR       NUMBER(4),
      HIREDATE  DATE,
      SAL       NUMBER(7,2),
      COMM      NUMBER(7,2),
      DEPTNO    NUMBER(2)
    );
    SET DEFINE OFF;
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7369, 'SMITH', 'CLERK', 7902, TO_DATE('12/17/1980 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        800, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    Values
       (7499, 'ALLEN', 'SALESMAN', 7698, TO_DATE('02/20/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1600, 300, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    Values
       (7521, 'WARD', 'SALESMAN', 7698, TO_DATE('02/22/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1250, 500, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7566, 'JONES', 'MANAGER', 7839, TO_DATE('04/02/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        2975, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    Values
       (7654, 'MARTIN', 'SALESMAN', 7698, TO_DATE('09/28/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1250, 1400, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7698, 'BLAKE', 'MANAGER', 7839, TO_DATE('05/01/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        2850, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7782, 'CLARK', 'MANAGER', 7839, TO_DATE('06/09/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        2450, 10);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7788, 'SCOTT', 'ANALYST', 7566, TO_DATE('12/09/1982 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        3000, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, HIREDATE, SAL, DEPTNO)
    Values
       (7839, 'KING', 'PRESIDENT', TO_DATE('11/17/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        5000, 10);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    Values
       (7844, 'TURNER', 'SALESMAN', 7698, TO_DATE('09/08/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1500, 0, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7876, 'ADAMS', 'CLERK', 7788, TO_DATE('01/12/1983 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1100, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7900, 'JAMES', 'CLERK', 7698, TO_DATE('12/03/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        950, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7902, 'FORD', 'ANALYST', 7566, TO_DATE('12/03/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        3000, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7934, 'MILLER', 'CLERK', 7782, TO_DATE('01/23/1982 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1300, 10);
    COMMIT;
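    If the goal is specifically to use an analytic function, one option is to replace the GROUP BY inline view with COUNT(*) OVER (PARTITION BY mgr) plus DISTINCT; whether that is actually faster is something to measure, not assume. A sketch against a cut-down EMP, run with SQLite from Python just to show the shape:

```python
import sqlite3

# The OP's self-join, with the GROUP BY inline view replaced by an
# analytic COUNT over the manager column; data is a cut-down EMP table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empno INTEGER, ename TEXT, mgr INTEGER);
INSERT INTO emp VALUES
 (7839, 'KING',  NULL),
 (7698, 'BLAKE', 7839),
 (7566, 'JONES', 7839),
 (7499, 'ALLEN', 7698),
 (7521, 'WARD',  7698);
""")
rows = conn.execute("""
SELECT m.ename, c.n
  FROM emp m
  JOIN (SELECT DISTINCT mgr,
               COUNT(*) OVER (PARTITION BY mgr) AS n
          FROM emp
         WHERE mgr IS NOT NULL) c
    ON c.mgr = m.empno
 ORDER BY c.n DESC, m.ename
""").fetchall()
print(rows)   # [('BLAKE', 2), ('KING', 2)]
```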

  • Just set up my first iPad Air using iCloud. This resulted in none of my desktop Outlook icons working. Can anyone tell me how iCloud works and how to restore functional Outlook icons on my desktop?

    Just set up my first iPad Air using iCloud. This resulted in none of my desktop Outlook icons working. Can anyone tell me how iCloud works and how to restore functional Outlook icons on my desktop?
