Ranking over consecutive dates

Hi guys,
I have been struggling with this for some days already. I have tried to implement different solutions found by searching the internet, and to apply some approaches learned from similar questions on this forum, but maybe if I put the question here someone knows exactly the best way to handle it:
So I have a table like this:
with payments as (
select '1' as ID, '20130331' as DateR, 'Not_paid' as Status from dual
union
select '1' as ID, '20130430' as DateR, 'Paid' as Status from dual
union
select '1' as ID, '20130531' as DateR, 'Not_paid' as Status from dual
union
select '2' as ID, '20130331' as DateR, 'Not_paid' as Status from dual
union
select '2' as ID, '20130430' as DateR, 'Not_paid' as Status from dual
union
select '3' as ID, '20130331' as DateR, 'Paid' as Status from dual
union
select '3' as ID, '20130430' as DateR, 'Paid' as Status from dual
union
select '3' as ID, '20130531' as DateR, 'Paid' as Status from dual
)
select * from payments
And I am trying to get the number of current consecutive non-payments for a user. The count restarts every time the previous payment was Paid, or when it is the first payment.
I have tried to get through dense rank, giving me this as output:
select ID, dater, status, dense_rank() over (partition by ID, status order by dater asc) rnk from payments
But I need to get something like this:
ID DATER STATUS RNK
1   20130331  Not_paid  1
1   20130430  Paid  1
1   20130531  Not_paid  1
2   20130331  Not_paid  1
2   20130430  Not_paid  2
3   20130331  Paid  1
3   20130430  Paid  2
3   20130531  Paid  3
Such that if I take the max(rnk) of each ID's current run to check how many non-payments a user currently has, I get that ID 1 has 1 missed payment, ID 2 has two consecutive missed payments, and ID 3 has 0. This matters because on the fourth consecutive non-payment I have to consider the user as churned.

Hi,
Here's one way:
WITH got_grp_num AS
(
    SELECT  p.*
    ,       ROW_NUMBER () OVER ( PARTITION BY  id
                                 ORDER BY      dater
                               )
          - ROW_NUMBER () OVER ( PARTITION BY  id, status
                                 ORDER BY      dater
                               )   AS grp_num
    FROM    payments  p
)
SELECT    id, dater, status
,         ROW_NUMBER () OVER ( PARTITION BY  id, status, grp_num
                               ORDER BY      dater
                             )   AS rnk
FROM      got_grp_num
ORDER BY  id, dater
For an explanation of the Fixed Difference technique used here, see
https://forums.oracle.com/forums/thread.jspa?threadID=2302781 and/or
https://forums.oracle.com/forums/thread.jspa?threadID=2303591
By the way, storing date information in a string column is simply asking for trouble.  Use a DATE column instead.
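To sanity-check the idea outside Oracle, here is a minimal runnable sketch of the same Fixed Difference query, ported to SQLite (3.25+ for window functions) through Python's sqlite3; the table and column names follow the original post:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE payments (id TEXT, dater TEXT, status TEXT);
INSERT INTO payments VALUES
  ('1','20130331','Not_paid'), ('1','20130430','Paid'), ('1','20130531','Not_paid'),
  ('2','20130331','Not_paid'), ('2','20130430','Not_paid'),
  ('3','20130331','Paid'), ('3','20130430','Paid'), ('3','20130531','Paid');
""")
# Fixed Difference: a row number per id minus a row number per (id, status)
# is constant within each unbroken run of the same status.
rows = con.execute("""
WITH got_grp_num AS (
  SELECT p.*,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY dater)
       - ROW_NUMBER() OVER (PARTITION BY id, status ORDER BY dater) AS grp_num
  FROM payments p
)
SELECT id, dater, status,
       ROW_NUMBER() OVER (PARTITION BY id, status, grp_num ORDER BY dater) AS rnk
FROM got_grp_num
ORDER BY id, dater
""").fetchall()
for r in rows:
    print(r)
```

From there, the current consecutive non-payment count per ID is the rnk of each ID's latest row, zeroed out when that latest status is Paid.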

Similar Messages

  • Consecutive date query... Please help

    Dear All,
I have this table "att_emp_leaves" that records employees' leave information in the following columns:
    Employee_ID number(8)
    From_Date date
To_Date date
Now I have to trace records that have consecutive date entries, i.e. leave taken again right when the previous leave ends. The following records will help you understand this.
    Employee_ID , From_Date , To_Date
    2711, 19-DEC-07, 07-JAN-08
    2711, 08-JAN-08 , 10-JAN-08
    or
    3134 , 12-DEC-07 , 03-JAN-08
    3134 , 04-JAN-08 , 04-JAN-08
In the first case the employee applied for leave from 19-Dec to 7-Jan and then again the very next day, from 8-Jan till 10-Jan.
In the second record set, again the employee with id 3134 applies for leave just after his previous leave ended: the To_Date of the first record is 03-JAN, and 04-JAN is the very next day.
There are other records in this table that might not have consecutive date entries, and they are to be ignored, records like
14210 , 01-JAN-08, 02-JAN-08
14210 , 09-JAN-08, 10-JAN-08
This record set is to be ignored.
Kindly help me write such a query. I will be very grateful to you.
    Regards, Imran Baig

Here's a version that will return 1 record per employee_id per sequence of multiple leaves. I added some additional test data to show that it will handle leave sequences greater than 2, as well as multiple leave sequences. Employee_id 2711 has a single sequence of 3 consecutive leaves, while employee_id 3134 has two sequences, one of length 2 and another, 1 year later, of length 3.
    with att_emp_leaves as (select 2711 employee_id, to_date('19-DEC-07','dd-mon-yy') from_date, to_date('07-JAN-08','dd-mon-yy') to_date from dual
      union all select 2711, to_date('08-JAN-08','dd-mon-yy'), to_date('10-JAN-08','dd-mon-yy') from dual
      union all select 2711, to_date('11-JAN-08','dd-mon-yy'), to_date('13-JAN-08','dd-mon-yy') from dual
      union all select 3134, to_date('12-DEC-07','dd-mon-yy'), to_date('03-JAN-08','dd-mon-yy') from dual
      union all select 3134, to_date('04-JAN-08','dd-mon-yy'), to_date('04-JAN-08','dd-mon-yy') from dual
      union all select 3134, to_date('12-DEC-08','dd-mon-yy'), to_date('03-JAN-09','dd-mon-yy') from dual
      union all select 3134, to_date('04-JAN-09','dd-mon-yy'), to_date('04-JAN-09','dd-mon-yy') from dual
      union all select 3134, to_date('05-JAN-09','dd-mon-yy'), to_date('07-JAN-09','dd-mon-yy') from dual
      union all select 14210,to_date('01-JAN-08','dd-mon-yy'), to_date('02-JAN-08','dd-mon-yy') from dual
      union all select 14210,to_date('09-JAN-08','dd-mon-yy'), to_date('10-JAN-08','dd-mon-yy') from dual
    ), t1 as (
      select d.*,
          connect_by_root from_date original_from_date,
          sys_connect_by_path(d.to_date,'->') end_dates,
          level leave_cnt
      from att_emp_leaves d
      connect by prior employee_id = employee_id and prior d.to_date = from_date-1
      start with (employee_id, from_date) in
        (select employee_id, from_date
         from att_emp_leaves d1
         where not exists
           (select 1 from att_emp_leaves d2
            where d2.employee_id=d1.employee_id and d2.to_date = d1.from_date-1))
), t2 as (
  select t1.*, rank() over (partition by employee_id, original_from_date order by leave_cnt desc) rn
  from t1
)
select t2.*, t2.to_date-original_from_date leave_duration from t2
where rn=1 and leave_cnt > 1
order by employee_id, original_from_date, to_date
    EMPLOYEE_ID FROM_DATE   TO_DATE     ORIGINAL_FROM_DATE  END_DATES                                LEAVE_CNT  RN  LEAVE_DURATION
    2711        11-JAN-2008 13-JAN-2008 19-DEC-2007         ->07-JAN-2008->10-JAN-2008->13-JAN-2008  3          1   25            
    3134        04-JAN-2008 04-JAN-2008 12-DEC-2007         ->03-JAN-2008->04-JAN-2008               2          1   23            
    3134        05-JAN-2009 07-JAN-2009 12-DEC-2008         ->03-JAN-2009->04-JAN-2009->07-JAN-2009  3          1   26            
    3 rows selected
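On engines with LAG (Oracle 11gR2+, or SQLite 3.25+ as used here via Python's sqlite3), the same islands can be found without CONNECT BY. This windowed rewrite is my own sketch, not the original answer; ISO date strings stand in for DATE values, and table/column names follow the post:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE att_emp_leaves (employee_id INT, from_date TEXT, to_date TEXT)")
con.executemany("INSERT INTO att_emp_leaves VALUES (?,?,?)", [
    (2711,  '2007-12-19', '2008-01-07'), (2711,  '2008-01-08', '2008-01-10'),
    (2711,  '2008-01-11', '2008-01-13'),
    (3134,  '2007-12-12', '2008-01-03'), (3134,  '2008-01-04', '2008-01-04'),
    (3134,  '2008-12-12', '2009-01-03'), (3134,  '2009-01-04', '2009-01-04'),
    (3134,  '2009-01-05', '2009-01-07'),
    (14210, '2008-01-01', '2008-01-02'), (14210, '2008-01-09', '2008-01-10'),
])
# A leave continues the previous one when it starts the day after the
# previous to_date; a running sum of "new group" flags labels each island.
rows = con.execute("""
WITH flagged AS (
  SELECT l.*,
         CASE WHEN from_date = date(LAG(to_date) OVER
                   (PARTITION BY employee_id ORDER BY from_date), '+1 day')
              THEN 0 ELSE 1 END AS new_grp
  FROM att_emp_leaves l
), grouped AS (
  SELECT f.*,
         SUM(new_grp) OVER (PARTITION BY employee_id
                            ORDER BY from_date) AS grp
  FROM flagged f
)
SELECT employee_id, MIN(from_date), MAX(to_date), COUNT(*) AS leave_cnt
FROM grouped
GROUP BY employee_id, grp
HAVING COUNT(*) > 1
ORDER BY employee_id, 2
""").fetchall()
```

It returns the same three multi-leave sequences as the CONNECT BY version, one row per sequence.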

  • Need help with RANK() on NULL data

    Hi All
I am using Oracle 10g and running a query with RANK(), but it is not returning the desired output. Please help!
I have a STATUS table that shows the history of order statuses. I have a requirement to display each order and its last (max) status date. If there is any NULL date for an order then show NULL.
    STATUS
    ORD_NO | STAT | DT
    1 | Open |
    1 | Pending |
    2 | Open |
    2 | Pending |
    3 | Open |1/1/2009
    3 | Pending |1/6/2009
    3 | Close |
    4 | Open |3/2/2009
    4 | Close |3/4/2009
    Result should be (max date for each ORD_NO otherwise NULL):
    ORD_NO |DT
    1 |
    2 |
    3 |
    4 |3/4/2009
    CREATE TABLE Status (ORD_NO NUMBER, STAT VARCHAR2(10), DT DATE);
    INSERT INTO Status VALUES(1, 'Open', NULL);
    INSERT INTO Status VALUES(1, 'Pending', NULL);
    INSERT INTO Status VALUES(2, 'Open', NULL);
    INSERT INTO Status VALUES(2, 'Pending',NULL);
    INSERT INTO Status VALUES(3, 'Open', '1 JAN 2009');
    INSERT INTO Status VALUES(3,'Pending', '6 JAN 2009');
    INSERT INTO Status VALUES(3, 'Close', NULL);
    INSERT INTO Status VALUES(4, 'Open', '2 MAR 2009');
    INSERT INTO Status VALUES(4, 'Close', '4 MAR 2009');
    COMMIT;
I tried using the RANK function to rank all the orders by date, using an ORDER BY clause on the date in descending order, thinking that the null dates would be on top and would be grouped together for each ORD_NO.
    SELECT ORD_NO, DT, RANK() OVER (PARTITION BY ORD_NO ORDER BY DT DESC)
    FROM Status;
...but the result was something like this:
    ORD_NO |DT |RANKING
    *1 | | 1*
    *1 | | 1*
    *2 | | 1*
*2 | | 1*
3 | | 1
    3 |1/6/2009 | 2
    3 |1/1/2009 | 3
    4 |3/4/2009 | 1
    4 |3/2/2009 | 2
I am not sure why the first two ORD_NOs didn't group together and why a ranking of 1 was assigned to all of them. I was expecting something like:
    ORD_NO |DT |RANKING
    *1 | | 1*
    *1 | | 2*
    *2 | | 1*
    *2 | | 1*
    3 | | 1
    3 |1/6/2009 | 2
    3 |1/1/2009 | 3
    4 |3/4/2009 | 1
    4 |3/2/2009 | 2
    Please guide me if I am missing something here?
    Regards
    Sri

    Hi,
By default Oracle sorts NULLs first in a descending ORDER BY (NULLS FIRST is the DESC default), which is why all the NULL rows got rank 1. That said, if I understood correctly, you don't need RANK:
    SELECT   ord_no, MAX (dt)KEEP (DENSE_RANK LAST ORDER BY dt) dt
        FROM status
    GROUP BY ord_no
    SQL> select * from status;
        ORD_NO STAT       DT
             1 Open
             1 Pending
             2 Open
             2 Pending
             3 Open       2009-01-01
             3 Pending    2009-01-06
             3 Close
             4 Open       2009-03-02
             4 Close      2009-03-04
9 rows selected.
    SQL> SELECT   ord_no, MAX (dt)KEEP (DENSE_RANK LAST ORDER BY dt) dt
      2      FROM status
      3  GROUP BY ord_no;
        ORD_NO DT
             1
             2
             3
             4 2009-03-04
    SQL>
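The KEEP (DENSE_RANK LAST) trick is Oracle-specific. In engines without it, a conditional aggregate gives the same "any NULL wins" behavior; here is a small runnable sketch on SQLite via Python's sqlite3 (schema from the post; the CASE rewrite is my own suggestion, not the original answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE status (ord_no INT, stat TEXT, dt TEXT)")
con.executemany("INSERT INTO status VALUES (?,?,?)", [
    (1, 'Open', None), (1, 'Pending', None),
    (2, 'Open', None), (2, 'Pending', None),
    (3, 'Open', '2009-01-01'), (3, 'Pending', '2009-01-06'), (3, 'Close', None),
    (4, 'Open', '2009-03-02'), (4, 'Close', '2009-03-04'),
])
# COUNT(*) counts all rows, COUNT(dt) skips NULLs; any NULL date for an
# order therefore forces the result to NULL, otherwise we take MAX(dt).
rows = con.execute("""
SELECT ord_no,
       CASE WHEN COUNT(*) > COUNT(dt) THEN NULL ELSE MAX(dt) END AS dt
FROM status
GROUP BY ord_no
ORDER BY ord_no
""").fetchall()
```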

  • MDX Rank Over Partition

    Are there any MDX gurus who can help me?
I am trying to produce an MDX query that generates a ranked result set. Within this I am trying to get two levels of ranking based on Net Sales: first the ranking within the overall set, and second a ranking partitioned by an attribute dimension (the equivalent of RANK() OVER (PARTITION BY ...) in SQL Server), with the final result set sorted alphabetically by attribute name and then by Net Sales. So far I have got the sorting and the overall ranking to work, but not the partitioned rank. Any solution will need to be fast, as the base dimension has 100K members.
    My current MDX looks like this:
WITH
SET [Divisions] AS '[AttributeContract].[Att_CC01].Children'
SET [ContractsByDiv] AS
'GENERATE(
    ORDER(
        [AttributeContract].[Att_CC01].Children,
        AttributeContract.CurrentMember.[MEMBER_NAME],
        BASC
    ),
    CROSSJOIN(
        {AttributeContract.CurrentMember},
        ORDER(
            NonEmptySubset(
                UDA([Entity].[Entity Contract Reporting], "Contract")
            ),
            [Net Sales],
            BDESC
        )
    )
)'
MEMBER [Account].[Overall Rank] AS 'Rank([ContractsByDiv].CurrentTuple,[ContractsByDiv],[Net Sales])'
MEMBER [Account].[Rank In Div] AS '1'
SELECT
{ [Net Sales]
, [Overall Rank]
, [Rank In Div]
} ON COLUMNS,
{ [ContractsByDiv]
} ON ROWS
FROM HCREPRT2.Analysis
WHERE
( [Year].[FY13],
  [Period].[BegBalance],
  [ISBN Type].[Total ISBN Type],
  [Lifecycle].[Front List],
  [Scenario].[DPG_Budget],
  [Market].[Total Market],
  [Version].[Working],
  [Sales Channel].[Total Sales Channel]
)
    Any suggestions as to how to do this or whether it is possible?
    Regards,
    Gavin
    Edited by: GavinH on 07-Mar-2012 02:57

    This was the solution I came up with:
The following query returns a result set with the data ranked across the overall set and with a ranking partitioned by division:
WITH
SET [Divisions] AS 'ORDER([AttributeContract].[Att_CC01].Children,AttributeContract.CurrentMember.[MEMBER_NAME],BASC)'
SET [EntitySet] AS 'ORDER(NonEmptySubset(UDA([Entity].[Entity Contract Reporting], "Contract")),[Net Sales],BDESC)'
SET [ContractsByDiv] AS
'GENERATE(
    [Divisions],
    CROSSJOIN(
        {AttributeContract.CurrentMember},
        NonEmptySubset([EntitySet])
    )
)'
-- Rank in whole data set
MEMBER [Account].[Overall Rank] AS 'Rank([ContractsByDiv].CurrentTuple,[ContractsByDiv],[Net Sales])'
-- Ranking in division
MEMBER [Account].[Rank In Div] AS
'Rank(
    ([AttributeContract].CurrentMember,[Entity].[Entity Contract Reporting].CurrentMember),
    CROSSJOIN(
        {AttributeContract.CurrentMember},
        NonEmptySubset([EntitySet])
    ),
    [Net Sales]
)'
-- Rownumber
MEMBER [Account].[RowNumber] AS 'RANK([ContractsByDiv].CurrentTuple,[ContractsByDiv],1,ORDINALRANK)'
SELECT
{ [Net Sales]
, [Overall Rank]
, [Rank In Div]
, [RowNumber]
} ON COLUMNS,
{ [ContractsByDiv]
} ON ROWS
FROM HCREPRT2.Analysis
WHERE
( [Year].[FY13],
  [Period].[BegBalance],
  [ISBN Type].[Total ISBN Type],
  [Lifecycle].[Front List],
  [Scenario].[DPG_Budget],
  [Market].[Total Market],
  [Version].[Working],
  [Sales Channel].[Total Sales Channel]
)
    The key was to use the cross join portion of the generate statement used to create the overall set as the set for the intra divisional ranking.

  • Dear SQL-Experts:  RANK() OVER (...) - Function

    Hello,
I'm looking for a solution to the following SQL SELECT problem.
Imagine a table or a view with
many items with a composite key (5 columns)
some item-specific data columns and...
...at least an integer column which counts duplicate entries of items with the same key (call it U_EntryCnt).
What I need is a SELECT statement which selects all items BUT only one of the duplicate entries (if there are duplicates).
It should select only the duplicate entry which has the maximum number in the U_EntryCnt column.
    My idea is to create a VIEW like (three key-columns in this example):
    SELECT
         RANK() OVER (PARTITION BY U_KeyCol1, U_KeyCol2, U_KeyCol3 ORDER BY U_EntryCnt DESC) AS Rank,
         U_KeyCol1, U_KeyCol2, U_KeyCol3,
         U_DatCol1, U_DatCol2,..........U_DatColN,
            U_EntryCnt
    FROM
         [@TST_EXAMPLE]
And afterwards SELECTing FROM that VIEW WHERE Rank=1.
A test on a little example table seems to work. But could somebody tell me if this is the right way?
    Thanks,
    Roland
    PS.:
GROUP BY does not work in my opinion. When I want to SELECT any column, this column must be added to the GROUP BY statement. Once a selected data column differs within the same key duplicates, I get two results for the same key, and this is not wanted.

    Hi Robin,
thanks for your answer. (I hope I've understood everything right; this problem also seems to lead me to the boundary of my English ;-/ )
Within the duplicate-key rows the MAX aggregate does not give me the correct result: it does not return the data from the row with MAX(U_EntryCnt), but rather the maximum of the data within the duplicate-key rows.
    The best would be an complete example.
Here is the example table with (unsorted) keys, data and the entry counter (only one example data column for clarity):
    KeyCol1 |KeyCol2 |KeyCol3 |DataCol1     |EntryCnt
    ================================================
    A     |A     |1       |AA1 XXX     |1
    B     |B     |1       |BB1 Wanted     |2
    A     |A     |1       |AA1 Wanted     |2
    B     |B     |1       |BB1 XXX     |1
    C     |C     |1       |CC1 Wanted     |1
The wanted rows are marked with "Wanted" in the data column. Technically they're wanted because these rows contain the maximum EntryCnt number within their duplicate-key rows.
    Robin:
When you talk about sub-selects I think you mean something like this:
    SELECT
         T0.U_KeyCol1, T0.U_KeyCol2, T0.U_KeyCol3, MAX(T0.U_EntryCnt),
         (SELECT T1.U_DatCol1
         FROM [@TST_EXAMPLE] T1
         WHERE
         T0.U_KeyCol1=T1.U_KeyCol1 AND
         T0.U_KeyCol2=T1.U_KeyCol2 AND
         T0.U_KeyCol3=T1.U_KeyCol3 AND
         T1.U_EntryCnt=MAX(T0.U_EntryCnt)) AS DatCol1,
         (SELECT T1.U_DatColN
         FROM [@TST_EXAMPLE] T1
         WHERE
         T0.U_KeyCol1=T1.U_KeyCol1 AND
         T0.U_KeyCol2=T1.U_KeyCol2 AND
         T0.U_KeyCol3=T1.U_KeyCol3 AND
         T1.U_EntryCnt=MAX(T0.U_EntryCnt)) AS DatColN
    FROM
         [@TST_EXAMPLE] T0
    GROUP BY
         T0.U_KeyCol1, T0.U_KeyCol2, T0.U_KeyCol3
    Yes: this also works.
As far as I know every column needs its own sub-select. Very extensive when we talk about 20 to 40 columns.
If the RANK function really gives the same result under all circumstances (in this example it does), it's much easier:
    First create a VIEW which contains a Rank-column:
    CREATE VIEW [dbo].[@TST_EXAMPLE_VIEW] AS
    SELECT
         RANK() OVER (PARTITION BY U_KeyCol1, U_KeyCol2, U_KeyCol3 ORDER BY U_EntryCnt DESC) AS Rank,
         Code, Name,
         U_KeyCol1, U_KeyCol2, U_KeyCol3,
         U_DatCol1, U_DatCol2, U_DatCol3, U_DatCol4,
         U_EntryCnt
    FROM
         [@TST_EXAMPLE]
    And now the condition WHERE Rank=1 seems to give the wanted rows (in the example it does :-):
    SELECT * FROM [@TST_EXAMPLE_VIEW] WHERE Rank=1
Because this is a much more clearly arranged query, it would be nice if somebody could confirm that this is also a correct way.
Another question is which of the two query examples has the better SQL Server performance (which is faster).
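For what it's worth, the rank-based dedup is easy to check outside SAP B1. Here is a sketch on SQLite (3.25+) via Python's sqlite3, using the example data above (table and column names simplified from the post). Note that RANK() can still return several rows when two duplicates tie on EntryCnt; ROW_NUMBER() is the safer choice when exactly one row per key is required:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tst_example (key1 TEXT, key2 TEXT, key3 TEXT, dat1 TEXT, entry_cnt INT)")
con.executemany("INSERT INTO tst_example VALUES (?,?,?,?,?)", [
    ('A', 'A', '1', 'AA1 XXX',    1),
    ('B', 'B', '1', 'BB1 Wanted', 2),
    ('A', 'A', '1', 'AA1 Wanted', 2),
    ('B', 'B', '1', 'BB1 XXX',    1),
    ('C', 'C', '1', 'CC1 Wanted', 1),
])
# ROW_NUMBER instead of RANK: guarantees a single row per key even on ties.
rows = con.execute("""
SELECT key1, key2, key3, dat1, entry_cnt
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (PARTITION BY key1, key2, key3
                              ORDER BY entry_cnt DESC) AS rn
    FROM tst_example t
)
WHERE rn = 1
ORDER BY key1
""").fetchall()
```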

  • Consecutive dates count as one

    Hi ,
    I have a table which looks like this 
str_ID  x_ID       y_ID    Date_of_absence
1527    218823433  490705  2013-05-15 00:00:00
1527    218823433  490837  2013-05-17 00:00:00
1527    218823433  492239  2013-06-15 00:00:00
1527    218823433  493082  2013-07-04 00:00:00
1527    218823433  493147  2013-07-05 00:00:00
1527    218823433  493195  2013-07-06 00:00:00
1527    218823433  493227  2013-07-07 00:00:00
1527    218823433  493268  2013-07-08 00:00:00
1527    218823433  494776  2013-08-06 00:00:00
1527    218823433  494828  2013-08-07 00:00:00
1527    218823433  494905  2013-08-08 00:00:00
1527    218823433  495450  2013-08-17 00:00:00
1527    218823433  495485  2013-08-18 00:00:00
    As you can see there are 13 rows.
But what I want to do is treat 2 or 3 consecutive dates as a single absence and count them as one, which would result in a different row count.
    help please.

    E.g.
DECLARE @Sample TABLE
(
    str_ID INT,
    x_ID INT,
    y_ID INT,
    Date_of_absence DATE
);
INSERT INTO @Sample
VALUES ( 1527, 218823433, 490705, '2013-05-15' ),
       ( 1527, 218823433, 490837, '2013-05-17' ),
       ( 1527, 218823433, 492239, '2013-06-15' ),
       ( 1527, 218823433, 493082, '2013-07-04' ),
       ( 1527, 218823433, 493147, '2013-07-05' ),
       ( 1527, 218823433, 493195, '2013-07-06' ),
       ( 1527, 218823433, 493227, '2013-07-07' ),
       ( 1527, 218823433, 493268, '2013-07-08' ),
       ( 1527, 218823433, 494776, '2013-08-06' ),
       ( 1527, 218823433, 494828, '2013-08-07' ),
       ( 1527, 218823433, 494905, '2013-08-08' ),
       ( 1527, 218823433, 495450, '2013-08-17' ),
       ( 1527, 218823433, 495485, '2013-08-18' );
WITH Dates AS
(
    SELECT *,
           LAG(Date_of_absence, 1, '19000101') OVER ( ORDER BY Date_of_absence ) AS PrevDate,
           LEAD(Date_of_absence, 1, '20990101') OVER ( ORDER BY Date_of_absence ) AS NextDate
    FROM @Sample S
), Diffs AS
(
    SELECT *,
           DATEDIFF(DAY, PrevDate, Date_of_absence) AS DiffPrev,
           DATEDIFF(DAY, Date_of_absence, NextDate) AS DiffNext
    FROM Dates D
), GroupToggles AS
(
    SELECT *,
           CASE WHEN DiffNext != DiffPrev AND DiffPrev != 1 THEN 1 ELSE 0 END AS GroupToggle
    FROM Diffs D
)
SELECT *,
       SUM(GroupToggle) OVER ( ORDER BY Date_of_absence ) AS GroupID
FROM GroupToggles GT
ORDER BY 4;
When you don't have SQL Server 2012+, replace the LEAD and LAG with self-joins over ROW_NUMBER() +/- 1.
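The grouping idea itself is easy to sanity-check in plain Python: a new absence episode starts whenever the gap to the previous date is more than one day. A small sketch over the sample dates (my own illustration, not the T-SQL above):

```python
from datetime import date

# Sorted absence dates from the sample data.
dates = [
    date(2013, 5, 15), date(2013, 5, 17), date(2013, 6, 15),
    date(2013, 7, 4), date(2013, 7, 5), date(2013, 7, 6),
    date(2013, 7, 7), date(2013, 7, 8),
    date(2013, 8, 6), date(2013, 8, 7), date(2013, 8, 8),
    date(2013, 8, 17), date(2013, 8, 18),
]
# Each run of consecutive dates collapses to one episode: count the
# first date plus every gap larger than one day.
episodes = 1 + sum(1 for a, b in zip(dates, dates[1:]) if (b - a).days != 1)
print(episodes)  # the 13 rows collapse to 6 episodes
```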

  • How can I disable e-mail and text messages when I'm close to or over my data quota ?

    How can I disable e-mail and text messages when I'm close to or over my data quota ?
    Is this possible? I don't see any options?
    All I see is this:
Receive confirmations of many account transactions via Email and/or free Text Alerts delivered right to your cell phone. You can receive these confirmations for transactions done online, over the phone or from your handset. Account management Text Alerts do not count against any Messaging allowances.
Your bill is ready for review.
Payment Confirmations: Your payment has been received. Your credit/adjustment has been applied.
Account Activity Confirmations: Plan changes. Feature changes.

    What phone?
    You would need to disable data on the phone itself.  You wouldn't do it online on your account.
    Note also that text messages do not count toward your data allowance.

  • Sharepoint Custom calendar – Hover over the date to add a new item is not working – Sharepoint 2010

    Hi,
In my SharePoint visual web part I am using the default SharePoint calendar view, but hovering over a date to add a new item is not working. I need the same add-new-item functionality shown in the image (image not reproduced here).

    Hi Sudhanthira,
I can see a couple of similar queries posted by Madhu.
    Please follow this thread:-
    http://social.technet.microsoft.com/Forums/en-US/sharepoint2010programming/thread/b62f9b7e-2ce1-4efd-905c-9cc5471ad216
    To be or Not to Be..The question is this only......

  • How do I block kids phone so they don't go over their data limit?

My son continues to go over his data plan every month.  I want to block the data so that when his data allowance is used up, he cannot go over until it starts over the next month.  I do not want to buy a more expensive data plan.

I have a hard time understanding how this is Verizon's fault.  Having a phone is a responsibility, and part of that is living within the limits of the plan.  Also, the idea of postpaid is to always have service and pay afterwards if you go over, thus the "post".  This is important for many who must always have service that is not cut off.
There is a solution for those that cannot generally stick to limits: prepaid service.  Get your son prepaid service, and when the limits are reached, no service.  Or get Family Base.  But at least understand what postpaid actually means.

• How to use the rank() over() function in block coding

    Hi,
I am having a problem using the rank() over() function in PL/SQL block coding. I can't use it in the declaration section with a SELECT statement.
How do I use it in the executable section of PL/SQL?
--Sujan

    thanks

  • Shared Photo Streams pushed to device over cellular data

    Hi,
I created three shared photo streams in iPhoto today, and all of a sudden my iPhone started getting very hot and the battery drained quickly (it's an oldish 3GS, and the battery level was decreasing even though the iPhone was connected to the charger...).
In a nutshell: it seems that while data from the "ordinary" Photo Stream is uploaded/downloaded only when the device is connected to a wi-fi network, if you have shared Photo Streams enabled on an iOS 6 device, picture data in these photo streams is "pushed" to the device over cellular data as well, regardless of any other setting.
Apart from draining the battery very fast, this may lead to inadvertently using a lot of traffic from your data plan.
Did anyone notice similar behavior and find out how to prevent it, apart from disabling Shared Photo Streams altogether, of course? I.e., did anyone find a way to ensure shared photo streams are pushed to the device only over wi-fi?
    cheers
    -gianpiero

I suppose you could disable Shared Photo Stream while you are on the cellular network and enable it again when you are in a wi-fi environment.   In Settings > iCloud > Photo Stream, turn Shared Photo Stream to Off.
The pop-up warning tells you that this will delete all shared photos from your device.  What it doesn't say is that, since shared photo streams remain stored in iCloud, all the photo streams you shared and that were shared with you will be pushed back to your device once you enable Shared Photo Stream on wi-fi.  The downside, of course, is that if you have a lot of photos in your shared photo streams, it will take some time to download them.  But at least you know for sure your cellular data is not affected.
Btw, Shared Photo Streams don't count towards iCloud storage, just like Photo Stream.  Maybe that's why turning off iCloud documents doesn't affect shared photo streams.
    See also:
    http://support.apple.com/kb/HT4486
    http://support.apple.com/kb/HT4858

  • Anyone going over their Data now that they have a Samsung S5?

    I just recently (3 weeks) purchased a Samsung S5 and it shows I am using Data when I am positive I am connected to WIFI (such as I am at home and I know our WIFI is working). We're going over our data limit for the first time ever. It even shows data usage this morning while I was at home and again...the wifi is working at home. Anyone have any answers?

    Go into Settings ... Data Usage, and look at your Mobile data and wifi usage.  Which apps are using the mobile data? 
    We have 4 lines on our family plan, and over the years have had several smartphones (iPhones, Blackberry, Android and Windows phones, various models).  Only once did I see a data "spike", when my daughter first got her iPhone 5C.  It was gobbling data, till she went into her app list and shut off the data for lots of things. 
Whenever we get a new (smart) phone, I typically don't activate it until I have used wifi to download all my apps and get my email hooked up.  Once it all works on wifi, then I'll activate the new phone.  I make sure my apps are set to update over wifi only (it may be a setting within each app), and my photos are backing up over wifi only.  Facebook now auto-plays videos, and you need to change the setting to play videos on wifi only.  Same with email - set it to download attachments over wifi only.
There are lots of ways your device can use data; finding out which apps and what processes are the culprits (by looking at the data usage on the phone) can help you to use your data allotment most effectively.

  • Can't download podcast episodes over cellular data

I am running the current iOS software and have the current Apple Podcasts app installed. Since updating to 7.0.4 and installing the current app version, I am having issues with downloading podcast episodes specifically over cellular data. You try to download an episode and it just freezes and then says "download error, tap to retry". You then get a massive popup saying "unable to download podcast" and, underneath this, that the podcast episode "could not be downloaded at this time". This happens with more than one specific podcast. I have tried everything I can think of and that Apple has suggested. I have:
    - Unsubscribed then resubscribed to podcasts I have seen this issue with
    - Closed out of the app and gone back in
    - Ensured at all times I have been attempting this, that cellular data has been on for podcasts
    - Switched cellular data off for podcasts and as a whole then I have switched back on
    - Done a hard reset of the phone
    - Deleted the Podcast app itself, reinstalled and then tried to download an episode over cellular data
    - Reset Network Settings
    - Performed a full restore of the device
    After doing the restore, I set the phone up as new and then installed just the podcast app and was able to download an episode over cellular data. I then restored my data from backup and then all of a sudden, I had the same issue again. I did another reset of the network settings and then the issue went away. The issue has returned yet again today. I called Apple to advise and they say they will investigate and get back to me.
Has anyone else had this issue and have you managed to find a permanent solution? When I am on wifi, I don't have this issue at all. I attempted to download another app and a song from iTunes over 3G and had no issues. As such, I can only conclude there must either be an issue with the app itself or iOS 7.0.4 has done some damage here. I like to download my podcasts on the train to work, so I don't always have the ability to download over wifi.
    A very annoying issue!

I am having the same problem, except mine is a movie rental on a 3rd gen iPad (wifi). It seems like this problem has been occurring for quite a few years and Apple can't be bothered to fix it. It is really annoying because none of the fixes work. My movie is 99% downloaded and I can't play it, delete it, or retry the download. It's going to be a few months before anyone from Apple decides to post to this thread anyway. It's a real shame. In the Apple store not one "genius" could help with this either.

  • How to use rank() over(...) in a map?

I use rank() over(...) in a Filter operator.
Validation gives the error ORA-30483.

Currently OWB does not directly support analytic functions. The (not very elegant) workaround could be implementing this feature in custom transformations.
    Regards:
    Igor

  • App submission took forever and transmitted over 1GB data

    Hi,
I'm trying to submit an app to the iTunes App Store from Xcode.  I am using the Product -> Archive -> Distribute ... route.
The problem is that I'm stuck on the "Your application is being uploaded" screen for over 25 mins.  During this period, I can see my outgoing transfer rate is from 500KB/s to 2.0MB/s, and over 1GB of data has been transmitted.  But right now the progress bar has just passed 1/3.
The file size of my app is smaller than 20MB.  So I'm wondering what is taking so long, and why so much data has been transmitted.  (If I cancel the submission, the network activity immediately decreases to under 100KB/s.)
    Anyone encountering similar situations?
    Thanks

    I had the same issue. I rebuilt the archive and now the problem has gone.
