Distinct count inside a measure group with other measures

Hello,
I have one distinct count measure inside a measure group together with other measures (sum, count, etc.). I know this is not recommended due to poor processing performance and query response time.
Processing performance I can live with if it means not having another measure group, which increases processing time anyway.
I have used the recommended approach before, and it generated many questions about what the second measure group was for (it is visible via Excel), even though I made the distinct count appear in the main measure group via a calculated measure.
(it would be nice if you could hide measure groups)
However, my question is: is query response time only affected when the distinct count is used in the query, or is it affected regardless of whether the distinct count is used?
Below is an extract from the 2005 distinct count optimization white paper. It's not completely clear, but I assume it affects queries regardless of whether the distinct count is used or not?
"By adding other measures to the measure group holding a distinct count measure, all of the other measures will be at the same granularity as the distinct count measure, resulting in inefficient data structures and suboptimal
queries."

You might also be interested in reading this blog post, which deals with a similar scenario, to get a feeling for some of the things that might be going on behind the scenes:
http://cwebbbi.wordpress.com/2012/11/27/storage-engine-caching-measures-and-measure-groups/
Chris
Check out my MS BI blog. I also do
SSAS, PowerPivot, MDX and DAX consultancy,
and run public SQL Server and BI training courses in the UK.

Similar Messages

  • Can I use virtual bench to measure/communicate with other device by I2C etc?

    Hi,
    Can I use VirtualBench to measure I2C/SPI/CAN with its digital I/O or logic analyzer?
    And is it possible to use the digital I/O to communicate with other devices over I2C, etc.?
    Thanks,
    Jimmy

    You might be able to get it to work (not sure how fast the DIO on the Virtual Bench can go), but for I2C, I would recommend using something that is actually meant for it, like the USB-8451 or USB-8452.

  • Sharing Address Book (and groups) With Others

    I know there is a way to share an Address Book with others through mac.com, but I'm wondering if there is a way to do it without mac.com ...

    Yes, there are quite a few other options. I developed a solution for this called Address Book and Calendar Server (http://www.addressbookserver.com). There also seems to be an option to use Google, etc.
    PS: I am the developer of AddressBookServer.com and naturally biased towards it.

  • How to compare the value node of a for-each-group with other for-each-group

    Hello!
    I have a report in Oracle BI Publisher (10.1.3.2) with several data sets. My XML data is something like:
    <DATA>
    <PARAMETERS>
    <MY_PARAMETERS>
    <A_ID>12345</A_ID>
    <DESCRIPTION>ABC</DESCRIPTION>
    <VALUE>111111</VALUE>
    </MY_PARAMETERS>
    <MY_PARAMETERS>
    <A_ID>12345</A_ID>
    <DESCRIPTION>DEF</DESCRIPTION>
    <VALUE>222222</VALUE>
    </MY_PARAMETERS>
    <MY_PARAMETERS>
    <A_ID>67890</A_ID>
    <DESCRIPTION>ABC</DESCRIPTION>
    <VALUE>333333</VALUE>
    </MY_PARAMETERS>
    </PARAMETERS>
    <NAMES>
    <MY_NAMES>
    <A_ID>12345</A_ID>
    <NAME>ASDF</NAME>
    </MY_NAMES>
    <MY_NAMES>
    <A_ID>67890</A_ID>
    <NAME>EFGH</NAME>
    </MY_NAMES>
    </NAMES>
    <VALUES>
    <MY_VALUES>
    <A_ID>12345</A_ID>
    <VALUE>10987</VALUE>
    <DESCRIPTION>ASDFG</DESCRIPTION>
    </MY_VALUES>
    <MY_VALUES>
    <A_ID>12345</A_ID>
    <VALUE>26385</VALUE>
    <DESCRIPTION>EFGHI</DESCRIPTION>
    </MY_VALUES>
    <MY_VALUES>
    <A_ID>67890</A_ID>
    <VALUE>24355</VALUE>
    <DESCRIPTION>ASDFG</DESCRIPTION>
    </MY_VALUES>
    </VALUES>
    </DATA>
    I'm trying to build an RTF template in Word using this XML. The "A_ID" nodes in each group of my data share the same values. For each "A_ID", I want to pick up the corresponding values from /DATA/VALUES/MY_VALUES.
    <?for-each-group:MY_PARAMETERS;./A_ID?>
    <?for-each:current-group()?>
    <?choose:?><?when: DESCRIPTION='ABC'?>
    <?VALUE?>
    <?end when?><?end choose?>
    <?end for-each?>
    <?for-each:current-group()?>
    <?choose:?><?when: DESCRIPTION='DEF'?>
    <?VALUE?>
    <?end when?><?end choose?>
    <?end for-each?>
    <?/DATA/NAMES/MY_NAMES/VALUE?>
    <?for-each-group:/DATA/VALUES/MY_VALUES;./A_ID?>
    <?for-each:current-group()?>
    <?choose:?><?when: DESCRIPTION='ASDFG'?>
    <?VALUE?>  <-- for this node I obtain both the '24355' and '10987' values
    <?end when?><?end choose?>
    I want to know how to obtain only the '24355' value, that is, the value where A_ID in /DATA/VALUES/MY_VALUES equals A_ID in /DATA/PARAMETERS/MY_PARAMETERS.
    Can someone help me?

    CREATE OR REPLACE TRIGGER "TEST_TRG"
       BEFORE UPDATE OF "STATUS"
       ON "TABLE1"
       FOR EACH ROW
    BEGIN
       IF (:NEW.status = 'HOLD')
       THEN
          INSERT INTO table2
                      (status)
               VALUES (:NEW.status);
       END IF;
    END;
    You should learn how to write PL/SQL code.
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.apress.com/9781430235125
    http://apex.oracle.com/pls/apex/f?p=31517:1
    http://www.amazon.de/Oracle-APEX-XE-Praxis/dp/3826655494
    -------------------------------------------------------------------

  • Scxi 1126 frequency measurement along with other scxi modules

    I want to measure a frequency signal using the SCXI-1126. I have gone through the example code for the SCXI-1126. I am using NI-DAQ and LabVIEW 7.1.
    The example code is applicable only when I am acquiring from the 1126. I also have to acquire data from an SCXI-1102/1102B module along with the 1126.
    In the example code, along with AI Config.vi and AI Start.vi, there are two additional VIs, AI Parameter.vi and AI Trigger Config.vi; I understand that they are used for reading from the 1126. My worry is how this is going to affect my acquisition from the other SCXI modules, as all the modules are in multiplexed mode.

    Hi Bipin,
    Looking at your code, I see that you are still using Traditional DAQ. Since you have LV 7.1, you can take advantage of the new features of DAQmx.
    With DAQmx, you can create a virtual task that includes all of the different types of measurements you want to make. Once you set up the task in MAX, you can place it on the block diagram and automatically generate the code necessary to execute this.
    To do this, open up the Measurement & Automation Explorer (MAX). Select "Data Neighborhood" from the tree on the left. Then click the button labeled "Create New" and create a DAQmx Virtual Channel. Follow the series of menus to set up your first type of acquisition (the 1126 frequency measurement). When you are done creating the task, you can add the second type of measurement to that same task. In the task configuration page, you will see a white box with the name of the channel you just created in it. Above the name you will see an "Add" button. Click on this button and follow the menus to set up your second measurement (the 1102 module).
    Once they are set up, save the task and open LabVIEW. Place the DAQmx Task Name constant on your block diagram, found in All Functions >> Data Acquisition >> DAQmx; the task name constant is a purple colored box. Clicking on the box once it's on the block diagram will expand a menu that will let you choose your task. Once selected, right-click on the task name and choose Generate Code >> Example. This will automatically generate the code necessary to run the tasks. Simply hit the run button and enjoy!
    Thanks,
    Sal

  • Regular measures (measures with a SUM function) are not working alongside distinct count measures

    Hi All,
    I am creating a cube that needs to have a distinct count measure and a sum measure. If I create only the sum measure, it works fine. If I create both measures and process the cube, only the distinct count measure is populated; the sum measure shows all blank values. I am using 2008 R2 and creating two different measure groups for the two measures; after I include the distinct count measure, the sum measure becomes null. Can you please help me with this? I have been breaking my head over this for the last two days. Thank you.

    Ramesh, measures are affected by the context of the queries that contain them. For example, in some cases you can get a different total count of something from two different queries, because the context of the first query is different from that of the second one... keep this in mind.
    Now, I've noticed that you are creating two different measure groups for the two measures, and I guess that you are trying to view those two measures (which are from different measure groups) at the same time and in the same report.
    Considering the point above, and since you are creating the calculated measures in two different measure groups, I'm not sure but I suspect that this is the problem. I suggest you create those two calculated measures in the same measure group, then try to view them again and see.
    If that doesn't solve it, please post the expressions you are using to create the calculated measures; maybe that will help in finding the problem.

  • Unexpected error during processing after adding a new measure group

    I added a new measure group with a distinct count measure to my cube.
    When I process the cube, I get:
    Internal error: An unexpected error occurred. Errors in the OLAP storage engine: An error occurred while processing the 'Requirement' partition of the 'Formalized 1' measure group for the 'Requirements_view' cube, from the SeikoCube database. (Message translated from French.)
    If I delete this measure, processing works fine.
    What happens? How can I resolve it?
    Thanks in advance

    Hi Fiacre663,
    According to your description, you encounter the error while processing the cube after adding the distinct count to the cube, right? In your scenario, which process option did you use to process the cube?
    Generally, if you add a measure you are changing the structure of the cube, which will invalidate it; the easiest way to get the cube fully operational again is to do a ProcessFull. The same applies when removing a measure.
    Besides, please ensure that the steps used to add the distinct count are correct. There are different options for creating a distinct count measure in SSAS; please refer to the link below for details.
    http://www.mssqltips.com/sqlservertip/3043/different-options-for-creating-a-distinct-count-measure-in-ssas/
    Regards,
    Charlie Liao
    TechNet Community Support

  • Error: The sort order specified for distinct count records is incorrect

    When processing a measure group with a distinct count measure in it, I get the following error:
    "The sort order specified for distinct count records is incorrect."
    I have no idea what this means - any ideas?

    I had the same problem and your fix worked. In more detail, the problematic field was contract_no. I added a named calculation to the table in the Data Source View with the formula CHECKSUM(contract_no). Then I created the distinct count measure on that named calculation. And, lo and behold, the errors disappeared! (A sketch of this approach is shown below.)
    Thank you to Frank.
    - Cindy P Hoskey
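    For reference, here is a rough T-SQL sketch of the named calculation approach described above. The table and column names (dbo.FactContracts, contract_no) are placeholders, and note that CHECKSUM can produce hash collisions, so a distinct count over the checksum is only an approximation of the distinct count over the original column.
    -- The expression entered as the named calculation in the DSV is simply: CHECKSUM(contract_no)
    -- Shown here as an equivalent view, for illustration only (names are placeholders).
    CREATE VIEW dbo.vwFactContracts AS
    SELECT f.*,
           CHECKSUM(f.contract_no) AS contract_no_checksum  -- integer surrogate used by the distinct count
    FROM   dbo.FactContracts AS f;
    -- Optional sanity check of the collision risk before relying on the checksum:
    SELECT COUNT(DISTINCT contract_no)           AS distinct_contracts,
           COUNT(DISTINCT CHECKSUM(contract_no)) AS distinct_checksums
    FROM   dbo.FactContracts;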

  • Accumulative Distinct Counts

    I’ve been at this for two days now. Could someone lead me down the right path? Given the following data set:
    create table data_owner.test_data
    (
      item_number    varchar2(10 byte),
      store_number   varchar2(10 byte),
      calendar_year  varchar2(10 byte),
      calendar_week  varchar2(10 byte),
      units_sold     integer
    );
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '31', '2010', '51', 4);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '16', '2010', '51', 2);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '31', '2010', '52', 3);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '27', '2010', '52', 1);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '16', '2011', '1', 3);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '27', '2011', '2', 5);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '20', '2011', '2', 4);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '27', '2010', '51', 3);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '16', '2010', '52', 2);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '20', '2010', '52', 1);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '16', '2011', '1', 3);
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '31', '2011', '2', 3);
    select * from test_data;
    item_number     store_number     calendar_year     calendar_week     units_sold
    1111     31     2010     51     4
    1111     16     2010     51     2
    1111     31     2010     52     3
    1111     27     2010     52     1
    1111     16     2011     1     3
    1111     27     2011     2     5
    1111     20     2011     2     4
    2222     27     2010     51     3
    2222     16     2010     52     2
    2222     20     2010     52     1
    2222     16     2011     1     3
    2222     31     2011     2     3
    My desired result is a sum of units sold and an accumulative distinct count of store numbers grouped by item, year, and week. i.e.:
    item_number     calendar_year     calendar_week     store_count     sum(units_sold)
    1111     2010     51     2     6
    1111     2010     52     3     4
    1111     2011     1     3     3
    1111     2011     2     4     9
    2222     2010     51     1     3
    2222     2010     52     3     3
    2222     2011     1     3     3
    2222     2011     2     4     3
    I can’t seem to get the store count right. I’ve been trying various methods of the count(distinct store_number) over (…) analytic function, but nothing works. Thanks.

    Hi,
    Interesting problem!
    When using analytic functions, you can't use both DISTINCT and ORDER BY. Too bad; that sure would be convenient.
    The most general solution is to use aggregate functions instead of analytic functions, and do a self-join to pair every row ("table" l, for "later" in the query below) with every earlier row ("table" e below) for the same item:
    SELECT       l.item_number
    ,       l.calendar_year
    ,       l.calendar_week
    ,       COUNT (DISTINCT e.store_number)     AS store_count
    ,       SUM (l.units_sold)
         / COUNT (DISTINCT e.ROWID)          AS total_units_sold
    FROM       test_data   e
    JOIN       test_data   l      ON     e.item_number     = l.item_number
    AND                               e.calendar_year || LPAD (e.calendar_week, 2)
                                    <= l.calendar_year || LPAD (l.calendar_week, 2)
    GROUP BY  l.item_number
    ,       l.calendar_year
    ,       l.calendar_week
    ORDER BY  l.item_number
    ,       l.calendar_year
    ,       l.calendar_week
    ;
    You might think about storing a DATE (say, the date when the week begins) instead of year and week. It would simplify this query, and probably lots of other ones, too. I realize that might complicate some other queries, but I think you'll find a net gain.
    Thanks for posting the CREATE TABLE and INSERT statements; that helps a lot!
    Edited by: Frank Kulash on Nov 18, 2011 12:48 PM
    Here's an analytic solution. As you can see, it requires more code, and more complicated code, but it might perform better:
    WITH     got_r_num   AS
    (
         SELECT     item_number
         ,     calendar_year
         ,     calendar_week
         ,     units_sold
         ,     ROW_NUMBER () OVER ( PARTITION BY  item_number
                                   ,                    store_number
                             ORDER BY        calendar_year
                             ,                calendar_week
                           )      AS r_num
         FROM    test_data
    )
    SELECT DISTINCT
         item_number
    ,     calendar_year
    ,     calendar_week
    ,     COUNT ( CASE
                        WHEN  r_num = 1
                  THEN  1
                    END
               )             OVER ( PARTITION BY  item_number
                                      ORDER BY      calendar_year
                          ,          calendar_week
                                 )                    AS store_count
    ,       SUM (units_sold) OVER ( PARTITION BY  item_number
                                    ,             calendar_year
                          ,             calendar_week
                             )                         AS  total_units_sold
    FROM       got_r_num
    ORDER BY  item_number
    ,            calendar_year
    ,       calendar_week
    ;
    This approach will not work in all windowing situations. It's okay for this job, but not if you wanted, for example, a count of distinct stores from the last 6 weeks, and the report covers more than 6 weeks.

  • Update Measurement Point with new Position

    Hi,
    I need to update existing measuring points with new measurement positions. I was looking at FM "MEASUREM_POINT_RFC_SINGLE_002", but I don't know how to pass the new measurement position to it.
    Can anybody who has already implemented this please let me know how?
    Thanks.

    I was looking at FM 'MEASUREM_POINT_DIALOG_SINGLE' to update the measurement point.
    I wrote the code like this, but it's not updating the measurement point with the new position.
    Not sure if I missed anything.
    Please help.
    Thanks.
    PARAMETERS: p_point type imrc_point,
                             p_psort type imrc_psort.
    DATA: l_rimr03 type rimr03,
           l_ind(1) type c.
      l_rimr03-mandt = sy-mandt.
    l_rimr03-point = p_point.
    l_rimr03-psort = p_psort.
    BREAK-POINT.
    CALL FUNCTION 'MEASUREM_POINT_DIALOG_SINGLE'
    EXPORTING
       ACTIVITY_TYPE                  = '2'
       MEASUREMENT_POINT              = p_point
       NO_DIALOG                      = 'X'
       IS_RIMR03                      = l_rimr03
    IMPORTING
       INDICATOR_UPDATE               = l_ind
    EXCEPTIONS
       IMPTT_NOT_FOUND                = 1
       TYPE_NOT_FOUND                 = 2
       OBJECT_NOT_FOUND               = 3
       NO_AUTHORITY                   = 4
       POINT_IS_REFMP                 = 5
       POINT_IS_NOT_REFMP             = 6
       OTHERS                         = 7.

  • Report using Tabular Model and Measures based on Distinct Counts

    Hello,
    I am creating a report that should present something like this:
    YEAR-1 | MONTH-1 | MONTH-2 | MONTH-3... | YEAR | MONTH-1 | MONTH-2 | MONTH-3...
    My problem is that, when designing the dataset to support this layout, I drag in the Year, Month and distinct count measure, but on the report, when I want the value at the YEAR level, I don't have it and I cannot sum the month values...
    What is the best approach to solve this? Do I really have to go to advanced mode and customize my MDX or DAX? Can't basic users do something like this that seems so trivial and needed?
    Thank you
    Luis Simões

    Hi Luis,
    According to your description, you created a Reporting Services report using an Analysis Services tabular model as the data source, and now what you want is to sum the month values at the year level, right?
    In your scenario, you can add the Month field to a column group, add a parent group using the Year field, and then add a total on the Month group. In this case, Reporting Services will sum the month values at the year level. I have tested this in my local environment.
    Reference: Lesson 6: Adding Grouping and Totals (Reporting Services)
    If this is not what you want, please describe your dataset structure so that we can analyze it further.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Count(*) in a analytic situation with group by order by

    Hello every body,
    I have a count(*) problem in a SQL statement with analytic functions on a table, when I want to have all of its columns in the result.
    Say I have a table
    mytable1
    CREATE TABLE MYTABLE1
    (
      MY_TIME TIMESTAMP(3),
      PRICE   NUMBER,
      VOLUME  NUMBER
    );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.01.664','DD-MM-YY HH24:MI:SS:FF3' ),49.55,704492 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.02.570','DD-MM-YY HH24:MI:SS:FF3' ),49.55,705136 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,707313 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,706592 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.30.695','DD-MM-YY HH24:MI:SS:FF3' ),49.55,705581 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.227','DD-MM-YY HH24:MI:SS:FF3' ),49.55,707985 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.31.820','DD-MM-YY HH24:MI:SS:FF3' ),49.56,708494 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.32.258','DD-MM-YY HH24:MI:SS:FF3' ),49.57,708955 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.36.180','DD-MM-YY HH24:MI:SS:FF3' ),49.58,709519 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,710502 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,710102 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.352','DD-MM-YY HH24:MI:SS:FF3' ),49.59,709962 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.52.399','DD-MM-YY HH24:MI:SS:FF3' ),49.59,711427 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.44.977','DD-MM-YY HH24:MI:SS:FF3' ),49.6,710902 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.50.492','DD-MM-YY HH24:MI:SS:FF3' ),49.6,711379 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.45.550','DD-MM-YY HH24:MI:SS:FF3' ),49.6,711302 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.00.50.492','DD-MM-YY HH24:MI:SS:FF3' ),49.62,711417 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.57.790','DD-MM-YY HH24:MI:SS:FF3' ),49.49,715587 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.47.712','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715166 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.57.790','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715469 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.821','DD-MM-YY HH24:MI:SS:FF3' ),49.53,714833 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.821','DD-MM-YY HH24:MI:SS:FF3' ),49.53,714914 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.24.493','DD-MM-YY HH24:MI:SS:FF3' ),49.54,714136 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.19.977','DD-MM-YY HH24:MI:SS:FF3' ),49.55,713387 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.19.977','DD-MM-YY HH24:MI:SS:FF3' ),49.55,713562 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712172 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.09.274','DD-MM-YY HH24:MI:SS:FF3' ),49.59,713287 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.09.117','DD-MM-YY HH24:MI:SS:FF3' ),49.59,713206 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712984 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.836','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712997 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712185 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.01.08.695','DD-MM-YY HH24:MI:SS:FF3' ),49.59,712261 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.32.244','DD-MM-YY HH24:MI:SS:FF3' ),49.46,725577 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724664 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,723366 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725242 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.26.181','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725477 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.947','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724521 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,723943 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.25.540','DD-MM-YY HH24:MI:SS:FF3' ),49.49,724086 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.34.103','DD-MM-YY HH24:MI:SS:FF3' ),49.49,725609 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.5,720166 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.5,720066 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,718524 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.696','DD-MM-YY HH24:MI:SS:FF3' ),49.5,722086 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,718092 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.11.774','DD-MM-YY HH24:MI:SS:FF3' ),49.5,715673 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.15.118','DD-MM-YY HH24:MI:SS:FF3' ),49.51,719666 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.02.12.555','DD-MM-YY HH24:MI:SS:FF3' ),49.52,719384 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.28.963','DD-MM-YY HH24:MI:SS:FF3' ),49.48,728830 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.11.884','DD-MM-YY HH24:MI:SS:FF3' ),49.48,726609 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.28.963','DD-MM-YY HH24:MI:SS:FF3' ),49.48,728943 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.45.947','DD-MM-YY HH24:MI:SS:FF3' ),49.49,729627 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.12.259','DD-MM-YY HH24:MI:SS:FF3' ),49.49,726830 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.46.494','DD-MM-YY HH24:MI:SS:FF3' ),49.49,733653 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.46.510','DD-MM-YY HH24:MI:SS:FF3' ),49.49,733772 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.12.259','DD-MM-YY HH24:MI:SS:FF3' ),49.49,727830 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.59.119','DD-MM-YY HH24:MI:SS:FF3' ),49.5,735772 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.03.47.369','DD-MM-YY HH24:MI:SS:FF3' ),49.5,734772 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.20.463','DD-MM-YY HH24:MI:SS:FF3' ),49.48,740621 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.12.369','DD-MM-YY HH24:MI:SS:FF3' ),49.48,740538 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.20.463','DD-MM-YY HH24:MI:SS:FF3' ),49.48,741021 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.10.588','DD-MM-YY HH24:MI:SS:FF3' ),49.49,740138 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738320 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737122 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,736424 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.260','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737598 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.744','DD-MM-YY HH24:MI:SS:FF3' ),49.49,739360 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.135','DD-MM-YY HH24:MI:SS:FF3' ),49.49,736924 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.260','DD-MM-YY HH24:MI:SS:FF3' ),49.49,737784 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738145 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.744','DD-MM-YY HH24:MI:SS:FF3' ),49.49,739134 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.09.463','DD-MM-YY HH24:MI:SS:FF3' ),49.49,738831 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.215','DD-MM-YY HH24:MI:SS:FF3' ),49.5,742421 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.580','DD-MM-YY HH24:MI:SS:FF3' ),49.5,741777 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.215','DD-MM-YY HH24:MI:SS:FF3' ),49.5,742021 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.48.433','DD-MM-YY HH24:MI:SS:FF3' ),49.5,741091 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.56.840','DD-MM-YY HH24:MI:SS:FF3' ),49.51,743021 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.04.57.511','DD-MM-YY HH24:MI:SS:FF3' ),49.52,743497 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.270','DD-MM-YY HH24:MI:SS:FF3' ),49.52,744021 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.17.699','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750292 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.433','DD-MM-YY HH24:MI:SS:FF3' ),49.53,747382 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.17.699','DD-MM-YY HH24:MI:SS:FF3' ),49.53,749939 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.15.152','DD-MM-YY HH24:MI:SS:FF3' ),49.53,749414 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.00.433','DD-MM-YY HH24:MI:SS:FF3' ),49.53,744882 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.08.110','DD-MM-YY HH24:MI:SS:FF3' ),49.54,749262 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.01.168','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748418 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.01.152','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748243 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.05.07.293','DD-MM-YY HH24:MI:SS:FF3' ),49.54,748862 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.09.433','DD-MM-YY HH24:MI:SS:FF3' ),49.51,750414 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.262','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750930 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.887','DD-MM-YY HH24:MI:SS:FF3' ),49.53,751986 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.28.887','DD-MM-YY HH24:MI:SS:FF3' ),49.53,750986 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.30.997','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753900 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.30.887','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753222 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.29.809','DD-MM-YY HH24:MI:SS:FF3' ),49.55,753022 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.29.809','DD-MM-YY HH24:MI:SS:FF3' ),49.55,752847 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.42.622','DD-MM-YY HH24:MI:SS:FF3' ),49.56,755385 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.31.120','DD-MM-YY HH24:MI:SS:FF3' ),49.56,754385 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.49.590','DD-MM-YY HH24:MI:SS:FF3' ),49.6,759087 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.50.341','DD-MM-YY HH24:MI:SS:FF3' ),49.6,759217 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.49.590','DD-MM-YY HH24:MI:SS:FF3' ),49.6,758701 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.57.262','DD-MM-YY HH24:MI:SS:FF3' ),49.6,761049 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.48.637','DD-MM-YY HH24:MI:SS:FF3' ),49.6,757827 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.48.120','DD-MM-YY HH24:MI:SS:FF3' ),49.6,757385 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.466','DD-MM-YY HH24:MI:SS:FF3' ),49.62,761001 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,760109 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,759617 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.278','DD-MM-YY HH24:MI:SS:FF3' ),49.62,760265 );
    insert into mytable1 values (to_timestamp ('04-MAR-08 09.06.56.137','DD-MM-YY HH24:MI:SS:FF3' ),49.62,759954 );
    so if I do
    SELECT DISTINCT row_number() over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) num,
    MIN(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) low ,
    MAX(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) high ,
    -- sum(volume) over( partition by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 order by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 asc ) volume,
    TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 TIME ,
    price ,
    COUNT( *) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ,price ASC,volume ASC ) TRADE,
    first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC,volume ASC ) OPEN ,
    first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 DESC,volume DESC) CLOSE ,
    lag(price) over ( order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) lag_all
    FROM mytable1
    WHERE my_time > to_timestamp('04032008:09:00:00','DDMMYYYY:HH24:MI:SS')
    AND my_time < to_timestamp('04032008:09:01:00','DDMMYYYY:HH24:MI:SS')
    GROUP BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ,
    price ,
    volume
    ORDER BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60,
    price ,
    num;
    I have:
    NUM|LOW|HIGH|TIME|PRICE|TRADE|OPEN|CLOSE|LAG_ALL
    1|49.55|49.62|04/03/2008 09:00:00|49.55|1|49.55|49.59|
    2|49.55|49.62|04/03/2008 09:00:00|49.55|2|49.55|49.59|49.55
    3|49.55|49.62|04/03/2008 09:00:00|49.55|3|49.55|49.59|49.55
    4|49.55|49.62|04/03/2008 09:00:00|49.55|4|49.55|49.59|49.55
    5|49.55|49.62|04/03/2008 09:00:00|49.55|5|49.55|49.59|49.55
    6|49.55|49.62|04/03/2008 09:00:00|49.55|6|49.55|49.59|49.55
    7|49.55|49.62|04/03/2008 09:00:00|49.56|7|49.55|49.59|49.55
    8|49.55|49.62|04/03/2008 09:00:00|49.57|8|49.55|49.59|49.56
    9|49.55|49.62|04/03/2008 09:00:00|49.58|9|49.55|49.59|49.57
    10|49.55|49.62|04/03/2008 09:00:00|49.59|10|49.55|49.59|49.58
    11|49.55|49.62|04/03/2008 09:00:00|49.59|11|49.55|49.59|49.59
    12|49.55|49.62|04/03/2008 09:00:00|49.59|12|49.55|49.59|49.59
    13|49.55|49.62|04/03/2008 09:00:00|49.59|13|49.55|49.59|49.59
    14|49.55|49.62|04/03/2008 09:00:00|49.6|14|49.55|49.59|49.59
    15|49.55|49.62|04/03/2008 09:00:00|49.6|15|49.55|49.59|49.6
    16|49.55|49.62|04/03/2008 09:00:00|49.6|16|49.55|49.59|49.6
    17|49.55|49.62|04/03/2008 09:00:00|49.62|17|49.55|49.59|49.6
    Which is erroneous, because if I don't put the volume column in the script I get another result:
    SELECT DISTINCT row_number() over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) num,
    MIN(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) low ,
    MAX(price) over (partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) high ,
    -- sum(volume) over( partition by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 order by trunc(my_time, 'hh24') + (trunc(to_char(my_time,'mi')))/24/60 asc ) volume,
    TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 TIME ,
    price ,
    COUNT( *) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ,price ASC ) TRADE,
    first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ASC ) OPEN ,
    first_value(price) over( partition BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 DESC) CLOSE ,
    lag(price) over ( order by TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60) lag_all
    FROM mytable1
    WHERE my_time > to_timestamp('04032008:09:00:00','DDMMYYYY:HH24:MI:SS')
    AND my_time < to_timestamp('04032008:09:01:00','DDMMYYYY:HH24:MI:SS')
    GROUP BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60 ,
    price
    ORDER BY TRUNC(my_time, 'hh24') + (TRUNC(TO_CHAR(my_time,'mi')))/24/60,
    price ,
    num;
    I get this
    NUM|LOW|HIGH|TIME|PRICE|TRADE|OPEN|CLOSE|LAG_ALL
    1|49.55|49.62|04/03/2008 09:00:00|49.55|1|49.55|49.55|
    2|49.55|49.62|04/03/2008 09:00:00|49.56|2|49.55|49.55|49.55
    3|49.55|49.62|04/03/2008 09:00:00|49.57|3|49.55|49.55|49.56
    4|49.55|49.62|04/03/2008 09:00:00|49.58|4|49.55|49.55|49.57
    5|49.55|49.62|04/03/2008 09:00:00|49.59|5|49.55|49.55|49.58
    6|49.55|49.62|04/03/2008 09:00:00|49.6|6|49.55|49.55|49.59
    7|49.55|49.62|04/03/2008 09:00:00|49.62|7|49.55|49.55|49.6
    How can I get the right count with all the columns of the table?
    Babata

    I'm not sure what, in your eyes, the "right count" is, but I think the DISTINCT keyword is hiding the problems that you have. It could also be the reason for the different number of rows between query one and query two. A plain GROUP BY version is sketched below for comparison.
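    For comparison, here is a rough sketch of how per-minute open/high/low/close, volume and trade count could be written with a plain GROUP BY and KEEP (DENSE_RANK ...) aggregates instead of DISTINCT over analytic functions. It is only an illustration of the idea, not a drop-in replacement for the original query; in particular, breaking ties on volume for the open and close prices is an assumption.
    SELECT TRUNC(my_time, 'MI')                                        AS minute_start,
           MIN(price)                                                  AS low,
           MAX(price)                                                  AS high,
           -- first/last price within the minute; ties broken by volume (assumption)
           MIN(price) KEEP (DENSE_RANK FIRST ORDER BY my_time, volume) AS open,
           MAX(price) KEEP (DENSE_RANK LAST  ORDER BY my_time, volume) AS close,
           SUM(volume)                                                 AS total_volume,
           COUNT(*)                                                    AS trades
    FROM   mytable1
    WHERE  my_time > TO_TIMESTAMP('04032008:09:00:00', 'DDMMYYYY:HH24:MI:SS')
    AND    my_time < TO_TIMESTAMP('04032008:09:01:00', 'DDMMYYYY:HH24:MI:SS')
    GROUP  BY TRUNC(my_time, 'MI')
    ORDER  BY minute_start;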

  • MeasureExpression property - Is there a possibility to implement distinct count with exclude empty?

    I have a measure which needs to have a distinct count aggregation that excludes empty values.
    Is it possible to specify the MeasureExpression property to achieve this? I have set the aggregation of the measure to None and typed the following expression:
    Count(Distinct(Column_name), EXCLUDEEMPTY)
    but the measure reads 0 (zero) after processing. Is there any other way to achieve this?
    The data feed includes nulls for the column in question, and this cannot be changed at the data level.
    Regards,
    Kantha Girish

    Hi Kantha,
    According to your description, you want to implement a distinct count aggregation that excludes empty values, right? In this case, we can use an MDX expression like:
    count(nonempty([DimName].[HierarchyName].[LevelName].members,[Measures].[MyMeasure]))
    Here are some blogs about how to implement a distinct count aggregation; please refer to the links below.
    http://blog.sqltechie.com/2009/09/distinctcount-analysis-service.html
    http://richardlees.blogspot.com/2008/10/alternative-to-physical-distinct-count.html
    If I have anything misunderstood, please point it out.
    Regards,
    Charlie Liao
    TechNet Community Support
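    As a side note, at the relational level the number being asked for is simply the distinct count of the column with NULLs excluded, so it can be verified directly against the source before troubleshooting the cube. The table and column names below are hypothetical:
    -- COUNT(DISTINCT ...) already ignores NULLs in SQL; the WHERE clause just makes the intent explicit.
    SELECT COUNT(DISTINCT column_name) AS distinct_non_empty
    FROM   dbo.FactTable
    WHERE  column_name IS NOT NULL;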

  • Use Variance for Distinct Count of Group Results

    Post Author: Judith
    CA Forum: Crystal Reports
    Hi there,
    I am new to CR. I am using CR 2008. I get stuck every two minutes, and it would be great if you could help me with this one:
    I have a list of people who are talking to each other: A to B, B to A, C to D, D to A, etc. Then I wanted to see who has the most friends, so I created groups to get a distinct count of how many people are talking to A, to B, to C, etc.; that worked fine. What I would like to do now is find out who of these has the most people they are talking to, or even better, what the variance of the resulting subtotals is. Simply using Variance or Maximum doesn't seem to work on the DistinctCount summary.
    I would very much appreciate any help on this.
    Judith

    Post Author: Jagan
    CA Forum: Crystal Reports
    I understand the issue; I don't understand why you think DistinctCount at two different levels should total up. Consider this sample data:
    Facility, Employee
    A, 1
    A, 2
    B, 1
    B, 3
    Facility A's distinct count => 2
    Facility B's distinct count => 2
    Report's distinct count => 3
    Use DistinctCount() at the group level and create a formula to sum these counts yourself and print that in the report footer. (The SQL sketch below illustrates why the two numbers differ.)
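    To make the distinction concrete, here is a small self-contained SQL sketch using the sample data above (the table and column names are made up); it shows that the per-facility distinct counts do not add up to the report-level distinct count:
    -- Sample data from the reply above, in a throwaway table.
    CREATE TABLE facility_visits (facility VARCHAR(10), employee INT);
    INSERT INTO facility_visits VALUES ('A', 1);
    INSERT INTO facility_visits VALUES ('A', 2);
    INSERT INTO facility_visits VALUES ('B', 1);
    INSERT INTO facility_visits VALUES ('B', 3);
    -- Per-facility distinct counts: A => 2, B => 2
    SELECT facility, COUNT(DISTINCT employee) AS distinct_employees
    FROM   facility_visits
    GROUP  BY facility;
    -- Report-level distinct count: 3 (employee 1 is counted only once)
    SELECT COUNT(DISTINCT employee) AS distinct_employees_overall
    FROM   facility_visits;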

  • Specified Order Grouping does not show if Distinct Count is Zero

    Post Author: Hieu
    CA Forum: General
    Hello,
    I'm using Crystal Reports XI R2 with a SQL data source. I have a cross-tab report with grouping in specified order. It's a report of applicants applying to a college. The grouping is of various majors (degrees). The report summarizes (distinct counts) the number of applicants for the groups of majors. The problem is that if the count for a group of majors is zero, then that Named Group does not appear at all in the cross-tab report. I want the Named Group to appear with a count of "0".
    I notice this same phenomenon with specified order grouping anywhere, not just in a cross-tab. I have tried changing "convert database/other null values to default", but nothing is working yet.
    Any help will be most appreciated. Thanks.

    Post Author: synapsevampire
    CA Forum: General
    It's not a phenomenon, it's how databases and SQL work.
    You didn't get any rows back for those with a zero distinct count (otherwise the count would be 1 or more, right?), so Crystal doesn't show any data for those groups.
    So to display a zero for those that do not exist would require either advanced SQL or manual summaries.
    One method for manual summaries is to use Running Totals. Select the distinct count of the applicants and group by the majors, then in the Evaluate section use a formula and place:
    {table.majors} = "Blah 1"
    creating a separate Running Total for every group and replacing "Blah 1" with the various majors. (A sketch of the advanced SQL option is shown below.)
    -kai
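    For completeness, here is a rough sketch of the "advanced SQL" option mentioned above: drive the report from the full list of majors (or named groups) and outer-join the applicants, so that majors with no applicants still come back with a count of zero. All table and column names are hypothetical:
    -- Hypothetical tables: majors(major_name) lists every major/named group;
    -- applicants(applicant_id, major_name) holds one row per application.
    SELECT m.major_name,
           COUNT(DISTINCT a.applicant_id) AS applicant_count  -- 0 when no applicants match
    FROM   majors m
    LEFT JOIN applicants a
           ON a.major_name = m.major_name
    GROUP  BY m.major_name
    ORDER  BY m.major_name;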
