Counting the number of non-NULL values in a list

Given a list of 5 columns, how can I find out how many are not null?
E.g.
select '&Parm1','&Parm2','&Parm3','&Parm4','&Parm5', number_of_values_not_null('&Parm1','&Parm2','&Parm3','&Parm4','&Parm5')
from Table
where Criteria
would give me:
Parm1: A
Parm2: B
Parm3:
Parm4:
Parm5:
A, B, , , , 2
Thanks

NVL2 might be slightly shorter, TABLE () might be slightly more intuitive. Probably six of one...
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
SQL> SELECT NVL2 ('A', 1, 0) +
  2         NVL2 ('B', 1, 0) +
  3         NVL2 (NULL, 1, 0) +
  4         NVL2 (NULL, 1, 0) +
  5         NVL2 (NULL, 1, 0) result
  6  FROM   DUAL;
    RESULT
         2
SQL> CREATE OR REPLACE TYPE varchar2_table AS TABLE OF VARCHAR2 (4000);
  2  /
Type created.
SQL> SELECT COUNT (*)
  2  FROM   TABLE (varchar2_table ('A', 'B', NULL, NULL, NULL))
  3  WHERE  column_value IS NOT NULL;
  COUNT(*)
         2
SQL>
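
Applied to the original five parameters, the NVL2 approach would look something like this (a sketch, assuming the parameters are substituted as literals exactly as in the original query; an omitted parameter becomes the empty string, which Oracle treats as NULL, so its NVL2 contributes 0):

SELECT '&Parm1','&Parm2','&Parm3','&Parm4','&Parm5',
       NVL2 ('&Parm1', 1, 0) +
       NVL2 ('&Parm2', 1, 0) +
       NVL2 ('&Parm3', 1, 0) +
       NVL2 ('&Parm4', 1, 0) +
       NVL2 ('&Parm5', 1, 0) AS number_of_values_not_null
FROM   Table
WHERE  Criteria;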

Similar Messages

  • Counting non null values

    I have a column of data containing values and nulls; how would I count just the values on a summary?
    Everything I have tried gives me the total number of rows, not the non-null values.....
    tia
    Rose

    no, you did not say anything wrong -- but when we included a CASE statement for another field in the SQL,
    then referenced the new field and tried to sum it in BI Publisher, it gave us a 'Na' -- don't know why......
    Rose
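
    For the question as asked, the key point is that COUNT over a column (unlike COUNT(*)) skips NULLs. A minimal sketch, using hypothetical names my_table/my_col:

    SELECT COUNT(my_col) AS non_null_count,  -- counts only the non-null values
           COUNT(*)      AS total_rows       -- counts every row
    FROM   my_table;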

  • Distinct Count of Non-null Values

    I have a table that has one column for providerID and then a providerID in each of several columns if the provider is under a particular type of contract. 
    I need a distinct count of each provider under each type of contract for every county in the US.
    The distinct count is almost always one more than the actual distinct count, because most counties have at least one provider that does not have a particular contract, and the distinct count treats the null value as a distinct value.
    I know I can alter the fields to hold a zero for nulls, ask for a minimum count, and then subtract 1 from the distinct count when the minimum is zero, but I hope there is an easier way to figure distinct counts of non-null values.
    any suggestions?
    Thanks,
    Jennifer

    Hello,
    *I need a distinct count of each provider under each type of contract for every county in the US*
    For the above requirement, I suggest the following approach.
    Use a group expert formula for county, contract and provider.
    You will then have the hierarchy, and you can choose the level at which to apply the distinct count. You can do it as suggested by ken hamady.
    Regards
    Usama
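
    For reference, in plain SQL this off-by-one doesn't arise, because COUNT(DISTINCT ...) already excludes NULLs. A minimal sketch, with hypothetical table and column names:

    SELECT county,
           COUNT(DISTINCT contract_a_provider_id) AS contract_a_providers  -- NULL is not counted as a distinct value
    FROM   providers
    GROUP BY county;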

  • LAG & LEAD functions... Any Way to Retrieve the 1st non-NULL Values?

    My question is this... Has anyone found an elegant way of getting the LAG & LEAD functions to move to the 1st NON-NULL value within the partition, rather than simply using a hard-coded offset value?
    Here's some test data...
    IF OBJECT_ID('tempdb..#temp') IS NOT NULL DROP TABLE #temp
    CREATE TABLE #temp (
    BranchID INT NOT NULL,
    RandomValue INT NULL,
    TransactionDate DATETIME,
    PRIMARY KEY (BranchID, TransactionDate)
    )
    INSERT #temp (BranchID,RandomValue,TransactionDate) VALUES
    (339,6, '20060111 00:55:55'),
    (339,NULL, '20070926 23:32:00'),
    (339,NULL, '20101222 10:51:35'),
    (339,NULL, '20101222 10:51:37'),
    (339,1, '20101222 10:52:00'),
    (339,1, '20120816 12:02:00'),
    (339,1, '20121010 10:36:00'),
    (339,NULL, '20121023 10:47:53'),
    (339,NULL, '20121023 10:48:08'),
    (339,1, '20121023 10:49:00'),
    (350,1, '20060111 00:55:55'),
    (350,NULL, '20070926 23:31:06'),
    (350,NULL, '20080401 16:34:54'),
    (350,NULL, '20080528 15:06:39'),
    (350,NULL, '20100419 11:05:49'),
    (350,NULL, '20120315 08:51:00'),
    (350,NULL, '20120720 11:48:35'),
    (350,1, '20120720 14:48:00'),
    (350,NULL, '20121207 08:10:14')
    What I'm trying to accomplish... In this instance, I'm trying to populate the NULL values with the 1st non-null preceding value. 
    The LAG function works well when there's only a single null value in a sequence but doesn't do the job if there's more than a single NULL in the sequence.
    For example ...
    SELECT
    t.BranchID,
    t.RandomValue,
    t.TransactionDate,
    COALESCE(t.RandomValue, LAG(t.RandomValue, 1) OVER (PARTITION BY t.BranchID ORDER BY t.TransactionDate)) AS LagValue
    FROM
    #temp t
    Please note that I am aware of several methods of accomplishing this particular task, including self joins, CTEs and smearing with variables.
    So, I'm not looking for an alternative way of accomplishing the task... I'm wanting to know if it's possible to do this with the LAG function.
    Thanks in advance,
    Jason
    Jason Long

    I just wanted to provide a little follow-up now that I've had a little time to digest Itzik’s article and test the code posted by Jingyang.
    Turns out the code posted by Jingyang didn’t actually produce the desired results, but it did get me pointed in the right direction (partially my fault for crappy test data that didn’t lend itself to easy verification). That said, I did want to post the version of the code that does produce the correct results.
    IF OBJECT_ID('tempdb..#temp') IS NOT NULL DROP TABLE #temp
    CREATE TABLE #temp (
    BranchID INT NOT NULL,
    RandomValue INT NULL,
    TransactionDate DATETIME,
    PRIMARY KEY (BranchID, TransactionDate)
    )
    INSERT #temp (BranchID,RandomValue,TransactionDate) VALUES
    (339,6, '20060111 00:55:55'), (339,NULL, '20070926 23:32:00'), (339,NULL, '20101222 10:51:35'), (339,5, '20101222 10:51:37'),
    (339,2, '20101222 10:52:00'), (339,2, '20120816 12:02:00'), (339,2, '20121010 10:36:00'), (339,NULL, '20121023 10:47:53'),
    (339,NULL, '20121023 10:48:08'), (339,1, '20121023 10:49:00'), (350,3, '20060111 00:55:55'), (350,NULL, '20070926 23:31:06'),
    (350,NULL, '20080401 16:34:54'), (350,NULL, '20080528 15:06:39'), (350,NULL, '20100419 11:05:49'), (350,NULL, '20120315 08:51:00'),
    (350,NULL, '20120720 11:48:35'), (350,4, '20120720 14:48:00'), (350,2, '20121207 08:10:14')
    SELECT
        t.BranchID,
        t.RandomValue,
        t.TransactionDate,
        COALESCE(
            t.RandomValue,
            CAST(
                SUBSTRING(
                    MAX(CAST(t.TransactionDate AS BINARY(4)) + CAST(t.RandomValue AS BINARY(4)))
                        OVER (PARTITION BY t.BranchID ORDER BY t.TransactionDate ROWS UNBOUNDED PRECEDING),
                    5, 4)
                AS INT)
        ) AS RandomValueNew
    FROM
        #temp AS t
    In reality, this isn’t exactly a true answer to the original question regarding the LAG & LEAD functions, being that it uses the MAX function instead, but who cares? It still uses a windowed function to solve the problem with a single pass at the data.
    I also did a little additional testing to see if casting to BINARY(4) worked across the board with a variety of data types, or if the number needed to be adjusted based on the data… Here’s one of my test scripts…
    IF OBJECT_ID('tempdb..#temp') IS NOT NULL DROP TABLE #temp
    CREATE TABLE #Temp (
    ID INT,
    Num BIGINT,
    String VARCHAR(25),
    [Date] DATETIME,
    Series INT
    )
    INSERT #temp (ID,Num,String,Date,Series) VALUES
    (1, 2, 'X', '19000101', 1), ( 2, 3, 'XX', '19000108', 1),
    (3, 4, 'XXX', '19000115', 1), ( 4, 6, 'XXXX', '19000122', 1),
    (5, 9, 'XXXXX', '19000129', 1), ( 6, 13, 'XXXXXX', '19000205', 2),
    (7, NULL, 'XXXXXXX', '19000212', 2),
    (8, NULL, 'XXXXXXXX', '19000219', 2),
    (9, NULL, 'XXXXXXXXX', '19000226', 2),
    (10, NULL, 'XXXXXXXXXX', '19000305', 2),
    (11, NULL, NULL, '19000312', 3), ( 12, 141, NULL, '19000319', 3),
    (13, 211, NULL, '19000326', 3), ( 14, 316, NULL, '19000402', 3),
    (15, 474, 'XXXXXXXXXXXXXXX', '19000409', 3),
    (16, 711, 'XXXXXXXXXXXXXXXX', '19000416', 4),
    (17, NULL, NULL, '19000423', 4), ( 18, NULL, NULL, '19000430', 4),
    (19, NULL, 'XXXXXXXXXXXXXXXXXXXX', '19000507', 4), ( 20, NULL, NULL, '19000514', 4),
    (21, 5395, NULL, '19000521', 5),
    (22, NULL, NULL, '19000528', 5),
    (23, 12138, 'XXXXXXXXXXXXXXXXXXXXXXX', '19000604', 5),
    (24, 2147483647, 'XXXXXXXXXXXXXXXXXXXXXXXX', '19000611', 5),
    (25, NULL, 'XXXXXXXXXXXXXXXXXXXXXXXXX', '19000618', 5),
    (26, 27310, 'XXXXXXXXXXXXXXXXXXXXXXXXX', '19000618', 6),
    (27, 9223372036854775807, 'XXXXXXXXXXXXXXXXXXXXXXXXX', '19000618', 6),
    (28, NULL, NULL, '19000618', 6),
    (29, NULL, 'XXXXXXXXXXXXXXXXXXXXXXXXX', '19000618', 6),
    (30, 27310, NULL, '19000618', 6)
    SELECT
    ID,
    Num,
    String,
    [Date],
    Series,
    CAST(SUBSTRING(MAX(CAST(t.[Date] AS BINARY(4)) + CAST(t.Num AS BINARY(4))) OVER (ORDER BY t.[Date] ROWS UNBOUNDED PRECEDING), 5,4) AS BIGINT) AS NumFill,
    CAST(SUBSTRING(MAX(CAST(t.[Date] AS BINARY(4)) + CAST(t.Num AS BINARY(4))) OVER (PARTITION BY t.Series ORDER BY t.[Date] ROWS UNBOUNDED PRECEDING), 5,4) AS BIGINT) AS NumFillWithPartition,
    CAST(SUBSTRING(MAX(CAST(t.[Date] AS BINARY(4)) + CAST(t.Num AS BINARY(8))) OVER (ORDER BY t.[Date] ROWS UNBOUNDED PRECEDING), 5,8) AS BIGINT) AS BigNumFill,
    CAST(SUBSTRING(MAX(CAST(t.[Date] AS BINARY(4)) + CAST(t.Num AS BINARY(8))) OVER (PARTITION BY t.Series ORDER BY t.[Date] ROWS UNBOUNDED PRECEDING), 5,8) AS BIGINT) AS BIGNumFillWithPartition,
    CAST(SUBSTRING(MAX(CAST(t.ID AS BINARY(4)) + CAST(t.String AS BINARY(255))) OVER (ORDER BY t.ID ROWS UNBOUNDED PRECEDING), 5,255) AS VARCHAR(25)) AS StringFill,
    CAST(SUBSTRING(MAX(CAST(t.ID AS BINARY(4)) + CAST(t.String AS BINARY(25))) OVER (PARTITION BY t.Series ORDER BY t.ID ROWS UNBOUNDED PRECEDING), 5,25) AS VARCHAR(25)) AS StringFillWithPartition
    FROM #Temp AS t
    Looks like BINARY(4) is just fine for any INT or DATE/DATETIME values. Bumping it up to 8 was needed to capture the largest BIGINT value. For text strings, the number simply needs to be set to the column size; I tested up to 255 characters without a problem.
    It’s not included here, but I did notice that the NUMERIC data type doesn’t work at all. From what I can tell, SQL Server doesn't like casting the binary value back to NUMERIC (I didn't test DECIMAL).
    Thanks again,
    Jason
    Jason Long
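
    Worth noting for readers on a newer engine: SQL Server 2022 added IGNORE NULLS to its window functions, so LAST_VALUE can express this fill directly. A sketch against the same #temp table, assuming SQL Server 2022 or later:

    SELECT t.BranchID,
           t.RandomValue,
           t.TransactionDate,
           -- last non-null value so far within the branch (the current row's own value when it is non-null)
           LAST_VALUE(t.RandomValue) IGNORE NULLS OVER (
               PARTITION BY t.BranchID
               ORDER BY t.TransactionDate
               ROWS UNBOUNDED PRECEDING) AS RandomValueNew
    FROM #temp AS t;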

  • Retrieve non-Null values

    Hi everybody
    How do I retrieve the values that are not null from a table?
    I mean, I want to display the non-null values from a column or a set of columns (if possible).
    Thanks in advance

    But I don't have any criteria ... But you do have criteria - namely, to retrieve non-null values!
    Maybe you need to be clearer (with sample input and output) about what you'd actually like to achieve.
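
    For a single column the minimal form is just a WHERE clause; a sketch with a hypothetical table t and column col:

    SELECT col
    FROM   t
    WHERE  col IS NOT NULL;

    For a set of columns you first have to decide what "non-null" should mean across them (any column? all columns?), which is exactly why sample input and output are being asked for.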

  • Eliminating the null values from popup list item

    Dear All,
    i created a popup list; at runtime it shows a blank (null) value among the values i specified, while a combo box does not.
    i want to eliminate the blank null value from the popup list.
    Need Help.
    Thanks & Regards.

    Okay,
    i create a popup list and populate it at runtime with the create_group_from_query built-in.
    now when i run the form and click the list item, it displays a null value among the other values
    returned from create_group_from_query.
    the procedure is below
    procedure Department_proc is
    rg recordgroup;
    n number;
    begin
    remove_record_group('RG4'); ------ this is another procedure, which checks for record group existence and removes it.
    rg := create_group_from_query('RG4','SELECT NAME,TO_CHAR(DEPT_ID) FROM TAB_DEPT_SECTION
    UNION
    SELECT '||'''All Departments'''||' as name,'||'''0'''||' as dept_id from dual');
    n := populate_group(rg);
    populate_list('control_block.department',rg);
    end;
    it displays a null value among the other values the first time i run the form, but when i click it and select a value,
    subsequent clicks don't show the null value. i want to eliminate the null value even the first time the user runs the
    form.
    Best Regards

  • Need to count number of non-empty (not null) occurrences for an element in XML

    Hello,
        I have some XML data stored in a CLOB, there is a fragment shown below:
    <ADDR>
    <AutoNameOfOrganisation>2nd Image</AutoNameOfOrganisation>
    <AutoLine1>105 Risbygate Street</AutoLine1>
    <AutoLine2> </AutoLine2>
    <AutoLine3> </AutoLine3>
    <AutoU40.16>BURY ST. EDMUNDS</AutoU40.16>
    <AutoStateOrCounty>Suffolk</AutoStateOrCounty>
    <AutoU40.13>IP33 3AA</AutoU40.13>
    <AutoU40.14>UNITED KINGDOM</AutoU40.14>
    </ADDR>
    <ADDR>
    <AutoNameOfOrganisation></AutoNameOfOrganisation>
    <AutoLine1>2 2nd Avenue</AutoLine1>
    <AutoLine2> </AutoLine2>
    <AutoLine3> </AutoLine3>
    <AutoU40.16>HULL</AutoU40.16>
    <AutoStateOrCounty>East Riding of Yorkshire</AutoStateOrCounty>
    <AutoU40.13>HU6 9NT</AutoU40.13>
    <AutoU40.14>UNITED KINGDOM</AutoU40.14>
    </ADDR>
    What I would like to do is look at the whole of the XML and derive a count, for a particular element, of the occurrences where there is data.
       In the above case <AutoNameOfOrganisation> is not null in the first record and null in the second whilst <AutoLine3> is null in both records, so I'd like the counts to be 1 for <AutoNameOfOrganisation> and 0 for <AutoLine3>.
       The reason for this is so that I can avoid showing a column in a report if there is no data for that particular column in the results.
       I'm using Oracle 11G.
    Thanks for any help,
    Chris

    Here's one way :
    select count(AutoNameOfOrganisation)
         , count(AutoLine3)
    from my_table t
       , xmltable(
           '/root/ADDR'
           passing xmlparse(document my_clob)
           columns AutoNameOfOrganisation varchar2(30) path 'normalize-space(AutoNameOfOrganisation)'
                 , AutoLine3              varchar2(30) path 'normalize-space(AutoLine3)'
         ) x ;
    In your sample fragment there's no root element, so I've added one to test the query (though it's also possible to work with real XML fragments).
    You may also omit the normalize-space() calls if the empty nodes are really empty. In your sample there's whitespace, so I've added the function to strip it off.

  • SQL Challenge - Returning count=0 for non-existing values

    Hello there,
    I have a question about our requirement and an SQL query. I have posted this to some email groups but got no answer yet.
    Here is the test case:
    SQL> conn ...
    Connected.
    -- create the pattern table and populate
    SQL> create table pattern(id number, keydescription varchar2(50));
    Table created.
    SQL> insert into pattern values(1,'hata1');
    1 row created.
    SQL> insert into pattern values(2,'hata2');
    1 row created.
    SQL> insert into pattern values(3,'hata3');
    1 row created.
    SQL> insert into pattern values(4,'hata4');
    1 row created.
    SQL> insert into pattern values(5,'hata5');
    1 row created.
    SQL> select * from pattern;
    ID KEYDESCRIPTION
    1 hata1
    2 hata2
    3 hata3
    4 hata4
    5 hata5
    SQL> commit;
    Commit complete.
    -- create the messagetrack and populate
    SQL> create table messagetrack(pattern_id number, realdate date);
    Table created.
    SQL> insert into messagetrack values(1,to_date('26/08/2007 13:00:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(1,to_date('26/08/2007 13:05:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(2,to_date('26/08/2007 13:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(3,to_date('26/08/2007 14:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(4,to_date('26/08/2007 15:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> insert into messagetrack values(1,to_date('26/08/2007 15:15:00','dd/mm/yyyy hh24:MI:ss'));
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from messagetrack;
    PATTERN_ID REALDATE
    1 26-AUG-07
    1 26-AUG-07
    2 26-AUG-07
    3 26-AUG-07
    4 26-AUG-07
    1 26-AUG-07
    6 rows selected.
    Now, we have this simple query:
    SQL> select p.KeyDescription as rptBase , to_char( mt.realdate,'dd') as P1 , to_char(mt.realdate,'HH24') as P2, count(*) as countX
    2 from messageTrack mt, Pattern p
    3 Where mt.realDate >= to_date('26/08/2007 13:00:00','dd/MM/yyyy hh24:MI:ss')
    4 and mt.realDate <= to_date('27/08/2007 20:00:00','dd/MM/yyyy hh24:MI:ss')
    5 and mt.pattern_id=p.id
    6 group by p.KeyDescription, to_char(mt.realdate,'dd'), to_char( mt.realdate,'HH24')
    7 order by p.KeyDescription, to_char(mt.realdate,'dd'), to_char(mt.realdate,'HH24');
    RPTBASE P1 P2 COUNTX
    hata1 26 13 2
    hata1 26 15 1
    hata2 26 13 1
    hata3 26 14 1
    hata4 26 15 1
    But the result we need should contain the pattern values (hata1, hata2, hata3 and hata4) for each time interval (hour), although there might be no records of some patterns for some hours.
    The result for our test case should look like this:
    RPTBASE P1 P2 COUNTX
    hata1 26 13 2
    hata1 26 14 0
    hata1 26 15 0
    hata2 26 13 1
    hata2 26 14 0
    hata2 26 15 0
    hata3 26 13 0
    hata3 26 14 1
    hata3 26 15 0
    hata4 26 13 0
    hata4 26 14 0
    hata4 26 15 1
    Our version is 10.2.0.2
    In my discussions some said the MODEL clause may be used, but I don't know the MODEL clause well and can't imagine how to use it.
    You can download the test case code above to reproduce from:
    http://www.bhatipoglu.com/files/query1.txt
    You can see the output above more clearly(monospace font) on:
    http://www.bhatipoglu.com/files/query1_output.txt
    Additionally, I want to state that, in the resulting table, we don't want all the patterns (hata1, hata2, hata3, hata4 and hata5). We just want the ones that exist in the messageTrack table (hata1, hata2, hata3 and hata4), as you see in the result.
    Thanks in advance.

    Here is an attempt with the Model Clause:
    Edit: I should mention that I created a view out of your original query.
    SELECT rptbase
          ,day
          ,hour
          ,countx
    FROM demoV
      MODEL
        DIMENSION BY (rptbase, day, hour)
        MEASURES (countx)
        RULES(countx[
                      FOR rptbase IN (SELECT rptbase
                                      FROM demoV)
                      ,FOR day IN    (SELECT day
                                      FROM demoV)
                      ,FOR hour FROM 13 TO 15 INCREMENT 1
                    ] =
                      NVL(countx[CV(rptbase),CV(day),CV(hour)],0)
             )
    ORDER BY 1,2,3;
    Which produces the following:
    RPTBASE   DAY   HOUR   COUNTX
    hata1     26    13     2
    hata1     26    14     0
    hata1     26    15     1
    hata2     26    13     1
    hata2     26    14     0
    hata2     26    15     0
    hata3     26    13     0
    hata3     26    14     1
    hata3     26    15     0
    hata4     26    13     0
    hata4     26    14     0
    hata4     26    15     1
    Note my hata1 26 15 has a countx of 1 (I believe that this is correct and that your sample result is incorrect; if this is not the case, please explain why it should be 0).
    Message was edited by:
    JS1
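
    For comparison, the same result can be produced without the MODEL clause by building the full pattern/hour grid and outer-joining the detail rows, so that missing combinations count as 0 (COUNT of a column ignores the NULLs produced by the outer join). A sketch against the test tables above, with the day and the hour range 13-15 hard-coded for brevity (a real query would also filter mt.realdate to the requested window):

    WITH hrs AS (
      SELECT 12 + LEVEL AS hr FROM dual CONNECT BY LEVEL <= 3
    ),
    pats AS (  -- only the patterns that actually occur in messagetrack, per the requirement
      SELECT DISTINCT p.id, p.keydescription
      FROM pattern p JOIN messagetrack mt ON mt.pattern_id = p.id
    )
    SELECT pats.keydescription AS rptbase,
           '26' AS p1,
           hrs.hr AS p2,
           COUNT(mt.pattern_id) AS countx
    FROM pats CROSS JOIN hrs
    LEFT JOIN messagetrack mt
           ON mt.pattern_id = pats.id
          AND TO_NUMBER(TO_CHAR(mt.realdate,'HH24')) = hrs.hr
    GROUP BY pats.keydescription, hrs.hr
    ORDER BY 1, 3;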

  • NULL value in static list of values???

    I'm creating a static list of values and I want to check against these values in a table with a sql query. My question is, can I have the value for one of the items be NULL? Like, is the following static list of values ok:
    STATIC2:No Status;,Hire;HIRE,Reject;REJECT,Maybe Later;LATER
    I want it so that if someone selects "No Status" in my select list, that I check my column in the table for all records that have NULL for this value. Is that possible? Or will I have to create 2 separate queries (1 for NULL, and 1 for all the other values)?

    Actually, I had tried it, but wasn't getting the expected result. I couldn't figure out whether this issue was causing my error or something else, so I posted the question asking if it was possible, to try and track down where my error was coming from.
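
    For what it's worth, the usual trap here is that equality never matches NULL (col = NULL is never true), so a LOV entry whose return value is null can't be matched with a simple equality predicate. A common workaround, sketched with hypothetical item and column names, is to give "No Status" a real return value and normalize the column with NVL:

    STATIC2:No Status;NOSTATUS,Hire;HIRE,Reject;REJECT,Maybe Later;LATER

    SELECT *
    FROM   applicants
    WHERE  NVL(status, 'NOSTATUS') = :P1_STATUS;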

  • Clarification needed on the behaviour of count with null values

    Hi friends,
    I am confused about the result given by the COUNT aggregate function when dealing with NULL. Please can anybody clarify that for me? Here is what I tried:
    CREATE TABLE Demo ( a int);
    INSERT INTO Demo VALUES
    (1),
    (2),
    (NULL),
    (NULL);
    SELECT COUNT(COALESCE(a,0)) FROM Demo WHERE a IS NULL; -- Returns 2
    SELECT COUNT(ISNULL(a,0)) FROM Demo WHERE a IS NULL; -- Returns 2
    SELECT COUNT(*) FROM Demo WHERE a IS NULL; -- Returns 2
    SELECT COUNT(a) FROM Demo WHERE a IS NULL; -- Returns 0
    Please can anybody explain to me why the result is the same for the first three SELECT statements but not for the last one? And what does COUNT(*) actually mean?
    With Regards
    Abhilash D K

    There is a difference in the logic when using a column name versus "*" - which is explained in the documentation (and reading it is the first thing you should do when you do not understand a particular query or syntax). When you supply a column
    (or expression) to the COUNT function, only the non-null values are counted. ISNULL and COALESCE will obviously replace NULL values, therefore the result of the expression will be counted.
    1 and 2 are effectively the same - you replace a null value in your column with 0, and each query only selects rows with a null column value. The count of non-null values of your expression is therefore 2 - the number of rows in your resultset.
    3 is the number of rows in the resultset, since you supplied "*" instead of a column. The presence of nulls is irrelevant, as documented.
    4 is the number of non-null values in the resultset, since you DID supply a column. Your resultset had no non-null values, therefore the count is zero.

  • Selectivity for a non-popular value in height-based histograms.

    Hi,
    I wanted to check how the optimizer calculates the cardinality/selectivity for a value which is not popular when the histogram is height-based.
    Following is a small test case (version 11.2.0.1, platform HP-UX):
    create table t1 (
           skew    not null,
           padding
    )
    as
    with generator as (
    select --+ materialize
           rownum id
    from all_objects
    where rownum <= 5000
    )
    select /*+ ordered use_nl(v2) */
         v1.id,
         rpad('x',400)
    from
        generator  v1,
        generator v2
    where
       v1.id <= 80
    and
       v2.id <= 80
    and
       v2.id <= v1.id
    ;
    Following are the table stats:
    SQL> select count(*) from t1;
      COUNT(*)
          3240
    SQL> exec dbms_stats.gather_table_stats('SYS','T1',cascade=>TRUE, estimate_percent => null, method_opt => 'for all columns size 75');
    PL/SQL procedure successfully completed.
    SQL> select column_name,num_distinct,density,num_buckets from dba_tab_columns where table_name='T1';
    COLUMN_NAME                    NUM_DISTINCT    DENSITY NUM_BUCKETS
    SKEW                                     80 .013973812          75
    PADDING                                   1 .000154321           1
    SQL> select endpoint_number, endpoint_value from dba_tab_histograms where column_name='SKEW' and table_name='T1' order by endpoint_number;
    ENDPOINT_NUMBER ENDPOINT_VALUE
                  0              1
                  1              9
                  2             13
                  3             16
                  4             19
                  5             21
                  6             23
                  7             25
                  8             26
                  9             28
                 10             29
                 11             31
                 12             32
                 13             33
                 14             35
                 15             36
                 16             37
                 17             38
                 18             39
                 19             40
                 20             41
                 21             42
                 22             43
                 23             44
                 24             45
                 25             46
                 26             47
                 27             48
                 28             49
                 29             50
                 30             51
                 32             52
                 33             53
                 34             54
                 35             55
                 37             56
                 38             57
                 39             58
                 41             59
                 42             60
                 43             61
                 45             62
                 46             63
                 48             64
                 49             65
                 51             66
                 52             67
                 54             68
                 56             69
                 57             70
                 59             71
                 60             72
                 62             73
                 64             74
                 66             75
                 67             76
                 69             77
                 71             78
                 73             79
                 75             80
    60 rows selected.
    Checking the selectivity for value 75 (which is a popular value, per the information from dba_tab_histograms):
    SQL> set autotrace on
    SQL> select count(*) from t1 where skew=75;
      COUNT(*)
            75
    Execution Plan
    Plan hash value: 4273422929
    | Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |       |     1 |     3 |     1   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE   |       |     1 |     3 |            |          |
    |*  2 |   INDEX RANGE SCAN| T1_I1 |    86 |   258 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("SKEW"=75)Skipped the Statistics information for keep example short.
    selectivity for 75 (popular value) = 2/75 = 0.02666
    Cardinality for 75 is = selectivity * num_rows = 0.02666*3240 = 86.3784 (rounded to 86) >> Here selectivity and cardinality are correct and displayed in autotrace.
    SQL> select count(*) from t1 where skew=8;
      COUNT(*)
             8
    Execution Plan
    Plan hash value: 4273422929
    | Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |       |     1 |     3 |     1   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE   |       |     1 |     3 |            |          |
    |*  2 |   INDEX RANGE SCAN| T1_I1 |    29 |    87 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("SKEW"=8)how the cardinality is 29 calculated. I think the formula for selectivity is
    select for 1(non popular value) = density * num_rows = .013973812 * num_rows (which is 45 approx) but in autotrace its 29
    SQL> select count(*) from t1 where skew = 46;
      COUNT(*)
            46
    Execution Plan
    Plan hash value: 4273422929
    | Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |       |     1 |     3 |     1   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE   |       |     1 |     3 |            |          |
    |*  2 |   INDEX RANGE SCAN| T1_I1 |    29 |    87 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("SKEW"=46)46 is also non popular value
    So how the value is calculated for these values?

    Your example seems to be based on Jonathan Lewis's article:
    http://jonathanlewis.wordpress.com/2012/01/03/newdensity/
    In this article, he does walk through the calculation of selectivity for non-popular values.
    The calculation is not density but NewDensity, as seen in a 10053 trace, which takes into account the number of non-popular values AND the number of non-popular buckets.
    The article describes exactly how 29 is derived.

    Hi Dom,
    Yes, I used the same sample script to create the data set. I should have checked Jonathan's blog for the NewDensity calculation. So the selectivity works out either of two ways:
    1) cardinality (non-popular) = NewDensity (taken from the 10053 trace) * num_rows
    or
    2) non-popular rows / non-popular values (where the number of non-popular values can be derived from the 10053 trace, and the non-popular rows are 3240 * (74-31)/74 = 1883)
    Thanks for pointing me to the right blog.

  • Problem in summation on a column with possible null values

    Hi,
    I want to do summation on a column.
    If I use <?sum(amount)?> and there is any null value, it gives NaN as output.
    From the forum I got the below syntax
    <?sum(AMOUNT[number(.)!='NaN'])?>
    but it is also not giving me the expected result; it always displays 0.
    I want something like sum(NVL(amount,0)). Could somebody please help me out?
    Thanks in Advance,
    Thiru

    If the column has many, many null values, and you want to use the index to identify the rows with non-null values, this is a good thing: a B*Tree index will not index the nulls at all, so even though your table may be very large, with many millions of rows, this index will be small and efficient, because it will only contain index entries for those rows where the column is not null.
    Hope that helps,
    -Mark
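
    A quick sketch of the point (hypothetical table; in Oracle, a single-column B*Tree index contains no entry for rows where the key is null):

    CREATE TABLE t (id NUMBER, sparse_col NUMBER);
    -- assume most rows have sparse_col NULL
    CREATE INDEX t_sparse_i ON t (sparse_col);
    -- the index holds entries only for the non-null rows, so the optimizer
    -- can answer this from the small index rather than the large table:
    SELECT COUNT(*) FROM t WHERE sparse_col IS NOT NULL;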

  • DataSet/DataGrid/Null values

    Strange thing - when I bind a dataset to a datagrid and the
    dataset column has null values, I cannot edit it. If I try to edit
    the value, it just returns to the previous value when I press the
    Enter key.
    If all the rows start out with a non-null value, I can edit
    them without any problem. Any thoughts?
    TIA

    Hello
    While that is true for a unique index on columns where all values are null, it is not the case where one of the values is not null:
    SQL> CREATE TABLE dt_test_nulls (id number, col1 varchar2(1))
      2  /
    Table created.
    SQL> CREATE UNIQUE INDEX dt_test_nulls_i1 on dt_test_nulls(id)
      2  /
    Index created.
    SQL> insert into dt_test_nulls values(null,'Y')
      2  /
    1 row created.
    SQL> insert into dt_test_nulls values(null,'N')
      2  /
    1 row created.
    SQL> create unique index dt_test_nulls_i2 on dt_test_nulls(id,col1)
      2  /
    Index created.
    SQL> insert into dt_test_nulls values(null,'N')
      2  /
    insert into dt_test_nulls values(null,'N')
    ERROR at line 1:
    ORA-00001: unique constraint (BULK1.DT_TEST_NULLS_I2) violated
    SQL> insert into dt_test_nulls values(null,null)
      2  /
    1 row created.
    SQL> insert into dt_test_nulls values(null,null)
      2  /
    1 row created.
    I just thought it was worth pointing out.
    HTH
    David
    Message was edited by:
    david_tyler

  • How to avoid the null values from xml publisher.

    I am creating a report which has claim numbers with the values CLA001, CLA111, null, null. When I preview my report it shows spaces for the null values as well. How can I avoid the spaces in the report?
    I am using a for-each loop over the claim numbers in the template.
    <?for-each:ROW?> <?sort:CLAIMNUMBER;'ascending';data-type='text'?>
    <?CLAIMNUMBER?>
    <?end for-each?>
    Please help me out to solve this problem.
    Thanks,
    vasanth.

    Hi Sheshu,
    According to your description, you are experiencing null values and infinity values when browsing the calculated measure, right?
    Based on my research, the issue is caused by dividing a non-zero or non-null value by zero or null. In such cases, we need to check for division by zero to avoid this situation. Here is a sample query for your reference.
    IIF(
    Measures.[Measure B]=0, null,
    Measures.[Measure A] / Measures.[Measure B]
    )
    If you have any questions, please feel free to ask.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Exclude NULL values from SUM and AVG calculation

    Hi,
    I have a column in a report that contains some NULL values. When I perform a SUM, MAX, MIN or AVG calculation on this column, the NULL values are treated as '0' and included in the calculation. Is there any way to exclude them while calculating aggregate functions?
    As a result, a MIN calculation on the values (NULL, 0.7, 0.5, 0.9) gives me 0 as output when it should have been 0.5.
    Can someone please help ?
    Thanks and Regards,
    Oliver D'mello

    Hi Oliver,
    According to your description, you want to ignore the NULL values when you perform aggregation functions.
    In this scenario, aggregate functions always ignore NULL values, because they operate only on non-null values. So I would like to know if you have assigned "0" to NULL values. I would appreciate it if you could provide some screenshots of your expressions or reports.
    Besides, we have tested in our environment using the Min() function: the expression returns the minimum value among the non-null numeric values.
    Reference:
    Min Function (Report Builder and SSRS)
    Aggregate Functions Reference (Report Builder and SSRS)
    If you have any question, please feel free to ask.
    Best regards,
    Qiuyun Yu
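
    The distinction is easy to demonstrate in plain T-SQL as well: aggregates over a column skip NULLs, whereas replacing NULL with 0 first changes the result. A sketch using the values from the question:

    SELECT MIN(val)              AS min_ignoring_nulls,  -- 0.5
           MIN(COALESCE(val, 0)) AS min_with_zero        -- 0
    FROM (VALUES (NULL), (0.7), (0.5), (0.9)) AS v(val);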
