COUNT OVER (PARTITION BY) equivalent in MDX

Good afternoon,
I'm trying to replicate SQL's COUNT ... OVER (PARTITION BY ...) in MDX. I'm trying to get average values: the average work done per employee and the average revenue he/she generated. My query gives me the exact results in SQL Server. I have SUM(WorkDone) and SUM(Revenue); I want the count of days in a month based on the calendar I have as a lookup. For the month of February 2014 I'm getting 23 days, but when I do the count in SSAS I'm getting 400.
Please see the sample data below for Secempid = 1368; at the end of the sample data, please see the query I'm running to get the average values. The data is collected for 23 days in this case, for 16 hours each day.
loc ampm workdone Secempid Date Rev
1 10AM 0 1368 2/1/14 12:00 AM NULL
1 11AM 0 1368 2/1/14 12:00 AM NULL
1 12PM 0 1368 2/1/14 12:00 AM NULL
1 1PM 0 1368 2/1/14 12:00 AM NULL
1 2PM 0 1368 2/1/14 12:00 AM NULL
1 3PM 0 1368 2/1/14 12:00 AM NULL
1 4PM 0 1368 2/1/14 12:00 AM NULL
1 5AM 0 1368 2/1/14 12:00 AM NULL
1 5PM 0 1368 2/1/14 12:00 AM NULL
1 6AM 0 1368 2/1/14 12:00 AM NULL
1 6PM 0 1368 2/1/14 12:00 AM NULL
1 7AM 0 1368 2/1/14 12:00 AM NULL
1 7PM 0 1368 2/1/14 12:00 AM NULL
1 8AM 0 1368 2/1/14 12:00 AM NULL
1 8PM 0 1368 2/1/14 12:00 AM NULL
1 9AM 0 1368 2/1/14 12:00 AM NULL
1 10AM 0 1368 2/3/14 12:00 AM NULL
1 11AM 0 1368 2/3/14 12:00 AM NULL
1 12PM 0 1368 2/3/14 12:00 AM NULL
1 1PM 0 1368 2/3/14 12:00 AM NULL
1 2PM 0 1368 2/3/14 12:00 AM NULL
1 3PM 0 1368 2/3/14 12:00 AM NULL
1 4PM 0 1368 2/3/14 12:00 AM NULL
1 5AM 0 1368 2/3/14 12:00 AM NULL
1 5PM 0 1368 2/3/14 12:00 AM NULL
1 6AM 0 1368 2/3/14 12:00 AM NULL
1 6PM 0 1368 2/3/14 12:00 AM NULL
1 7AM 0 1368 2/3/14 12:00 AM NULL
1 7PM 0 1368 2/3/14 12:00 AM NULL
1 8AM 0 1368 2/3/14 12:00 AM NULL
1 8PM 0 1368 2/3/14 12:00 AM NULL
1 9AM 0 1368 2/3/14 12:00 AM NULL
1 10AM 0 1368 2/4/14 12:00 AM NULL
1 11AM 0 1368 2/4/14 12:00 AM NULL
1 12PM 0 1368 2/4/14 12:00 AM NULL
1 1PM 0 1368 2/4/14 12:00 AM NULL
1 2PM 0 1368 2/4/14 12:00 AM NULL
1 3PM 0 1368 2/4/14 12:00 AM NULL
1 4PM 0 1368 2/4/14 12:00 AM NULL
1 5AM 0 1368 2/4/14 12:00 AM NULL
1 5PM 0 1368 2/4/14 12:00 AM NULL
1 6AM 0 1368 2/4/14 12:00 AM NULL
1 6PM 0 1368 2/4/14 12:00 AM NULL
1 7AM 0 1368 2/4/14 12:00 AM NULL
1 7PM 0 1368 2/4/14 12:00 AM NULL
1 8AM 0 1368 2/4/14 12:00 AM NULL
1 8PM 0 1368 2/4/14 12:00 AM NULL
1 9AM 0 1368 2/4/14 12:00 AM NULL
1 10AM 0 1368 2/5/14 12:00 AM NULL
1 11AM 0 1368 2/5/14 12:00 AM NULL
1 12PM 0 1368 2/5/14 12:00 AM NULL
1 1PM 0 1368 2/5/14 12:00 AM NULL
1 2PM 0 1368 2/5/14 12:00 AM NULL
1 3PM 0 1368 2/5/14 12:00 AM NULL
1 4PM 0 1368 2/5/14 12:00 AM NULL
1 5AM 0 1368 2/5/14 12:00 AM NULL
1 5PM 0 1368 2/5/14 12:00 AM NULL
1 6AM 0 1368 2/5/14 12:00 AM NULL
1 6PM 0 1368 2/5/14 12:00 AM NULL
1 7AM 0 1368 2/5/14 12:00 AM NULL
1 7PM 0 1368 2/5/14 12:00 AM NULL
1 8AM 0 1368 2/5/14 12:00 AM NULL
1 8PM 0 1368 2/5/14 12:00 AM NULL
1 9AM 0 1368 2/5/14 12:00 AM NULL
1 10AM 0 1368 2/6/14 12:00 AM NULL
1 11AM 0 1368 2/6/14 12:00 AM NULL
1 12PM 0 1368 2/6/14 12:00 AM NULL
1 1PM 0 1368 2/6/14 12:00 AM NULL
1 2PM 0 1368 2/6/14 12:00 AM NULL
1 3PM 0 1368 2/6/14 12:00 AM NULL
1 4PM 0 1368 2/6/14 12:00 AM NULL
1 5AM 0 1368 2/6/14 12:00 AM NULL
1 5PM 0 1368 2/6/14 12:00 AM NULL
1 6AM 0 1368 2/6/14 12:00 AM NULL
1 6PM 0 1368 2/6/14 12:00 AM NULL
1 7AM 0 1368 2/6/14 12:00 AM NULL
1 7PM 0 1368 2/6/14 12:00 AM NULL
1 8AM 0 1368 2/6/14 12:00 AM NULL
1 8PM 0 1368 2/6/14 12:00 AM NULL
1 9AM 0 1368 2/6/14 12:00 AM NULL
1 10AM 0 1368 2/7/14 12:00 AM 4.55
1 11AM 0 1368 2/7/14 12:00 AM 4.55
1 12PM 0 1368 2/7/14 12:00 AM 4.55
1 1PM 0 1368 2/7/14 12:00 AM 4.55
1 2PM 41.66666667 1368 2/7/14 12:00 AM 4.55
1 3PM 111.6666667 1368 2/7/14 12:00 AM 4.55
1 4PM 100 1368 2/7/14 12:00 AM 4.55
1 5AM 0 1368 2/7/14 12:00 AM 4.55
1 5PM 50 1368 2/7/14 12:00 AM 4.55
1 6AM 0 1368 2/7/14 12:00 AM 4.55
1 6PM 0 1368 2/7/14 12:00 AM 4.55
1 7AM 0 1368 2/7/14 12:00 AM 4.55
1 7PM 0 1368 2/7/14 12:00 AM 4.55
1 8AM 0 1368 2/7/14 12:00 AM 4.55
1 8PM 0 1368 2/7/14 12:00 AM 4.55
1 9AM 0 1368 2/7/14 12:00 AM 4.55
1 10AM 0 1368 2/8/14 12:00 AM NULL
1 11AM 0 1368 2/8/14 12:00 AM NULL
1 12PM 0 1368 2/8/14 12:00 AM NULL
1 1PM 0 1368 2/8/14 12:00 AM NULL
1 2PM 0 1368 2/8/14 12:00 AM NULL
1 3PM 0 1368 2/8/14 12:00 AM NULL
1 4PM 0 1368 2/8/14 12:00 AM NULL
1 5AM 0 1368 2/8/14 12:00 AM NULL
1 5PM 0 1368 2/8/14 12:00 AM NULL
1 6AM 0 1368 2/8/14 12:00 AM NULL
1 6PM 0 1368 2/8/14 12:00 AM NULL
1 7AM 0 1368 2/8/14 12:00 AM NULL
1 7PM 0 1368 2/8/14 12:00 AM NULL
1 8AM 0 1368 2/8/14 12:00 AM NULL
1 8PM 0 1368 2/8/14 12:00 AM NULL
1 9AM 0 1368 2/8/14 12:00 AM NULL
1 10AM 0 1368 2/10/14 12:00 AM NULL
1 11AM 0 1368 2/10/14 12:00 AM NULL
1 12PM 0 1368 2/10/14 12:00 AM NULL
1 1PM 0 1368 2/10/14 12:00 AM NULL
1 2PM 0 1368 2/10/14 12:00 AM NULL
1 3PM 0 1368 2/10/14 12:00 AM NULL
1 4PM 0 1368 2/10/14 12:00 AM NULL
1 5AM 0 1368 2/10/14 12:00 AM NULL
1 5PM 0 1368 2/10/14 12:00 AM NULL
1 6AM 0 1368 2/10/14 12:00 AM NULL
1 6PM 0 1368 2/10/14 12:00 AM NULL
1 7AM 0 1368 2/10/14 12:00 AM NULL
1 7PM 0 1368 2/10/14 12:00 AM NULL
1 8AM 0 1368 2/10/14 12:00 AM NULL
1 8PM 0 1368 2/10/14 12:00 AM NULL
1 9AM 0 1368 2/10/14 12:00 AM NULL
1 10AM 0 1368 2/11/14 12:00 AM NULL
1 11AM 0 1368 2/11/14 12:00 AM NULL
1 12PM 0 1368 2/11/14 12:00 AM NULL
1 1PM 0 1368 2/11/14 12:00 AM NULL
1 2PM 0 1368 2/11/14 12:00 AM NULL
1 3PM 0 1368 2/11/14 12:00 AM NULL
1 4PM 0 1368 2/11/14 12:00 AM NULL
1 5AM 0 1368 2/11/14 12:00 AM NULL
1 5PM 0 1368 2/11/14 12:00 AM NULL
1 6AM 0 1368 2/11/14 12:00 AM NULL
1 6PM 0 1368 2/11/14 12:00 AM NULL
1 7AM 0 1368 2/11/14 12:00 AM NULL
1 7PM 0 1368 2/11/14 12:00 AM NULL
1 8AM 0 1368 2/11/14 12:00 AM NULL
1 8PM 0 1368 2/11/14 12:00 AM NULL
1 9AM 0 1368 2/11/14 12:00 AM NULL
1 10AM 0 1368 2/12/14 12:00 AM NULL
1 11AM 0 1368 2/12/14 12:00 AM NULL
1 12PM 0 1368 2/12/14 12:00 AM NULL
1 1PM 0 1368 2/12/14 12:00 AM NULL
1 2PM 0 1368 2/12/14 12:00 AM NULL
1 3PM 0 1368 2/12/14 12:00 AM NULL
1 4PM 0 1368 2/12/14 12:00 AM NULL
1 5AM 0 1368 2/12/14 12:00 AM NULL
1 5PM 0 1368 2/12/14 12:00 AM NULL
1 6AM 0 1368 2/12/14 12:00 AM NULL
1 6PM 0 1368 2/12/14 12:00 AM NULL
1 7AM 0 1368 2/12/14 12:00 AM NULL
1 7PM 0 1368 2/12/14 12:00 AM NULL
1 8AM 0 1368 2/12/14 12:00 AM NULL
1 8PM 0 1368 2/12/14 12:00 AM NULL
1 9AM 0 1368 2/12/14 12:00 AM NULL
1 10AM 0 1368 2/13/14 12:00 AM NULL
1 11AM 0 1368 2/13/14 12:00 AM NULL
1 12PM 0 1368 2/13/14 12:00 AM NULL
1 1PM 0 1368 2/13/14 12:00 AM NULL
1 2PM 0 1368 2/13/14 12:00 AM NULL
1 3PM 0 1368 2/13/14 12:00 AM NULL
1 4PM 0 1368 2/13/14 12:00 AM NULL
1 5AM 0 1368 2/13/14 12:00 AM NULL
1 5PM 0 1368 2/13/14 12:00 AM NULL
1 6AM 0 1368 2/13/14 12:00 AM NULL
1 6PM 0 1368 2/13/14 12:00 AM NULL
1 7AM 0 1368 2/13/14 12:00 AM NULL
1 7PM 0 1368 2/13/14 12:00 AM NULL
1 8AM 0 1368 2/13/14 12:00 AM NULL
1 8PM 0 1368 2/13/14 12:00 AM NULL
1 9AM 0 1368 2/13/14 12:00 AM NULL
1 10AM 0 1368 2/14/14 12:00 AM 1.45
1 11AM 0 1368 2/14/14 12:00 AM 1.45
1 12PM 0 1368 2/14/14 12:00 AM 1.45
1 1PM 0 1368 2/14/14 12:00 AM 1.45
1 2PM 0 1368 2/14/14 12:00 AM 1.45
1 3PM 50 1368 2/14/14 12:00 AM 1.45
1 4PM 0 1368 2/14/14 12:00 AM 1.45
1 5AM 0 1368 2/14/14 12:00 AM 1.45
1 5PM 0 1368 2/14/14 12:00 AM 1.45
1 6AM 0 1368 2/14/14 12:00 AM 1.45
1 6PM 0 1368 2/14/14 12:00 AM 1.45
1 7AM 0 1368 2/14/14 12:00 AM 1.45
1 7PM 0 1368 2/14/14 12:00 AM 1.45
1 8AM 0 1368 2/14/14 12:00 AM 1.45
1 8PM 46.66666667 1368 2/14/14 12:00 AM 1.45
1 9AM 0 1368 2/14/14 12:00 AM 1.45
1 10AM 0 1368 2/15/14 12:00 AM 4.35
1 11AM 0 1368 2/15/14 12:00 AM 4.35
1 12PM 0 1368 2/15/14 12:00 AM 4.35
1 1PM 0 1368 2/15/14 12:00 AM 4.35
1 2PM 0 1368 2/15/14 12:00 AM 4.35
1 3PM 0 1368 2/15/14 12:00 AM 4.35
1 4PM 0 1368 2/15/14 12:00 AM 4.35
1 5AM 0 1368 2/15/14 12:00 AM 4.35
1 5PM 0 1368 2/15/14 12:00 AM 4.35
1 6AM 0 1368 2/15/14 12:00 AM 4.35
1 6PM 88.33333333 1368 2/15/14 12:00 AM 4.35
1 7AM 0 1368 2/15/14 12:00 AM 4.35
1 7PM 100 1368 2/15/14 12:00 AM 4.35
1 8AM 0 1368 2/15/14 12:00 AM 4.35
1 8PM 100 1368 2/15/14 12:00 AM 4.35
1 9AM 0 1368 2/15/14 12:00 AM 4.35
1 10AM 0 1368 2/18/14 12:00 AM NULL
1 11AM 0 1368 2/18/14 12:00 AM NULL
1 12PM 0 1368 2/18/14 12:00 AM NULL
1 1PM 0 1368 2/18/14 12:00 AM NULL
1 2PM 0 1368 2/18/14 12:00 AM NULL
1 3PM 0 1368 2/18/14 12:00 AM NULL
1 4PM 0 1368 2/18/14 12:00 AM NULL
1 5AM 0 1368 2/18/14 12:00 AM NULL
1 5PM 0 1368 2/18/14 12:00 AM NULL
1 6AM 0 1368 2/18/14 12:00 AM NULL
1 6PM 0 1368 2/18/14 12:00 AM NULL
1 7AM 0 1368 2/18/14 12:00 AM NULL
1 7PM 0 1368 2/18/14 12:00 AM NULL
1 8AM 0 1368 2/18/14 12:00 AM NULL
1 8PM 0 1368 2/18/14 12:00 AM NULL
1 9AM 0 1368 2/18/14 12:00 AM NULL
1 10AM 0 1368 2/19/14 12:00 AM NULL
1 11AM 0 1368 2/19/14 12:00 AM NULL
1 12PM 0 1368 2/19/14 12:00 AM NULL
1 1PM 0 1368 2/19/14 12:00 AM NULL
1 2PM 0 1368 2/19/14 12:00 AM NULL
1 3PM 0 1368 2/19/14 12:00 AM NULL
1 4PM 0 1368 2/19/14 12:00 AM NULL
1 5AM 0 1368 2/19/14 12:00 AM NULL
1 5PM 0 1368 2/19/14 12:00 AM NULL
1 6AM 0 1368 2/19/14 12:00 AM NULL
1 6PM 0 1368 2/19/14 12:00 AM NULL
1 7AM 0 1368 2/19/14 12:00 AM NULL
1 7PM 0 1368 2/19/14 12:00 AM NULL
1 8AM 0 1368 2/19/14 12:00 AM NULL
1 8PM 0 1368 2/19/14 12:00 AM NULL
1 9AM 0 1368 2/19/14 12:00 AM NULL
1 10AM 0 1368 2/20/14 12:00 AM NULL
1 11AM 0 1368 2/20/14 12:00 AM NULL
1 12PM 0 1368 2/20/14 12:00 AM NULL
1 1PM 0 1368 2/20/14 12:00 AM NULL
1 2PM 0 1368 2/20/14 12:00 AM NULL
1 3PM 0 1368 2/20/14 12:00 AM NULL
1 4PM 0 1368 2/20/14 12:00 AM NULL
1 5AM 0 1368 2/20/14 12:00 AM NULL
1 5PM 0 1368 2/20/14 12:00 AM NULL
1 6AM 0 1368 2/20/14 12:00 AM NULL
1 6PM 0 1368 2/20/14 12:00 AM NULL
1 7AM 0 1368 2/20/14 12:00 AM NULL
1 7PM 0 1368 2/20/14 12:00 AM NULL
1 8AM 0 1368 2/20/14 12:00 AM NULL
1 8PM 0 1368 2/20/14 12:00 AM NULL
1 9AM 0 1368 2/20/14 12:00 AM NULL
1 10AM 0 1368 2/21/14 12:00 AM 2.95
1 11AM 0 1368 2/21/14 12:00 AM 2.95
1 12PM 0 1368 2/21/14 12:00 AM 2.95
1 1PM 0 1368 2/21/14 12:00 AM 2.95
1 2PM 0 1368 2/21/14 12:00 AM 2.95
1 3PM 0 1368 2/21/14 12:00 AM 2.95
1 4PM 0 1368 2/21/14 12:00 AM 2.95
1 5AM 0 1368 2/21/14 12:00 AM 2.95
1 5PM 0 1368 2/21/14 12:00 AM 2.95
1 6AM 0 1368 2/21/14 12:00 AM 2.95
1 6PM 0 1368 2/21/14 12:00 AM 2.95
1 7AM 0 1368 2/21/14 12:00 AM 2.95
1 7PM 95 1368 2/21/14 12:00 AM 2.95
1 8AM 0 1368 2/21/14 12:00 AM 2.95
1 8PM 100 1368 2/21/14 12:00 AM 2.95
1 9AM 0 1368 2/21/14 12:00 AM 2.95
1 10AM 0 1368 2/22/14 12:00 AM 3.5
1 11AM 0 1368 2/22/14 12:00 AM 3.5
1 12PM 0 1368 2/22/14 12:00 AM 3.5
1 1PM 0 1368 2/22/14 12:00 AM 3.5
1 2PM 0 1368 2/22/14 12:00 AM 3.5
1 3PM 0 1368 2/22/14 12:00 AM 3.5
1 4PM 0 1368 2/22/14 12:00 AM 3.5
1 5AM 0 1368 2/22/14 12:00 AM 3.5
1 5PM 16.66666667 1368 2/22/14 12:00 AM 3.5
1 6AM 0 1368 2/22/14 12:00 AM 3.5
1 6PM 21.66666667 1368 2/22/14 12:00 AM 3.5
1 7AM 0 1368 2/22/14 12:00 AM 3.5
1 7PM 100 1368 2/22/14 12:00 AM 3.5
1 8AM 0 1368 2/22/14 12:00 AM 3.5
1 8PM 95 1368 2/22/14 12:00 AM 3.5
1 9AM 0 1368 2/22/14 12:00 AM 3.5
1 10AM 0 1368 2/24/14 12:00 AM 3.95
1 11AM 0 1368 2/24/14 12:00 AM 3.95
1 12PM 0 1368 2/24/14 12:00 AM 3.95
1 1PM 0 1368 2/24/14 12:00 AM 3.95
1 2PM 0 1368 2/24/14 12:00 AM 3.95
1 3PM 0 1368 2/24/14 12:00 AM 3.95
1 4PM 0 1368 2/24/14 12:00 AM 3.95
1 5AM 0 1368 2/24/14 12:00 AM 3.95
1 5PM 0 1368 2/24/14 12:00 AM 3.95
1 6AM 0 1368 2/24/14 12:00 AM 3.95
1 6PM 63.33333333 1368 2/24/14 12:00 AM 3.95
1 7AM 0 1368 2/24/14 12:00 AM 3.95
1 7PM 100 1368 2/24/14 12:00 AM 3.95
1 8AM 0 1368 2/24/14 12:00 AM 3.95
1 8PM 100 1368 2/24/14 12:00 AM 3.95
1 9AM 0 1368 2/24/14 12:00 AM 3.95
1 10AM 0 1368 2/25/14 12:00 AM 3.25
1 11AM 0 1368 2/25/14 12:00 AM 3.25
1 12PM 0 1368 2/25/14 12:00 AM 3.25
1 1PM 0 1368 2/25/14 12:00 AM 3.25
1 2PM 0 1368 2/25/14 12:00 AM 3.25
1 3PM 0 1368 2/25/14 12:00 AM 3.25
1 4PM 0 1368 2/25/14 12:00 AM 3.25
1 5AM 0 1368 2/25/14 12:00 AM 3.25
1 5PM 0 1368 2/25/14 12:00 AM 3.25
1 6AM 0 1368 2/25/14 12:00 AM 3.25
1 6PM 76.66666667 1368 2/25/14 12:00 AM 3.25
1 7AM 0 1368 2/25/14 12:00 AM 3.25
1 7PM 100 1368 2/25/14 12:00 AM 3.25
1 8AM 0 1368 2/25/14 12:00 AM 3.25
1 8PM 40 1368 2/25/14 12:00 AM 3.25
1 9AM 0 1368 2/25/14 12:00 AM 3.25
1 10AM 0 1368 2/26/14 12:00 AM 0.6
1 11AM 0 1368 2/26/14 12:00 AM 0.6
1 12PM 0 1368 2/26/14 12:00 AM 0.6
1 1PM 0 1368 2/26/14 12:00 AM 0.6
1 2PM 0 1368 2/26/14 12:00 AM 0.6
1 3PM 0 1368 2/26/14 12:00 AM 0.6
1 4PM 0 1368 2/26/14 12:00 AM 0.6
1 5AM 0 1368 2/26/14 12:00 AM 0.6
1 5PM 0 1368 2/26/14 12:00 AM 0.6
1 6AM 0 1368 2/26/14 12:00 AM 0.6
1 6PM 0 1368 2/26/14 12:00 AM 0.6
1 7AM 0 1368 2/26/14 12:00 AM 0.6
1 7PM 0 1368 2/26/14 12:00 AM 0.6
1 8AM 0 1368 2/26/14 12:00 AM 0.6
1 8PM 38.33333333 1368 2/26/14 12:00 AM 0.6
1 9AM 0 1368 2/26/14 12:00 AM 0.6
1 10AM 0 1368 2/27/14 12:00 AM 4.9
1 11AM 0 1368 2/27/14 12:00 AM 4.9
1 12PM 0 1368 2/27/14 12:00 AM 4.9
1 1PM 0 1368 2/27/14 12:00 AM 4.9
1 2PM 0 1368 2/27/14 12:00 AM 4.9
1 3PM 0 1368 2/27/14 12:00 AM 4.9
1 4PM 0 1368 2/27/14 12:00 AM 4.9
1 5AM 0 1368 2/27/14 12:00 AM 4.9
1 5PM 25 1368 2/27/14 12:00 AM 4.9
1 6AM 0 1368 2/27/14 12:00 AM 4.9
1 6PM 100 1368 2/27/14 12:00 AM 4.9
1 7AM 0 1368 2/27/14 12:00 AM 4.9
1 7PM 100 1368 2/27/14 12:00 AM 4.9
1 8AM 0 1368 2/27/14 12:00 AM 4.9
1 8PM 100 1368 2/27/14 12:00 AM 4.9
1 9AM 0 1368 2/27/14 12:00 AM 4.9
1 10AM 0 1368 2/28/14 12:00 AM 7.1
1 11AM 23.33333333 1368 2/28/14 12:00 AM 7.1
1 12PM 91.66666667 1368 2/28/14 12:00 AM 7.1
1 1PM 58.33333333 1368 2/28/14 12:00 AM 7.1
1 2PM 0 1368 2/28/14 12:00 AM 7.1
1 3PM 0 1368 2/28/14 12:00 AM 7.1
1 4PM 76.66666667 1368 2/28/14 12:00 AM 7.1
1 5AM 0 1368 2/28/14 12:00 AM 7.1
1 5PM 100 1368 2/28/14 12:00 AM 7.1
1 6AM 0 1368 2/28/14 12:00 AM 7.1
1 6PM 100 1368 2/28/14 12:00 AM 7.1
1 7AM 0 1368 2/28/14 12:00 AM 7.1
1 7PM 23.33333333 1368 2/28/14 12:00 AM 7.1
1 8AM 0 1368 2/28/14 12:00 AM 7.1
1 8PM 0 1368 2/28/14 12:00 AM 7.1
1 9AM 0 1368 2/28/14 12:00 AM 7.1
select l.*,
       (l.totrev / l.cnt)      as AvgRev,
       (l.totworkdone / l.cnt) as AvgWorkDone
from
(
    select k.*,
           -- total revenue per employee, counting each (date, rev) pair once
           sum(case when k.rn = k.cnt1 then k.Rev end)
               over (partition by k.[Secempid]) as totrev
    from
    (
        select [ampm]
              ,[workdone]
              ,[loc]
              ,[Secempid]
              ,[Date]
              ,[Rev]
              -- one row per ampm slot per day, so this counts days worked
              ,count([Date])   over (partition by [Secempid], [ampm]) as cnt
              ,sum([workdone]) over (partition by [Secempid])         as totworkdone
              ,row_number()    over (partition by [Secempid], [Date], [Rev]
                                     order by [Secempid], [Date], [Rev]) as rn
              ,count([Rev])    over (partition by [Secempid], [Date], [Rev]) as cnt1
        from MYTABLE
        where [Secempid] = '1368'
          and [Date] between '2014-02-01' and '2014-02-28'
        group by [ampm], [workdone], [loc], [Secempid], [Date], [Rev]
    ) k
) l
SV

Hi S,
There are many ways to solve your challenge. One approach I have found very efficient, and not too complicated, is to add a measure group that simply has days as facts, with one attached dimension: dates. Your measure could be [Business Days], which has a 1 for every business day. Then, wherever you are in the cube, you can get Measures.[Business Days] for the number of business days. It can be the denominator in your average calc.
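As a rough illustration, the averages could then be defined as calculated members along these lines. This is only a sketch: [Work Done], [Revenue], and [Business Days] are assumed names standing in for the actual SUM measures and the new day-count measure.

-- Hypothetical measure names; adjust to the cube's real objects.
CREATE MEMBER CURRENTCUBE.[Measures].[Avg Work Done Per Day] AS
    IIF([Measures].[Business Days] = 0, NULL,
        [Measures].[Work Done] / [Measures].[Business Days]);
CREATE MEMBER CURRENTCUBE.[Measures].[Avg Revenue Per Day] AS
    IIF([Measures].[Business Days] = 0, NULL,
        [Measures].[Revenue] / [Measures].[Business Days]);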
Of course, you can also dynamically count a filtered set of days in your date dimension (filtered to business days). That would also work, but its performance would not be as good as a physical measure's.
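The dynamic variant might look something like the following; the [Is Business Day] attribute hierarchy is an assumption, not an object from the poster's cube.

-- Counts business days under the current date context at query time;
-- slower than a physical measure, but needs no extra measure group.
COUNT(
    EXISTS(
        DESCENDANTS([Date].[Calendar].CURRENTMEMBER, [Date].[Calendar].[Date]),
        [Date].[Is Business Day].&[True]
    )
)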
Hope that helps,
Richard

Similar Messages

  • Guidance on use of "COUNT(*) OVER () * 5" in a select query.

    Hello Friends,
    I was reading an article in which one table was created for a demo. These were the statements from it:
    CREATE TABLE source_table
    NOLOGGING
    AS
    SELECT ROWNUM AS object_id
         , object_name
         , object_type
    FROM all_objects;
    INSERT /*+ APPEND */ INTO source_table
    SELECT ROWNUM + (COUNT(*) OVER () * 5) AS object_id
         , LOWER(object_name) AS object_name
         , SUBSTR(object_type,1,1) AS object_type
    FROM all_objects;
    INSERT /*+ APPEND */ INTO source_table
    SELECT ROWNUM + (COUNT(*) OVER () * 10) AS object_id
         , INITCAP(object_name) AS object_name
         , SUBSTR(object_type,-1) AS object_type
    FROM all_objects;
    Can anyone please tell me the purpose of "ROWNUM + (COUNT(*) OVER () * 5)" in the above 2 insert statements, or suggest some document on that?
    I don't know about its usage, and want to learn it.
    Regards,
    Dipali

    The insert statements that you have listed are using Oracle analytic functions. Some examples of these functions can be found here: Oracle Analytic Functions, http://www.psoug.org/reference/analytic_functions.html
    Effectively, what that says is the following:
    1. "COUNT(*) OVER ()" = return the number of rows in the entire result set.
    2. Multiply that by 5 (or 10, depending on the insert).
    3. Add the current ROWNUM value to it.
    This can be shown with a simple example:
    SQL> edit
    Wrote file sqlplus_buffer.sql
      1  SELECT *
      2  FROM
      3  (
      4     SELECT ROWNUM r,
      5             (COUNT(*) OVER ()) AS ANALYTIC_COUNT,
      6             5,
      7             ROWNUM + (COUNT(*) OVER () * 5) AS RESULT
      8     FROM all_objects
      9  )
     10* WHERE r <= 10
    SQL> /
             R ANALYTIC_COUNT          5     RESULT
             1          14795          5      73976
             2          14795          5      73977
             3          14795          5      73978
             4          14795          5      73979
             5          14795          5      73980
             6          14795          5      73981
             7          14795          5      73982
             8          14795          5      73983
             9          14795          5      73984
            10          14795          5      73985
    10 rows selected.
    SQL> SELECT COUNT(*) FROM all_objects;
      COUNT(*)
         14795
    Hope this helps!
    Note that the statements you provided will not actually execute because of the extra "+" signs on either side; I have removed them.

  • Counting Over Last 3500 appearances with Where Clause

    I'm using SQL Server Studio 2014.
    I have a table containing the following columns:
    AutoId  Assembly_No  [Rank]   
    1          Assembly1       2
    2          Assembly2       1
    3          Assembly1       2
    4          Assembly1       1
    5          Assembly1       0
    6          Assembly2       2
    7          Assembly2       1
    I'm trying to run a query that will count, over the last 3500 times that a specific Assembly_No has been run, how many of those runs had a rank > 0. For simplicity's sake we can look at the last 2 times that an assembly has been run. So the results I'm expecting should look like this:
    Assembly_No   Count
    Assembly2        2
    Assembly1        1
    This results in Assembly2 being counted twice (for ids 6 and 7) and Assembly1 only once (for ids 4 and 5, because only one rank was > 0). AutoID is an identity column, so the most recent rows are determined using this number.
    The query below can count over all of the assemblies run; however, I'm having trouble counting over only the last 2 for each assembly:
    Select Assembly_No, Count(*) as Count
    From TblBuild
    Where Rank > 0
    Group By Assembly_No
    It returns the following:
    Assembly_no  Count
    Assembly2      3
    Assembly1      3

    Looks like this should return what you are expecting:
    -- drop table #temp
    create table #temp
    (
        autoid int,
        assembly_no nvarchar(10),
        rank_no int
    )
    insert into #temp (autoid, assembly_no, rank_no) values (1, 'Assembly1', 2)
    insert into #temp (autoid, assembly_no, rank_no) values (2, 'Assembly2', 1)
    insert into #temp (autoid, assembly_no, rank_no) values (3, 'Assembly1', 2)
    insert into #temp (autoid, assembly_no, rank_no) values (4, 'Assembly1', 1)
    insert into #temp (autoid, assembly_no, rank_no) values (5, 'Assembly1', 0)
    insert into #temp (autoid, assembly_no, rank_no) values (6, 'Assembly2', 2)
    insert into #temp (autoid, assembly_no, rank_no) values (7, 'Assembly2', 1)
    -- NULLIF turns rank 0 into NULL so COUNT skips it; the derived table keeps
    -- only the 2 most recent rows per assembly via ROW_NUMBER.
    select t.assembly_no, count(nullif(t.rank_no, 0)) as [count]
    from #temp t
    inner join
    (
        select autoid, rank_no,
               row_number() over (partition by assembly_no order by autoid desc) as ct
        from #temp
    ) x
        on x.autoid = t.autoid
       and x.ct <= 2
    group by t.assembly_no

  • IR Problem with COUNT (*) OVER () AS apxws_row_cnt

    Hi all,
    I have a query which runs in under 2 seconds when executed in SQL (Toad),
    but when I use this query in an interactive report it takes minutes.
    When I reviewed the query which APEX is sending to the DB, I noticed an additional select clause:
    SELECT columns,
    COUNT ( * ) OVER () AS apxws_row_cnt
    FROM (
    SELECT *
    FROM ( query
    ) r
    ) r
    WHERE ROWNUM <= TO_NUMBER (:APXWS_MAX_ROW_CNT)
    When I remove the COUNT ( * ) OVER () AS apxws_row_cnt, the query is fast again.
    How can I change the IR so that the COUNT ( * ) OVER () AS apxws_row_cnt doesn't appear anymore?
    I removed the pagination (- no pagination selected -).
    I put the query in a new IR report on a new page; the COUNT ( * ) OVER () AS apxws_row_cnt still appears.
    Any suggestions I can try?
    Regards,
    Marco

    Marco1975 wrote:
    > I have a query which runs in under 2 seconds when executed in SQL (Toad)
    I doubt that. I think your query returns many rows. Did you see all the rows in Toad in 2 secs?
    If the answer is NO, then the query is not finished after 2 secs. It just showed you the first few rows.
    Almost every tool, including APEX, does the same.
    However, if you want to know how many rows are returned, then there is no way around doing the whole select until the last row.
    Then the result can be shown.
    APEX or a developer might use something like the analytic function "count(*) over ()" to get this result already on the first row. The database still needs to fetch all the rows. However, the result set is not transported over the network to the client, which might also save a lot of time compared to not doing it on the database level.
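    To see what this buys, here is a minimal sketch on the classic demo schema (emp is not from this thread): the analytic count puts the full result-set size on every row, so the client can learn the total from the very first row it fetches.
    -- Every row carries TOTAL_ROWS; a client reads the total after fetching
    -- row 1 instead of fetching and counting the whole result set itself.
    SELECT empno,
           ename,
           COUNT(*) OVER () AS total_rows
    FROM   emp
    WHERE  deptno = 10;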

  • Barcode counting over groups

    Hi all,
    I am working on a barcode for my invoice printing. The barcode counts over the invoices from 1 to 16 and then starts over again. For example, if I have a set of 6 invoices of 3 pages each, the barcode numbering will be as follows:
    invoice 1 page 1: 1
    invoice 1 page 2: 2
    invoice 1 page 3: 3
    invoice 2 page 1: 4
    invoice 2 page 2: 5
    invoice 5 page 3: 15
    invoice 6 page 1: 16
    invoice 6 page 2: 1
    invoice 6 page 3: 2
    I have tried to solve this with some XSL-FO programming, but I can't get the page number or total pages per invoice into a variable, so I can't determine how many pages have been printed for previous invoices and which number to print next in the barcode.
    I hope you can give me some advice.


  • Reg: counter over reading

    What is "CntrOverReadg" in IK01 while creating a measurement point counter?
    Can anyone tell me the usage of this? What data should I enter in CntrOverReadg?

    It's the counter overflow reading. It describes the maximum possible reading that the physical measurement point can measure.
    For example: a car odometer can measure up to 99999 km of distance covered. Hence, after the odometer reaches 99999 km it starts again from 0 km, but the total distance covered by the car is 99999 plus the reading after the counter overflow.
    Let's say you have provided a counter overflow reading of 99 for a measurement point whose current counter reading is 10. If you create a measurement document with a reading less than 10 (say 4), the system will consider that a counter overflow occurred and will show the total reading as 103 (i.e. 99 + 4) and the counter reading as 4.
    Hope it helps!
    Regards,
    Saif Ali Momin

  • How is HU related to undershipment and count over?

    Hi everyone,
    Can anyone tell me how HU is related to undershipment and count over?


  • Counter overflow reading

    What is the use of the counter overflow reading and the annual estimate in counters? Is there any report or application where it helps us in tracking?
    Regards,
    VM

    Hi,
    The counter overflow reading is used where your counter reading overflows.
    For example, your milometer will only show 9999 miles; once this maximum has been reached, overflow occurs, i.e. the counter starts to count upwards from 0000 again.
    The annual estimate is used for scheduling purposes, for performance-based scheduling and for multiple counter plans.
    Multiple counter plan:
    The system uses the current date as the start date and automatically calculates the planned dates based on the maintenance cycles, the scheduling parameters, the estimated annual performance and the last counter readings.
    Performance-based:
    The system automatically calculates the planned date and call date based on the maintenance packages, the scheduling parameters, the estimated annual performance and the counter reading at the start of the cycle.
    Regards,
    Thyagarajan

  • Need help using count over function

    I have the following query
    Select student_id, OM, TM, TP,
           count(rownum) over (order by OM desc) PS
    from
    (select er.student_id,
            sum(er.obtained_marks) OM,
            sum(ds.max_marks) TM,
            to_char(sum(er.obtained_marks)/sum(ds.max_marks)*100,'990.00') TP
     from tbl_exam_results er, tbl_date_sheet ds
     where ds.date_sheet_id = er.date_sheet_id
       and ds.class_id = 77 and ds.exam_id = 3 and ds.session_id = 1
     group by er.student_id)
    results in
    STUDENT_ID   OM   TM     TP   PS
    1825        291  300  97.00    1
    3717        290  300  96.67    2
    2122        289  300  96.33    3
    3396        287  300  95.67    5 <--
    4554        287  300  95.67    5 <--
    1847        281  300  93.67    6
    1789        279  300  93.00    7
    5254        277  300  92.33    8
    1836        258  300  86.00    9
    4867        250  260  96.15   10
    1786        249  300  83.00   11
    4659        245  300  81.67   12
    1835        241  300  80.33   15 <--
    1172        241  270  89.26   15 <--
    3696        241  300  80.33   15 <--
    3865        234  300  78.00   16
    5912        215  300  71.67   17
    5913        204  300  68.00   19 <--
    3591        204  300  68.00   19 <--
    1830        184  250  73.60   20
    But I want the following:
    STUDENT_ID   OM   TM     TP   PS
    1825        291  300  97.00    1
    3717        290  300  96.67    2
    2122        289  300  96.33    3
    3396        287  300  95.67    4 <=
    4554        287  300  95.67    4 <=
    1847        281  300  93.67    5  (the following entry)
    1789        279  300  93.00    6
    5254        277  300  92.33    7
    1836        258  300  86.00    8
    4867        250  260  96.15    9
    1786        249  300  83.00   10
    4659        245  300  81.67   11
    1835        241  300  80.33   12 <=
    1172        241  270  89.26   12 <=
    3696        241  300  80.33   12 <=
    3865        234  300  78.00   13  (the following entry)
    5912        215  300  71.67   14
    5913        204  300  68.00   15 <=
    3591        204  300  68.00   15 <=
    1830        184  250  73.60   16  (the following entry)
    Thanks in advance for any help.

    Since I do not understand at all what you are trying to do, I cannot correct your query, but I can explain the results.
    The analytic function is doing a running count of the number of records that have been output so far. With no duplicates, this is somewhat clearer:
    SQL> WITH t AS (SELECT 1 om FROM dual UNION ALL
      2             SELECT 2 FROM dual UNION ALL
      3             SELECT 3 FROM dual UNION ALL
      4             SELECT 4 FROM dual UNION ALL
      5             SELECT 5 FROM dual)
      6  SELECT om, COUNT(rownum) OVER (ORDER BY om) ps
      7  FROM t;
            OM         PS
             1          1
             2          2
             3          3
             4          4
             5          5
    However, when you have duplicates, both duplicate values get the running count from the last of the duplicates (i.e. the highest running count). Here, I have duplicated 4; see what I get:
    SQL> WITH t AS (SELECT 1 om FROM dual UNION ALL
      2             SELECT 2 FROM dual UNION ALL
      3             SELECT 3 FROM dual UNION ALL
      4             SELECT 4 FROM dual UNION ALL
      5             SELECT 4 FROM dual UNION ALL
      6             SELECT 5 FROM dual)
      7  SELECT om, COUNT(rownum) OVER (ORDER BY om) ps
      8  FROM t;
            OM         PS
             1          1
             2          2
             3          3
             4          5
             4          5
             5          6
    The "second" 4 record had a running count of 5 (i.e. it was the fifth record output), so both 4's get the same count. Changing the order by to descending shows the same effect; it just changes the running count:
    SQL> WITH t AS (SELECT 1 om FROM dual UNION ALL
      2             SELECT 2 FROM dual UNION ALL
      3             SELECT 3 FROM dual UNION ALL
      4             SELECT 4 FROM dual UNION ALL
      5             SELECT 4 FROM dual UNION ALL
      6             SELECT 5 FROM dual)
      7  SELECT om, COUNT(rownum) OVER (ORDER BY om DESC) ps
      8  FROM t;
            OM         PS
             5          1
             4          3
             4          3
             3          4
             2          5
             1          6
    John
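    For what it's worth, the ranking the original poster asks for, where ties share a rank and the very next distinct value advances by exactly one, matches DENSE_RANK. A minimal sketch on the same demo data:
    -- DENSE_RANK gives ties the same rank and does not skip ranks afterwards,
    -- producing the 1,2,3,4,4,5 pattern requested above.
    WITH t AS (SELECT 1 om FROM dual UNION ALL
               SELECT 2 FROM dual UNION ALL
               SELECT 3 FROM dual UNION ALL
               SELECT 4 FROM dual UNION ALL
               SELECT 4 FROM dual UNION ALL
               SELECT 5 FROM dual)
    SELECT om, DENSE_RANK() OVER (ORDER BY om DESC) ps
    FROM t;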

  • Cisco WLC AP count over SNMP

    Hi,
    Is it possible to monitor the number of APs on a Cisco WLC, and the number of wireless clients?
    I have found only the list of AP names over SNMP...
    Thanks in advance

    Hi, Ralf
    If it's not too late: I use a script directly in the monitoring system.
    main () {
        VALUE=`snmpwalk -v 2c -c xxxCommunityxxx X.X.X.X 1.3.6.1.4.1.9.9.513.1.1.1.1.2 | wc -l`
        echo "Message: Warning! Number of registered APs decreased."
        echo "Data:Count"
        echo "Count\t$VALUE"
        exit 0
    }
    main $*
    This is shell, but you can use just the single line
    `snmpwalk -v 2c -c xxxCommunityxxx X.X.X.X 1.3.6.1.4.1.9.9.513.1.1.1.1.2 | wc -l`
    (from Linux)

  • Count Over Count

    Hi, I was looking for some help in understanding the general rule for using the count function. I am trying to build my knowledge of the different functions in order to have a better understanding, so that I am writing my SQL queries in the right way.
    What I was looking to find out was how the count function works. As I understand it, I can apply the following to give me a count of a particular set of data:
    select number, count(*) AS no_count
    from test
    group by job_id;
    What I am trying to understand is: if for some reason I wanted to use the results of the "count(*) AS no_count" to give me another set of results, i.e. sum all values over 2 for example, would I write it like the following:
    select number, count(*) AS no_count,
    select count(no_count) having etc...
    from test
    group by job_id;
    or is the general rule that I would have to write a select within a select?
    If somebody could please help, I would really appreciate it. By the way, if there is any documentation on using count, as well as using it with other functions, i.e. sum, that would be very useful as well.
    Thanks in advance.

    Solomon, thanks for your help. Apologies if I have not explained my question properly. The problem is that I haven't created the tables; I have written my sample data and am trying to work out solutions before attempting to create the tables etc., which is probably where I am going wrong.
    The job_ids can be repeated; job_count should give me the total job count belonging to each job_id.
    For example, in your first dataset you have a job count for all jobs, i.e. MANAGER has 3 records with a job_count of 1. I would then like a column giving me a total count of job_count for each job, i.e. MANAGER had 3 total jobs. I have tried to break down the dataset you have shown, along with the extras I am trying to add, to hopefully explain what I am looking for.
    JOB          JOB_COUNT   TOTAL_JOB_COUNT   OVER_1
    MANAGER              1                 3        0
    PRESIDENT            1                 1        1
    CLERK                1                 4        0
    SALESMAN             4                 4        0
    ANALYST              2                 2        0
    MANAGER              1                 3        0
    MANAGER              1                 3        0
    CLERK                1                 4        0
    CLERK                2                 4        0
    So this tells me, from all jobs, which job was dealt with first time; in this case it would be PRESIDENT, while the rest of the jobs were repeated.
    The total_job_count would be written like: select job, count(*) as TOTAL_JOB_COUNT
    but it's the over_1 column (or a sum, maybe, not sure), based on the results within total_job_count, that I need to look into to find values equal to 1. Hence I thought I would have to write a count of a count, which is what I am not clear on.
    Sorry for the inconvenience, and I really appreciate your help and time.
    Thanks
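    For what it's worth, here is a minimal sketch of the "count of a count" being described, assuming a hypothetical test table with job and job_id columns: an analytic function may run over an aggregate, which avoids writing a select within a select.
    -- JOB_COUNT is the plain aggregate per (job, job_id) group; the analytic
    -- SUM then totals those counts per job, and OVER_1 flags jobs whose
    -- total is exactly 1 (dealt with first time).
    SELECT job,
           COUNT(*) AS job_count,
           SUM(COUNT(*)) OVER (PARTITION BY job) AS total_job_count,
           CASE WHEN SUM(COUNT(*)) OVER (PARTITION BY job) = 1
                THEN 1 ELSE 0 END AS over_1
    FROM   test
    GROUP BY job, job_id;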
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              

  • Load Cycle Count over 7500 in just 5 days on new DV4 series notebook

    Hi,
    I'm really concerned about the new laptop I bought just a week ago (dv4-4172). The disk is making a lot of noise, just like the infamous 'clicking sound of death', and is reporting, up to now, more than 7500 load cycles in just 5 days of operation, plus 2 errors in the CRC of the UDMA. Since my hard disk (Hitachi HTS547564A9E384, 640.1 GB) was behaving like that, I ran several tests on the disk (HP's BIOS run-tests, HD Tune, smartctl (short and long), etc.) but I'm not getting a single read/write error; that is, the system doesn't have any functional/operational problem, yet. Furthermore, my disk's lifespan is 600,000 load cycles and I'm getting about 1500 a day; you don't even need a calculator to realize the hard disk won't last more than 1.5 years. The constant clicking of the disk is also getting on my nerves.
    The question here is: is it normal for this hard disk to have an average count of 1500 load/unload cycles a day (i.e. 1 cycle every minute), or is my disk the only one with suicidal attitudes? My last notebook's hard disk (Seagate) didn't get more than 100k cycles in 2 years of operation, which is why I'm concerned.
    Naturally, hdparm solves the issue, but (partially or permanently) disabling my hard disk's APM is even more suicidal than my disk's actual attitude. By the way, you need some technical skills to change those values (I'm a Linux sysadmin, so I knew how to solve that specific issue); why is the default configuration set like this? Or is HP expecting every single user to know how to modify the APM or, if you don't, to have a seat while watching your hard disk's self-destruction take place?
    Thanks in advance.
    - melvin
    FYI, the problem is agnostic of which operating system I use; even in the POST stage the hard disk starts that annoying clicking (with its corresponding cycle count increase). And yes, I've updated all my drivers and followed every suggestion HP Support Assistant gave me, like the nice guy I am.

    Hello uncle_sam,
    You're having a problem with the hard drive; the drive is not supposed to be making any noise.
    Since the unit is still new, I'd call HP and have them send you a new HDD.
    Here is a link to the phone number.
    Clicking the White Kudos star on the left is a way to say Thanks!
    Let me know how everything goes.
    Have a good day.

  • 24V counter over the network

    Hi all,
    I'm searching for a low-cost device which is able to count two 24V signals (up time: approx. 1 ms) and is accessible over the network.
    Do you have any suggestions to help me?
    Best regards,
    V-F

    Hi V-F,
    NI sells many different types of devices, and it is not easy to decide what is best without more information.
    My suggestion would be to contact a technical sales representative at your local NI office and explain what you are trying to accomplish. They will then suggest the best devices for your application and try to take all aspects of what you are doing into consideration.
    (There is a link at the bottom of the ni.com page to contact your local representative.)
    As far as I am concerned, you're probably looking for a 1-slot cDAQ with an appropriate digital module.
    Regards,
    Joseph Tagg

  • Grand Total SUM of COUNT

    Hi all!
    I am trying to grab a total of the COUNTs and SUM them together. I am trying to build a chart with Graphics Builder, but alas, that software has a 2000-character limit for its queries, so the inline view approach did not work. Here is my query:
    SELECT
    to_char(to_date(papf.attribute1, 'YYYYMMDD'), 'YYYY') Base_Entry_Year,
    count(*)
    -- SUM(count(*)) does not work
    FROM
    per_all_assignments_f paaf,
    per_all_people_f papf,
    per_jobs pj,
    per_job_definitions pjd
    WHERE
    papf.person_id = paaf.person_id
    AND paaf.ass_attribute14 IS NOT NULL
    AND to_char(to_date(papf.attribute1, 'YYYYMMDD'), 'YYYY') IS NOT NULL
    AND (papf.employee_number IS NOT NULL OR papf.person_type_id = 3)
    AND paaf.job_id = pj.job_id
    AND pj.job_definition_id = pjd.job_definition_id
    AND pjd.segment1 NOT IN ('09140018', '09340006', '09140037', '09140051', '09140052', '09140054', '09140055')
    GROUP BY
    to_char(to_date(papf.attribute1, 'YYYYMMDD'), 'YYYY')
    ORDER BY
    to_char(to_date(papf.attribute1, 'YYYYMMDD'), 'YYYY')
    Any ideas on how I can accomplish this? Thanks!
    Steve

    Hi,
    I ran two different queries.
    1.
    select KeyCol, count(*), sum(count(*)) over ()
      from Test t
     group by t.KeyCol;
    Result:
         KEYCOL   COUNT   SUM(COUNT(*))OVER()
    1    DHP        398                  5505
    2    FIH          1                  5505
    3    FIN       2971                  5505
    4    FLP          5                  5505
    5    HTBC         1                  5505
    6    Z99         22                  5505
    7              2107                  5505
    2.
    select KeyCol, count(*), count(*) over ()
      from test t
     group by KeyCol;
    Result:
         KEYCOL   COUNT   COUNT(*)OVER()
    1    DHP        398                7
    2    FIH          1                7
    3    FIN       2971                7
    4    FLP          5                7
    5    HTBC         1                7
    6    Z99         22                7
    7              2107                7
    This shows that the two queries are not equivalent.
    Thanks,
    Dharmesh Patel
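    Applied to the original question, the same pattern would be (a dual-based sketch of the shape only, not the poster's HR tables):
    -- Wrapping the aggregate COUNT(*) in an analytic SUM puts the grand
    -- total on every grouped row, all within a single query block.
    WITH t AS (SELECT '2005' AS yr FROM dual UNION ALL
               SELECT '2005' FROM dual UNION ALL
               SELECT '2006' FROM dual)
    SELECT yr,
           COUNT(*)              AS year_count,
           SUM(COUNT(*)) OVER () AS grand_total
    FROM   t
    GROUP BY yr;
    -- YR    YEAR_COUNT  GRAND_TOTAL
    -- 2005           2            3
    -- 2006           1            3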

  • IR row count

    Apex 4.0.2
    I have an IR with one of the columns defined as count(*) over ().
    When filters are applied to the IR, I expected the column to reflect the number of rows in the filtered result set, but it remains unchanged.
    Any idea why? Thanks.
    [Yes, I know I can use "Rows X to Y of Z" pagination to get the same result, but I was just curious.]

    > Interactive Reports do not execute the query on the database when you apply filters. They only work on the result set retrieved by the report query. If you have noticed, the page is not submitted on applying filters.
    That is not correct. The IR query is re-executed every time. This can be verified by running the IR, changing some data and then applying a filter: it will pull in the latest data. The page doesn't need to be submitted because it uses AJAX (XMLHTTP) to communicate with the database.
    I think I know what is happening here. Suppose the IR query is of the form
    select empno, name, city, count(*) over () tot from emp
    When the IR applies a filter, it does select * from (the query above) where <filter>. Since it doesn't re-write my query, the count(*) over () analytic function stays inside the inner query and its value doesn't change, since the filter predicate is not pushed inside the query. For the count(*) to reflect the result set after the filter, APEX would have to re-write the query as select x.*, count(*) over () tot from (my query) x where <filter>, and it doesn't do that.
    I guess it makes sense, and using pagination X to Y of Z really does the same thing, but I find that pagination slows down the IR, although it is really equivalent to the count(*) over (). Or maybe the pagination causes the database to really fetch and discard all the rows while the analytic function does some other, more efficient magic. Oh well.
    Thanks
