Oracle 8i JServer leaks 100Mb per hour

The 8.1.5 Aurora JVM appears to leak a significant amount
of memory whenever a session is ended or a new session is
created. Is this a known bug? Is there a workaround?
In particular, this is a problem for job queues that
execute Java. In only a few hours, several hundred MB of RAM
will leak for each SNP process on my system. I assume
this is because the job queues start a new session
for each job.
Example:
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
1405 oracle 0 0 220M 182M 48144 S 0 0.0 19.2 7:34 ora_snp0
8023 oracle 3 0 35988 35M 33956 D 0 4.9 3.7 0:04 oracle
The snp0 process has at this point been executing a few hundred
jobs over a period of two hours. The second process has been
executing the same Java code, but within one session.
The most interesting part is that the memory isn't shared;
my SNP processes will each lose hundreds of MB, quickly
using up all system RAM.
I've traced the leaks to static variables in Java.
The following test code shows what I believe is the problem:
class Boog {
    String name;
    public Boog (String n) { name = n; System.out.println ("Boog "+name+" init"); }
    protected void finalize () { System.out.println ("Boog "+name+" cleanup"); }
    public void print () { System.out.println ("Boog "+name+" printing"); }
}
class Test {
    static Boog boog = new Boog("one");
    public static String test () {
        boog.print();
        Boog boog2 = new Boog("two");
        boog2.print();
        boog2 = null;
        return "done";
    }
}
loadjava -u scott/tiger -r -v Test.java Boog.java
create function javatest return string as
language java name 'Test.test () return java.lang.String';
select javatest from dual;
select javatest from dual;
connect scott/tiger
select javatest from dual;
Now let's look at the trace file for the process:
Oracle8i Enterprise Edition Release 8.1.5.0.2 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
ORACLE_HOME = /ora01/app/oracle/products/8.1.5
System name: Linux
Release: 2.2.14
Version: #9 SMP Sun Mar 5 18:52:44 CST 2000
Machine: i686
Redo thread mounted by this instance: 1
Oracle process number: 15
Unix process pid: 2082, image: oracle@localhost (TNS V1-V3)
*** SESSION ID:(19.600) 2000.04.01.22.39.43.794
Boog one init
Boog one printing
Boog two init
Boog two printing
Boog two cleanup
*** 2000.04.01.22.40.27.455
Boog one printing
Boog two init
Boog two printing
Boog two cleanup
*** SESSION ID:(19.602) 2000.04.01.22.40.35.499
Boog one init
Boog one printing
Boog two init
Boog two printing
Boog two cleanup
As expected, Boog one is reused in the second call in the
first session. Boog two is cleaned up immediately when it's
nulled out.
But Boog one is never cleaned up when the session is ended!
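The pinning behaviour can be demonstrated on any standard JVM, not just Aurora. Here is a minimal plain-Java sketch (class and method names are mine, not part of the trace) showing that an object referenced from a static field survives garbage collection until that field is cleared:

```java
import java.lang.ref.WeakReference;

public class StaticPin {
    static Object cached = new Object();

    // While the static field still references the object, it is strongly
    // reachable, so a GC cycle will not clear a WeakReference to it.
    static boolean pinnedAfterGc() {
        cached = new Object();            // (re)install the static reference
        WeakReference<Object> ref = new WeakReference<>(cached);
        System.gc();
        return ref.get() != null;
    }

    public static void main(String[] args) {
        System.out.println(pinnedAfterGc()); // true: the static field pins it
        cached = null;  // only after dropping the static reference can the
        System.gc();    // collector reclaim the object (and run finalize())
    }
}
```

This is exactly why Boog one is never finalized: the static field in Test keeps it reachable for the life of the session.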
Help!...

To clean up your memory at the end of a call,
you may use the End-Of-Call Callback.
The following code is the example given by Oracle in their documentation:
import oracle.aurora.memoryManager.Callback;
import oracle.aurora.memoryManager.EndOfCallRegistry;

public class EndOfCallExmpl extends Object {
    static String cachedField = null;
    private static Callback thunk = null;

    static void clearCachedField() {
        System.out.println("before clearing the cached field");
        cachedField = null;
        thunk = null;
    }

    public static String getCachedField() {
        if (cachedField == null) {
            // save thunk in static field so it doesn't get reclaimed
            // by garbage collector
            thunk = new Callback () {
                public void act(Object obj) {
                    EndOfCallExmpl.clearCachedField();
                }
            };
            // register thunk to clear cachedField at end of call
            EndOfCallRegistry.registerCallback(thunk);
            // finally, set the cached field
            cachedField = "hello world";
        }
        return cachedField;
    }
}

Similar Messages

  • Redo Log Files - more than 12 per hour

    Hello @all,
    I have a problem with my redo log files. I get more than 12 switches per hour. I have 3 files of 50M each. I increased the size to 150M, but
    I still have 12 switches per hour.
    Does anyone know what I did wrong?
    Database:
    Oracle 9i
    Thanks
    Martin

    user9528362 wrote:
    Hello @all,
    yes, I know that 3 switches per hour would be perfect, but I have already increased the size from 50M to 150M and the number of switches has not been reduced.
    So there must be something else that causes the log switches.
    Martin,
    As I said somewhere above too, 150 MB is a tiny size if you are managing a production DB. I have already mentioned: make your log file size at least 500 MB and then check. As for the high redo activity, only you can confirm whether this started just now or was happening before too. In any case, for an active OLTP system, 500 MB to 1 GB of redo log file size should be okay.
    For the extra redo generation, you have been given a link for mining log files using LogMiner. Try using it to see what is causing the extra redo.
    HTH
    Aman....
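The sizing advice above is simple proportionality: the hourly redo volume is fixed by the workload, so cutting the switch rate means growing the files by the same factor. A rough sketch of that arithmetic (helper names are mine, not from the thread):

```java
public class RedoSizing {
    // Redo generated per hour is (switches/hour x file size); to hit a target
    // switch rate, the file size must absorb that same volume.
    static double requiredLogSizeMb(double switchesPerHour,
                                    double currentSizeMb,
                                    double targetSwitchesPerHour) {
        double redoMbPerHour = switchesPerHour * currentSizeMb;
        return redoMbPerHour / targetSwitchesPerHour;
    }

    public static void main(String[] args) {
        // 12 switches/hour of 150 MB files => ~600 MB files for 3 switches/hour
        System.out.println(requiredLogSizeMb(12, 150, 3));
    }
}
```

On Martin's numbers this lands at 600 MB, consistent with the "at least 500 MB" advice, assuming the redo generation rate itself stays constant.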

  • View to find Physical read and Write per hour

    HI,
    My Oracle database is running 11.2.0.3 and we are analysing it. We would like to understand what the average load per hour is for a database under normal circumstances, e.g. 5 TB/hr?
    What I want to know is: is it possible to find the amount of physical reads and writes per hour on the database? Is there any view for this?
    Thanks,
    Kesav.

    number of READ operations?
    number of bytes read?
    SQL> desc v$IOSTAT_FILE
    Name                                      Null?    Type
    FILE_NO                                            NUMBER
    FILETYPE_ID                                        NUMBER
    FILETYPE_NAME                                      VARCHAR2(28)
    SMALL_READ_MEGABYTES                               NUMBER
    SMALL_WRITE_MEGABYTES                              NUMBER
    LARGE_READ_MEGABYTES                               NUMBER
    LARGE_WRITE_MEGABYTES                              NUMBER
    SMALL_READ_REQS                                    NUMBER
    SMALL_WRITE_REQS                                   NUMBER
    SMALL_SYNC_READ_REQS                               NUMBER
    LARGE_READ_REQS                                    NUMBER
    LARGE_WRITE_REQS                                   NUMBER
    SMALL_READ_SERVICETIME                             NUMBER
    SMALL_WRITE_SERVICETIME                            NUMBER
    SMALL_SYNC_READ_LATENCY                            NUMBER
    LARGE_READ_SERVICETIME                             NUMBER
    LARGE_WRITE_SERVICETIME                            NUMBER
    ASYNCH_IO                                          VARCHAR2(9)
    ACCESS_METHOD                                      VARCHAR2(11)
    RETRIES_ON_ERROR                                   NUMBER
    SQL>
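One caveat when using V$IOSTAT_FILE for a per-hour figure: like most V$ statistics, its counters are cumulative since instance startup, so a rate requires two samples taken an interval apart, with the difference scaled to an hour. A minimal sketch of that calculation (method and parameter names are mine):

```java
public class IoRate {
    // Convert the delta between two cumulative MB counters, sampled
    // intervalMinutes apart, into an MB-per-hour rate.
    static double perHour(double earlierMb, double laterMb, double intervalMinutes) {
        return (laterMb - earlierMb) * (60.0 / intervalMinutes);
    }

    public static void main(String[] args) {
        // 60 MB read between two samples 30 minutes apart => 120 MB/hour
        System.out.println(perHour(100, 160, 30));
    }
}
```

The same differencing applies whichever column pair (small/large, read/write) you sample.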

  • Q: Count concurrent sessions per hours in a specified interval

    Hi,
    I have this table which contains SESSIONID, CREATEDATE, LASTCHECKDATE, EXPIREDATE, PARTNERID.
    We need to make a query that returns the maximum number of concurrent sessions per hour for a specified interval.
    For example, for last week, on a per-hour or per-day basis, the top concurrent session count for each hour or day, depending on the report.
    I can get the number of new sessions per hour with this query:
    SELECT TO_CHAR(createdate, 'YYYY/MM/DD HH24') ||'h ' start_time, COUNT( SESSIONID ) new_sessions, name || ' {' || partnerid || '}' as Partner
    FROM uws
    WHERE expiredate IS NOT NULL
    and partnerid=25
    and TO_CHAR(createdate,'YYYY/MM') = '2010/05'
    group by TO_CHAR(createdate, 'YYYY/MM/DD HH24') ||'h ', name || ' {' || partnerid || '}'
    ORDER BY 1 DESC;
    I think I should use MAX(count(sessionid)) and probably some DECODE when c1 is between createdate and lastcheckdate...
    This would need to run in SQL*Plus from a shell script if possible, and ideally even chart it on Google Charts.
    Any help appreciated; note that I am not an Oracle expert...
    Edited by: user11954725 on Jul 19, 2010 5:55 PM

    Thanks Frank,
    I think we are very close to the solution I am looking for now;
    Here is the script you gave me (with little modifications) and the output;
    WITH all_hrs AS
    (
        SELECT  min_hr + ((LEVEL - 1) / 24)  AS period
        ,       min_hr + (LEVEL / 24)        AS next_period
        FROM    (
                 SELECT  TRUNC (MIN (createdate), 'HH')         AS min_hr
                 ,       TRUNC (MAX (LASTHEARTBEATDATE), 'HH')  AS max_hr
                 FROM    userwebsession
                 WHERE   createdate <= TO_DATE('07-MAY-2010 00.00.00','DD-MON-RR HH24.MI.SS')
                 AND     LASTHEARTBEATDATE >= TO_DATE('07-MAY-2010 23.59.59','DD-MON-RR HH24.MI.SS')
                )
        CONNECT BY LEVEL <= 1 + (24 * (max_hr - min_hr))
    )
    SELECT   TO_DATE(a.period,'DD-MON-YY hh24')  "Period"
    ,        COUNT (u.userwebsessionid)          AS sessions
    FROM     all_hrs  a
    LEFT OUTER JOIN  userwebsession  u  ON  a.period     <= u.LASTHEARTBEATDATE
                                       AND  u.createdate <= a.next_period
    GROUP BY a.period
    ORDER BY a.period;

    SELECT  TRUNC (MIN (createdate), 'HH')         AS min_hr
    ,       TRUNC (MAX (LASTHEARTBEATDATE), 'HH')  AS max_hr
    FROM    SPEAKESL.userwebsession
    WHERE   createdate <= TO_DATE('07-MAY-2010 00.00.00','DD-MON-RR HH24.MI.SS')
    AND     LASTHEARTBEATDATE >= TO_DATE('07-MAY-2010 23.59.59','DD-MON-RR HH24.MI.SS');
    This produced the following output:
    Period SESSIONS
    19-APR-10 15
    19-APR-10 12
    19-APR-10 15
    19-APR-10 18
    19-APR-10 6
    19-APR-10 7
    19-APR-10 6
    19-APR-10 16
    19-APR-10 18
    19-APR-10 21
    19-APR-10 19
    19-APR-10 24
    19-APR-10 15
    19-APR-10 7
    19-APR-10 10
    19-APR-10 6
    19-APR-10 9
    19-APR-10 7
    19-APR-10 6
    20-APR-10 5
    20-APR-10 5
    20-APR-10 6
    20-APR-10 7
    20-APR-10 7
    20-APR-10 13
    20-APR-10 7
    20-APR-10 6
    20-APR-10 4
    20-APR-10 8
    20-APR-10 8
    20-APR-10 6
    20-APR-10 14
    20-APR-10 7
    20-APR-10 5
    20-APR-10 14
    20-APR-10 9
    20-APR-10 9
    20-APR-10 7
    20-APR-10 5
    20-APR-10 4
    20-APR-10 5
    20-APR-10 3
    20-APR-10 4
    21-APR-10 4
    21-APR-10 5
    21-APR-10 5
    21-APR-10 5
    21-APR-10 5
    21-APR-10 5
    21-APR-10 6
    21-APR-10 7
    21-APR-10 8
    21-APR-10 14
    21-APR-10 7
    21-APR-10 8
    21-APR-10 4
    21-APR-10 6
    21-APR-10 10
    21-APR-10 26
    21-APR-10 14
    21-APR-10 10
    21-APR-10 12
    21-APR-10 6
    21-APR-10 7
    21-APR-10 6
    21-APR-10 5
    21-APR-10 6
    22-APR-10 7
    22-APR-10 7
    22-APR-10 7
    22-APR-10 6
    22-APR-10 7
    22-APR-10 8
    22-APR-10 9
    22-APR-10 5
    22-APR-10 21
    22-APR-10 7
    22-APR-10 34
    22-APR-10 29
    22-APR-10 29
    22-APR-10 10
    22-APR-10 21
    22-APR-10 17
    22-APR-10 50
    22-APR-10 43
    22-APR-10 43
    22-APR-10 26
    22-APR-10 13
    22-APR-10 16
    22-APR-10 15
    22-APR-10 35
    23-APR-10 6
    23-APR-10 3
    23-APR-10 4
    23-APR-10 4
    23-APR-10 2
    23-APR-10 3
    23-APR-10 2
    23-APR-10 2
    23-APR-10 4
    23-APR-10 11
    23-APR-10 6
    23-APR-10 14
    23-APR-10 16
    23-APR-10 20
    23-APR-10 11
    23-APR-10 20
    23-APR-10 43
    23-APR-10 30
    23-APR-10 46
    23-APR-10 41
    23-APR-10 26
    23-APR-10 50
    23-APR-10 51
    23-APR-10 66
    24-APR-10 4
    24-APR-10 2
    24-APR-10 2
    24-APR-10 2
    24-APR-10 2
    24-APR-10 2
    24-APR-10 2
    24-APR-10 2
    24-APR-10 2
    24-APR-10 2
    24-APR-10 5
    24-APR-10 5
    24-APR-10 3
    24-APR-10 2
    24-APR-10 3
    24-APR-10 5
    24-APR-10 6
    24-APR-10 5
    24-APR-10 4
    24-APR-10 3
    24-APR-10 4
    24-APR-10 4
    24-APR-10 2
    24-APR-10 2
    25-APR-10 2
    25-APR-10 2
    25-APR-10 2
    25-APR-10 2
    25-APR-10 2
    25-APR-10 2
    25-APR-10 3
    25-APR-10 3
    25-APR-10 4
    25-APR-10 4
    25-APR-10 4
    25-APR-10 3
    25-APR-10 2
    25-APR-10 2
    25-APR-10 5
    25-APR-10 6
    25-APR-10 4
    25-APR-10 5
    25-APR-10 4
    25-APR-10 5
    25-APR-10 6
    25-APR-10 5
    25-APR-10 3
    25-APR-10 3
    09-MAY-10 7
    09-MAY-10 8
    09-MAY-10 8
    09-MAY-10 6
    09-MAY-10 8
    09-MAY-10 8
    09-MAY-10 5
    09-MAY-10 5
    09-MAY-10 5
    09-MAY-10 5
    09-MAY-10 5
    09-MAY-10 5
    09-MAY-10 5
    09-MAY-10 5
    09-MAY-10 5
    10-MAY-10 7
    10-MAY-10 6
    10-MAY-10 6
    10-MAY-10 7
    10-MAY-10 5
    10-MAY-10 5
    10-MAY-10 5
    10-MAY-10 5
    10-MAY-10 6
    10-MAY-10 12
    10-MAY-10 12
    10-MAY-10 20
    10-MAY-10 12
    10-MAY-10 13
    10-MAY-10 14
    10-MAY-10 17
    10-MAY-10 12
    10-MAY-10 15
    10-MAY-10 14
    10-MAY-10 12
    10-MAY-10 8
    10-MAY-10 8
    10-MAY-10 7
    10-MAY-10 7
    11-MAY-10 7
    11-MAY-10 7
    11-MAY-10 8
    11-MAY-10 8
    11-MAY-10 7
    11-MAY-10 30
    11-MAY-10 37
    11-MAY-10 43
    11-MAY-10 22
    11-MAY-10 14
    11-MAY-10 17
    11-MAY-10 19
    11-MAY-10 26
    11-MAY-10 20
    11-MAY-10 20
    11-MAY-10 30
    11-MAY-10 14
    11-MAY-10 18
    11-MAY-10 11
    11-MAY-10 12
    11-MAY-10 8
    11-MAY-10 8
    11-MAY-10 10
    11-MAY-10 8
    12-MAY-10 14
    12-MAY-10 12
    12-MAY-10 75
    12-MAY-10 51
    12-MAY-10 38
    12-MAY-10 39
    12-MAY-10 22
    12-MAY-10 17
    12-MAY-10 13
    12-MAY-10 12
    12-MAY-10 11
    12-MAY-10 17
    12-MAY-10 30
    12-MAY-10 28
    12-MAY-10 23
    12-MAY-10 20
    12-MAY-10 18
    12-MAY-10 12
    12-MAY-10 15
    12-MAY-10 16
    12-MAY-10 14
    12-MAY-10 28
    569 rows selected
    MIN_HR MAX_HR
    19-APR-10 12-MAY-10
    The output now seems to produce the concurrent session counts as needed, but the date range shown is not exactly right.
    I expected the output to cover only the range specified in the parameters, which for this example is a single day.
    If we ask for more than a few days, we would like to display the maximum number of concurrent sessions per day, and optionally the daily average, for every day in the period.
    So, based on the above output, this next-level report would look as follows (for the period 19-APR-10 to 27-APR-10):
    19-APR-10 24
    20-APR-10 14
    21-APR-10 26
    22-APR-10 50
    23-APR-10 66
    24-APR-10 6
    25-APR-10 6
    26-APR-10 105
    27-APR-10 44
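The join condition in the script above (a.period <= lastheartbeatdate AND createdate <= a.next_period) counts a session in every hour bucket its lifetime overlaps. The same logic can be sketched client-side in Java for clarity (class names and the array-pair session layout are my assumptions, not from the thread):

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;

public class HourlyConcurrency {
    // sessions: pairs of {createdate, lastheartbeat}; counts each session
    // into every hour bucket it overlaps, mirroring the SQL join condition.
    static Map<LocalDateTime, Integer> sessionsPerHour(
            List<LocalDateTime[]> sessions, LocalDateTime from, LocalDateTime to) {
        Map<LocalDateTime, Integer> out = new TreeMap<>();
        for (LocalDateTime h = from.truncatedTo(ChronoUnit.HOURS);
             h.isBefore(to); h = h.plusHours(1)) {
            LocalDateTime next = h.plusHours(1);
            int n = 0;
            for (LocalDateTime[] s : sessions) {
                // mirrors: period <= lastheartbeat AND createdate <= next_period
                if (!h.isAfter(s[1]) && !s[0].isAfter(next)) n++;
            }
            out.put(h, n);
        }
        return out;
    }
}
```

Note that, like the SQL, this counts sessions alive at any point in the hour, which is an upper bound on (not exactly equal to) the peak simultaneous count within that hour.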

  • Please help, how to send mails faster / send more mails per hour

    hello,
    in my application I am using a mail sender class I have created to send mail to users inviting them to participate in a survey. The code follows. I would like to know if there is anything wrong in it, because it takes too much time to send the mails: it is taking 2 minutes to send 6 mails, i.e. 360 mails per hour only.
    Below is how I instantiate the mail sender class and then generate an HTTP link string dynamically, as it is different for every user.
    //////////class where mail sender is instantiated////////////////////
    try {
        setConnection();
        st = con.createStatement();
        rs = st.executeQuery("select * from "+CNAME+"_campaign");
        String SurveyT = new String();
        while (rs.next())
            SurveyT = rs.getString(2);
        rs.close();
        rs = st.executeQuery("select * from "+CNAME+"_user");
        ss = new MailSender();
        while (rs.next()) {
            String userid = rs.getString("userid");
            String password = rs.getString("password");
            StringBuffer message = new StringBuffer(BodyText.getText().trim());
            if (SurveyT.equals("invitational") || SurveyT.equals("single")) {
                message.append("\n" + "http://"+IPadd.getText().trim()+"/"+CNAME+"/servlet/login?username="+userid+"&passw="+password);
                ss.send(FromField.getText().trim(), userid, SmtpServerID.getText().trim(), MailSub.getText().trim(), message.toString());
            } else if (SurveyT.equals("general")) {
                message.append("\n" + "http://"+IPadd.getText().trim()+"/"+CNAME+"/Index.html");
                ss.send(FromField.getText().trim(), userid, SmtpServerID.getText().trim(), MailSub.getText().trim(), message.toString());
            }
        }
        st.close();
        this.dispose();
    } catch (SQLException sqlex) {
        JOptionPane.showMessageDialog(null, sqlex.getMessage());
    }
    //Mail Sender class/////////////////
    import javax.mail.*;
    import javax.mail.internet.*;
    import java.util.*;
    import javax.swing.*;

    public class MailSender {
        String sentAddr, fromAddr, smtpServer, body, subject;

        public MailSender() { }

        // function send to send the mail
        public void send(String from, String to, String smtps, String subj, String messagetext) {
            fromAddr = from;
            sentAddr = to;
            smtpServer = smtps;
            body = messagetext;
            subject = subj;
            try {
                Properties props = System.getProperties();
                props.put("mail.smtp.host", smtpServer);
                Session session = Session.getDefaultInstance(props, null);
                Message msg = new MimeMessage(session);
                msg.setFrom(new InternetAddress(fromAddr));
                msg.setRecipients(Message.RecipientType.TO, InternetAddress.parse(sentAddr, false));
                msg.setSubject(subject);
                msg.setText(body);
                msg.setHeader("Survey", "MailCheck");
                msg.setSentDate(new Date());
                Transport.send(msg);
            } catch (MessagingException mex) {
                JOptionPane.showMessageDialog(null, mex.getMessage());
            }
        }
    }

    Lots of variables here... Also, my maths says only 180 per hour, i.e. three a minute.
    1) you are using a database to get info from. What is the average response time of the DB server? Looks like you are doing one SQL then reading the result table but does the initial SQL take a while?
    2) how much data are you passing on to the SMTP server and how fast/slow is the link to that SMTP server? Work out the absolute max amount of data you can transfer over the link then get your average message size and work out a VERY theoretical Max number of messages a minute. Note that real life might approach 80% of this taking TCP/IP and SMTP overheads into account.
    3) What sort of load is the SMTP server under? If it's busy you will only be getting a fraction of whatever bandwidth is available. Depending on its design it may be trying to deliver the first message you sent it while you are still pumping more messages down to it. SMTP servers may limit the number of connections per minute from another machine in order to defeat a denial of service attack. Your code makes a connection per email so this may have relevance here.
    4) Raw horsepower always helps. When I write stuff to do things like this there is no nice GUI screen etc. Just basic Java that if it has to will write a log if something goes wrong. Maybe just maybe a counter on STD out to show it is still actually doing something. Keep the number of classes used down to the bare minimum. In the old days we used to spend days paring code to the bone - a skill somewhat lost these days.
    Hope this gives you some help in finding the bottleneck.
    Cheers,
    SH
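The arithmetic behind the "360 vs 180" discrepancy: 6 mails in 2 minutes is 20 seconds per mail, which extrapolates to 180 mails per hour, as the reply points out. A trivial sketch of that extrapolation (names are mine):

```java
public class Throughput {
    // Extrapolate an hourly rate from the measured per-message latency.
    static int mailsPerHour(double secondsPerMail) {
        return (int) (3600 / secondsPerMail);
    }

    public static void main(String[] args) {
        System.out.println(mailsPerHour(120.0 / 6));  // 2 minutes for 6 mails
    }
}
```

Measuring secondsPerMail separately for the DB query and the SMTP send (points 1 and 2 above) is the quickest way to locate the bottleneck.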

  • How much data does the apple maps app use per hour ?

    hello,
    I would like to know if somebody could give me a rough number for how much data the Apple Maps app uses per hour for navigation?

    Apple's Maps app uses vector graphics, so it doesn't have to download data every time your view changes, & also has an offline mode...you can download quite a large area over WiFi & then use this cache to navigate.
    So, I really wouldn't worry too much about its data usage. Most tests I've read indicate it uses about 80% less data than Google Maps.

  • 12GB per hour of HD footage instead of 29GB?

    Hi,
    I recently switched over to an HD camera (Sony HDR-HC1E). This is a DV tape cam.
    I'm editing in iMovie 6.0.4.
    I always back up my files by exporting via iMovie "full quality". When I used to do this with SD content I would get a DV file as the result; now with HD I get a .mov file.
    I used to get a 12GB file for a 60-min movie. Now with HD I am getting 29GB per hour.
    I hear other people talking about 12GB per hour for HD content.
    Can anyone please tell me how this is possible?
    I am going through external hard drives at lightning speed!
    Thanks
    Dave

    Hi Dave
    There's actually a big difference between a DV stream (whether recorded to tape or file) and a .mov file.
    DV or HDV footage is recorded at 25Mbps - megabits per second. This translates to 90000Mb per hour, divide by 8 to convert bits to bytes, and you get 11.2GB/hour. That's constant, whether on the tape or saved as a DV file. Note that DV is pretty compressed.
    Now, when iMovie (or any other video conversion program) saves a .mov file, it's no longer a DV stream. It can be any of the codecs (video formats) that your program supports, and there are a wide range of codecs for both SD and HD. These different codecs will use different file sizes depending on their level of compression. (Just like mp3, a compressed audio codec, vs AIFF, an uncompressed codec.)
    You can choose to optimize video on import or choose a smaller format on export (large, rather than full size) which will reduce the frame size of your footage, which may be a good solution for you, depending on what you ultimately plan to do with the material. (Edit: actually, not with iMovie 6 - thanks, Karsten Newer versions of iMovie will help you there. )
    Matt
    Message was edited by: Matt Clifton
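Matt's conversion can be written out explicitly; a small sketch of the same arithmetic (class and method names are mine):

```java
public class TapeBitrate {
    // DV/HDV is a constant 25 Mbit/s stream: multiply by 3600 seconds,
    // divide by 8 (bits -> bytes) and by 1000 (MB -> GB, decimal units).
    static double gbPerHour(double megabitsPerSecond) {
        return megabitsPerSecond * 3600 / 8 / 1000;
    }

    public static void main(String[] args) {
        System.out.println(gbPerHour(25));  // ~11.25 GB/hour for DV or HDV
    }
}
```

The ~29 GB/hour Dave sees is therefore not the tape stream itself but whatever larger intermediate codec his export wrote into the .mov container.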

  • What setting is 'DVD quality' widescreen - ie about 3gb per hour?

    I have videoed (isn't that such a quaint 20th-century word for the podcast age?) a conference and I want to archive the footage to a DVD for later re-editing.
    The "full quality" setting is massive; I only have 90 mins, but a 20-min clip takes up over 10Gb.
    The 'CD-ROM quality' is just not DVD quality.
    (I am using iMovie 06 btw; I will move on to 08 when it is finished.)
    Ideally, I would just burn the whole lot raw with iDVD and import it again if I need to re-edit, but (for copyright as opposed to technical reasons, I suppose) we can't just put a DVD in the slot and rip it like we can an audio CD in iTunes. I have tried the workarounds, but none of them worked for me.
    Anyway, I digress: What setting is 'DVD quality' widescreen - ie about 3gb per hour?

    Hi there.
    *Keep the quality*
    Once you have compressed data you can't get the original quality back - so store at the fullest quality possible - and in several formats so there is always a backup.
    Media
    data cds don't hold that much - 700mb - so forget about them for quality productions and archiving
    dvd-roms hold much more - 4,700mb (you can get dual layer ones that double it)
    *'share' settings*
    Now, you have 2 options for saving the movie;
    1. as a file on a DVD-ROM (a compact .mov or a massive .dv)
    2. as a proper playable DVD (iDVD)
    If the "full quality" iM setting is too big a file for a dvd-rom (and you will be lucky to get 10 mins of footage in 4.7gb), then you could 'iDVD' it and that way you will get a good quality movie of up to nearly 2 hours on a disc that can be played on most machines (and could even be ripped back into an editable form using third party software)
    Alternatively, save it using 'expert settings' listed above and you will probably get good enough results for a longer film without having to burn it as a playable (but uneditable) dvd.
    Bear in mind that unless you are using a pro camera, you may not notice much difference with the higher settings.
    Good luck - seeing your work on the big screen is a blast!

  • Rows per hour

    Dear All,
    I have a table with 2 columns: one has varchar-type data and the other is a timestamp (04/02/2013 19:44:40).
    Now I want to write a query to count the number of rows per hour in the table. The structure is as follows:
    ddata varchar2(20)
    ddate date with timestamp;
    I require the output as follows:
    March 28th 12 AM - 20
    March 28th 1 AM - 40
    March 28th 2 AM - 40
    where March 28th 12 AM is the date with time
    and 20 - number of rows in that date and in that hour..
    Please advise..

    Hi,
    936074 wrote:
    Dear All,
    I have a table with 2 columns: one has varchar-type data and the other is a timestamp (04/02/2013 19:44:40).
    Now I want to write a query to count the number of rows per hour in the table. The structure is as follows:
    Whenever you have a problem, post CREATE TABLE and INSERT statements for a little sample data, and the results you want from that data.
    >
    ddata varchar2(20)
    ddate date with timestamp;
    What is this? DATE and TIMESTAMP are 2 different datatypes.
    If ddate is a DATE, then it is not a TIMESTAMP.
    If ddate is a TIMESTAMP, then it is not a DATE.
    Require the output as follows :
    March 28th 12 AM - 20
    March 28th 1 AM - 40
    March 28th 2 AM - 40
    where March 28th 12 AM is the date with time
    and 20 - number of rows in that date and in that hour..
    Please advise..
    Here's one way:
    SELECT    TRUNC (ddate, 'HH')  AS hour
    ,         COUNT (*)            AS cnt
    FROM      table_x
    GROUP BY  TRUNC (ddate, 'HH')
    ORDER BY  TRUNC (ddate, 'HH')  -- Or hour
    ;
    TRUNC (ddate) will return a DATE, regardless of whether ddate is a DATE or a TIMESTAMP.
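The TRUNC(ddate, 'HH') + GROUP BY idea maps directly onto client-side code as well. A minimal Java sketch of the same hourly bucketing (class and method names are mine), in case the counting ever has to happen outside SQL:

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;

public class RowsPerHour {
    // Truncate each timestamp to its hour and count occurrences per bucket,
    // the client-side equivalent of TRUNC(ddate, 'HH') ... GROUP BY.
    static Map<LocalDateTime, Long> countPerHour(List<LocalDateTime> stamps) {
        Map<LocalDateTime, Long> counts = new TreeMap<>();
        for (LocalDateTime t : stamps) {
            counts.merge(t.truncatedTo(ChronoUnit.HOURS), 1L, Long::sum);
        }
        return counts;
    }
}
```

The TreeMap keeps the hour buckets sorted, matching the ORDER BY in the SQL version.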

  • How much is a GRC consultant paid per hour?

    I know that in the USA, the rate for FICO consultants is in the range of 100-150 dollars per hour. Will a GRC consultant get more than this or less?
    There is a qualification called CISA. Is GRC related to this in any way?
    Is a GRC consultant involved in SOC compliance? What exactly does he do?

    Hi Suchita,
   This community is meant for technical and BPX discussion, not for discussing hourly rates. You may find them via a Google search.
    Rgds,
    Asok

  • Cost Per hour, Costs per route not updating in S114 structure

    Dear PM Gurus,
    I am using the MET unit system for vehicle consumption analysis. After completing the measurement documents and material issuance through IFCU, I checked the MCIZ report, but I could not find the values for Costs per Hour and Costs per Route. Please guide me: where did I make a mistake, in "Configure measurement document updation" or in the calculation method?
    I am eagerly waiting for replies.
    regards
    JKM

    It's not possible to pinpoint a specific section where it might have gone wrong. I suggest you check the complete configuration under Settings for Fleet Management. Also check the configuration in IMG - PM & CS - Information Systems - Configure Measurement Document Update.

  • Peak No. Of Requests/ Per Hour in BW

    Suppose my cube gives a response time of 5 seconds for a given query.
    How does one estimate, assuming bandwidth can scale up, at what number of peak requests (per hour, per minute, or per second) the BW server would show considerable performance degradation in terms of longer query runtimes?
    How much of this can be addressed by caching? Are there situations where the cache would get corrupted due to a high number of requests, each potentially with a different set of variable inputs?


  • Refresh portlet report more frequently than once per hour

    When you publish a report from Discoverer in the portal, you can tell it to refresh automatically, but the most frequent refresh it allows is once per hour. Is there some way I can refresh a report more frequently than that? Can I use the scheduled workbook functionality to achieve this, since there you can schedule it to run as frequently as once per minute?
    I'm using Discoverer 10.1.2.
    Thanks
    -Nissim

    These are two different schedulers...
    We are considering changing the UI to let the users specify a schedule frequency more frequent than once an hour. But that is in a future, as yet undecided release.
    For the time being, you can resort to hacks to refresh Discoverer portlets more frequently than once an hour. Some information on this is available on Metalink. I shall see if I can publish a blog post in the next few days on how to do that. Of course, you have to realize that something like this is not encouraged or supported.
    Thanks
    Abhinav

  • Condition for charging tenant at per hour basis

    Hi Experts,
    I need to charge a fixed hourly rental from a customer based on air-conditioner (AC) running time.
    For example, Rs. 2000 per hr will be charged, where Rs. 2000 is a constant rate, but the hours the AC is run by the customer vary: one month it may be 160 hrs, another month 200 hrs, and so on.
    Therefore the rent also varies, i.e. first month Rs. 320,000 (2000 x 160) and second month Rs. 400,000 (2000 x 200).
    Please provide the solution for the same.
    Regards,
    Manish
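The charging rule itself is just the fixed rate times the metered hours; only the hours vary month to month. A trivial sketch of the computation (names are mine):

```java
public class AcRent {
    // Fixed hourly rate x metered AC running hours for the month.
    static long monthlyCharge(long ratePerHour, long hoursRun) {
        return ratePerHour * hoursRun;
    }

    public static void main(String[] args) {
        System.out.println(monthlyCharge(2000, 160));  // Rs. 320,000
        System.out.println(monthlyCharge(2000, 200));  // Rs. 400,000
    }
}
```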

    Hi Manish,
    You can design your requirement in different ways, depending on the level of requirements.
    One way could be sales-based settlement: maintain a sales-based condition type and a sales rule with your fixed price (Rs. 2000).
    Then update the sales reports for consumption (AC utilization) and run the settlement run.
    Hope it helps; let me know if you need any more detail.
    regards,
    Srini

  • Failed Logins Per Hour

    I have a NW 6.5 sp5 server that is showing a very high number of
    'Failed Logins Per Hour'. All the login attempts are as follows:
    Time: Tuesday, 3-20-2007 9:20 am
    Address: IP 192.168.25.43
    User: .CN=MTA.CN=USACSCMAIL01.OU=MAIL.O=USAMAIL.T=USAMAI L.
    The user is the MTA within a GroupWise system. The server showing the
    problem is 1 of 3 POs in a GroupWise system. The other 2 POs do not
    show this problem. The only thing different about this one is that it
    is running iManager. I have seen several mentions of this problem but
    I cannot find any resolutions. I would appreciate any information on
    why this is happening and how to stop it.
    thanks,
    -ch

    I guess you may have more success by posting this to the Groupwise
    forums.
    cimetmc
    Marcel Cox
    http://support.novell.com/forums
