Why is it taking so much time to kill the process?

Hi All,
Today, one of my users ran a calc script and the process was taking too much time, so I tried to kill it. What puzzles me is that even the kill itself is taking too long; generally it takes no more than 2 seconds. I did this through EAS.
After that I ran the MaxL statement
alter system kill request 552599515;
but it had no effect at all.
Please reply if you have any suggestions for killing this process.
Thanks in advance.
Ram.

Hi Ram,
1. First, how long does your calculation script normally run?
2. While it is running, check the logs to see where exactly the script is spending its time.
3. Sometimes it does take time to cancel a transaction (it may be in the middle of one).
4. MaxL is always a good way to kill a request, as you did; it should be successful. Check what the logs say, and also the "sessions" list, which may show the session as "terminating", and finish it off from there.
5. If nothing works and, in the worst-case scenario, the kill just hangs without doing anything, log off all the users, then stop and restart the database.
6. Do log off all the users first, so that you don't corrupt the filter-related security (.sec) file.
Be very careful if this is production (and I assume you have recent backups).
Sandeep Reddy Enti
HCC
http://hyperionconsultancy.com/

Similar Messages

  • Why is the query taking too much time?

    Hi gurus,
    I have a table named test which has 100,000 records in it. Now, the question I would like to ask is:
    when I query select * from test; there is no problem with response time, but when I fire the same query the next day it takes too much time, roughly 3 times as long. I would also like to add that everything is fine with respect to tuning; the database is properly tuned and the network is tuned properly. What could be the hurting factor here?
    take care
    All expertise.

    Here is a small test on my windows PC.
    oracle 9i Rel1.
    Table : emp_test
    number of records : 42k
    set autot trace exp stat
    15:29:13 jaffar@PRIMEDB> select * from emp_test;
    41665 rows selected.
    Elapsed: 00:00:02.06 ==> response time.
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
    1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
    Statistics
    0 recursive calls
    0 db block gets
    2951 consistent gets
    178 physical reads
    0 redo size
    1268062 bytes sent via SQL*Net to client
    31050 bytes received via SQL*Net from client
    2779 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    41665 rows processed
    15:29:40 jaffar@PRIMEDB> delete from emp_test where deptno = 10;
    24998 rows deleted.
    Elapsed: 00:00:10.06
    15:31:19 jaffar@PRIMEDB> select * from emp_test;
    16667 rows selected.
    Elapsed: 00:00:00.09 ==> response time
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
    1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
    Statistics
    0 recursive calls
    0 db block gets
    1289 consistent gets
    0 physical reads
    0 redo size
    218615 bytes sent via SQL*Net to client
    12724 bytes received via SQL*Net from client
    1113 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    16667 rows processed

  • [SOLVED]nm-applet taking too much memory (sometimes all the processes)

    Hello,
    One of the reasons I chose Archlinux (ArchBang) was to keep the system small enough to fit in my little 512 MB of RAM, and I've been happy so far, but for some reason I noticed that nm-applet is now taking a *lot* of memory, around 40 MB, while it didn't use that much before.
    Here is my ps_mem after a fresh boot:
    Private + Shared = RAM used Program
    108.0 KiB + 59.0 KiB = 167.0 KiB gnome-pty-helper
    252.0 KiB + 112.0 KiB = 364.0 KiB dbus-launch
    268.0 KiB + 109.5 KiB = 377.5 KiB rtkit-daemon
    360.0 KiB + 57.0 KiB = 417.0 KiB dhcpcd
    364.0 KiB + 146.5 KiB = 510.5 KiB lxdm-binary
    644.0 KiB + 86.5 KiB = 730.5 KiB systemd-logind
    412.0 KiB + 387.0 KiB = 799.0 KiB gconfd-2
    740.0 KiB + 142.5 KiB = 882.5 KiB gnome-keyring-daemon
    588.0 KiB + 456.0 KiB = 1.0 MiB (sd-pam)
    516.0 KiB + 529.5 KiB = 1.0 MiB gconf-helper
    588.0 KiB + 461.5 KiB = 1.0 MiB at-spi-bus-launcher
    676.0 KiB + 505.5 KiB = 1.2 MiB at-spi2-registryd
    848.0 KiB + 370.5 KiB = 1.2 MiB lxdm-session
    1.1 MiB + 127.0 KiB = 1.3 MiB systemd-udevd
    1.2 MiB + 88.5 KiB = 1.3 MiB systemd-journald
    1.4 MiB + 174.0 KiB = 1.6 MiB bash
    992.0 KiB + 711.0 KiB = 1.7 MiB dbus-daemon (3)
    1.2 MiB + 538.0 KiB = 1.8 MiB sudo
    840.0 KiB + 1.3 MiB = 2.1 MiB systemd (2)
    1.7 MiB + 876.5 KiB = 2.5 MiB conky
    2.0 MiB + 667.0 KiB = 2.6 MiB wpa_supplicant
    3.8 MiB + 985.5 KiB = 4.8 MiB pulseaudio
    3.6 MiB + 1.4 MiB = 5.0 MiB NetworkManager
    6.1 MiB + 613.5 KiB = 6.7 MiB polkitd
    22.2 MiB + 2.0 MiB = 24.2 MiB dunst
    22.6 MiB + 2.0 MiB = 24.6 MiB tint2
    23.7 MiB + 2.1 MiB = 25.8 MiB openbox
    27.5 MiB + 2.7 MiB = 30.2 MiB lxterminal
    26.4 MiB + 4.9 MiB = 31.2 MiB volumeicon
    29.4 MiB + 5.7 MiB = 35.0 MiB nm-applet
    42.7 MiB + 1.2 MiB = 43.9 MiB Xorg.bin
    255.8 MiB
    =================================
    I used to have about 75 MB used in the same situation.
    Here is ps_mem after some use; things look normal except for nm-applet.
    Private + Shared = RAM used Program
    4.0 KiB + 31.0 KiB = 35.0 KiB gnome-pty-helper
    4.0 KiB + 48.0 KiB = 52.0 KiB dbus-launch
    4.0 KiB + 57.5 KiB = 61.5 KiB lxdm-binary
    4.0 KiB + 71.0 KiB = 75.0 KiB (sd-pam)
    24.0 KiB + 59.5 KiB = 83.5 KiB rtkit-daemon
    8.0 KiB + 87.0 KiB = 95.0 KiB cat (2)
    4.0 KiB + 95.5 KiB = 99.5 KiB lxdm-session
    4.0 KiB + 99.0 KiB = 103.0 KiB at-spi-bus-launcher
    20.0 KiB + 87.0 KiB = 107.0 KiB chrome-sandbox (2)
    4.0 KiB + 130.0 KiB = 134.0 KiB gconf-helper
    160.0 KiB + 51.0 KiB = 211.0 KiB dhcpcd
    140.0 KiB + 103.5 KiB = 243.5 KiB gconfd-2
    260.0 KiB + 131.5 KiB = 391.5 KiB at-spi2-registryd
    452.0 KiB + 51.0 KiB = 503.0 KiB systemd-udevd
    492.0 KiB + 40.5 KiB = 532.5 KiB systemd-logind
    444.0 KiB + 170.5 KiB = 614.5 KiB pulseaudio
    500.0 KiB + 196.5 KiB = 696.5 KiB conky
    620.0 KiB + 115.0 KiB = 735.0 KiB wpa_supplicant
    932.0 KiB + 48.5 KiB = 980.5 KiB systemd-journald
    628.0 KiB + 362.0 KiB = 990.0 KiB tint2
    612.0 KiB + 534.5 KiB = 1.1 MiB dbus-daemon (3)
    492.0 KiB + 659.0 KiB = 1.1 MiB bash (2)
    1.0 MiB + 275.5 KiB = 1.2 MiB systemd (2)
    1.3 MiB + 330.0 KiB = 1.6 MiB openbox
    1.5 MiB + 370.0 KiB = 1.9 MiB sudo
    1.2 MiB + 737.5 KiB = 1.9 MiB dunst
    816.0 KiB + 1.4 MiB = 2.2 MiB volumeicon
    1.8 MiB + 766.5 KiB = 2.5 MiB NetworkManager
    2.2 MiB + 866.0 KiB = 3.1 MiB lxterminal
    2.7 MiB + 447.0 KiB = 3.1 MiB polkitd
    13.8 MiB + 267.5 KiB = 14.1 MiB Xorg.bin
    20.4 MiB + 357.0 KiB = 20.7 MiB nacl_helper
    38.2 MiB + 3.1 MiB = 41.3 MiB nm-applet
    171.4 MiB + 77.6 MiB = 249.1 MiB chrome (10)
    351.5 MiB
    =================================
    I could live with an applet taking 40 MB on a beefier machine, but sitting just behind Chrome looks wrong. I'm not sure what has changed; at the moment I just kill the process. I considered switching to wicd if it's any better, but it appears it is not well maintained.
    Any hint?
    Last edited by Bombombom (2014-09-05 10:47:42)

    Thanks for your answers.
    I needed to use my computer and it wasn't usable, so I just reinstalled my distro and went back to normal. While tweaking the OS I rebooted several times to check whether it happened again, and how.
    The problem of my system taking ~240 MB at boot versus 75 MB was actually the NVIDIA driver: after installing nvidia-304xx, memory use went up a lot, and removing the driver and switching back to nouveau made the system far less greedy memory-wise. There is still a bit of mystery in the one case where everything was fine except nm-applet, but I think it's half solved.
    PS: I gave zswap a shot, but it made my computer unusable and it froze a lot.

  • Why is it taking so much time to log in on Yosemite and to load webpages? Whenever I put it in sleep mode the battery drains, causing overheating. What might be the cause and the solution? I have projects going on; please help. Thanks.

    thanks

    Reset PRAM:  http://support.apple.com/kb/PH18761
    Reset SMC:   http://support.apple.com/kb/HT3964
    Choose the method for "Resetting SMC on portables with a battery you should not remove on your own".
    Start up in Safe Mode:  http://support.apple.com/en-us/HT1564

  • Resizing tablespace is taking too much time and freezing the database

    This is my database setup:
    3 node rac on 32 bit linux (rh 4)
    2 SANs (3gb + 2 gb)
    asm config: everything is default from installation
    current datafiles: 3.2T out of 5.0T (autoextend on)
    This is what I do to resize the tablespace manually, to test how long it takes:
    alter tablespace my_tbs resize 3173562M; //from 3172538M (adding another 1gb)
    And it took 33 minutes to complete.
    Could someone tell me what is wrong?
    PS: when I check instance performance, the SMON process accounts for about 97% of database activity, and while the resize is running the database seems to freeze.
    Thanks,
    Chau
    Message was edited by:
    user626162

    Sorry, it is 5 terabytes total.
    There are 2 SANs; one is 1.7 TB (one partition), the other has 3 partitions of 1.1 TB each. I have only one disk group.
    I used a bigfile tablespace, so there is only one big file for this tablespace. The tablespace is 3.2 TB used, with autoextend on (increment size is 1 GB).
    this is the command I used to manually resize the tablespace:
    alter tablespace my_table_space resize <3229923M>; //current size + 1G
    Thanks,
    Chau

  • Client import taking too much time

    hi all,
    I am importing a client. It has completed copying 19,803 of 19,803 tables, but for the last four hours its status has been "Processing".
    scc3
    Target Client           650
    Copy Type               Client Import Post-Proc
    Profile                 SAP_CUST
    Status                  Processing...
    User                    SAP*
    Start on                24.05.2009 / 15:08:03
    Last Entry on           24.05.2009 / 15:36:25
    Current Action:         Post Processing
    -  Last Exit Program    RGBCFL01
    Transport Requests
    - Client-Specific       PRDKT00004
    - Texts                 PRDKX00004
    Statistics for this Run
    - No. of Tables             19803 of     19803
    - Deleted Lines                 7
    - Copied Lines                  0
    sm50
    1 DIA 542           Running Yes             SAPLTHFB 650 SAP*     
    7 BGD 4172   Running Yes 11479  RGTBGD23 650 SAP* Sequential Read     D010INC
    sm66
    Server  No. Type PID Status  Reason Sem Start Error CPU Time   User Report   Action          Table
    prdsap_PRD_00  7  BTC 4172 Running   Yes    11711 SAP* RGTBGD23 Sequential Read D010INC
    Please guide me on why it is taking so much time when it has already finished most of the work.
    Best regards,
    Khan

    The import is in post-processing. It digs through all the documents and adapts them to the new client. Most of the tables in the application area have a MANDT (= client) field which needs to be changed. Depending on the size of the client this can take a huge amount of time.
    You can try to improve the speed by updating the table statistics for table D010INC.
    Markus

  • Job is taking too much time

    Hi,
    I am running one SQL script that takes 1 hour, but when I schedule the same code in a job using a package it takes 5 hours.
    Could anybody suggest why it is taking so much time?
    Regards
    Gagan

    Use TRACE and TKPROF with wait events to see where time is being spent (or wasted).
    See these informative threads:
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    HOW TO: Post a SQL statement tuning request - template posting
    Also you can use V$SESSION and/or V$SESSION_LONGOPS to see what code is currently executing.

  • Job is taking too much time during Delta loads

    hi
    When I tried to extract delta records from R/3 for the standard extractor 0FI_GL_4, it took 46 minutes even though there are very few delta records (193 records only).
    Please find attached the R/3 job log. The majority of the time is spent in the call to the customer enhancement BW_BTE_CALL_BW204010_E.
    Please let me know why this is taking so much time.
    06:10:16  4 LUWs confirmed and 4 LUWs to be deleted with FB RSC2_QOUT_CONFIRM_DATA
    06:56:46  Call up of customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 193 records
    06:56:46  Result of customer enhancement: 193 records
    06:56:46  Call up of customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 193 records
    06:56:46  Result of customer enhancement: 193 records
    06:56:46  Asynchronous sending of data package 1 in task 0002 (1 parallel tasks)
    06:56:47  IDOC: InfoIDOC 2, IDOC no. 121289649, duration 00:00:00
    06:56:47  IDOC: Begin 09.05.2011 06:10:15, end 09.05.2011 06:10:15
    06:56:48  Asynchronous sending of InfoIDOCs 3 in task 0003 (1 parallel tasks)
    06:56:48  Through selection conditions, 0 records filtered out in total
    06:56:48  IDOC: InfoIDOC 3, IDOC no. 121289686, duration 00:00:00
    06:56:48  IDOC: Begin 09.05.2011 06:56:48, end 09.05.2011 06:56:48
    06:56:54  tRFC: Data package 1, TID = 3547D5D96D2C4DC7740F217E, duration 00:00:07, ARFCSTATE =
    06:56:54  tRFC: Begin 09.05.2011 06:56:47, end 09.05.2011 06:56:54
    06:56:55  Synchronous sending of InfoIDOCs 4 (0 parallel tasks)
    06:56:55  IDOC: InfoIDOC 4, IDOC no. 121289687, duration 00:00:00
    06:56:55  IDOC: Begin 09.05.2011 06:56:55, end 09.05.2011 06:56:55
    06:56:55  Job finished
    Regards
    Atul

    Hi Atul,
    Have you written any customer exit code . If yes check for the optimization for it .
    Kind Regards,
    Ashutosh Singh

  • Server0 process taking too much time

    Hi All,
    Once I start the NetWeaver server, the server0 process takes too much time.
    When I installed NetWeaver it took 13 minutes; after 2 months, 18 minutes; then 25 minutes; and now it takes 35 minutes to turn green.
    Why is it taking so much time? What might be the cause?
    Please give me some ideas to solve this problem.
    The server0 developer trace shows the following information continuously, 6 to 7 times:
    [Thr 4204] *************** STISEND ***************
    [Thr 4204] STISEND: conversation_ID: 86244265
    [Thr 4204] STISEND: sending 427 bytes
    [Thr 4204] STISearchConv: found conv without search
    [Thr 4204] STISEND: send synchronously
    [Thr 4204] STISEND GW_TOTAL_SAPSEND_HDR_LEN: 88
    [Thr 4204] NiIWrite: hdl 0 sent data (wrt=515,pac=1,MESG_IO)
    [Thr 4204] STIAsSendToGw: Send to Gateway o.k.
    [Thr 4204] STIAsRcvFromGw: timeout value: -1
    [Thr 4204] NiIRead: hdl 0 recv would block (errno=EAGAIN)
    [Thr 4204] NiIRead: hdl 0 received data (rcd=3407,pac=2,MESG_IO)
    [Thr 4204] STIAsRcvFromGw: Receive from Gateway o.k.
    [Thr 4204] STISEND: data_received: CM_COMPLETE_DATA_RECEIVED
    [Thr 4204] STISEND: received_length: 3327
    [Thr 4204] STISEND: status_received: CM_SEND_RECEIVED
    [Thr 4204] STISEND: request_to_send_received: CM_REQ_TO_SEND_NOT_RECEIVED
    [Thr 4204] STISEND: ok
    [Thr 4204] STIRCV: new buffer state = BUFFER_EMPTY
    [Thr 4204] STIRCV: ok
    [Thr 4204] *************** STSEND ***************
    [Thr 4204] STSEND: conversation_ID: 86244265
    [Thr 4204] STISearchConv: found conv without search
    [Thr 4204] STSEND: new buffer state = BUFFER_DATA
    [Thr 4204] STSEND: 106 bytes buffered
    [Thr 4204] *************** STIRCV ***************
    [Thr 4204] STIRCV: conversation_ID: 86244265
    [Thr 4204] STIRCV: requested_length: 16000 bytes
    [Thr 4204] STISearchConv: found conv without search
    [Thr 4204] STIRCV: send 106 buffered bytes before receive
    [Thr 4204] STIRCV: new buffer state = BUFFER_DATA2
    [Thr 4204] *************** STISEND ***************
    then
    [Thr 4252] JHVM_NativeGetParam: get profile parameter DIR_PERF
    [Thr 4252] JHVM_NativeGetParam: return profile parameter DIR_PERF=C:\usr\sap\PRFCLOG
    this message repeats continuously.
    Is there any solution for the above problem? Please let me know.
    Thanks & regards,
    Sridhar M.

    Hello Manoj,
    Thanks for your quick response. Previously the server had 4 GB of RAM, and it still has the same now.
    Yesterday I found some more information: applications deployed (through SDM) also take some memory when the J2EE server starts. Is that right?
    If there is any other cause, let me know.
    Any other cause...let me know
    Thanks & Regards,
    Sridhar M.

  • Query taking too much time with dates??

    hello folks,
    I am trying to pull some data using a date condition, and for some reason it takes too much time to return the data.
       and trunc(al.activity_date) = TRUNC (SYSDATE, 'DD') - 1     -- with this predicate it takes too much time
      and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
       and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- with these it returns the data in a second. Why is that?
    How do I get the previous day without hardcoding to_date('20101123 000000', 'YYYYMMDD HH24MISS'), while still retrieving the data quickly?

    Presumably you've got an index on activity_date.
    If you apply a function like TRUNC to activity_date, the index can no longer be used.
    Post execution plans to verify.
    and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
    and al.activity_date < TRUNC (SYSDATE, 'DD')
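    If the date bounds are built in application code rather than in SQL (the original post doesn't say, so this is an assumption), the same half-open previous-day interval can be computed with java.time and bound as PreparedStatement parameters, avoiding both the hardcoded literal and the index-defeating TRUNC on the column:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

public class PreviousDayRange {
    // Returns the half-open interval [start, end) covering the day before `today`,
    // matching the sargable predicate: activity_date >= ? AND activity_date < ?
    static LocalDateTime[] previousDayRange(LocalDate today) {
        LocalDateTime start = today.minusDays(1).atStartOfDay();
        LocalDateTime end = today.atStartOfDay();
        return new LocalDateTime[] { start, end };
    }

    public static void main(String[] args) {
        LocalDateTime[] r = previousDayRange(LocalDate.of(2010, 11, 24));
        System.out.println(r[0] + " .. " + r[1]);
        // Bind as parameters, e.g.:
        // ps.setTimestamp(1, java.sql.Timestamp.valueOf(r[0]));
        // ps.setTimestamp(2, java.sql.Timestamp.valueOf(r[1]));
    }
}
```

    The half-open upper bound (`< midnight today`) also avoids the 23:59:59 literal, which silently drops rows with sub-second timestamps.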

  • Initramfs too slow and ISR taking too much time !

    On my SMP platform (2 cores with 10 siblings each) I have a PCIe device, a video card, which does DMA transfers to the application buffers.
    This works fine with SUSE 13.1 installed on an SSD drive.
    When I took the whole image, created an initramfs from it, and ran from that (without any HDD), the whole system is very slow, and I am seeing PCIe ISRs taking too much time, hence the driver is failing.
    Any help on this is much appreciated.
    There is a first initramfs image, the usual initrd that openSUSE installs, which I patched to unpack the second initramfs (a full rootfs of 1.8 GB) into RAM (32 GB) as tmpfs and exec_init from it.
    Last edited by Abhayadev S (2015-05-21 16:28:38)

    Abhayadev S,
    Although your problem definitely looks very interesting, we can't help with issues with SUSE Linux here.
    Or are you considering testing Arch Linux on that machine?

  • Sites Taking too much time to open and shows error

    hi, 
    I've set up a SharePoint 2013 environment correctly and created a site collection. Everything was working fine, but suddenly, when I try to open that site collection or the Central Administration site, it takes too much time to open a page; most of the time it
    does not open any page at all and shows the following error.
    I went to the logs folder under the 15 hive but found nothing useful. Please tell me why it takes about 10-12 minutes to open a site or any page and then shows the error above.

    This usually happens if you are low on hardware. Check whether your machine meets the required software and hardware requirements.
    https://technet.microsoft.com/en-us/library/cc262485.aspx
    http://sharepoint.stackexchange.com/questions/58370/minimum-real-world-system-requirements-for-sharepoint-2013
    Please remember to up-vote or mark the reply as answer if you find it helpful.

  • Taking too much time using BufferedWriter to write to a file

    Hi,
    I'm using the method extractItems(), given below, to write data to a file. It takes too much time to execute when the number of records in the enumeration is 10,000 or more; to be precise, around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much longer. Has somebody faced this problem before, and if so, what could be the cause? This is very high priority work and it would be really helpful if someone could give me some info on this.
    Thanks in advance.
    public String extractItems() throws InternalServerException {
        try {
            String extractFileName = getExtractFileName();
            FileWriter fileWriter = new FileWriter(extractFileName);
            BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
            CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
            CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
            PrintWriter out = new PrintWriter(bufferWrt);
            TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
            XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
            Enumeration allitems = itemSet.allItems();
            System.out.println("the batch size : " + itemSet.getBatchSize());
            XDForm frm = itemSet.getXDForm();
            XDFormProperty[] props = frm.getXDFormProperties();
            bufferWrt.newLine();
            while (allitems.hasMoreElements()) {
                XDItem item = (XDItem) allitems.nextElement();
                // build each row with a StringBuilder instead of repeated String concatenation
                StringBuilder aRow = new StringBuilder();
                for (int i = 0; i < props.length; i++) {
                    String value = item.getStringValue(props[i]); // was getStringValue(props): the index was missing
                    if (value == null || value.equalsIgnoreCase("null"))
                        value = "";
                    if (i > 0)
                        aRow.append('\t');
                    aRow.append(value);
                }
                bufferWrt.write(aRow.toString());
                bufferWrt.newLine();
                // no flush() here: flushing every row defeats the purpose of the BufferedWriter
            }
            out.close(); // closing the outermost writer flushes and closes the whole chain
            return extractFileName;
        } catch (Exception e) {
            e.printStackTrace();
            throw new InternalServerException(e.getMessage());
        }
    }

    Hi fiontan,
    Thanks a lot for the response!
    Yeah, I know it's a lot of code, but I thought it'd be more informative to quote the whole function.
    I'm in fact using the PrintWriter to wrap the BufferedWriter, but I am not using the print() method.
    Does using the print() method save any time?
    The place where the delay occurs is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush();       // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time here is--after-writing to buffer : " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, and then it starts off again, and it goes on like that until the records are done.
    Please do let me know if you have any idea why this is happening! This bug is giving me a scare.
    Thanks in advance.
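    The flush() calls added inside the loop are themselves a likely culprit: flushing after every row pushes each line to the OS individually and defeats the BufferedWriter entirely, and the per-row System.out.println timing calls add further overhead. Below is a minimal sketch of the same write pattern with a single flush at close time; the class name, file name, and row data are made up for illustration:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferedExtract {
    // Writes rows tab-separated; the writer is flushed and closed exactly once,
    // when the try-with-resources block exits.
    static Path writeRows(String[][] rows) throws IOException {
        Path file = Files.createTempFile("extract", ".txt");
        try (BufferedWriter w = Files.newBufferedWriter(file)) {
            for (String[] row : rows) {
                w.write(String.join("\t", row)); // one write per row, no flush()
                w.newLine();
            }
        } // flush + close happen here, once
        return file;
    }

    public static void main(String[] args) throws IOException {
        Path p = writeRows(new String[][] { { "a", "b" }, { "c", "d" } });
        System.out.println(Files.readAllLines(p)); // prints the two rows read back
    }
}
```

    Building each row with StringBuilder (or String.join, as here) instead of repeated String concatenation also avoids creating a new String object per column.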

  • While condition is taking too much time

    I have a query that returns around 2100 records (not many!), but when I process the result set with a while loop it takes too much time (around 30 seconds). Here is the code:
    public static GroupHierEntity load(Connection con) throws SQLException {
        internalCustomer = false;
        String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
        // the null check must come first, or startsWith() throws a NullPointerException
        if (customerNameOfLogger == null || customerNameOfLogger.equals("")
                || customerNameOfLogger.startsWith("DPI") || customerNameOfLogger.startsWith("DUPONT")
                || customerNameOfLogger.equals("Unavailable")) {
            internalCustomer = true;
        }
        // show all groups to internal customers and only their own customer groups to external customers
        if (internalCustomer) {
            stmtLoad = con.prepareStatement(sqlLoad);
            ResultSet rs = stmtLoad.executeQuery();
            return new GroupHierEntity(rs);
        } else {
            stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
            stmtLoadExternal.setString(1, customerNameOfLogger);
            stmtLoadExternal.setString(2, customerNameOfLogger);
            ResultSet rs = stmtLoadExternal.executeQuery();
            return new GroupHierEntity(rs);
        }
    }

    GroupHierEntity ge = GroupHierEntity.load(con);
    while (ge.next()) {
        lvl = ge.getInt("lvl");
        oid = ge.getLong("oid");
        name = ge.getString("name");
        if (internalCustomer && lvl == 2) {
            int i = getAlphaIndex(name);
            super.setAppendRoot(alphaIndex);
        }
        gn = new GroupListDataNode(lvl + 1, oid, name);
        gn.setSelectable(true);
        this.addNode(gn);
        count++;
    }
    System.out.println("*** count " + count);
    ge.close();
    ========================
    Then I removed everything in the while body and just ran it as is; it still takes the same time (30 secs):
    while (ge.next())
    { count++; }
    Why is the while condition (ge.next()) taking so much time? Is there any more efficient way of reading the result set?
    Thanks,
    bala

    I tried all these things. The query itself is not taking much time (1 sec), but resultset.next() is taking too much time. I measured by putting System.out.println at various points to see which part takes how long.
    executeQuery() takes only 1 sec. Processing the result set (moving the cursor to the next position) is what takes all the time.
    I have similar queries that return some 800 rows, and they take only 1 sec.
    So I suspect resultset.next(). Is there any alternative?
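    One common cause of a slow ResultSet.next() is the driver's fetch size: the Oracle JDBC driver fetches only 10 rows per network round trip by default, so 2100 rows cost roughly 210 round trips, and each next() that crosses a batch boundary blocks on the network. Calling Statement.setFetchSize() before executeQuery() (500 below is an arbitrary illustrative value) cuts the trip count dramatically. The arithmetic can be sketched as:

```java
public class FetchSizeMath {
    // Number of network round trips needed to pull `rows` rows
    // when the driver fetches `fetchSize` rows per trip (ceiling division).
    static int roundTrips(int rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize;
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(2100, 10));  // Oracle's default fetch size of 10
        System.out.println(roundTrips(2100, 500)); // after stmt.setFetchSize(500)
    }
}
```

    In the code above, adding `stmtLoad.setFetchSize(500);` (and the same on stmtLoadExternal) before executeQuery() would apply this; whether it explains the full 30 seconds depends on the network latency, so measuring before and after is the way to confirm.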

  • Application diagnostics taking too much time

    EBS Version : 12.1.3
    Application diagnostics is taking too much time. How do I investigate why it is taking so long? I also want to know how to cancel this request.

    Application diagnostics is taking too much time. How do I investigate why it is taking so long? I also want to know how to cancel this request.
    Is this happening for all diagnostics scripts or specific one(s) only?
    E-Business Suite Diagnostics Installation Guide (Doc ID 167000.1)
    E-Business Suite Diagnostics References for R12 (Doc ID 421245.1)
    E-Business Suite Diagnostic Tools FAQ and Troubleshooting Guide for Release 11i and R12 (Doc ID 235307.1)
    You can cancel it from OAM, and you might need to cancel the database session from the backend as well.
    Thanks,
    Hussein
