Insert query takes forever

Hi,
I have this INSERT query and it never finishes. I need some suggestions; can someone please help? It would be much appreciated.
INSERT INTO PIMITEK.BENE_SEC_PCPTNT_FINAL
SELECT P.PART_ID, MAX(TH.TSKID || TT.TSKDESC)
  FROM PIMITEK.BENE_SEC_PCPTNT P
  LEFT JOIN TSKIDENT TI
    ON SUBSTR(P.PART_ID, 1, 9) = SUBSTR(TI.FIELDVALUE, 1, 9)
   AND TI.IDCODE = 18
   AND TI.FIELDNBR = 1
   AND SUBSTR(TI.FIELDVALUE, 10, 1) = ' '
  LEFT JOIN COMPTSKIDENT CT
    ON SUBSTR(P.PART_ID, 1, 9) = SUBSTR(CT.FIELDVALUE, 1, 9)
   AND CT.IDCODE = 18
   AND CT.FIELDNBR = 1
   AND SUBSTR(CT.FIELDVALUE, 10, 1) = ' '
  JOIN TSKHIST TH
    ON (TI.TSKID = TH.TSKID OR CT.TSKID = TH.TSKID)
   AND TH.TSKCODE IN (SELECT TSKCODE
                        FROM TSKTYPE
                        JOIN PIMITEK.BENE_SEC_TSKTYPES
                          ON TSKDESC = TASK_TYPE)
  JOIN TSKTYPE TT
    ON TH.TSKCODE = TT.TSKCODE
 WHERE P.PART_ID NOT IN (SELECT P2.PART_ID
                           FROM PIMITEK.BENE_SEC_PCPTNT_FINAL P2)
 GROUP BY P.PART_ID;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | INSERT STATEMENT | | 1 | 229 | 611K (1)| 02:02:14 | | |
| 1 | HASH GROUP BY | | 1 | 229 | 611K (1)| 02:02:14 | | |
|* 2 | HASH JOIN | | 1 | 229 | 611K (1)| 02:02:14 | | |
| 3 | NESTED LOOPS | | 1 | 218 | 611K (1)| 02:02:14 | | |
| 4 | NESTED LOOPS | | 1 | 203 | 611K (1)| 02:02:14 | | |
| 5 | NESTED LOOPS | | 1 | 188 | 611K (1)| 02:02:14 | | |
| 6 | NESTED LOOPS OUTER | | 1 | 172 | 370K (1)| 01:14:11 | | |
|* 7 | HASH JOIN OUTER | | 1 | 103 | 3443 (1)| 00:00:42 | | |
|* 8 | HASH JOIN ANTI | | 1 | 34 | 5 (20)| 00:00:01 | | |
| 9 | TABLE ACCESS FULL | BENE_SEC_PCPTNT | 1 | 17 | 2 (0)| 00:00:01 | | |
| 10 | TABLE ACCESS FULL | BENE_SEC_PCPTNT_FINAL | 1 | 17 | 2 (0)| 00:00:01 | | |
| 11 | PARTITION HASH ALL | | 528 | 36432 | 3438 (1)| 00:00:42 | 1 | 10 |
|* 12 | TABLE ACCESS FULL | TSKIDENT | 528 | 36432 | 3438 (1)| 00:00:42 | 1 | 10 |
| 13 | PARTITION HASH ALL | | 2445 | 164K| 367K (1)| 01:13:30 | 1 | 40 |
|* 14 | INDEX RANGE SCAN | COMPTSKIDENTNP_IDENTIF_IDCODE | 2445 | 164K| 367K (1)| 01:13:30 | 1 | 40 |
| 15 | PARTITION HASH ALL | | 432 | 6912 | 240K (1)| 00:48:03 | 1 | 10 |
|* 16 | INDEX FAST FULL SCAN | TSKHIST_TSK_OP_PD_ST_AC_DP | 432 | 6912 | 240K (1)| 00:48:03 | 1 | 10 |
| 17 | TABLE ACCESS BY INDEX ROWID| TSKTYPE | 1 | 15 | 1 (0)| 00:00:01 | | |
|* 18 | INDEX UNIQUE SCAN | TSKTYPE_TSKCODE | 1 | | 0 (0)| 00:00:01 | | |
| 19 | TABLE ACCESS BY INDEX ROWID | TSKTYPE | 1 | 15 | 0 (0)| 00:00:01 | | |
|* 20 | INDEX UNIQUE SCAN | TSKTYPE_TSKCODE | 1 | | 0 (0)| 00:00:01 | | |
| 21 | TABLE ACCESS FULL | BENE_SEC_TSKTYPES | 17 | 187 | 3 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
2 - access("TSKDESC"="TASK_TYPE")
7 - access(SUBSTR("P"."PART_ID",1,9)=SUBSTR("TI"."FIELDVALUE"(+),1,9))
8 - access("P"."PART_ID"="P2"."PART_ID")
12 - filter("TI"."IDCODE"(+)=18 AND SUBSTR("TI"."FIELDVALUE"(+),10,1)=' ' AND "TI"."FIELDNBR"(+)=1)
14 - access("CT"."IDCODE"(+)=18 AND "CT"."FIELDNBR"(+)=1)
filter(SUBSTR("CT"."FIELDVALUE"(+),10,1)=' ' AND SUBSTR("P"."PART_ID",1,9)=SUBSTR("CT"."FIELDVALUE"(+),1,9))
16 - filter("TI"."TSKID"="TH"."TSKID" OR "CT"."TSKID"="TH"."TSKID")
18 - access("TH"."TSKCODE"="TSKTYPE"."TSKCODE")
20 - access("TH"."TSKCODE"="TT"."TSKCODE")

Hello,
It seems you have partitioned tables. Try the query without the INSERT statement, and then, instead of the INSERT, use CREATE TABLE ... AS (for test purposes). You should also collect statistics on all the tables involved in this query. How many rows do you have in the table BENE_SEC_TSKTYPES? Next time you post output, enclose it in code tags to preserve the formatting for better readability.
You can also try the APPEND hint and see if this speeds things up:
INSERT /*+ APPEND */ INTO pimitek.bene_sec_pcptnt_final ....
create table mytest_table as
SELECT p.part_id, MAX (th.tskid || tt.tskdesc)
FROM pimitek.bene_sec_pcptnt p
LEFT JOIN
tskident ti
ON SUBSTR (p.part_id, 1, 9) = SUBSTR (ti.fieldvalue, 1, 9)
AND ti.idcode = 18
AND ti.fieldnbr = 1
AND SUBSTR (ti.fieldvalue, 10, 1) = ' '
LEFT JOIN
comptskident ct
ON SUBSTR (p.part_id, 1, 9) = SUBSTR (ct.fieldvalue, 1, 9)
AND ct.idcode = 18
AND ct.fieldnbr = 1
AND SUBSTR (ct.fieldvalue, 10, 1) = ' '
JOIN
tskhist th
ON (ti.tskid = th.tskid OR ct.tskid = th.tskid)
AND th.tskcode IN (SELECT tskcode
FROM tsktype
JOIN
pimitek.bene_sec_tsktypes
ON tskdesc = task_type)
JOIN
tsktype tt
ON th.tskcode = tt.tskcode
WHERE p.part_id NOT IN (SELECT p2.part_id
FROM pimitek.bene_sec_pcptnt_final p2)
GROUP BY p.part_id;
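One more angle worth trying (a sketch only, untested, using the column names from your query): OR-ed join conditions such as (ti.tskid = th.tskid OR ct.tskid = th.tskid) often prevent hash joins and force nested loops like the ones in your plan. Splitting the OR into two branches combined with UNION ALL sometimes lets the optimizer hash-join each branch:

```sql
-- Hypothetical rewrite: resolve each OR branch separately, then take the
-- MAX per participant. For brevity this omits the TH.TSKCODE IN (...)
-- filter and the NOT IN anti-join on the FINAL table, which would still
-- need to be applied to each branch (or to the outer query).
SELECT part_id, MAX(tsk)
FROM (
    SELECT p.part_id, th.tskid || tt.tskdesc AS tsk
    FROM pimitek.bene_sec_pcptnt p
    JOIN tskident ti
      ON SUBSTR(p.part_id, 1, 9) = SUBSTR(ti.fieldvalue, 1, 9)
     AND ti.idcode = 18 AND ti.fieldnbr = 1
     AND SUBSTR(ti.fieldvalue, 10, 1) = ' '
    JOIN tskhist th ON ti.tskid = th.tskid
    JOIN tsktype tt ON th.tskcode = tt.tskcode
    UNION ALL
    SELECT p.part_id, th.tskid || tt.tskdesc
    FROM pimitek.bene_sec_pcptnt p
    JOIN comptskident ct
      ON SUBSTR(p.part_id, 1, 9) = SUBSTR(ct.fieldvalue, 1, 9)
     AND ct.idcode = 18 AND ct.fieldnbr = 1
     AND SUBSTR(ct.fieldvalue, 10, 1) = ' '
    JOIN tskhist th ON ct.tskid = th.tskid
    JOIN tsktype tt ON th.tskcode = tt.tskcode
)
GROUP BY part_id;
```

A participant matched through both TSKIDENT and COMPTSKIDENT shows up in both branches, but the GROUP BY/MAX collapses the duplicates, so the result should match the original's grouping. Verify the row counts against the original before relying on it.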
Regards

Similar Messages

  • Insert query takes too much time

I have two select clauses as follows:
"select * from employee", which returns 6000 rows.
And the next clause:
"select * from employee where userid in(1,2,3,....,3000)", which returns 3000 rows.
Now I have to insert the results of these queries into the same extended list view in Visual Basic, but the insert for the first query takes 11 seconds while the second takes 34 seconds. We have verified that this time is spent in the insert, not in the select.
I want to know why the first query, even though it returns 6000 rows, takes less time than the second, which inserts only 3000 rows.
We are using Oracle 8.1.7.
Thanks in advance

The first query can do a straight dump of the table. The second select has to compare every userid against a hardcoded list of 3000 numbers, which takes quite a bit longer. If the ids are contiguous, try rewriting it as
select * from employee where userid between 1 and 3000
It should run much faster than the other query.
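When the ids are not contiguous, another common approach (a hypothetical sketch; the table name wanted_ids is invented for illustration) is to load the 3000 ids into a temporary table once and join to it, instead of shipping a huge IN list in the SQL text:

```sql
-- Hypothetical: stage the wanted ids once, then join instead of IN (...).
CREATE GLOBAL TEMPORARY TABLE wanted_ids (
    userid NUMBER PRIMARY KEY
) ON COMMIT PRESERVE ROWS;

-- The application populates wanted_ids, then runs:
SELECT e.*
  FROM employee e
  JOIN wanted_ids w ON w.userid = e.userid;
```

This keeps the statement short, lets the optimizer use a hash join against the id set, and avoids re-parsing a different 3000-literal IN list on every call.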

  • Query takes forever to execute

    This query executes forever:
select count(*)
from VIEW_DOCUMENT
where exists
(select VIEW_DOCUMENT_ITEM.documentID
from VIEW_DOCUMENT_ITEM
where VIEW_DOCUMENT_ITEM.documentID=VIEW_DOCUMENT.documentID)
    Here's what's inside VIEW_DOCUMENT_ITEM:
    select *
    from DOCUMENT_ITEM
    join VIEW_USER
    If I replace "join VIEW_USER" with "join USER" the query executes normally in few seconds!
    This makes no sense because here's what's in VIEW_USER:
    select * from USER
    Does anybody know what are possible causes for this strange behaviour?

Actually my query is much more complex; this is a simplified version.
In the original query it is 100% not a Cartesian join.
The result of my query is 2710 and the result of Sven's query is 2711.
My question is only why joining a table works normally while joining a view ("select * from table") takes forever.
I can't see the execution plan because my query never ends; I have to kill the Linux process.
Here is the TKPROF output. I don't know what this stuff means:
    TKPROF: Release 10.2.0.1.0 - Production on Tue Mar 4 14:21:57 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Trace file: orcl_ora_29715.trc
    Sort options: prsela  exeela  fchela 
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    alter session set sql_trace true
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 64 
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       58      0.00       0.01          0          0          0           0
    Execute    308      0.05       0.06          0          0          0           0
    Fetch      758      0.03       0.16         38       1188          0        2086
    total     1124      0.10       0.24         38       1188          0        2086
    Misses in library cache during parse: 13
    Misses in library cache during execute: 13
        1  user  SQL statements in session.
      308  internal SQL statements in session.
      309  SQL statements in session.
    Trace file: orcl_ora_29715.trc
    Trace file compatibility: 10.01.00
    Sort options: prsela  exeela  fchela 
           1  session in tracefile.
           1  user  SQL statements in trace file.
         308  internal SQL statements in trace file.
         309  SQL statements in trace file.
          14  unique SQL statements in trace file.
        3007  lines in trace file.
           0  elapsed seconds in trace file.
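One note on "I can't see the execution plan because my query never ends": EXPLAIN PLAN only parses and optimizes the statement without executing it, so it works even for a query that never finishes. A generic sketch, using the view names from the post:

```sql
-- EXPLAIN PLAN does not run the query; it just records the chosen plan.
EXPLAIN PLAN FOR
SELECT COUNT(*)
  FROM view_document d
 WHERE EXISTS (SELECT 1
                 FROM view_document_item i
                WHERE i.documentid = d.documentid);

-- Then display it (available in 10g, the version in this thread):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Comparing the plan with "join USER" against the plan with "join VIEW_USER" should show exactly where the optimizer changes strategy.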

  • Insert query takes long time

    Hi,
    I have written a procedure that does the following :
    1- Creates a temp table
    2- INSERT /*+ append */ INTO <temp table>
    (Select ......,
    (select sum(amt)
    from tbl1 b
    where b.col1=a.col2 and b.col3=a.col3 and (b.col4=a.col4 OR b.col5=a.col5) ...
    grp by col1,col2,col3),
    from tbl1 a
    where a.col1=.................
    3- Query to delete the duplicate rows from the temp table....
    4- Populate summarized data from TEMP TABLE to tbl1
    5- drops the temp table.
Now this procedure takes around 2-3 minutes for 2 lakh (200,000) records on the local db, but when executed on the production db it takes 4 hours when called from the application.
When the log was reviewed, it took 4 hours just to insert the data into the TEMP TABLE.
(*Note: this problem occurs only after executing the procedure continuously for a week or so. When the BEA server is restarted and the application is run, it works fine, but after a few days the performance deteriorates again.)
What could be the reason for this weird problem?!!! Please give me some tips that can help me figure out the problem.
Thanks
Shruts.

    **I have renamed the cols and table.
    CREATE OR REPLACE PROCEDURE PROC_TEST(p_1 IN number,p_2 IN number,p_3 IN varchar2)
    is
    vsql varchar2(5000);
    Error_Message varchar2(200);
    vcalc varchar2(150);
    v_rec_present number;
    begin
         SELECT nvl(count(*),0) into v_rec_present
         FROM USER_TABLES
         WHERE upper(TABLE_NAME)=upper('TMP_TEST');
         if v_rec_present>0 then
         EXECUTE IMMEDIATE 'DROP table TMP_TEST';
         end if;
         EXECUTE IMMEDIATE 'CREATE TABLE TMP_TEST as select * from TBL1 where 1=2';
         vsql:= 'select distinct NULL as col1,a.scol2,a.ncol3,a.scol4, a.ncol5,a.scol6,a.dcol7,a.dcol8,null as dcol9,a.scol10,';
         vsql:=vsql||' nvl((select sum(t.amount) as TOTAL_AMT ';
         vsql:=vsql||' from TBL1 t';
         vsql:=vsql||' where t.ncol3='||p_1;
         vsql:=vsql||' and t.ncol3=a.ncol3';
         vsql:=vsql||' and t.scol4=a.scol4 and t.scol10=a.scol10 ';
         vsql:=vsql||' and (t.ncol5=a.ncol5 OR t.scol6=a.scol6)';
         vsql:=vsql||' and t.ncol11=a.ncol11      and t.ccol12=a.ccol12';
         vsql:=vsql||' and t.dcol7=a.dcol7 and t.dcol8=a.dcol8';     
         vsql:=vsql||' group by t.ncol3,t.scol4,';
         vsql:=vsql||' t.dcol7,t.dcol8, t.scol10,t.ncol11, t.ccol12),0) as amount,';
         vsql:=vsql||' a.ccol12,a.ncol11,';
         vsql:=vsql||' a.ncol13, null as ncol14, null as scol15, null as dcol16, null as scol17, null as scol18,';
         vsql:=vsql||' a.scol19, a.ccol20,a.sUser, sysdate as date_ins, a.site_ins,null as description';
         vsql:=vsql||' from TBL1 a';
         vsql:=vsql||' where a.ncol3='||p_1;
         if p_2=1 then
         vcalc:=' SYSDATE ';
         else
              vcalc:='to_date((to_char(to_date('''||p_3||''',''dd-mon-yyyy hh:mi:ss am''),''mm/dd/yyyy'') ) ,''mm/dd/yyyy'')' ;
         end if;
         vsql:=vsql||' and (case when a.ccol20=''TC'' and a.ccol12=''R''';
         vsql:=vsql||' then (case when '||p_2||'=0 and to_date(to_char(a.dcol9,''mm/dd/yyyy''),''mm/dd/yyyy'')='||vcalc;
         vsql:=vsql||' then ''TRUE'' ELSE ''FALSE'' END)';
         vsql:=vsql||' else ''TRUE'' END)=''TRUE''';
         /* if accrual flag=0 and calcenddt<>NULL then */
         if p_2=0 and p_3 is not NULL then
              vsql:=vsql||' and (a.dcol9 is null oR     (to_char(a.dcol9,''YYYY'')<=to_char(sysdate,''YYYY'') and';
              vsql:=vsql||' to_date(to_char(a.dcol9,''mm/dd/yyyy hh24:mi:ss''),''mm/dd/yyyy hh24:mi:ss'')<=to_date((to_char(to_date('''||p_3||''',''dd-mon-yyyy hh:mi:ss am''),''mm/dd/yyyy'') || ''23:59:59'') ,''mm/dd/yyyy hh24:mi:ss'')))';
         elsif p_2=1 then
              vsql:=vsql||' and (a.dcol9 is null oR     ((to_char(a.dcol9,''YYYY'')<=to_char(sysdate,''YYYY'') or';
              vsql:=vsql||' to_char(a.dcol9,''YYYY'')>to_char(sysdate,''YYYY''))))';
    end if;
    vsql:= 'INSERT /*+ append */ INTO TMP_TEST '|| vsql;
    EXECUTE IMMEDIATE vsql;
    EXECUTE IMMEDIATE 'truncate table TBL1';
    EXECUTE IMMEDIATE 'INSERT INTO TBL1 SELECT * FROM TMP_TEST';
    EXECUTE IMMEDIATE 'DROP table TMP_TEST';
    vsql:='DELETE from TBL1 a';
    vsql:=vsql||' where a.ncol3='||p_1;
    vsql:=vsql||' and ROWID<>(select min(ROWID)';
    vsql:=vsql||' from TBL1 b';
    vsql:=vsql||' where (a.scol6=b.scol6 OR a.ncol5=b.ncol5)';
    vsql:=vsql||' AND b.ncol3='||p_1;
    vsql:=vsql||' and a.ncol3=b.ncol3 and a.scol4=b.scol4 and a.scol10=b.scol10';
    vsql:=vsql||' and a.dcol7=b.dcol7';
    vsql:=vsql||' and a.dcol8=b.dcol8';
    vsql:=vsql||' and a.ccol12=b.ccol12 and a.ncol11=b.ncol11';
    vsql:=vsql||' group by a.scol4,a.dcol7,a.dcol8,a.scol10,a.ccol12,a.ccol20)';
    EXECUTE IMMEDIATE vsql;
    EXCEPTION
         WHEN OTHERS THEN
              Error_Message := 'Error in executing SP "PROC_TEST" '|| chr(10) ||'Error Code is ' || SQLCODE || Chr(10) || 'Error Message is ' || SQLERRM;
              dbms_output.put_line ('ERROR:-'||Error_message);
              raise;
    end;
    ******************************************************************************************
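A side observation on the duplicate-removal step (an untested sketch against the renamed columns in the post): the correlated MIN(ROWID) subquery re-scans TBL1 for every candidate row. An analytic ROW_NUMBER pass visits the table once. Note the original duplicate key contains an OR (scol6 OR ncol5), which a plain PARTITION BY cannot express; this sketch assumes a simple equality key, so the business rule must be checked before using it:

```sql
-- Hypothetical single-pass dedup: keep the first ROWID per key, delete the rest.
-- :p_1 stands in for the procedure parameter p_1.
DELETE FROM tbl1
 WHERE ROWID IN (
    SELECT rid
      FROM (SELECT ROWID AS rid,
                   ROW_NUMBER() OVER (
                       PARTITION BY ncol3, scol4, scol10, dcol7, dcol8,
                                    ccol12, ncol11
                       ORDER BY ROWID) AS rn
              FROM tbl1
             WHERE ncol3 = :p_1)
     WHERE rn > 1);
```

Also worth knowing: the /*+ APPEND */ direct-path insert always writes above the table's high-water mark, so a table that is repeatedly appended to and then deleted from (rather than truncated) keeps growing, and full scans of it get slower over time. That pattern would fit the "fine after restart, degrades over days" symptom if the temp table were permanent, although here the temp table is dropped each run.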

  • Problem with Top N Query when no rows returned (takes forever)

    I have a table with 100 Million rows and I want to get the latest N records using:
    SELECT * FROM
    (SELECT * FROM tablename WHERE columnA= 'ABC' ORDER BY TIME DESC)
    WHERE rownum <= N;
This works fine and is very fast when there are rows with columnA = 'ABC', but when there are no rows with columnA = 'ABC' the query takes forever.
The strange thing is that the inner query returns immediately when run on its own when no rows with columnA = 'ABC' exist, e.g.
SELECT * FROM tablename WHERE columnA= 'ABC' ORDER BY TIME DESC
So why does it take forever to run:
SELECT * FROM (no rows inner query) WHERE rownum <= N;
I have also tried using:
SELECT * FROM
(SELECT columnA, rank() over(ORDER BY TIME DESC) time_rank
FROM tablename WHERE columnA='ABC')
WHERE time_rank <= N;
which returns instantly when there are no rows but takes much longer than the first query when there are rows.

I cannot see a real difference. With a histogram, though, we can see a difference in elapsed time when no row is returned, and in the explain plan:
    SQL> drop table tablename
      2  /
    Table dropped.
    Elapsed: 00:00:00.03
    SQL>
    SQL> create table tablename
      2  as
      3  select sysdate - l time
      4         , decode(trunc(dbms_random.value(1,10)),1,'ABC',2,'DEF',3,'GHI',4,'JKL','MNO') as columnA
      5    from (select level l from dual connect by level <= 1000000)
      6  /
    Table created.
    Elapsed: 00:01:19.08
    SQL>
    SQL> select columnA,count(*) from tablename group by columnA
      2  /
    COL   COUNT(*)
    ABC     110806
    DEF     111557
    GHI     111409
    JKL     111030
    MNO     555198
    Elapsed: 00:00:05.05
    SQL>
    SQL> create index i1 on tablename(time)
      2  /
    Index created.
    Elapsed: 00:00:34.08
    SQL>
    SQL> create index i2 on tablename(columna)
      2  /
    Index created.
    Elapsed: 00:00:30.08
    SQL>
    SQL> exec dbms_stats.gather_table_stats(user,'TABLENAME',cascade=>true)
    PL/SQL procedure successfully completed.
    Elapsed: 00:01:18.09
    SQL>
    SQL> set autotrace on explain
    SQL> SELECT * FROM
      2  (SELECT * FROM tablename WHERE columnA= 'ABC' ORDER BY TIME DESC)
      3  WHERE rownum <= 10
      4  /
    TIME     COL
    17/09/06 ABC
    12/09/06 ABC
    08/09/06 ABC
    07/09/06 ABC
    25/08/06 ABC
    22/08/06 ABC
    13/08/06 ABC
    08/07/06 ABC
    14/06/06 ABC
    01/05/06 ABC
    10 rows selected.
    Elapsed: 00:00:01.04
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2364 Card=10 Bytes=120)
       1    0   COUNT (STOPKEY)
       2    1     VIEW (Cost=2364 Card=200000 Bytes=2400000)
       3    2       SORT (ORDER BY STOPKEY) (Cost=2364 Card=200000 Bytes=2400000)
       4    3         TABLE ACCESS (FULL) OF 'TABLENAME' (Cost=552 Card=200000 Bytes=2400000)
    SQL>
    SQL> SELECT * FROM
      2  (SELECT * FROM tablename WHERE columnA= 'MNO' ORDER BY TIME DESC)
      3  WHERE rownum <= 10
      4  /
    TIME     COL
    20/09/06 MNO
    19/09/06 MNO
    16/09/06 MNO
    14/09/06 MNO
    13/09/06 MNO
    10/09/06 MNO
    06/09/06 MNO
    05/09/06 MNO
    03/09/06 MNO
    02/09/06 MNO
    10 rows selected.
    Elapsed: 00:00:02.04
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2364 Card=10 Bytes=120)
       1    0   COUNT (STOPKEY)
       2    1     VIEW (Cost=2364 Card=200000 Bytes=2400000)
       3    2       SORT (ORDER BY STOPKEY) (Cost=2364 Card=200000 Bytes=2400000)
       4    3         TABLE ACCESS (FULL) OF 'TABLENAME' (Cost=552 Card=200000 Bytes=2400000)
    SQL>
    SQL> SELECT * FROM
      2  (SELECT * FROM tablename WHERE columnA= 'PQR' ORDER BY TIME DESC)
      3  WHERE rownum <= 10
      4  /
    no rows selected
    Elapsed: 00:00:01.01
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2364 Card=10 Bytes=120)
       1    0   COUNT (STOPKEY)
       2    1     VIEW (Cost=2364 Card=200000 Bytes=2400000)
       3    2       SORT (ORDER BY STOPKEY) (Cost=2364 Card=200000 Bytes=2400000)
       4    3         TABLE ACCESS (FULL) OF 'TABLENAME' (Cost=552 Card=200000 Bytes=2400000)
    SQL> set autot off
    SQL>
    SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS(user,'TABLENAME',METHOD_OPT => 'FOR COLUMNS SIZE 250 columna')
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:09.08
    SQL>
    SQL> set autotrace on explain
    SQL> SELECT * FROM
      2  (SELECT * FROM tablename WHERE columnA= 'ABC' ORDER BY TIME DESC)
      3  WHERE rownum <= 10
      4  /
    TIME     COL
    17/09/06 ABC
    12/09/06 ABC
    08/09/06 ABC
    07/09/06 ABC
    25/08/06 ABC
    22/08/06 ABC
    13/08/06 ABC
    08/07/06 ABC
    14/06/06 ABC
    01/05/06 ABC
    10 rows selected.
    Elapsed: 00:00:01.03
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1434 Card=10 Bytes=120)
       1    0   COUNT (STOPKEY)
       2    1     VIEW (Cost=1434 Card=110806 Bytes=1329672)
       3    2       SORT (ORDER BY STOPKEY) (Cost=1434 Card=110806 Bytes=1329672)
       4    3         TABLE ACCESS (FULL) OF 'TABLENAME' (Cost=552 Card=110806 Bytes=1329672)
    SQL>
    SQL> SELECT * FROM
      2  (SELECT * FROM tablename WHERE columnA= 'MNO' ORDER BY TIME DESC)
      3  WHERE rownum <= 10
      4  /
    TIME     COL
    20/09/06 MNO
    19/09/06 MNO
    16/09/06 MNO
    14/09/06 MNO
    13/09/06 MNO
    10/09/06 MNO
    06/09/06 MNO
    05/09/06 MNO
    03/09/06 MNO
    02/09/06 MNO
    10 rows selected.
    Elapsed: 00:00:02.05
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6219 Card=10 Bytes=120)
       1    0   COUNT (STOPKEY)
       2    1     VIEW (Cost=6219 Card=555198 Bytes=6662376)
       3    2       SORT (ORDER BY STOPKEY) (Cost=6219 Card=555198 Bytes=6662376)
       4    3         TABLE ACCESS (FULL) OF 'TABLENAME' (Cost=552 Card=555198 Bytes=6662376)
    SQL>
    SQL> SELECT * FROM
      2  (SELECT * FROM tablename WHERE columnA= 'STU' ORDER BY TIME DESC)
      3  WHERE rownum <= 10
      4  /
    no rows selected
    Elapsed: 00:00:00.00
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=1 Bytes=12)
       1    0   COUNT (STOPKEY)
       2    1     VIEW (Cost=6 Card=1 Bytes=12)
   3    2       SORT (ORDER BY STOPKEY) (Cost=6 Card=1 Bytes=12)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'TABLENAME' (Cost=5 Card=1 Bytes=12)
   5    4           INDEX (RANGE SCAN) OF 'I2' (NON-UNIQUE) (Cost=4 Card=1)
SQL>
Nicolas.
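A further option for this access pattern (an untested sketch on the test table above): a composite index covering both the filter column and the sort column lets the optimizer satisfy the Top-N with a single descending range scan, in both the rows-exist and no-rows cases, instead of relying on histograms to pick between a full scan and the single-column index:

```sql
-- Hypothetical composite index: filter on columna, order by time within it.
CREATE INDEX i3 ON tablename (columna, time);

-- The COUNT STOPKEY can then stop after reading at most 10 index entries:
SELECT *
  FROM (SELECT *
          FROM tablename
         WHERE columna = 'ABC'
         ORDER BY time DESC)
 WHERE ROWNUM <= 10;
```

When no rows match 'ABC' the range scan finds nothing and returns immediately, which is exactly the case that was slow with the full-scan plan.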

  • Query takes too long to run

    I ran the following query:
    select
            q1.ISO_COUNTRY_CODE q1country
            ,q3.ISO_COUNTRY_CODE q3country
            ,q1.LANGUAGE_CODE q1lang
            ,q3.LANGUAGE_CODE q3lang
            ,q1.POSTAL_CODE pcode1
            ,q3.POSTAL_CODE pcode3
            ,case when q1.POSTAL_CODE=q3.POSTAL_CODE then 1 else 0 end as "check"
        from street3 q3
            inner join street1 q1 on q1.PERM_ID=q3.PERM_ID
        where
    nvl(upper(q1.ISO_COUNTRY_CODE),'tempnull')=nvl(upper(q3.ISO_COUNTRY_CODE),'tempnull')
            and nvl(upper(q1.LANGUAGE_CODE),'tempnull')=nvl(upper(q3.LANGUAGE_CODE),'tempnull')
         --and q1.POSTAL_CODE=q3.POSTAL_CODE
            and rownum<10
    ;Results were returned in less than a second:
    Q1COUNTRY,Q3COUNTRY,Q1LANG,Q3LANG,PCODE1,PCODE3,check
    USA,USA,ENG,ENG,49946,49946,1
    USA,USA,ENG,ENG,49946,49946,1
    USA,USA,ENG,ENG,49946,49946,1
    USA,USA,ENG,ENG,49946,49946,1
    USA,USA,ENG,ENG,49946,49946,1
    ... etc.
    When I uncomment the line and q1.POSTAL_CODE=q3.POSTAL_CODE in the WHERE clause, however, the query takes forever to return. This doesn't make sense to me since the postcodes are equal in most cases anyway.

TKPROF is almost always gross overkill; certainly overkill until one exhausts what can be done with an explain plan. You are correct that AUTOTRACE requires a special role, and it also requires the DBA to run a script, which far too many do not know how to do. So the answer, any time you are asking for tuning help, is to provide an explain plan report, as no special privileges are required.
    You will find demos here:
    http://www.morganslibrary.org/library.html
    under Explain Plan
    Post the SQL, the report, and your full version number for help.
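Separately, an untested observation on the posted query: wrapping the join columns in NVL(UPPER(...)) prevents ordinary indexes on those columns from being used for the join. Function-based indexes that match the expressions exactly are one workaround (a sketch; whether they help depends on the plan the optimizer was already choosing):

```sql
-- Hypothetical function-based indexes matching the query's join expressions.
CREATE INDEX street1_cc_fx
    ON street1 (NVL(UPPER(iso_country_code), 'tempnull'));
CREATE INDEX street3_cc_fx
    ON street3 (NVL(UPPER(iso_country_code), 'tempnull'));
```

The same idea applies to the LANGUAGE_CODE comparison. As the reply says, the explain plan with and without the POSTAL_CODE predicate is what will actually show where the join strategy changes.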

  • Query on object-relational data takes forever

    hello all
i have a problem with query performance... it seems that whenever i call a specific object member function, the query executes very, very slowly. The results, though, are correct.
    let me explain what i do... I have some relational tables, and i recreate the schema into an object relational one. Then i insert data from relational tables to object tables. I follow this tutorial: [A Sample Application Using Object-Relational Features|http://download.oracle.com/docs/cd/B12037_01/appdev.101/b10799/adobjxmp.htm]
    these are the types that make up the transaction object table.
CREATE OR REPLACE TYPE TransactionItem_objtyp AS OBJECT (
  transItemID   NUMBER,
  Quantity      NUMBER,
  iValue        NUMBER,
  item_ref      REF Item_objtyp
);
CREATE TYPE TransactionItemList_ntabtyp AS TABLE OF TransactionItem_objtyp;
CREATE OR REPLACE TYPE Transaction_objtyp AS OBJECT (
      transID             NUMBER,
      cust_ref            REF Customer_objtyp,
      transTameio         NUMBER,
      transDateTime       DATE,
      isStoreCustomer     CHAR(1),
      store_ref           REF Store_objtyp,
      transItemList_ntab  TransactionItemList_ntabtyp,
      MAP MEMBER FUNCTION
        getTransID  RETURN NUMBER,
      MEMBER FUNCTION
        getTotalCost  RETURN NUMBER
);
the function that causes the query to run very slowly (fetching 10 rows per second in a query that should return 130,000 rows) is getTotalCost:
    CREATE OR REPLACE TYPE BODY Transaction_objtyp AS
    MAP MEMBER FUNCTION getTransID RETURN NUMBER IS
    BEGIN
    RETURN transID;
    END;
    MEMBER FUNCTION getTotalCost RETURN NUMBER IS
    i       INTEGER;
    Total   NUMBER := 0;
    BEGIN
    IF(UTL_COLL.IS_LOCATOR(transItemList_ntab))
    THEN
    SELECT SUM(L.Quantity * L.iValue) INTO Total
    FROM TABLE(CAST(transItemList_ntab AS TransactionItemList_ntabtyp)) L;
    ELSE
    FOR i IN 1..SELF.transItemList_ntab.COUNT LOOP
    Total := Total + SELF.transItemList_ntab(i).Quantity * SELF.transItemList_ntab(i).iValue;
    END LOOP;
    END IF;
    RETURN ROUND(Total,2);
    END;
END;
the table transaction_objtab that contains the nested table is this:
    CREATE TABLE Transaction_objtab OF Transaction_objtyp(
      PRIMARY KEY(transID),
      FOREIGN KEY(cust_ref) REFERENCING Customer_objtab,
      FOREIGN KEY(store_ref) REFERENCING Store_objtab)
      OBJECT IDENTIFIER IS PRIMARY KEY
      NESTED TABLE transItemList_ntab STORE AS TransItem_ntab (
        (PRIMARY KEY(transItemID))
        ORGANIZATION INDEX)
      RETURN AS LOCATOR
ALTER TABLE TransItem_ntab ADD (SCOPE FOR (item_ref) IS Item_objtab);
and this is how i insert the values into the transaction_objtab and the nested tables from the relational ones:
    INSERT INTO Transaction_objtab
    SELECT  t.transID,
            REF(c),
            t.transTameio,
            t.transDateTime,
            t.isStoreCustomer,
            REF(s),
            TransactionItemList_ntabtyp()
    FROM transactions t, Customer_objtab c, store_objtab s
    WHERE t.transCustomer = c.custCode AND t.transStore = s.storeCode;
    BEGIN
      FOR i IN (SELECT DISTINCT transID FROM transactionItems) LOOP
        INSERT INTO TABLE(  SELECT p.TransItemList_ntab
                            FROM Transaction_objtab p
                            WHERE p.transID = i.transID)
        SELECT transItemIDseq.nextval, t.Quantity, t.iValue, REF(i)
        FROM transactionItems t, item_objtab i
        WHERE t.transID = i.transID AND t.itemID = i.itemID;
      END LOOP;
    END;so whenever i use transaction_objtab t, t.getTotalCount() query takes for ever.
END;
so whenever i query transaction_objtab t and call t.getTotalCost(), the query takes forever.
is there anything i am doing wrong?
sorry for the long post.
thanks in advance

So, how many transactions? How many items? There is a whole series of questions I would normally ask at this point, because performance tuning is, to a certain extent, largely a matter of rote. But there's a more fundamental issue.
You are experiencing the classic problem with objects. They are cool enough when handling individual "things", but they suck when it comes to set-based processing. SQL and relational programming, on the other hand, excel at that sort of thing. So the question that has to be asked is: why are you using objects for this project?
    Cheers, APC
    blog: http://radiofreetooting.blogspot.com
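To illustrate the set-based point concretely (a sketch against the posted types, untested): calling getTotalCost per row unnests each transaction's collection one at a time, whereas a single TABLE() unnest computes every total in one pass:

```sql
-- Hypothetical set-based rewrite: unnest all item lists at once and aggregate,
-- instead of invoking the member function once per transaction row.
SELECT t.transID,
       ROUND(SUM(l.Quantity * l.iValue), 2) AS total_cost
  FROM Transaction_objtab t,
       TABLE(t.transItemList_ntab) l
 GROUP BY t.transID;
```

This gives the optimizer one joinable row source (the nested-table storage) to aggregate, rather than 130,000 separate PL/SQL calls each issuing its own collection fetch.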

  • Performance issue with insert query !

Hi,
I am using dbxml-2.4.16; my node-storage container is loaded with a large document (a 54 MB XML file).
My document basically contains around 65k records in the same table (65k child nodes of one parent node). I need to insert more records into my DB, and my insert XQuery consumes a lot of time (~23 sec) to insert one entry through the command line and around 50 sec through code.
My container is indexed with "node-attribute-equality-string". The insert query I used:
    insert nodes <NS:sampleEntry mySSIAddress='70011' modifier = 'create'><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
    If I modify my query to use
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:sampleTable/NS:sampleEntry[@mySSIAddress='1']
    instead of
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable
    the time taken reduces by only 8 secs.
    I have also tried insert "after", "before", "as first", and "as last", but there is no difference in performance.
    Is anything wrong with my query? What should be the expected time to insert one record into a DB of 65k records?
    Has anybody got any idea regarding this performance issue?
    Kindly help me out.
    Thanks,
    Kapil.

    Hi George,
    Thanks for your reply.
    Here is the info you requested,
    dbxml> listIndexes
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: node-attribute-equality-string for node {}:mySSIAddress
    2 indexes found.
    dbxml> info
    Version: Oracle: Berkeley DB XML 2.4.16: (October 21, 2008)
    Berkeley DB 4.6.21: (September 27, 2007)
    Default container name: n_b_i_f_c_a_z.dbxml
    Type of default container: NodeContainer
    Index Nodes: on
    Shell and XmlManager state:
    Not transactional
    Verbose: on
    Query context state: LiveValues,Eager
    The insert query with the update takes ~32 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS';insert nodes <NS:sampleEntry mySSIAddress='70000' modifier = 'create' ><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 32.5002
    and the query without the update part takes ~14 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 13.7289
    The query :
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//PMB:sampleTable/PMB:sampleEntry[@mySSIAddress='1000']"
    Time in seconds for command 'query': 0.005375
    is very fast.
    The update of the document seems to consume most of the time.
    Regards,
    Kapil.

  • Insert query problem

    I have an insert query that I have not changed, but for some reason it no longer inserts anything into the database.
    The data gets submitted by an Ajax form and goes to insert.php:
    if(isset($_POST['message_wall'])){
        /* Connection to Database */
        include('config.php');
        /* Escape special characters to prevent SQL injection
           (mysql_real_escape_string does not strip HTML) */
        $message = mysql_real_escape_string($_POST['message_wall']);
        $to = mysql_real_escape_string($_POST['profile_to']);
        $sql = 'INSERT INTO wall (message) VALUES ("'.$message.'")';
        mysql_query($sql);
    }
    I want to be able to add a user_id into the database too
    The ajax code:
    $(document).ready(function(){
        $("form#submit_wall").submit(function() {
            var message_wall = $('#message_wall').attr('value');
            $.ajax({
                type: "POST",
                url: "insert.php",
                data: "message_wall="+ message_wall,
                success: function(){
                    $("ul#wall").prepend("<li style='display:none'>"+message_wall+"</li><br><hr>");
                    $("ul#wall li:first").fadeIn();
                }
            });
            return false;
        });
    });

    Hi,
    While I was waiting for your message I went back to the raw code.
    I changed my code and now it works, but it fails again once I add the other info in hidden fields:
            $.ajax({
                type: "POST",
                url: "insert.php",
                data: "message_wall="+ message_wall,
                date: "msg_date="+ msg_date,
                success: function(){
                    $("ul#wall").prepend("<li style='display:none'>"+message_wall+"</li><br />"+msg_date+"<hr />");
                    $("ul#wall li:first").fadeIn();
    <form action="" id="submit_wall" name="submit_wall">
    <textarea name="name" id="message_wall" cols="70" rows="2" onclick="make_blank();"></textarea>
    <div align="left"><button type="submit">Post to wall</button></div>
    <input name="profile_to" type="hidden" value="<?php echo $row_user_profile['user_id']; ?>" />
    <input name="msg_date" type="hidden" value="<?php echo date("d/m/y");?>" />
    </form>
    How do I add more than one POST field?
    Is this right:
                data: "message_wall="+ message_wall,
                 date: "msg_date="+ msg_date,
    I tried
                data: "message_wall="+ message_wall, "msg_date="+ msg_date,
    But nothing. I take it this is where the fields are sent.
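    For what it's worth, `$.ajax` only understands a single `data` payload; there is no `date:` option, so every field has to go into `data` together, either as one object (`data: {message_wall: message_wall, msg_date: msg_date}`) or as one `&`-joined string. A minimal Python sketch of the urlencoded body the browser should end up sending (the field values here are made up):

```python
from urllib.parse import urlencode

# All form fields go into ONE payload; urlencode joins them with '&'
# exactly the way a single jQuery data string would.
fields = {"message_wall": "hello wall", "msg_date": "01/01/24"}
body = urlencode(fields)
print(body)  # message_wall=hello+wall&msg_date=01%2F01%2F24
```

The same shape applies server-side: insert.php would then read both `$_POST['message_wall']` and `$_POST['msg_date']` from that one request.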

  • My Query takes too long ...

    Hi ,
    Env   , DB 10G , O/S Linux Redhat , My DB size is about 80G
    My query takes too long, about 5 days to get results. Can you please help to rewrite this query in a better way?
    declare
    x number;
    y date;
    START_DATE DATE;
    MDN VARCHAR2(12);
    TOPUP VARCHAR2(50);
    begin
    for first_bundle in
    (select min(date_time_of_event) date_time_of_event, account_identifier, top_up_profile_name
    from bundlepur
    where account_profile='Basic'
    and account_identifier='665004664'
    and in_service_result_indicator=0
    and network_cause_result_indicator=0
    and date_time_of_event >= to_date('16/07/2013','dd/mm/yyyy')
    group by account_identifier, top_up_profile_name
    order by date_time_of_event)
    loop
    select sum(units_per_tariff_rum2) ,max(date_time_of_event)
    into x,y
    from OLD_LTE_CDR
    where account_identifier=(select first_bundle.account_identifier from dual)
    and date_time_of_event >= (select first_bundle.date_time_of_event from dual)
    and -- no more than a month
    date_time_of_event < ( select add_months(first_bundle.date_time_of_event,1) from dual)
    and -- finished his bundle then buy a new one
      date_time_of_event < ( SELECT MIN(DATE_TIME_OF_EVENT)
                             FROM OLD_LTE_CDR
                             WHERE DATE_TIME_OF_EVENT > (select (first_bundle.date_time_of_event)+1/24 from dual)
                             AND IN_SERVICE_RESULT_INDICATOR=26);
    select first_bundle.account_identifier ,first_bundle.top_up_profile_name
    ,FIRST_BUNDLE.date_time_of_event
    INTO MDN,TOPUP,START_DATE
    from dual;
    insert into consumed1 VALUES(X,topup,MDN,START_DATE,Y);
    end loop;
    COMMIT;
    end;

    > where account_identifier=(select first_bundle.account_identifier from dual)
    Why are you doing this?  It's a completely unnecessary subquery.
    Just do this:
    where account_identifier = first_bundle.account_identifier
    Same for all your other FROM DUAL subqueries.  Get rid of them.
    More importantly, don't use a cursor for loop.  Just write one big INSERT statement that does what you want.

  • Deletion Query takes too long

    I have two tables with exactly the same structure. Table 1 receives data, and a procedure reads that data and inserts it into another table (table 2) for processing. I have a delete query which is taking too long to execute.
    The query is as follows
    delete
    from events.temp_act a
    where a.chess_ts < (select max(chess_ts) from events.temp_act b
    where a.db_type = b.db_type
    and a.order_no = b.order_no
    and a.acv_no = b.acv_no);
    There is a composite index in this table which is (db_type,order_no,acv_no)
    In my procedure, I drop and create the index for faster processing and also analyze the index.
    The above deletion query removes roughly half of the total records.
    There is no primary key on the table, because there is no unique record identifier.
    The query takes nearly 2 hours to delete about 1,100,000 records.
    Is there a way to make this query run faster?

    What is the explain plan for this statement? Is the index even being used?
    Is the table analyzed as well as the index?
    Dropping/re-creating the index - not likely to help. I would leave this out.
    Have you tried other variations, like:
    delete
      from events.temp_act
    where (db_type, order_no, acv_no, chess_ts)
           not in (select db_type, order_no, acv_no, max(chess_ts)
                     from events.temp_act b
                    group by db_type, order_no, acv_no);
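    The suggested NOT IN variant keeps only the newest row per key, which can be checked on a toy table. A minimal sqlite3 sketch (invented data; SQLite 3.15+ accepts the same row-value form):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_act (db_type TEXT, order_no INTEGER,"
             " acv_no INTEGER, chess_ts INTEGER)")
conn.executemany("INSERT INTO temp_act VALUES (?,?,?,?)", [
    ("A", 1, 1, 100),  # stale duplicate -- should be deleted
    ("A", 1, 1, 200),  # latest for (A,1,1) -- kept
    ("B", 2, 1, 300),  # only row for (B,2,1) -- kept
])

# Keep only the max chess_ts per (db_type, order_no, acv_no),
# mirroring the NOT IN rewrite suggested above.
conn.execute("""
    DELETE FROM temp_act
    WHERE (db_type, order_no, acv_no, chess_ts) NOT IN (
        SELECT db_type, order_no, acv_no, MAX(chess_ts)
        FROM temp_act
        GROUP BY db_type, order_no, acv_no)
""")
rows = conn.execute("SELECT chess_ts FROM temp_act ORDER BY chess_ts").fetchall()
print(rows)  # [(200,), (300,)]
```

One caveat worth testing on the real table: NOT IN behaves badly if any of the key columns can be NULL, in which case a NOT EXISTS form is safer.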

  • Insert operation takes looooooooooong time

    One of our ETL procedures inserts into a table records selected from a couple of tables across a DB link.
    While the SELECT query takes about 6 seconds to retrieve nearly 4,200,000 records, inserting those records into the table takes about 45 minutes.
    In fact, I've altered the table to NOLOGGING mode and the /*+ append */ hint (for the insert) is in place to reduce redo generation. The destination table has no indexes and no constraints either.
    Is there any other way that I can adapt to reduce the time of insert operation?
    Thanks,
    Bhagat

    >While the SELECT query takes about 6 seconds to retrieve nearly 42,00,000 records
    Is this in TOAD? If so, TOAD returns rows in sections and may not be returning the full set. You would have to scroll to the bottom of the grid and wait for the data to finish loading. Caution: if you did not select the option to execute queries in threads in TOAD, you will not be able to cancel the query.
    >the insert of those records to a table takes about 45 minutes
    Have you performed a CREATE TABLE AS SELECT using the query? This will give you a good benchmark for direct-path load performance. You can then look at the USER_SEGMENTS.BYTES column for that table after a load and, with your timings, check the data transfer rate with your network support.
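    As a back-of-envelope example of that transfer-rate check (the byte count and timing below are invented for illustration, not taken from the thread):

```python
# Divide USER_SEGMENTS.BYTES by the load time to get a rough MiB/s figure
# that the network team can compare against the link's capacity.
segment_bytes = 3.2 * 1024**3   # pretend the loaded segment is 3.2 GiB
elapsed_sec = 45 * 60           # the 45-minute insert
rate_mib_s = segment_bytes / 1024**2 / elapsed_sec
print(round(rate_mib_s, 2))  # 1.21 MiB/s
```

A rate far below the link speed points at the load path (row-by-row fetch across the DB link, undo/redo) rather than the network itself.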

  • Same query takes different time for two different users with same privileges

    Hi,
    Just wanted to know if it is possible for an INSERT query to take a different amount of time for two different users with the same volume of data. If so, what could be the reason? I would add that we are not executing the query simultaneously, but it is always fast for user1 (within 10-20 seconds) and slow for user2 (around 8-10 minutes).

    Show the execution plan for each user. I suspect there are other factors you haven't told us about.

  • Sending Gmail takes forever.

    I'm using Mac Mail and have set up a Gmail account. Everything technically works fine; it sends and receives mail. The problem is that when I try to send something it seems to take forever: the little grey spinner by the Send menu keeps spinning and doesn't seem to catch, for lack of a better word.
    I'm using a wireless router, but otherwise the speed and connection strength are great, so I'm confused.
    Any idea how to fix this? Outlook on my PC seemed much snappier.
    Thanks.

    Hello, and welcome to the Discussions.
    I have experienced the same behavior. For me it is corrected by entering a primary and secondary DNS address in the modem. I think this behavior differs for some people and must be related to interaction with specific ISPs. One thing to check and report: open the Activity window (click Window in the menu bar and choose Activity) and observe, while sending, how much time is spent connecting to the SMTP server vs transmitting.
    I changed my modem to use dedicated DNS addresses. It may be possible to do this in the router instead, but I did mine in the modem. How that is done will likely depend on the brand of modem. You will likely need to ask your ISP for the DNS server addresses; if so, they can also guide the change in the modem.
    Ernie

  • Good speeds but takes forever to connect

    I have a Mini directly connected to a wireless router and a Powerbook wirelessly connected. My ISP is Telus in Vancouver. My speeds are excellent but connecting itself seems to take forever. When I click a link on a web page the status bar reports "looking up hostname". It can take anywhere from 20 seconds to two minutes to load the page.
    I thought the problem was with Telus but when a friend brought over his PC laptop it made connections with great speed. So now I assume it's a Mac issue. It affects both my Macs.
    Telus can't help me. Their support suggested resetting the browsers but, as I knew in advance, it didn't make any difference. It affects all browsers on both machines.
    Apple support didn't even seem to understand what I was talking about.
    I posted a query at http://www.dslreports.com/forum/remark,14879923 and got this reply: [quote]Well, I have some bad news and some worse news.
    The bad news is, this is a MacOS problem. And yes, it is DNS.
    The worse news is that getting a local DNS server will likely not fix the problem.
    I had a G3 iBook for a while and I noticed this problem (I think I had OS X 10.0.3.) I don't have it anymore cause it was too frustrating. Literally, my P2-266 notebook with 64M RAM browsed the web a **** of a lot faster than the G3-500 with 384M RAM.
    I got so frustrated and installed linux on the iBook and all those problems went away.
    One of the problems may have been IPv6, but you mentioned that you disabled it.
    The other issue was DNS lookup. It wasn't the DNS lookup that was the problem, but it was the reverse DNS lookup. When you connect to your ISP the ISP assigns you a hostname. It seems that OS X isn't setting this hostname to your computer. When you connect to a site, the site may have to look up the DNS for the hostname your ISP assigned to you, and because OS X doesn't seem to set it, it's them timing out trying to get your IP.
    Seriously, if you google something like 'ibook slow dns' you'll get a TON of hits.
    Is there something in the DHCP that you can enable that will let the ISP set your hostname?
    Of course, if I'm wrong I'm sure someone else will point it out, but slow web browsing and DNS are a common complaint on the Macs.
    Edit: I used Mac OS for about 4 days before giving up on it, so I won't be much help beyond this, but at least it gives you another path to look at.[/quote]
      Mac OS X (10.4.3)  

    I thought the problem was with Telus but when a friend brought over his PC laptop it made connections with great speed. So now I assume it's a Mac issue. It affects both my Macs.
    Try manually entering your DNS servers in sys prefs->network-> configure -> tcp/ip
    There seems to be quite a few posts where OS X is not getting or setting name servers properly via DHCP.
    Personally, I have never had a problem like this, so I don't know what's causing this. There are a few possibilities.
    If your ISP's DHCP server and the OS X dhcp client are not communicating properly, that could lead to problems like this. Now, where is the fault in that scenario ? Could be the ISP using a crappy DHCP server, or it could be OS X dhcp client being to strict in what it will accept from the dhcp server, or it could be the fault of the dhcp client entirely. But it's probably not going to be resolved until someone who knows how to debug it is having this issue and takes the time to debug it.
    I posted a query at http://www.dslreports.com/forum/remark,14879923 and got this reply: [quote]Well, I have some bad news and
    <snip>
    The other issue was DNS lookup. It wasn't the DNS lookup that was the problem, but it was the reverse DNS lookup. When you connect to your ISP the ISP assigns you a hostname. It seems that OS X isn't setting this hostname to your computer. When you connect to a site, the site may have to look up the DNS for the hostname your ISP assigned to you, and because OS X doesn't seem to set it, it's them timing out trying to get your IP.
    This explanation is just plain wrong, and, as was also pointed out, applying to a much older OS.
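    If DNS is the suspect, the lookup itself can be timed rather than guessed at; a rough Python sketch (the hostname is just a placeholder for whatever site loads slowly):

```python
import socket
import time

# Time a forward name resolution; tens of ms is normal, multi-second
# results point at the configured resolvers.
def resolve_ms(host):
    start = time.perf_counter()
    info = socket.getaddrinfo(host, 80)
    return info, (time.perf_counter() - start) * 1000.0

info, ms = resolve_ms("localhost")
print(len(info) > 0, ms >= 0)
```

Running this for the same hostname with the ISP's resolvers vs manually entered ones would show whether the "looking up hostname" delay really is resolver-side.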
