Index creation on 0FIGL_O02 taking too long.
Hi,
We load approximately 0.5 million records daily into the ODS 0FIGL_O02 and recreate the index using the function module RSSM_PROCESS_ODS_CREA_INDEXES.
The index creation job used to last 3-4 hours six months ago, but now it runs for 6 hours. Is there a way to decrease the job time?
The number of records in the active table of the ODS is 424 million.
Hi,
This DSO is based on DataSource 0FI_GL_4, which is delta enabled.
Do you mean to say that you are receiving 0.5 million records daily?
If yes, then there is not much you can do, as the program will try to create the incremental index and will have to work against the current index over 424 million records. One thing you can do as a regular monthly activity is to completely delete the index and recreate it (this may take a long time, but it will correct any corrupt indexes).
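At the database level, the monthly drop-and-rebuild amounts to index DDL; a rough sketch (the index name here is invented for illustration — in BW the function module above or a process chain step normally drives this, not direct SQL):

```sql
-- Hypothetical sketch: rebuild one secondary index on the DSO active table
-- (the index name is a made-up example, not taken from this system)
ALTER INDEX "/BIC/A0FIGL_O0200~010" REBUILD ONLINE;
```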
The SAP note below might help you:
Note 1413496 - DB2-z/OS: BW: Building indexes on active DSO data
If you are not using this DSO for reporting or lookups, then please do not create secondary indexes.
regards,
Arvind.
Edited by: Arvind Tekra on Aug 25, 2011 5:18 PM
Similar Messages
-
Creating intermedia index is taking too long!!
Creating an interMedia index is taking too long; then memory becomes insufficient and the system goes down.
Please help.
Platform: Win2000 Pro, Oracle 8.1.7; Linux Red Hat, Oracle 8.1.7

Use CTX_OUTPUT.START_LOG() to begin logging index requests. Then create the index and see what's going on.
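That logging suggestion can be sketched as follows (the table, column, and log-file names are assumptions for illustration):

```sql
-- Hypothetical sketch: log Oracle Text (interMedia) index build progress
EXEC CTX_OUTPUT.START_LOG('ctx_build.log');

CREATE INDEX docs_text_idx ON docs (text)
  INDEXTYPE IS CTXSYS.CONTEXT;

EXEC CTX_OUTPUT.END_LOG;
```

The log file is written on the server side; inspecting it while the build runs shows which documents are being indexed and where time is going.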
o. -
Hello,
I have a table "a" that has:
left_id right_id type
4 5 1
4 6 1
4 7 1
5 9 2
5 10 2
5 11 2
9 13 3
13 14 4
10 15 3
QUERY:
select left_id, right_id, type from
a
connect by left_id = prior right_id
start with left_id = 4;
Execution Plan
Plan hash value: 2739023583
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 29 | 1131 | 18 (12)| 00:00:01 |
|* 1 | CONNECT BY WITH FILTERING| | | | | |
|* 2 | INDEX RANGE SCAN | a_PK | 5 | 65 | 3 (0)| 00:00:01 |
| 3 | NESTED LOOPS | | 24 | 624 | 13 (0)| 00:00:01 |
| 4 | CONNECT BY PUMP | | | | | |
|* 5 | INDEX RANGE SCAN | a_PK | 5 | 65 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("LEFT_ID"=PRIOR "RIGHT_ID")
2 - access("LEFT_ID"=4)
5 - access("LEFT_ID"="connect$_by$_pump$_002"."prior right_id ")
Is there a way to optimize the query?
The query is taking too long.
-Thanks
Karthik
Edited by: user3934098 on Nov 14, 2010 1:50 AM

Here is the detailed explanation:
Version: oracle 10g R2
Create table statement:
CREATE TABLE A (
"LEFT_ID" NUMBER(9,0) NOT NULL ENABLE,
"RIGHT_ID" NUMBER(9,0) NOT NULL ENABLE,
"TYPE" NUMBER(9,0) NOT NULL ENABLE,
CONSTRAINT "A_PK" PRIMARY KEY ("LEFT_ID", "RIGHT_ID", "TYPE") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "DATA" ENABLE,
CONSTRAINT "A_FK1" FOREIGN KEY ("TYPE") REFERENCES "B" ("TYPE") ENABLE
) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "DATA" ;
Insert statements:
INSERT INTO A VALUES(4, 5, 1);
INSERT INTO A VALUES(4, 6, 1);
INSERT INTO A VALUES(4, 7, 1);
INSERT INTO A VALUES(5, 9, 2);
INSERT INTO A VALUES(5, 10, 2);
INSERT INTO A VALUES(5, 11, 2);
INSERT INTO A VALUES(9, 13, 3);
INSERT INTO A VALUES(13, 14, 4);
INSERT INTO A VALUES(10, 15, 3);
INDEXES:
CREATE UNIQUE INDEX "A_PK" ON "A" ("LEFT_ID", "RIGHT_ID", "TYPE") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "DATA" ;
QUERY:
select left_id, right_id, type from
a
connect by left_id = prior right_id
start with left_id = 4;
The table has 951053 rows.
The explain plan is:
Execution Plan
Plan hash value: 2739023583
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 29 | 1131 | 18 (12)| 00:00:01 |
|* 1 | CONNECT BY WITH FILTERING| | | | | |
|* 2 | INDEX RANGE SCAN | a_PK | 5 | 65 | 3 (0)| 00:00:01 |
| 3 | NESTED LOOPS | | 24 | 624 | 13 (0)| 00:00:01 |
| 4 | CONNECT BY PUMP | | | | | |
|* 5 | INDEX RANGE SCAN | a_PK | 5 | 65 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("LEFT_ID"=PRIOR "RIGHT_ID")
2 - access("LEFT_ID"=4)
5 - access("LEFT_ID"="connect$_by$_pump$_002"."prior right_id ")
Now, is there a way to optimize the query? The query takes about a minute to execute, and it will be my inner query.
Is there any other information that you may need? Am I missing something here?
-Thanks
Karthik
Edited by: user3934098 on Nov 14, 2010 2:22 AM -
Importing a table with a BLOB column is taking too long
I am importing a user schema from a 9i (9.2.0.6) database to a 10g (10.2.1.0) database. One of the large tables (millions of records) with a BLOB column is taking too long to import (more than 24 hours). I have tried all the tricks I know to speed up the import. Here are some of the settings:
1 - set buffer to 500 Mb
2 - pre-created the table and turned off logging
3 - set indexes=N
4 - set constraints=N
5 - I have 10 online redo logs with 200 MB each
6 - Even turned off logging at the database level with disablelogging = true
It is still taking too long loading the table with the BLOB column. The BLOB field contains PDF files.
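Step 2 above (pre-creating the table with logging turned off) might look like this; the schema, table, and column names are assumptions, not from the original post:

```sql
-- Hypothetical sketch: pre-create the target table with logging disabled
-- on both the table and the BLOB segment, so bulk loads can generate
-- less redo (note: conventional-path inserts still log; the biggest
-- savings come with direct-path loads)
CREATE TABLE app.pdf_documents (
    doc_id   NUMBER(12) PRIMARY KEY,
    pdf_file BLOB
)
NOLOGGING
LOB (pdf_file) STORE AS (NOCACHE NOLOGGING);
```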
For your info:
Computer: Sun v490 with 16 CPUs, solaris 10
memory: 10 Gigabytes
SGA: 4 Gigabytes

Legatti,
I have feedback=10000. However, by monitoring the import, I know that it is loading an average of 130 records per minute, which is very slow considering that the table contains close to two million records.
Thanks for your reply. -
SQL Update statement taking too long..
Hi All,
I have a simple update statement that goes through a table of 95000 rows that is taking too long to update; here are the details:
Oracle Version: 11.2.0.1 64bit
OS: Windows 2008 64bit
desc temp_person;
Name Null? Type
PERSON_ID NOT NULL NUMBER(10)
DISTRICT_ID NOT NULL NUMBER(10)
FIRST_NAME VARCHAR2(60)
MIDDLE_NAME VARCHAR2(60)
LAST_NAME VARCHAR2(60)
BIRTH_DATE DATE
SIN VARCHAR2(11)
PARTY_ID NUMBER(10)
ACTIVE_STATUS NOT NULL VARCHAR2(1)
TAXABLE_FLAG VARCHAR2(1)
CPP_EXEMPT VARCHAR2(1)
EVENT_ID NOT NULL NUMBER(10)
USER_INFO_ID NUMBER(10)
TIMESTAMP NOT NULL DATE
CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
Index created.
ANALYZE INDEX tmp_rs_PERSON_ED COMPUTE STATISTICS;
Index analyzed.
explain plan for update temp_person
2 set first_name = (select trim(f_name)
3 from ext_names_csv
4 where temp_person.PERSON_ID=ext_names_csv.p_id
5 and temp_person.DISTRICT_ID=ext_names_csv.ed_id);
Explained.
@?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 3786226716
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 82095 | 4649K| 2052K (4)| 06:50:31 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 82095 | 4649K| 191 (1)| 00:00:03 |
|* 3 | EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV | 1 | 178 | 24 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
19 rows selected.

By the looks of it the update is going to take 6 hrs!!!
ext_names_csv is an external table that has the same number of rows as the PERSON table.
ROHO@rohof> desc ext_names_csv
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
F_NAME VARCHAR2(300)
L_NAME VARCHAR2(300)

Can anyone help diagnose this, please?
Thanks
Edited by: rsar001 on Feb 11, 2011 9:10 PM

Thank you all for the great ideas; you have been extremely helpful. Here is what we did to resolve the query.
We started with Etbin's idea to create a table from the external table, so that we could index and reference it more easily, and did the following:
SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
Table created.
SQL> desc ext_person
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
FST_NAME VARCHAR2(300)
LST_NAME VARCHAR2(300)
SQL> select count(*) from ext_person;
COUNT(*)
93383
SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
Index created.
SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED', partname=>NULL, estimate_percent=>30);
PL/SQL procedure successfully completed.

We had a look at the plan with the original SQL query that we had:
SQL> explain plan for update temp_person
2 set first_name = (select fst_name
3 from ext_person
4 where temp_person.PERSON_ID=ext_person.p_id
5 and temp_person.DISTRICT_ID=ext_person.ed_id);
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 1236196514
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 93383 | 1550K| 186K (50)| 00:37:24 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 93383 | 1550K| 191 (1)| 00:00:03 |
| 3 | TABLE ACCESS BY INDEX ROWID| EXT_PERSON | 9 | 1602 | 1 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | EXT_PERSON_ED | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("EXT_PERSON"."P_ID"=:B1 AND "EXT_PERSON"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
20 rows selected.

As you can see, the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (using MERGE); we explained the plan for the new query and here are the results:
SQL> explain plan for MERGE INTO temp_person t
2 USING (SELECT fst_name ,p_id,ed_id
3 FROM ext_person) ext
4 ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
5 WHEN MATCHED THEN
6 UPDATE set t.first_name=ext.fst_name;
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 2192307910
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | MERGE STATEMENT | | 92307 | 14M| | 1417 (1)| 00:00:17 |
| 1 | MERGE | TEMP_PERSON | | | | | |
| 2 | VIEW | | | | | | |
|* 3 | HASH JOIN | | 92307 | 20M| 6384K| 1417 (1)| 00:00:17 |
| 4 | TABLE ACCESS FULL| TEMP_PERSON | 93383 | 5289K| | 192 (2)| 00:00:03 |
| 5 | TABLE ACCESS FULL| EXT_PERSON | 92307 | 15M| | 85 (2)| 00:00:02 |
Predicate Information (identified by operation id):
3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
Note
- dynamic sampling used for this statement (level=2)
21 rows selected.

As you can see, the update now takes 00:00:17 to run (need I say more?) :)
Thank you all for your ideas that helped us get to the solution.
Much appreciated.
Thanks -
My Time Capsule backups are taking too long. Time Machine says it is indexing. However, I have had my Time Machine configured since I installed Lion last year and it has been making automatic backups every hour since. Why is it 'indexing' now, and why is it taking so long?
Thank you

Visit pondini.org for all things Time Machine.
-
Oracle - Query taking too long (Materialized view)
Hi,
I am extracting billing information and storing it in 3 different tables. In order to show total billing (80 to 90 columns, 1 million rows per month), I've used a materialized view. I do not have indexes on the 3 billing tables, but I do have 3 indexes on the materialized view.
At the moment it's taking too long to query the data (running the query via Toad fails with an "Out of Memory" error message; running it via APEX does return results, but takes way too long).
Please advise how to make the query efficient.

tparvaiz,
Is it possible when building your materialized view to summarize and consolidate the data?
Is it possible when building your materialized view to summarize and consolidate the data?
Out of a million rows, what would your typical user do with that amount of data if they could retrieve it readily? The answer to this question may indicate whether and how to summarize the data within the materialized view.
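If summarization works for the users, the materialized view itself can do the consolidation; a minimal sketch (the table and column names are assumptions, not from the original post):

```sql
-- Hypothetical sketch: pre-aggregate the billing detail per account/month
-- instead of materializing 80-90 raw columns over a million rows
CREATE MATERIALIZED VIEW mv_billing_summary
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT account_id,
       TRUNC(bill_date, 'MM') AS bill_month,
       SUM(amount)            AS total_billed,
       COUNT(*)               AS line_items
FROM   billing_detail
GROUP  BY account_id, TRUNC(bill_date, 'MM');
```

Queries against the summary then touch orders of magnitude fewer rows than the raw detail set.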
Jeff
Edited by: jwellsnh on Mar 25, 2010 7:02 AM -
Query is taking too long to execute - contd
I am unable to post the entire explain plan in one post as it exceeds maximum length.
Please advise on how to post this.
Previous post: Query is taking too long to execute
Regards,
Sreekanth Munagala.
Edited by: Sreekanth Munagala on Oct 27, 2009 8:31 AM
Edited by: Sreekanth Munagala on Oct 27, 2009 8:34 AM

Hi Tubby,
Today I executed only the first query in the view, and it took almost 2.5 hrs.
Here is the explain plan for this query
SQL> SET SERVEROUTPUT ON
SQL> set linesize 200
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 766 | 2448 |
| 1 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 13 | 3 |
|* 2 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | 2 |
| 3 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 29 | 3 |
|* 4 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | 2 |
| 5 | VIEW | POC_ASN_PICKUP_LOCATIONS_V | 2 | 2426 | 17 |
| 6 | UNION-ALL | | | | |
| 7 | NESTED LOOPS | | 1 | 85 | 4 |
| 8 | NESTED LOOPS | | 1 | 78 | 4 |
|* 9 | TABLE ACCESS BY INDEX ROWID | PO_VENDOR_SITES_ALL | 1 | 73 | 3 |
|* 10 | INDEX UNIQUE SCAN | PO_VENDOR_SITES_U2 | 1 | | 2 |
|* 11 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | 5 | 1 |
|* 12 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | 7 | |
| 13 | NESTED LOOPS | | 1 | 91 | 13 |
| 14 | NESTED LOOPS | | 1 | 84 | 13 |
| 15 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 13 | 3 |
|* 16 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | 2 |
|* 17 | TABLE ACCESS BY INDEX ROWID | FND_LOOKUP_VALUES | 1 | 71 | 10 |
|* 18 | INDEX RANGE SCAN | FND_LOOKUP_VALUES_U2 | 13 | | 2 |
|* 19 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | 7 | |
|* 20 | COUNT STOPKEY | | | | |
| 21 | TABLE ACCESS BY INDEX ROWID | MTL_SYSTEM_ITEMS_B | 8 | 136 | 12 |
|* 22 | INDEX RANGE SCAN | MTL_SYSTEM_ITEMS_B_U1 | 8 | | 3 |
|* 23 | COUNT STOPKEY | | | | |
| 24 | TABLE ACCESS BY INDEX ROWID | MTL_SYSTEM_ITEMS_B | 8 | 288 | 12 |
|* 25 | INDEX RANGE SCAN | MTL_SYSTEM_ITEMS_B_U1 | 8 | | 3 |
| 26 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 27 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
| 28 | NESTED LOOPS | | 1 | 40 | 5 |
| 29 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCOUNTS | 1 | 11 | 3 |
|* 30 | INDEX UNIQUE SCAN | HZ_CUST_ACCOUNTS_U1 | 1 | | 2 |
| 31 | TABLE ACCESS BY INDEX ROWID | HZ_PARTIES | 1 | 29 | 2 |
|* 32 | INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1 | | 1 |
| 33 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 34 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
| 35 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 36 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
|* 37 | COUNT STOPKEY | | | | |
|* 38 | TABLE ACCESS BY INDEX ROWID | ONTC_MTC_PROFORMA_HEADERS | 1 | 21 | 3 |
|* 39 | INDEX RANGE SCAN | ONTC_MTC_PROFORMA_HEADERS_U2 | 1 | | 2 |
| 40 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 41 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
|* 42 | COUNT STOPKEY | | | | |
|* 43 | TABLE ACCESS BY INDEX ROWID | ONTC_MTC_PROFORMA_HEADERS | 1 | 21 | 3 |
|* 44 | INDEX RANGE SCAN | ONTC_MTC_PROFORMA_HEADERS_U2 | 1 | | 2 |
| 45 | SORT AGGREGATE | | 1 | 39 | |
| 46 | NESTED LOOPS OUTER | | 2 | 78 | 1828 |
|* 47 | TABLE ACCESS FULL | ONTC_MTC_PROFORMA_HEADERS | 1 | 24 | 1825 |
| 48 | TABLE ACCESS BY INDEX ROWID | ONTC_MTC_PROFORMA_LINES | 5 | 75 | 3 |
|* 49 | INDEX RANGE SCAN | ONTC_MTC_PROFORMA_LINES_PK | 11 | | 2 |
| 50 | NESTED LOOPS | | 1 | 766 | 2448 |
| 51 | NESTED LOOPS | | 1 | 761 | 2447 |
| 52 | NESTED LOOPS | | 1 | 746 | 2445 |
| 53 | NESTED LOOPS | | 1 | 694 | 2443 |
| 54 | NESTED LOOPS | | 1 | 682 | 2441 |
| 55 | NESTED LOOPS | | 1 | 671 | 2439 |
| 56 | NESTED LOOPS | | 1 | 612 | 2437 |
| 57 | NESTED LOOPS | | 1 | 600 | 2435 |
| 58 | NESTED LOOPS | | 1 | 575 | 2433 |
| 59 | NESTED LOOPS | | 1 | 552 | 2431 |
| 60 | NESTED LOOPS | | 1 | 533 | 2429 |
| 61 | NESTED LOOPS | | 1 | 524 | 2428 |
| 62 | NESTED LOOPS | | 1 | 455 | 2427 |
| 63 | NESTED LOOPS | | 1 | 429 | 2426 |
| 64 | NESTED LOOPS | | 1 | 389 | 2424 |
| 65 | NESTED LOOPS | | 1 | 368 | 2422 |
| 66 | NESTED LOOPS | | 1 | 308 | 2421 |
| 67 | NESTED LOOPS | | 1 | 281 | 2419 |
| 68 | NESTED LOOPS | | 1 | 253 | 2418 |
| 69 | NESTED LOOPS | | 1 | 214 | 2416 |
| 70 | NESTED LOOPS | | 39 | 7371 | 2338 |
|* 71 | TABLE ACCESS FULL | RCV_SHIPMENT_HEADERS | 39 | 5070 | 2221 |
|* 72 | TABLE ACCESS BY INDEX ROWID| RCV_SHIPMENT_LINES | 1 | 59 | 3 |
|* 73 | INDEX RANGE SCAN | RCV_SHIPMENT_LINES_U2 | 1 | | 2 |
|* 74 | TABLE ACCESS BY INDEX ROWID | PO_LINES_ALL | 1 | 25 | 2 |
|* 75 | INDEX UNIQUE SCAN | PO_LINES_U1 | 1 | | 1 |
|* 76 | TABLE ACCESS BY INDEX ROWID | PO_LINE_LOCATIONS_ALL | 1 | 39 | 2 |
|* 77 | INDEX UNIQUE SCAN | PO_LINE_LOCATIONS_U1 | 1 | | 1 |
|* 78 | TABLE ACCESS BY INDEX ROWID | PO_HEADERS_ALL | 1 | 28 | 1 |
|* 79 | INDEX UNIQUE SCAN | PO_HEADERS_U1 | 1 | | |
|* 80 | TABLE ACCESS BY INDEX ROWID | OE_ORDER_LINES_ALL | 1 | 27 | 2 |
|* 81 | INDEX UNIQUE SCAN | OE_ORDER_LINES_U1 | 1 | | 1 |
| 82 | TABLE ACCESS BY INDEX ROWID | OE_ORDER_HEADERS_ALL | 1 | 60 | 1 |
|* 83 | INDEX UNIQUE SCAN | OE_ORDER_HEADERS_U1 | 1 | | |
|* 84 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_SITE_USES_ALL | 1 | 21 | 2 |
|* 85 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | | 1 |
|* 86 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_SITE_USES_ALL | 1 | 40 | 2 |
|* 87 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | | 1 |
| 88 | TABLE ACCESS BY INDEX ROWID | WSH_CARRIERS | 1 | 26 | 1 |
|* 89 | INDEX UNIQUE SCAN | WSH_CARRIERS_U2 | 1 | | |
|* 90 | TABLE ACCESS BY INDEX ROWID | WSH_CARRIER_SERVICES | 1 | 69 | 1 |
|* 91 | INDEX RANGE SCAN | WSH_CARRIER_SERVICES_N1 | 2 | | |
|* 92 | TABLE ACCESS BY INDEX ROWID | WSH_ORG_CARRIER_SERVICES | 1 | 9 | 1 |
|* 93 | INDEX RANGE SCAN | WSH_ORG_CARRIER_SERVICES_N1 | 1 | | |
| 94 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCOUNTS | 1 | 19 | 2 |
|* 95 | INDEX UNIQUE SCAN | HZ_CUST_ACCOUNTS_U1 | 1 | | 1 |
|* 96 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCT_SITES_ALL | 1 | 23 | 2 |
|* 97 | INDEX UNIQUE SCAN | HZ_CUST_ACCT_SITES_U1 | 1 | | 1 |
|* 98 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCT_SITES_ALL | 1 | 25 | 2 |
|* 99 | INDEX UNIQUE SCAN | HZ_CUST_ACCT_SITES_U1 | 1 | | 1 |
| 100 | TABLE ACCESS BY INDEX ROWID | HZ_PARTY_SITES | 1 | 12 | 2 |
|*101 | INDEX UNIQUE SCAN | HZ_PARTY_SITES_U1 | 1 | | 1 |
| 102 | TABLE ACCESS BY INDEX ROWID | HZ_LOCATIONS | 1 | 59 | 2 |
|*103 | INDEX UNIQUE SCAN | HZ_LOCATIONS_U1 | 1 | | 1 |
|*104 | INDEX RANGE SCAN | HZ_LOC_ASSIGNMENTS_N1 | 1 | 11 | 2 |
| 105 | TABLE ACCESS BY INDEX ROWID | HZ_PARTY_SITES | 1 | 12 | 2 |
|*106 | INDEX UNIQUE SCAN | HZ_PARTY_SITES_U1 | 1 | | 1 |
| 107 | TABLE ACCESS BY INDEX ROWID | HZ_LOCATIONS | 1 | 52 | 2 |
|*108 | INDEX UNIQUE SCAN | HZ_LOCATIONS_U1 | 1 | | 1 |
|*109 | INDEX RANGE SCAN | HZ_LOC_ASSIGNMENTS_N1 | 1 | 15 | 2 |
|*110 | INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1 | 5 | 1 |
I will put the predicate information in another post.
193 rows selected.
SQL> spool off

Please suggest how we can improve the performance.
Regards,
Sreekanth Munagala. -
I am running a fairly complex query with several table joins
and it is taking too long. What can I do to improve performance?
Thanks.
Frank

Dan's first suggestion is key - if you are doing multiple
table joins, you want to make sure your indexes are set up on your
tables correctly. If you have access to the database, this should
be your first step. Rationalize's stored procedure suggestion is
also a great idea (again, if you have access to create and manage
stored procedures on your DB).
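That first step, indexing the join columns and then checking the plan, can be sketched as follows (shown in Oracle syntax; all object names are assumptions for illustration):

```sql
-- Hypothetical sketch: index the foreign-key columns used in the joins,
-- then confirm via the plan that the optimizer picks them up
CREATE INDEX orders_cust_ix ON orders (customer_id);
CREATE INDEX items_order_ix ON order_items (order_id);

EXPLAIN PLAN FOR
SELECT c.name, o.order_date, i.sku
FROM   customers   c
JOIN   orders      o ON o.customer_id = c.customer_id
JOIN   order_items i ON i.order_id    = o.order_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```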
Other than that, most databases usually have some sort of SQL
efficiency analysis tool. SQL server has one built into their Query
Analyzer tool. I would recommend using something like that to
streamline your SQL. Like Dan said, something as simple as the
order of elements in your where clause might make a big
difference. -
Query taking too long on Oracle9i
Hi All
I am running a query on our prod database (Oracle8i 8.1.7.4) and running the same query on our test db (Oracle9i version 4). The query is taking too long (more than 10 min) in the test db. Both databases are installed on the same machine (IBM AIX V4), and the table schema and data are the same.
Any help would be appreciated.
Here are the results.
FASTER ONE
ORACLE 8i using Production
Statistics
864 recursive calls
68 db block gets
159855 consistent gets
20297 physical reads
0 redo size
1310148 bytes sent via SQL*Net to client
68552 bytes received via SQL*Net from client
1036 SQL*Net roundtrips to/from client
28 sorts (memory)
1 sorts (disk)
15525 rows processed
SLOWER ONE
ORACLE 9i using Test
Statistics
819 recursive calls
80 db block gets
22981568 consistent gets
1361 physical reads
0 redo size
1194902 bytes sent via SQL*Net to client
34193 bytes received via SQL*Net from client
945 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
14157 rows processed
To help us better understand the problem,
1) Could you post your execution plan on the two different databases?
2) Could you list indexes (if any, on these tables)?
3) Are any of the objects in the 'from list' a view?
If so, are you using a user defined function to create the view?
4) Why are you using the table 'cal_instance_relationship' twice in the 'from ' clause'?
5) Can't your query be the following?
SELECT f.person_id, f.course_cd, cv.responsible_org_unit_cd cowner, f.fee_cal_type Sem, f.fee_ci_sequence_number seq_no,
sua.unit_cd, uv.owner_org_unit_cd uowner, uv.supervised_contact_hours hours, 0 chg_rate, sum(f.transaction_amount) tot_fee,
' ' tally
FROM unit_version uv,
cal_instance_relationship cir1,
chg_method_apportion cma,
student_unit_attempt sua,
course_version cv,
fee_ass f
WHERE f.fee_type = 'VET-MATFEE'
AND f.logical_delete_dt IS NULL
AND f.s_transaction_type IN ('ASSESSMENT', 'MANUAL ADJ')
AND f.fee_ci_sequence_number > 400
AND f.course_cd = cv.course_cd
AND cv.version_number = (SELECT MAX(v.version_number) FROM course_version v
WHERE v.course_cd = cv.course_cd)
AND f.person_id = sua.person_id
and f.course_cd = sua.course_cd
AND f.fee_type = cma.fee_type
AND f.fee_ci_sequence_number = cma.fee_ci_sequence_number
AND cma.load_ci_sequence_number = cir1.sub_ci_sequence_number
AND cir1.sup_cal_type = 'ACAD-YR'
AND cir1.sub_cal_type = sua.cal_type
AND cir1.sub_ci_sequence_number = sua.ci_sequence_number
AND sua.unit_attempt_status NOT IN ('DUPLICATE','DISCONTIN')
AND sua.unit_cd = uv.unit_cd
AND sua.version_number = uv.version_number
GROUP BY f.person_id, f.course_cd, cv.responsible_org_unit_cd , f.fee_cal_type, f.fee_ci_sequence_number,
sua.unit_cd, uv.owner_org_unit_cd, uv.supervised_contact_hours; -
Network event is taking too long (100%)
Hi everybody. We have a 10g DB on Windows. We're using OEM to manage the DB, and it has started to show an alert about database time spent waiting for a "Network" event. It arises when we execute one module that updates several tables, which is taking too long. Before, we had this app on 8i, also on Windows, and that operation was much faster than now. The indexes on the tables are valid, and I've gathered statistics for the CBO, so I suppose the problem is, as OEM says, related to the network, but I don't know why, because the connection speed is the same as before and the two machines are in the same LAN.
Any ideas?

Here is the output requested:
SQL> select * from v$system_event
2 where event like 'SQL%';
EVENT                          TOTAL_WAITS  TOTAL_TIMEOUTS  TIME_WAITED  AVERAGE_WAIT  TIME_WAITED_MICRO    EVENT_ID
SQL*Net message to client          1159200               0          252             0            2516408  2067390145
SQL*Net message to dblink             2234               0            1             0               5590  3655533736
SQL*Net more data to client           5753               0          166             0            1657387   554161347
SQL*Net more data to dblink             12               0            0             0                548  1958556342
SQL*Net message from client        1159181               0    218341084           188         2.1834E+12  1421975091
SQL*Net more data from client        23299               0       180602             8         1806015123  3530226808
SQL*Net message from dblink           2234               0         3693             2           36934861  4093028837
SQL*Net more data from dblink         4021               0           39             0             390002  1136294303
SQL*Net break/reset to client       182986               0         2740             0           27397165  1963888671
9 rows selected.
Data Archive Script is taking too long to delete a large table
Hi All,
We have data archive scripts that move data for a date range to a different table, so each script has two parts: first copy data from the original table to the archive table, then delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate is the primary key itself. Please help... More info below:
CREATE TABLE "APP"."MON_TXNS"
( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
"BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_PAYER" NUMBER(12,0),
"ID_PAYER_PI" NUMBER(12,0),
"ID_PAYEE" NUMBER(12,0),
"ID_PAYEE_PI" NUMBER(12,0),
"ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
"STR_TEXT" VARCHAR2(60 CHAR),
"DAT_MERCHANT_TIMESTAMP" DATE,
"STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
"DAT_EXPIRATION" DATE,
"DAT_CREATION" DATE,
"STR_USER_CREATION" VARCHAR2(30 CHAR),
"DAT_LAST_UPDATE" DATE,
"STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
"STR_OTP" CHAR(6 BYTE),
"ID_AUTH_METHOD_PAYER" NUMBER(1,0),
"AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
"BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
"ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ENABLE,
CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
);
CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
Data is first moved to the table schema3.OTW, and then we delete all the rows in OTW from the original table. Below is the explain plan for the delete:
SQL> explain plan for
2 delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2798378986
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | DELETE STATEMENT | | 2520 | 233K| 87 (2)| 00:00:02 |
| 1 | DELETE | MON_TXNS | | | | |
|* 2 | HASH JOIN RIGHT SEMI | | 2520 | 233K| 87 (2)| 00:00:02 |
| 3 | INDEX FAST FULL SCAN| OTW_ID_TXN | 2520 | 15120 | 3 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | MON_TXNS | 14260 | 1239K| 83 (0)| 00:00:02 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
Please help,
thanks,
Banka Ravi

'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
Your use case is why many orgs elect to use partitioning and use that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them.
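The partition-and-drop approach can be sketched like this (the partition scheme and names are illustrative, not from the original DDL, and the remaining columns are elided):

```sql
-- Hypothetical sketch: range-partition the transaction table by creation date
CREATE TABLE mon_txns_part (
    id_txn       NUMBER(12,0) NOT NULL,
    dat_creation DATE         NOT NULL
    -- ... remaining columns as in MON_TXNS ...
)
PARTITION BY RANGE (dat_creation) (
    PARTITION p_2012_q1 VALUES LESS THAN (DATE '2012-04-01'),
    PARTITION p_2012_q2 VALUES LESS THAN (DATE '2012-07-01')
);

-- Archiving a quarter is then a metadata operation, not a multi-hour DELETE:
ALTER TABLE mon_txns_part DROP PARTITION p_2012_q1;
```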
The other solution is to stop waiting so long to delete data, so that you don't have to delete large amounts at the same time. Instead of deleting data once a month, delete it once a week or even every night. Then the number of rows being deleted will be much smaller and, if the stats are kept current, Oracle may decide to use the index. -
KDEMod3 is taking too long to log out or power off
KDEMod3 is taking too long to log out or power off...and I swear I've had less stuff running than my previous install before my computer broke O____o How can I fix this?
Last edited by ShadowKyogre (2009-04-16 05:43:39)

I think you should head over to the KDEMod forum:
http://chakra-project.org/bbs/index.php -
Discoverer report taking too long to open.
HI,
Discoverer reports are taking too long to open. Please help to resolve this.
Regards,
Bhatia

What are the Discoverer and Application releases?
Please refer to the following links (For both Discoverer 4i and 10g). Please note that some Discoverer 4i notes also apply to Discoverer 10g.
Note: 362851.1 - Guidelines to setup the JVM in Apps Ebusiness Suite 11i and R12
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=362851.1
Note: 68100.1 - Discoverer Performance When Running On Oracle Applications
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=68100.1
Note: 465234.1 - Recommended Client Java Plug-in (JVM/JRE) For Discoverer Plus 10g (10.1.2)
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=465234.1
Note: 329674.1 - Slow Performance When Opening Plus Workbooks from Oracle 11.5.10 Applications Home Page
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=329674.1
Note: 190326.1 - Ideas for Improving Discoverer 4i Performance in an Applications 11i Environment
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=190326.1
Note: 331435.1 - Slow Performance Using Disco 4.1 Admin/Desktop in Oracle Applications Mode EUL
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=331435.1
Note: 217669.1 - Refreshing Folders and opening workbooks is slow in Apps 11i environment
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=217669.1 -
I can't view my website at www.artisancandies.com, even though it's working and everyone else seems to see it. No, I don't have a firewall, and it's not because of my internet provider - I have AT&T at work, and Comcast at home. My husband can see the site on his laptop. I tried dumping my cache in both Firefox and Safari, but it didn't work. I looked at it through proxify.com, and can see it that way, so I know it works. This is so frustrating, because I used to only see it when I typed in artisancandies.com - it would never work for me if I typed in www.artisancandies.com. Now it doesn't work at all. This is the message I get in Firefox:
"The connection has timed out. The server at www.artisancandies.com is taking too long to respond."
Please help!!!
Kristen Scott
Linc, here's what I've got from what you asked me to do. I hope you don't mind, but it was simple enough to leave everything in, so you could see the progression:
Kristen-Scotts-Computer:~ kristenscott$ kextstat -kl | awk ' !/apple/ { print $6 $7 } '
Kristen-Scotts-Computer:~ kristenscott$ sudo launchctl list | sed 1d | awk ' !/0x|apple|com\.vix|edu\.|org\./ { print $3 } '
WARNING: Improper use of the sudo command could lead to data loss
or the deletion of important system files. Please double-check your
typing when using sudo. Type "man sudo" for more information.
To proceed, enter your password, or type Ctrl-C to abort.
Password:
com.microsoft.office.licensing.helper
com.google.keystone.daemon
com.adobe.versioncueCS3
Kristen-Scotts-Computer:~ kristenscott$ launchctl list | sed 1d | awk ' !/0x|apple|edu\.|org\./ { print $3 } '
com.google.keystone.root.agent
com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae
Kristen-Scotts-Computer:~ kristenscott$ ls -1A {,/}Library/{Ad,Compon,Ex,Fram,In,La,Mail/Bu,P*P,Priv,Qu,Scripti,Sta}* 2> /dev/null
/Library/Components:
/Library/Extensions:
/Library/Frameworks:
Adobe AIR.framework
NyxAudioAnalysis.framework
PluginManager.framework
iLifeFaceRecognition.framework
iLifeKit.framework
iLifePageLayout.framework
iLifeSQLAccess.framework
iLifeSlideshow.framework
/Library/Input Methods:
/Library/Internet Plug-Ins:
AdobePDFViewer.plugin
Disabled Plug-Ins
Flash Player.plugin
Flip4Mac WMV Plugin.plugin
Flip4Mac WMV Plugin.webplugin
Google Earth Web Plug-in.plugin
JavaPlugin2_NPAPI.plugin
JavaPluginCocoa.bundle
Musicnotes.plugin
NP-PPC-Dir-Shockwave
Quartz Composer.webplugin
QuickTime Plugin.plugin
Scorch.plugin
SharePointBrowserPlugin.plugin
SharePointWebKitPlugin.webplugin
flashplayer.xpt
googletalkbrowserplugin.plugin
iPhotoPhotocast.plugin
npgtpo3dautoplugin.plugin
nsIQTScriptablePlugin.xpt
/Library/LaunchAgents:
com.google.keystone.agent.plist
/Library/LaunchDaemons:
com.adobe.versioncueCS3.plist
com.apple.third_party_32b_kext_logger.plist
com.google.keystone.daemon.plist
com.microsoft.office.licensing.helper.plist
/Library/PreferencePanes:
Flash Player.prefPane
Flip4Mac WMV.prefPane
VersionCue.prefPane
VersionCueCS3.prefPane
/Library/PrivilegedHelperTools:
com.microsoft.office.licensing.helper
/Library/QuickLook:
GBQLGenerator.qlgenerator
iWork.qlgenerator
/Library/QuickTime:
AppleIntermediateCodec.component
AppleMPEG2Codec.component
Flip4Mac WMV Export.component
Flip4Mac WMV Import.component
Google Camera Adapter 0.component
Google Camera Adapter 1.component
/Library/ScriptingAdditions:
Adobe Unit Types
Adobe Unit Types.osax
/Library/StartupItems:
AdobeVersionCue
HP Trap Monitor
Library/Address Book Plug-Ins:
SkypeABDialer.bundle
SkypeABSMS.bundle
Library/Internet Plug-Ins:
Move_Media_Player.plugin
fbplugin_1_0_1.plugin
Library/LaunchAgents:
com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae.plist
com.apple.FolderActions.enabled.plist
com.apple.FolderActions.folders.plist
Library/PreferencePanes:
A Better Finder Preferences.prefPane
Kristen-Scotts-Computer:~ kristenscott$