TEMP tablespace increasing
I have a problem with storage space of my default temporary table space.
For my application, I specified the default tablespace as "DEFTBSPACE" with size 4000 MB and the temporary tablespace as "TEMP" with size 14486 MB. Now the usage of my temporary tablespace is increasing rapidly (the used temp tablespace is currently 14485 MB, almost 99.99%). What could be the reason? Is anything going into the TEMP tablespace by mistake?
Please help.
Regards,
Satish.
Can we manually deallocate the datafile, or part of it, so that the used space is reduced? What is the solution to reduce the used space of this TEMP tablespace? If you bounce the database, you will get all the used space back.
I heard that dropping and recreating the TEMP tablespace is the only solution for this. Is it true?
Yes it is absolutely true.
Create another temp tablespace, set it as the default temp tablespace, and drop the previous one. However, make sure no one is currently using that temp tablespace.
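The steps above can be sketched as follows (the tablespace name, file path, and size here are placeholders, not taken from the original post):

```sql
-- Create a replacement temporary tablespace (name/path/size are examples)
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '/u01/oradata/mydb/temp2_01.dbf' SIZE 4096M;

-- Make it the database default so new sessions pick it up
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

-- Only once no session still holds segments in the old TEMP:
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
```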
The reason you haven't got any result querying v$sort_usage is that no one is using TEMP at the moment.
You can also use v$tempseg_usage.
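Before dropping the old tablespace, a query along these lines (a sketch joining V$TEMPSEG_USAGE to V$SESSION) shows which sessions, if any, currently hold temp segments:

```sql
-- Sessions currently holding temporary segments, with approximate usage in MB
SELECT s.sid,
       s.username,
       u.tablespace,
       u.segtype,
       u.blocks * ts.block_size / 1024 / 1024 AS used_mb
FROM   v$tempseg_usage u
       JOIN v$session s        ON s.saddr = u.session_addr
       JOIN dba_tablespaces ts ON ts.tablespace_name = u.tablespace;
```

No rows returned means nothing is using the temp tablespace right now.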
Jaffar
OCP DBA
Similar Messages
-
Should I increase TEMP tablespace size here ?
Version: 10.2.0.4
Platform : RHEL 5.8
Currently we are running a batch job in a schema. The default temporary tablespace for that schema is TEMP.
But I see that the tablespace is full.
SQL> select file_name, bytes/1024/1024/1024 gb from dba_temp_files where tablespace_name = 'TEMP';
FILE_NAME GB
/prd/fdms/oradata_db18/fdmsc1sdb/oradata/ts_temp/temp01.dbf 10
SQL> SELECT tablespace_name, file_id,
            bytes_used/1024/1024,
            bytes_free/1024/1024
     FROM v$temp_space_header WHERE tablespace_name = 'TEMP';
TABLESPACE_NAME FILE_ID BYTES_USED/1024/1024 BYTES_FREE/1024/1024
TEMP 1 10240
So far the application users have not complained and I haven't seen any 'unable to extend' error in the alert log yet, but the above scenario is dangerous, right? I mean, SQL statements that sort can error out, right? Unlike UNDO, with the temp tablespace, temp segments cannot be reused. Right?
Hello,
As said previously, sort segments can be reused; the views V$SORT_SEGMENT and V$TEMPSEG_USAGE are the relevant ones for monitoring the usage of the temporary tablespace.
You'll find in the Note below a way to control over time the Temporary Tablespace:
How Can Temporary Segment Usage Be Monitored Over Time? [ID 364417.1]
Moreover, you may also check for any ORA-01652 in the alert log.
But don't worry too much about a full temporary tablespace; here "full" doesn't mean "unreusable".
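As a quick point-in-time check (a sketch; the Note above covers monitoring over time), V$SORT_SEGMENT distinguishes extents actually in use from cached free extents:

```sql
-- Used vs. free extents in the sort segment; "free" extents are cached
-- for reuse, which is why a "full" TEMP is not necessarily a problem
SELECT ss.tablespace_name,
       ss.used_blocks * ts.block_size / 1024 / 1024 AS used_mb,
       ss.free_blocks * ts.block_size / 1024 / 1024 AS free_mb
FROM   v$sort_segment ss
       JOIN dba_tablespaces ts ON ts.tablespace_name = ss.tablespace_name;
```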
Hope this helps.
Best Regards,
Jean-Valentin Lubiez -
TEMP tablespace getting full while inserting a CLOB in Trigger
We have an Oracle 10g (10.2.0.4.0) DB on a Solaris 9 box, which also runs our J2EE web-service application on a WebLogic 8sp6 server.
We get around 220K web-service requests from upstream callers daily to insert data in the main table, say TABLE1, which has daily partitions on a date column. This table has around 21 columns out of which 1 is a CLOB column.
Now this table has an AFTER INSERT trigger which calls a package procedure to insert the same record into another table, say TABLE2.
From the Java application, the insert statement is executed in the format below using a WebLogic JDBC connection pool:
INSERT INTO TABLE1(COLUMN1, COLUMN2, ........., CLOB_COLUMN,........, COLUMN21) VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20);
Clob object is prepared in application using ojdbc14.jar.
We are observing a strange issue here. The TEMP tablespace utilization keeps on growing as more and more inserts are executed by application and after ~125K inserts the TEMP tablespace gets full and we start getting ORA-01652 error.
On further analysis we could see that there are only 7-10 sessions being maintained, but as more and more inserts happen, TEMP tablespace utilization keeps increasing for each of these sessions.
When we tried inserting just a few records and then watching the session details in v$session_wait, we could see that the session is INACTIVE and waiting on the event 'SQL*Net message from client'. This does not seem correct, as the session has successfully inserted the data and committed the transaction, and we can see the data in the tables as well.
The confusing thing here is that when we modify the trigger to pass a blank string ('') instead of the CLOB column to TABLE2, the issue does not occur. All 200K records are inserted properly and TEMP tablespace utilization always stays below 1%.
Can you please help us solve this issue? Is this related to any known Oracle issue?
Inside the package we have tried using DBMS_LOB.COPY to copy the CLOB column after the insert, but with the same result.
Code for reference:
Trigger:
=====================================
CREATE OR REPLACE TRIGGER trg
AFTER INSERT OR UPDATE
ON TABLE1
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
IF (:NEW.date_col > SYSDATE - 2)
THEN
IF (:NEW.cat IN (1001, 1002))
THEN
pkg.process_change
(:NEW.COLUMN1,
:NEW.COLUMN2,
:NEW.CLOB_COLUMN,
UPDATING);  -- boolean flag: TRUE when the firing statement is an UPDATE
END IF;
END IF;
END;
=====================================
Package:
=====================================
procedure PKG.Process_change(
p_COLUMN1 number,
p_COLUMN2 varchar2,
p_CLOB_COLUMN clob,
flag boolean
) is
v_watermark pls_integer;
v_type varchar2(1);
begin
if (flag) then
v_type := 'U';
else
v_type := 'I';
end if;
select t_seq.nextval into v_watermark from dual;
insert into TABLE2 (
COLUMN1,
COLUMN2,
CLOB_COLUMN,
watermark,
dml_type
) values (
p_COLUMN1,
p_COLUMN2,
p_CLOB_COLUMN,
v_watermark,
v_type
);
end;
=====================================
My first thought on reading your post is that you are using a database version so old that it is in extended support, and you are not even on the most recent patchset for it.
The first thing I would do is move to 11gR2; if you can't do that, at least get to 10.2.0.5 and apply the CLOB-relevant patches as well.
The same goes for your operating system. Solaris 9 is ancient: move to Solaris 10, which has vastly improved memory management.
To help you further, it would be really valuable to know the table layout. For example, is this a heap table or an IOT? Is it partitioned? Is this RAC? What size are the CLOBs? Are they stored in-line? What chunk size? Etc.
This page should start you down the right road:
http://docs.oracle.com/cd/B19306_01/appdev.102/b14249/adlob_tables.htm#sthref204
But I am also wondering why you would use a trigger to, as you say, "insert the same record into another table." This description is a poster child for "bad design." -
9i on Linux. Problems with Temp tablespace cleanup
I am currently running Oracle 9i Enterprise on SUSE Linux 7.2.
I am executing queries against the new XMLType datatype, and every query adds to the TEMP tablespace, which doesn't get cleaned up. Eventually the tablespace runs out of space and Oracle issues an error:
ORA-01652: unable to extend temp segment by 128 in tablespace <name>
The only way to clean up the Temp tablespace seems to be by restarting the server.
Is this happening on other platforms as well? I would appreciate any help.
Hi
You can connect to the database as a DBA (SYS or SYSTEM) and make the temporary tablespace bigger, or create a new, bigger temporary tablespace and assign it to the user.
A10!
PS: The temporary tablespace is used when no memory is available for the session, for example when a big ORDER BY is done. Try increasing the memory assigned; look at initXXX.ora (SORT_AREA_SIZE)
Hello All,
I am using Oracle 11g R2 and I want to increase the temp tablespace size. Can I use the command below? Can I increase it while the database is open and some queries are running and using the temp tablespace?
ALTER DATABASE TEMPFILE '....../datafile/name_datafile.tmp' RESIZE 100M
Regards,
Hello,
I am using Oracle 11g R2 and I want to increase the temp tablespace size. Can I use the command below? Can I increase it while the database is open and some queries are running and using the temp tablespace?
Why do you intend to extend the temporary tablespace? Do you have any ORA-01652 errors?
If not, it may not be necessary to extend it. Even if it seems to be full, free extents are reused.
ALTER DATABASE TEMPFILE '....../datafile/name_datafile.tmp' RESIZE 100M
Yes, you can use this statement, but be aware that the size specified (here 100 MB) is the target size, not a supplemental size.
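For example (the tempfile path below is hypothetical; since RESIZE sets the file's new total size, growing a 1 GB file by 1 GB means specifying 2048M, not 1024M):

```sql
-- Check the current tempfile sizes first
SELECT file_name, bytes/1024/1024 AS size_mb FROM dba_temp_files;

-- Grow the file to a 2 GB total; this works while the database is open
ALTER DATABASE TEMPFILE '/u01/oradata/mydb/temp01.dbf' RESIZE 2048M;
```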
Hope this helps.
Best regards,
Jean-Valentin -
Informatica Workflow fails because of TEMP tablespace
Hi,
I am trying to do a Complete Oracle 11.5.10 load. However my execution plan fails because the SDE_ORA_Payroll_Fact fails. The error in the session log is as follows:
READER_1_1_1> RR_4035 SQL Error [
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
From the error message it is very clear that the Source Qualifier is unable to select the data from the source tables. I have increased the TEMP tablespace too, but I keep getting the error. Because of this error my other mappings are also stopped. Any solutions to this problem?
Hi,
Would you not want to use the following parameters to say load one fiscal year at a time?
Analysis Start Date
The start date used to build the day dimension and to flatten exchange rates and costs lists.
$$ANALYSIS_START, $$ANALYSIS_START_WID
Default : Jan 1, 1980
Analysis End Date
The end date used to build the day dimension and to flatten exchange rates and costs lists.
$$ANALYSIS_END, $$ANALYSIS_END_WID
Default : Dec 31, 2010
Thanks,
Chris -
Hi,
We have the following errors:
RMAN> crosscheck archivelog all;
starting full resync of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of crosscheck command at 01/07/2009 14:26:10
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of full resync command on default channel at 01/07/2009 14:26:10
ORA-01652: unable to extend temp segment by in tablespace
RMAN>
We have tried to increase the size of the TEMP tablespace, but this operation keeps using up the entire temp tablespace. It's the temp tablespace in the target database that is filling up.
We cannot keep increasing the size of the TEMP ts to fill the disk.
We have also tried to create a new repository but this had the same outcome.
I reckon there is something in the database tables, as one of our backups used the target control file as its repository. Should I remove the entries from these tables in the target tablespace to make the repository think there is no re-synching to be done?
Regards,
Tim.
If you are sure that it is on the target DB, what you can do is enable a trace for the event to see what triggers this error.
It doesn't have to be the temporary tablespace, because there are also temp segments in normal tablespaces, used for operations such as index creation, so you need to find the problem before assuming it is a temp tablespace error.
alter system set events '1652 trace name errorstack level 1';
to turn off
alter system set events '1652 trace name context off';
If you can't find anything in the trace, then you can post the trace output here; maybe somebody can catch something.
Coskan Gundogar
http://coskan.wordpress.com
Edited by: coskan on Mar 30, 2009 4:28 PM -
Hi all,
We have an Oracle 10.2.0.4 reporting database, into which a lot of data is refreshed every night from production databases.
For this refresh the database uses complex materialized view definitions.
We recently keep running out of TEMP tablespace during the refresh of these MVs. The TEMP tablespace has lately been increased several times (in total from 15 GB to 25 GB over the last months).
The largest MV is just 3 GB. In particular, the one that ran out of TEMP tablespace last night is only 1 GB.
The error message:
ORA-12008: error in materialized view refresh path
ORA-12801: error signaled in parallel query server P002
ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
Can anyone tell me what could cause this behaviour?
Some specs:
Oracle 10.2.0.4
Platform: AIX 5.3 TL06
sga_target = 3504M
parallel_max_servers=8
temp_tablespace_size=25600Mb
Thanks in advance
They are COMPLETE refreshes.
Statement of the view:
SELECT /*+ NO_USE_HASH_AGGREGATION */
mon.mon_description AS mon_mon_descr, mon.mon_code AS mon_mon_code,
mon.yer_code AS mon_yer_code, cus.iet_descr AS cus_iet_descr,
prd.igp_nr AS prd_igp_nr, prd.igp_descr_german AS prd_igp_descr_ger,
prd.igp_descr_dutch AS prd_igp_descr_dut,
prd.igp_descr_english AS prd_igp_descr_eng,
prd.igp_descr_czech AS prd_igp_descr_ces,
prd.igp_descr_russian AS prd_igp_descr_rus, prd.pgp_nr AS prd_pgp_nr,
prd.pgp_descr_german AS prd_pgp_descr_ger,
prd.pgp_descr_dutch AS prd_pgp_descr_dut,
prd.pgp_descr_english AS prd_pgp_descr_eng,
prd.pgp_descr_czech AS prd_pgp_descr_ces,
prd.pgp_descr_russian AS prd_pgp_descr_rus, prd.dvs_nr AS prd_dvs_nr,
prd.dvs_descr_german AS prd_dvs_descr_ger,
prd.dvs_descr_dutch AS prd_dvs_descr_dut,
prd.dvs_descr_english AS prd_dvs_descr_eng,
prd.dvs_descr_czech AS prd_dvs_descr_ces,
prd.dvs_descr_russian AS prd_dvs_descr_rus,
cus.pce_descr AS cus_pce_descr, cus.smk_descr AS cus_smk_descr,
cus.org_descr AS cus_org_descr, cus.dpm_descr AS cus_dpm_descr,
cus.cmp_descr AS cus_cmp_descr, cus.cgp_descr AS cus_cgp_descr,
cus.cus_nr AS cus_cus_nr, cus.cus_descr AS cus_cus_descr,
cus.cus_billto_nr AS cus_billto_nr,
SUM (fin.invoice_discount_eur) AS invoice_discount_eur,
SUM (fin.invoice_discount_gbp) AS invoice_discount_gbp,
SUM (fin.invoice_line_discount_eur) AS invoice_line_discount_eur,
SUM (fin.invoice_line_discount_gbp) AS invoice_line_discount_gbp,
SUM (fin.turnover_cr_eur) AS turnover_cr_eur,
SUM (fin.turnover_cr_gbp) AS turnover_cr_gbp,
SUM (fin.turnover_deb_eur) AS turnover_deb_eur,
SUM (fin.turnover_deb_gbp) AS turnover_deb_gbp,
SUM (fin.turnover_eur) AS turnover_eur,
SUM (fin.turnover_gbp) AS turnover_gbp,
SUM (fin.count_credit_slips) AS count_credit_slips,
cus.srp_nr AS cus_srp_nr, cus.srp_descr AS cus_srp_descr,
COUNT (*) AS total_records,
COUNT (fin.count_credit_slips) AS num_count_credit_slips,
cus.cus_branch AS cus_branch_nr, cus.cus_district AS cus_district_nr,
SUM (fin.profit_eur) AS profit_eur,
SUM (fin.profit_gbp) AS profit_gbp,
SUM (fin.cost_price_eur) AS costs_eur,
SUM (fin.cost_price_gbp) AS costs_gbp,
SUM (fin.invoice_discount_chf) AS invoice_discount_chf,
SUM (fin.invoice_line_discount_chf) AS invoice_line_discount_chf,
SUM (fin.turnover_cr_chf) AS turnover_cr_chf,
SUM (fin.turnover_deb_chf) AS turnover_deb_chf,
SUM (fin.turnover_chf) AS turnover_chf,
SUM (fin.profit_chf) AS profit_chf,
SUM (fin.cost_price_chf) AS costs_chf,
SUM (fin.invoice_discount_czk) AS invoice_discount_czk,
SUM (fin.invoice_line_discount_czk) AS invoice_line_discount_czk,
SUM (fin.turnover_cr_czk) AS turnover_cr_czk,
SUM (fin.turnover_deb_czk) AS turnover_deb_czk,
SUM (fin.turnover_czk) AS turnover_czk,
SUM (fin.profit_czk) AS profit_czk,
SUM (fin.cost_price_czk) AS costs_czk,
SUM (fin.invoice_discount_rub) AS invoice_discount_rub,
SUM (fin.invoice_line_discount_rub) AS invoice_line_discount_rub,
SUM (fin.turnover_cr_rub) AS turnover_cr_rub,
SUM (fin.turnover_deb_rub) AS turnover_deb_rub,
SUM (fin.turnover_rub) AS turnover_rub,
SUM (fin.profit_rub) AS profit_rub,
SUM (fin.cost_price_rub) AS costs_rub,
COUNT (fin.invoice_discount_eur) AS cnt_invoice_discount_eur,
COUNT (fin.invoice_discount_gbp) AS cnt_invoice_discount_gbp,
COUNT
(fin.invoice_line_discount_eur)
AS cnt_invoice_line_discount_eur,
COUNT
(fin.invoice_line_discount_gbp)
AS cnt_invoice_line_discount_gbp,
COUNT (fin.turnover_cr_eur) AS cnt_turnover_cr_eur,
COUNT (fin.turnover_cr_gbp) AS cnt_turnover_cr_gbp,
COUNT (fin.turnover_deb_eur) AS cnt_turnover_deb_eur,
COUNT (fin.turnover_deb_gbp) AS cnt_turnover_deb_gbp,
COUNT (fin.turnover_eur) AS cnt_turnover_eur,
COUNT (fin.turnover_gbp) AS cnt_turnover_gbp,
COUNT (fin.profit_eur) AS cnt_profit_eur,
COUNT (fin.profit_gbp) AS cnt_profit_gbp,
COUNT (fin.cost_price_eur) AS cnt_costs_eur,
COUNT (fin.cost_price_gbp) AS cnt_costs_gbp,
COUNT (fin.invoice_discount_chf) AS cnt_invoice_discount_chf,
COUNT
(fin.invoice_line_discount_chf)
AS cnt_invoice_line_discount_chf,
COUNT (fin.turnover_cr_chf) AS cnt_turnover_cr_chf,
COUNT (fin.turnover_deb_chf) AS cnt_turnover_deb_chf,
COUNT (fin.turnover_chf) AS cnt_turnover_chf,
COUNT (fin.profit_chf) AS cnt_profit_chf,
COUNT (fin.cost_price_chf) AS cnt_costs_chf,
COUNT (fin.invoice_discount_czk) AS cnt_invoice_discount_czk,
COUNT
(fin.invoice_line_discount_czk)
AS cnt_invoice_line_discount_czk,
COUNT (fin.turnover_cr_czk) AS cnt_turnover_cr_czk,
COUNT (fin.turnover_deb_czk) AS cnt_turnover_deb_czk,
COUNT (fin.turnover_czk) AS cnt_turnover_czk,
COUNT (fin.profit_czk) AS cnt_profit_czk,
COUNT (fin.cost_price_czk) AS cnt_costs_czk,
COUNT (fin.invoice_discount_rub) AS cnt_invoice_discount_rub,
COUNT
(fin.invoice_line_discount_rub)
AS cnt_invoice_line_discount_rub,
COUNT (fin.turnover_cr_rub) AS cnt_turnover_cr_rub,
COUNT (fin.turnover_deb_rub) AS cnt_turnover_deb_rub,
COUNT (fin.turnover_rub) AS cnt_turnover_rub,
COUNT (fin.profit_rub) AS cnt_profit_rub,
COUNT (fin.cost_price_rub) AS cnt_costs_rub
FROM /* dwh_internal_external_dim iet */
dwh_customers_dim cus /* department */
, dwh_products_dim prd /* itemgroup */
, dwh_months_dim mon
, dwh_financial_fct fin
WHERE fin.mon_code = mon.mon_code
AND fin.prd_id = prd.prd_id
AND fin.cus_cus_id = cus.cus_id
GROUP BY mon.mon_description,
mon.mon_code,
mon.yer_code,
cus.iet_descr,
prd.igp_nr,
prd.igp_descr_german,
prd.igp_descr_dutch,
prd.igp_descr_english,
prd.igp_descr_czech,
prd.igp_descr_russian,
prd.pgp_nr,
prd.pgp_descr_german,
prd.pgp_descr_dutch,
prd.pgp_descr_english,
prd.pgp_descr_czech,
prd.pgp_descr_russian,
prd.dvs_nr,
prd.dvs_descr_german,
prd.dvs_descr_dutch,
prd.dvs_descr_english,
prd.dvs_descr_czech,
prd.dvs_descr_russian,
cus.pce_descr,
cus.smk_descr,
cus.org_descr,
cus.dpm_descr,
cus.cmp_descr,
cus.cgp_descr,
cus.cus_nr,
cus.cus_descr,
cus.cus_billto_nr,
cus.srp_nr,
cus.srp_descr,
cus.cus_branch,
cus.cus_district;
Explain plan:
Plan
SELECT STATEMENT CHOOSE Cost: 278,496 Bytes: 13,752,541,260 Cardinality: 18,864,940
25 PX COORDINATOR
24 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10005 :Q1005 Cost: 278,496 Bytes: 13,752,541,260 Cardinality: 18,864,940
23 SORT GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1005 Cost: 278,496 Bytes: 13,752,541,260 Cardinality: 18,864,940
22 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005 Cost: 278,496 Bytes: 13,752,541,260 Cardinality: 18,864,940
21 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10004 :Q1004 Cost: 278,496 Bytes: 13,752,541,260 Cardinality: 18,864,940
20 SORT GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 278,496 Bytes: 13,752,541,260 Cardinality: 18,864,940
19 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 26,390 Bytes: 13,752,541,260 Cardinality: 18,864,940
4 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 55 Bytes: 11,394,614 Cardinality: 70,774
3 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10000 :Q1000 Cost: 55 Bytes: 11,394,614 Cardinality: 70,774
2 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1000 Cost: 55 Bytes: 11,394,614 Cardinality: 70,774
1 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT MIS_RUN.DWH_CUSTOMERS_DIM :Q1000 Cost: 55 Bytes: 11,394,614 Cardinality: 70,774
18 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 26,300 Bytes: 10,715,285,920 Cardinality: 18,864,940
8 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Bytes: 2,052 Cardinality: 108
7 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001 Cost: 2 Bytes: 2,052 Cardinality: 108
6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001 Cost: 2 Bytes: 2,052 Cardinality: 108
5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT MIS_RUN.DWH_MONTHS_DIM :Q1001 Cost: 2 Bytes: 2,052 Cardinality: 108
17 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 26,264 Bytes: 10,356,852,060 Cardinality: 18,864,940
12 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 305 Bytes: 178,954,440 Cardinality: 426,082
11 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10002 :Q1002 Cost: 305 Bytes: 178,954,440 Cardinality: 426,082
10 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002 Cost: 305 Bytes: 178,954,440 Cardinality: 426,082
9 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT MIS_RUN.DWH_PRODUCTS_DIM :Q1002 Cost: 305 Bytes: 178,954,440 Cardinality: 426,082
16 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 11,396 Bytes: 2,433,577,260 Cardinality: 18,864,940
15 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10003 :Q1003 Cost: 11,396 Bytes: 2,433,577,260 Cardinality: 18,864,940
14 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1003 Cost: 11,396 Bytes: 2,433,577,260 Cardinality: 18,864,940
13 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT MIS_RUN.DWH_FINANCIAL_FCT :Q1003 Cost: 11,396 Bytes: 2,433,577,260 Cardinality: 18,864,940
Thanks again -
Sizing of the OLAP temp tablespace!!
We are using 10gR2 OLAP along with Disco Plus OLAP to build a prototype. The metadata is built with AWM 10.2.0.1.0A.
Our test data is about 40M records, 1.7 GB in a flat file. After being loaded into a fact table via SQL*Loader, it takes about 2 GB in the tablespace, along with 7 dimension tables. Only one cube and one measure are used. A compressed composite with 6 dimensions is associated with the cube.
However, when we run the AWM maintenance steps to get a full cube aggregation, the temp tablespace for the AW shot up to 30 GB and then generated an out-of-disk-space error. The AW tablespace remains untouched. The maintenance cannot be finished.
Interestingly, the data was already solved in Express. When doing an EIF export/import, the resulting tablespace is about 5 GB while the EIF file takes 1.3 GB.
Also, before this round of data, we had successfully created and maintained an AW with a very small set of data. The data is displayed in Disco Plus correctly.
The disk space demand of the temp tablespace seems surprisingly high. We need to get an idea how much space it really needs. Does anybody have experience to contribute on this issue? Please reply to this post. I am not sure whether Oracle has formally published anything on temp tablespace sizing; I believe they should have.
Thanks,
Chris,
No upgrading was done here. The metadata and objects were defined manually using the AWM model view, but matching the definitions on the Express side for the same objects, like dimension order etc. The fact data was then dumped to a flat file from Express, transferred to the OLAP DB server, and loaded into a star schema with SQL*Loader. AWM mapping and maintenance features were used to build the cube.
When I say the data was solved, I mean the summary-level data was included in the dump, so we know the size of the solved cube. My reasoning is that there should be no room for the size to grow out of control, because not much new summary-level data had to be added.
Another fact I should have mentioned last time is that all the data is in one time period. 40M is the number of tuples.
Thanks for the help.
Haiyong -
Will deleting and creating a new temp.dbf increase performance?
I have a 32 GB temp file. I don't really understand why temp.dbf needs to be so big.
The question is: can I delete and create a new temp.dbf? And what would doing this mean?
I'm newbie in this.
Thanks a lot, Luis.
Edited by: cabezanet on 23-nov-2009 9:10
The temporary tablespace (physically, a tempfile) is used for sorting operations: result-set ordering, joins, etc. Presumably your tempfile was in autoextend mode and its size increased whenever a sort operation did not fit into memory.
You can drop and recreate the temp tablespace, as it is not tied to any database consistency structure; it holds only sort segments.
Any way to avoid the hit on the TEMP tablespace?
I'm running a CTAS query that unions 13 tables with about 7,000,000 rows each, inner joins that union again against a global temporary table with 13 rows, and then doing a GROUP BY against the result, summing about 6 fields. The resulting query runs in about 2 hours with about a 16 Gig hit on the TEMP tablespace.
In production I will need to join 52 tables with about that many rows against a GTT with 52 rows. I haven't experimented with it yet but I'm guessing the time and memory increase will be linear (i.e. 8 hours, 64 Gig hit).
I'm being asked if there's any way to avoid the hit on the TEMP tablespace. It was suggested that I look into using a materialized view, but won't that just transfer the hit from the hard drive to RAM? I don't think this particular database has 64 GB of RAM dedicated to it, and I'm sure the row counts will grow in the future anyway.
Thoughts?
Thanks
Joe
I don't have visibility into the TEMP tablespace on their database, so I don't know if the hit there is any less.
If you have privileges, you can use SQL*Plus AUTOTRACE (or possibly Oracle trace, if you're really that interested) to get run-time statistics on a query as you run it. Waiting for the 2-hour results will be a bit tedious, though. If you don't have privileges for autotrace it won't work; the error messages explain what's wrong rather well ;)
SQL> set autotrace on
SQL> select * from dual;
D
X
Execution Plan
Plan hash value: 3543395131
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
3 consistent gets
2 physical reads
0 redo size
204 bytes sent via SQL*Net to client
234 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
I still haven't figured out why selecting from dual is doing 3 consistent gets and 2 physical reads ;)
ORA-30928: "Connect by filtering phase runs out of temp tablespace"
I have created a query that is used to display data on a label. This query is stored in a program that we use. The query ran just fine until this morning, when it started returning the error ORA-30928: "Connect by filtering phase runs out of temp tablespace". I have Googled and found out that I can do any of the following:
Include a NO_CONNECT_BY_FILTERING hint - but it did not work properly
Increase the temp tablespace - not applicable to me, since this runs on a production server that I don't have access to.
Are there other ways to fix this? By the way, below is the query that I use.
SELECT * FROM(
SELECT
gn.wipdatavalue
, gn.containername
, gn.l
, gn.q
, gn.d
, gn.l2
, gn.q2
, gn.d2
, gn.l3
, gn.q3
, gn.d3
, gn.old
, gn.qtyperbox
, gn.productname
, gn.slot
, gn.dt
, gn.ws_green
, gn.ws_pnr
, gn.ws_pcn
, intn.mkt_number dsn
, gn.low_number
, gn.high_number
, gn.msl
, gn.baketime
, gn.exptime
, NVL(gn.q, 0) + NVL(gn.q2, 0) + NVL(gn.q3, 0) AS qtybox
, row_number () over (partition by slot order by low_number) as n
FROM
(
SELECT
tr.*
, TO_NUMBER(SUBSTR(wipdatavalue, 1, INSTR (wipdatavalue || '-', '-') - 1)) AS low_number
, TO_NUMBER(SUBSTR(wipdatavalue, 1 + INSTR ( wipdatavalue, '-'))) AS high_number
, pm.msllevel MSL
, pm.baketime BAKETIME
, pm.expstime EXPTIME
FROM trprinting tr
JOIN CONTAINER c ON tr.containername = c.containername
JOIN a_lotattributes ala ON c.containerid = ala.containerid
JOIN product p ON c.productid = p.productid
LEFT JOIN otherdb.pkg_main pm ON trim(p.brandname) = trim(pm.pcode)
WHERE (c.containername = :lot OR tr.SLOT= :lot)
)gn
LEFT JOIN otherdb.intnr intn ON TRIM(gn.productname) = TRIM(intn.part_number)
connect by level <= HIGH_NUMBER + 1 - LOW_NUMBER and LOW_NUMBER = prior LOW_NUMBER and prior SYS_GUID() is not null
ORDER BY low_number, n
)
WHERE n LIKE :n AND wipdatavalue LIKE :wip AND ROWNUM <= 300 AND wipdatavalue NOT LIKE 0
I am using Oracle 11g too.
Thanks for the help everyone.
Hi,
The documentation implies that the START WITH and CONNECT BY clauses should come before the GROUP BY clause. I've never known it to make a difference before, but you might try putting the GROUP BY clause last.
If you're GROUPing by LEVEL, what's the point of SELECTing MAX (LEVEL)? MAX (LEVEL) will always be the same as LEVEL.
What are you trying to do?
Post some sample data (CREATE TABLE and INSERT statements) and the results you want from that data, and somebody will help you get them. -
Hi,
I need some help with the temp tablespace on an Express Edition. I have an Express Edition running under Windows 2003 Server, and the temp tablespace has grown very large (>4 GB). I have tried to reduce the datafile with 'alter database datafile 'C:\ORACLEXE\ORADATA\XE\TEMP.DBF' resize 2048M;' but I always get the error 'ORA-01516: nonexistent log file, data file, or temporary file 'C:\ORACLEXE\ORADATA\XE\TEMP.DBF'', even though the path is correct (I have verified it many times).
Is there a possibility to reduce the temp tablespace, or is the only option to create a new default temp tablespace and delete the old one? If I create a new temp tablespace, can I use the MAXSIZE option?
Many thanks in advance for all your help.
Kind regards
Luis Varon
create a new default temp tablespace and delete the old one.
Yes, do that.
can I use the maxsize option?
You can, but even better, do not specify AUTOEXTEND ON (OFF is the default).
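Putting that together for XE (a sketch; the file path follows the layout in the original post, and the size is an example):

```sql
-- New default temp tablespace with a fixed size (AUTOEXTEND stays OFF)
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE 'C:\ORACLEXE\ORADATA\XE\TEMP2.DBF' SIZE 1024M;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

-- Drop the oversized old tablespace and remove TEMP.DBF from disk
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
```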
RDBMS version:11.2, 10.2
We usually create only one temporary tablespace (TEMP) for a database. But, our application team is asking us to create two more dedicated Temporary tablespaces for 2 of its DB schemas which have high activity and assign it as the default temporary tablespaces for it.
Are there any advantages in creating separate TEMP tablespaces for highly active schemas? Can't I just have one tablespace and increase the size of this TEMP tablespace instead?
SM_308 wrote:
RDBMS version:11.2, 10.2
We usually create only one temporary tablespace (TEMP) for a database. But, our application team is asking us to create two more dedicated Temporary tablespaces for 2 of its DB schemas which have high activity and assign it as the default temporary tablespaces for it.
Are there any advantages in creating separate TEMP tablespaces for highly active schemas? Can't I just have one tablespace and increase the size of this TEMP tablespace instead?
I would recommend a single larger TEMP tablespace -
Hi Folks,
My temp tablespace is 100% full... what should I do? Please keep in mind that autoextend is ON and this is a 24x7 DB.
Please advise.
Sometimes it is really difficult to decide between resizing the temp tablespace or the tempfile.
We also run 24x7, and the customer duplicates schemas using their own Java code.
What happens is, some schemas are really big; sometimes we allocate 12 GB of temp for a 20 GB database because of one schema which alone is about 10 GB.
We did encounter a lack of space after we increased their tempfile, so we shrank the tempfile again to regain space.
I don't think this is a good way, but so far I haven't found a better way to handle it.