Query to Identify Compressed Tables?
Is there a query to identify which tables are compressed?
Isn't the following query limited to just the partitioned tables that are compressed?
SELECT DISTINCT NAME
FROM sysobjects x
WHERE x.id IN (SELECT OBJECT_ID
FROM sys.partitions
WHERE data_compression <> 0)
Hi,
Does this query help?
SELECT
SCHEMA_NAME(sys.objects.schema_id) AS [SchemaName]
,OBJECT_NAME(sys.objects.object_id) AS [ObjectName]
,[rows]
,[data_compression_desc]
,[index_id] as [IndexID_on_Table]
FROM sys.partitions
INNER JOIN sys.objects
ON sys.partitions.object_id = sys.objects.object_id
WHERE data_compression > 0
AND SCHEMA_NAME(sys.objects.schema_id) <> 'SYS'
ORDER BY SchemaName, ObjectName
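If index names are more useful than the bare index_id, a variant joining sys.indexes can be sketched as follows (standard catalog views; the '(heap)' label is an illustrative choice):

```sql
-- Sketch: compressed partitions with index names instead of just index_id.
SELECT SCHEMA_NAME(o.schema_id)  AS SchemaName,
       o.name                    AS ObjectName,
       ISNULL(i.name, '(heap)')  AS IndexName,
       p.data_compression_desc   AS CompressionType,
       p.rows                    AS [Rows]
FROM sys.partitions p
INNER JOIN sys.objects o
        ON o.object_id = p.object_id
LEFT JOIN sys.indexes i
       ON i.object_id = p.object_id
      AND i.index_id  = p.index_id
WHERE p.data_compression > 0
  AND o.is_ms_shipped = 0
ORDER BY SchemaName, ObjectName, IndexName;
```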
Similar Messages
-
Query to identify full table scans in progress
Does anybody have a query that would help me identify:
1) Full table scans in progress.
2) Long running queries in progress.
Thanks,
Thomas
Does anybody have a query that would help me identify:
1) Full table scans in progress. Not sure.
2) Long running queries in progress. Don't have a query readily available, but you can write one based on the following:
Try querying the view V$SESSION_LONGOPS. You will need to join this to V$SQL using SQL_ADDRESS to identify all the SQL statements running for more than 'x' minutes.
Current system time minus V$SESSION_LONGOPS.START_TIME should give you the duration.
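A minimal sketch of that join, assuming the standard V$SESSION_LONGOPS and V$SQL columns (the 10-minute threshold and the TIME_REMAINING filter are illustrative):

```sql
-- Sketch: operations running longer than 10 minutes, with their SQL text.
SELECT s.sid, s.serial#, l.opname, l.start_time,
       ROUND((SYSDATE - l.start_time) * 24 * 60) AS minutes_running,
       q.sql_text
FROM   v$session_longops l
JOIN   v$session s ON s.sid = l.sid AND s.serial# = l.serial#
JOIN   v$sql q     ON q.address    = l.sql_address
                  AND q.hash_value = l.sql_hash_value
WHERE  l.time_remaining > 0
AND    (SYSDATE - l.start_time) * 24 * 60 > 10;
```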
Shailender Mehta -
11.2.0.3.3 impdp compress table
HI ML :
Source database: 10.2.0.3, with compressed tables.
Target: 11.2.0.3.3. When the source's compressed tables are imported with impdp, are they still compressed tables on the target?
Previously, when importing into a 10g database directly via impdp over a database link, I found that the loaded tables had to be compressed manually with a move compress.
A MOS document's test shows that, starting with 10g, import automatically maintains compressed tables:
Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.2.0.1 - Release: 9.2 to 11.2
Information in this document applies to any platform.
Symptoms
The original import utility bypasses table compression, i.e. it does not compress the data even if the table is pre-created as compressed. The following example demonstrates this.
connect / as sysdba
create tablespace tbs_compress datafile '/tmp/tbs_compress01.dbf' size 100m;
create user test identified by test default tablespace tbs_compress temporary tablespace temp;
grant connect, resource to test;
connect test/test
-- create compressed table
create table compressed (
id number,
text varchar2(100)
) pctfree 0 pctused 90 compress;
-- create non-compressed table
create table noncompressed (
id number,
text varchar2(100)
) pctfree 0 pctused 90 nocompress;
-- populate compressed table with data
begin
for i in 1..100000 loop
insert into compressed values (1, lpad ('1', 100, '0'));
end loop;
commit;
end;
/
-- populate non-compressed table with identical data
begin
for i in 1..100000 loop
insert into noncompressed values (1, lpad ('1', 100, '0'));
end loop;
commit;
end;
/
-- compress the table COMPRESSED (previous insert doesn't use the compression)
alter table compressed move compress;
Let's now take a look at data dictionary to see the differences between the two tables:
connect test/test
select dbms_metadata.get_ddl ('TABLE', 'COMPRESSED') from dual;
DBMS_METADATA.GET_DDL('TABLE','COMPRESSED')
CREATE TABLE "TEST"."COMPRESSED"
( "ID" NUMBER,
"TEXT" VARCHAR2(100)
) PCTFREE 0 PCTUSED 90 INITRANS 1 MAXTRANS 255 COMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "TBS_COMPRESS"
1 row selected.
SQL> select dbms_metadata.get_ddl ('TABLE', 'NONCOMPRESSED') from dual;
DBMS_METADATA.GET_DDL('TABLE','NONCOMPRESSED')
CREATE TABLE "TEST"."NONCOMPRESSED"
( "ID" NUMBER,
"TEXT" VARCHAR2(100)
) PCTFREE 0 PCTUSED 90 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "TBS_COMPRESS"
1 row selected.
col segment_name format a30
select segment_name, bytes, extents, blocks from user_segments;
SEGMENT_NAME BYTES EXTENTS BLOCKS
COMPRESSED 2097152 17 256
NONCOMPRESSED 11534336 26 1408
2 rows selected.
The table COMPRESSED needs less storage space than the table NONCOMPRESSED. Now, let's export the tables using the original export utility:
#> exp test/test file=test_compress.dmp tables=compressed,noncompressed compress=n
About to export specified tables via Conventional Path ...
. . exporting table COMPRESSED 100000 rows exported
. . exporting table NONCOMPRESSED 100000 rows exported
Export terminated successfully without warnings.
and then import them back:
connect test/test
drop table compressed;
drop table noncompressed;
#> imp test/test file=test_compress.dmp tables=compressed,noncompressed
. importing TEST's objects into TEST
. . importing table "COMPRESSED" 100000 rows imported
. . importing table "NONCOMPRESSED" 100000 rows imported
Import terminated successfully without warnings.
Verify the extents after original import:
col segment_name format a30
select segment_name, bytes, extents, blocks from user_segments;
SEGMENT_NAME BYTES EXTENTS BLOCKS
COMPRESSED 11534336 26 1408
NONCOMPRESSED 11534336 26 1408
2 rows selected.
=> The table compression is gone.
Cause
This is expected behaviour. Import does not perform bulk load/direct path operations, so the data is not inserted compressed.
Only direct path operations will compress the data. These include:
• Direct path SQL*Loader
• CREATE TABLE ... AS SELECT (CTAS) statements
• Parallel INSERT (or serial INSERT with an APPEND hint) statements
Solution
The way to compress data after it is inserted via a non-direct operation is to move the table and compress the data:
alter table compressed move compress;
Beginning with Oracle 10g, the Data Pump utilities (expdp/impdp) perform direct path operations, so table compression is maintained, as in the following example:
After creating/populating the two tables, export them with:
#> expdp test/test directory=dpu dumpfile=test_compress.dmp tables=compressed,noncompressed
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "TEST"."NONCOMPRESSED" 10.30 MB 100000 rows
. . exported "TEST"."COMPRESSED" 10.30 MB 100000 rows
Master table "TEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
and re-import after deletion with:
#> impdp test/test directory=dpu dumpfile=test_compress.dmp tables=compressed,noncompressed
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."NONCOMPRESSED" 10.30 MB 100000 rows
. . imported "TEST"."COMPRESSED" 10.30 MB 100000 rows
Job "TEST"."SYS_IMPORT_TABLE_01" successfully completed at 12:47:51
Verify the extents after DataPump import:
col segment_name format a30
select segment_name, bytes, extents, blocks from user_segments;
SEGMENT_NAME BYTES EXTENTS BLOCKS
COMPRESSED 2097152 17 256
NONCOMPRESSED 11534336 26 1408
2 rows selected.
=> The table compression is kept.
===========================================================
1. Does 11.2.0.3 actually support impdp automatically maintaining compressed tables when importing over a database link?
2. Regarding this passage:
This is expected behaviour. Import does not perform bulk load/direct path operations, so the data is not inserted compressed.
Only direct path operations will compress the data. These include:
• Direct path SQL*Loader
• CREATE TABLE ... AS SELECT (CTAS) statements
• Parallel INSERT (or serial INSERT with an APPEND hint) statements
Solution
The way to compress data after it is inserted via a non-direct operation is to move the table and compress the data:
Does the above mean that before 10g you had to use that approach (a manual move) to get compressed tables on the target, and that automatic compression is supported starting with 10g? It seems 10g also requires a manual move. ODM TEST:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create table nocompres tablespace users as select * from dba_objects;
Table created.
SQL> create table compres_tab tablespace users as select * from dba_objects;
Table created.
SQL> alter table compres_tab compress 3;
Table altered.
SQL> alter table compres_tab move ;
Table altered.
select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
BYTES/1024/1024 SEGMENT_NAME
3 COMPRES_TAB
9 NOCOMPRES
C:\Users\ML>expdp maclean/oracle dumpfile=temp:COMPRES_TAB2.dmp tables=COMPRES_TAB
Export: Release 11.2.0.3.0 - Production on Fri Sep 14 12:01:12 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "MACLEAN"."SYS_EXPORT_TABLE_01": maclean/******** dumpfile=temp:COMPRES_TAB2.dmp tables=COMPRES_TAB
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 3 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MACLEAN"."COMPRES_TAB" 7.276 MB 75264 rows
Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
D:\COMPRES_TAB2.DMP
Job "MACLEAN"."SYS_EXPORT_TABLE_01" successfully completed at 12:01:20
C:\Users\ML>impdp maclean/oracle remap_schema=maclean:maclean1 dumpfile=temp:COMPRES_TAB2.dmp
Import: Release 11.2.0.3.0 - Production on Fri Sep 14 12:01:47 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MACLEAN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "MACLEAN"."SYS_IMPORT_FULL_01": maclean/******** remap_schema=maclean:maclean1 dumpfile=temp:COMPRES_TAB2.dmp
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MACLEAN1"."COMPRES_TAB" 7.276 MB 75264 rows
Job "MACLEAN"."SYS_IMPORT_FULL_01" successfully completed at 12:01:50
1* select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
SQL> /
BYTES/1024/1024 SEGMENT_NAME
3 COMPRES_TAB
SQL> drop table compres_tab;
Table dropped.
C:\Users\ML>exp maclean/oracle tables=COMPRES_TAB file=compres1.dmp
Export: Release 11.2.0.3.0 - Production on Fri Sep 14 12:03:19 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table COMPRES_TAB 75264 rows exported
Export terminated successfully without warnings.
C:\Users\ML>
C:\Users\ML>imp maclean/oracle fromuser=maclean touser=maclean1 file=compres1.dmp
Import: Release 11.2.0.3.0 - Production on Fri Sep 14 12:03:45 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V11.02.00 via conventional path
import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
. importing MACLEAN's objects into MACLEAN1
. . importing table "COMPRES_TAB" 75264 rows imported
Import terminated successfully without warnings.
SQL> conn maclean1/oracle
Connected.
1* select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
SQL> /
BYTES/1024/1024 SEGMENT_NAME
8 COMPRES_TAB
My understanding: a direct load can always preserve compression.
But imp defaults to the conventional path, i.e. it loads via ordinary INSERTs through the buffer cache, so it cannot preserve compression.
Whereas impdp preserves compression regardless of whether access_method is external_table or direct_path. -
Query is doing full table scan
Hi All,
The query below is doing a full table scan. Many application threads trigger this query, each doing a full table scan. Can you please tell me how to improve the performance of this query?
Env is 11.2.0.3 RAC (4 node). Unique index on VZ_ID, LOGGED_IN. The table row count is 2,501,103.
Query is :-
select ccagentsta0_.LOGGED_IN as LOGGED1_404_, ccagentsta0_.VZ_ID as VZ2_404_, ccagentsta0_.ACTIVE as ACTIVE404_, ccagentsta0_.AGENT_STATE as AGENT4_404_,
ccagentsta0_.APPLICATION_CODE as APPLICAT5_404_, ccagentsta0_.CREATED_ON as CREATED6_404_, ccagentsta0_.CURRENT_ORDER as CURRENT7_404_,
ccagentsta0_.CURRENT_TASK as CURRENT8_404_, ccagentsta0_.HELM_ID as HELM9_404_, ccagentsta0_.LAST_UPDATED as LAST10_404_, ccagentsta0_.LOCATION as LOCATION404_,
ccagentsta0_.LOGGED_OUT as LOGGED12_404_, ccagentsta0_.SUPERVISOR_VZID as SUPERVISOR13_404_, ccagentsta0_.VENDOR_NAME as VENDOR14_404_
from AGENT_STATE ccagentsta0_ where ccagentsta0_.VZ_ID='v790531' and ccagentsta0_.ACTIVE='Y';
Table Scan AGENT_STATE 2.366666667
Table Scan AGENT_STATE 0.3666666667
Table Scan AGENT_STATE 1.633333333
Table Scan AGENT_STATE 0.75
Table Scan AGENT_STATE 1.866666667
Table Scan AGENT_STATE 2.533333333
Table Scan AGENT_STATE 0.5333333333
Table Scan AGENT_STATE 1.95
Table Scan AGENT_STATE 0.8
Table Scan AGENT_STATE 0.2833333333
Table Scan AGENT_STATE 1.983333333
Table Scan AGENT_STATE 2.5
Table Scan AGENT_STATE 1.866666667
Table Scan AGENT_STATE 1.883333333
Table Scan AGENT_STATE 0.9
Table Scan AGENT_STATE 2.366666667
But the explain plan shows the query is taking the index
Explain plan output:-
PLAN_TABLE_OUTPUT
Plan hash value: 1946142815
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |                 |     1 |   106 |   244   (0)| 00:00:03 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| AGENT_STATE     |     1 |   106 |   244   (0)| 00:00:03 |
|*  2 |   INDEX RANGE SCAN          | AGENT_STATE_IDX |   229 |       |     4   (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("CCAGENTSTA0_"."ACTIVE"='Y')
2 - access("CCAGENTSTA0_"."VZ_ID"='v790531')
The value (VZ_ID) I have given is a dummy value picked from the table. I don't get the actual values, since the query comes in with bind variables. Please let me know your suggestions on this.
Thanks,
Mani
Hi,
But I am not getting what the issue is. It's a simple select query, and there is an index on the leading column (VZ_ID, the PK). The explain plan says it's using the index and selects only a fraction of rows from the table. Then why is it doing an FTS? Why does the optimizer treat it like a query doing an FTS?
The rule-based optimizer would have picked the plan with the index. The cost-based optimizer, however, is picking the plan with the lowest cost. Apparently, the lowest cost plan is the one with the full table scan. And the optimizer isn't necessarily wrong about this.
Reading data from a table via index probes is only efficient when selecting a relatively small percentage of rows. For larger percentages, a full table scan is generally better.
Consider a simple example: a query that selects from a table with biographies for all people on the planet. Suppose you are interested in all people from a certain country.
select * from all_people where country='Vatican'
would return only about 800 rows (Vatican City is an extremely small country with a population of just 800 people). For this case, obviously, using an index would be very efficient.
Now if we run this query:
select * from all_people where country = 'India'
we'd be getting over a billion rows. For this case, a full table scan would be several thousand times faster.
Now consider the third case:
select * from all_people where country = :b1
What plan should the optimizer choose? The value of :b1 bind variable is generally not known during the parse time, it will be passed by the user when the query is already parsed, during run-time.
In this case, one of two scenarios takes place: either the optimizer relies on some built-in default selectivities (basically, it takes a wild guess), or the optimizer postpones taking the final decision until the
first time the query is run, 'peeks' the value of the bind, and optimizes the query for this case.
This means that if, the first time the query is parsed, it is called with :b1 = 'India', a plan with a full table scan will be generated and cached for subsequent use. And until the cursor is aged out of the library cache
or invalidated for some reason, this will be the plan for this query.
If the first time it was called with :b1='Vatican', then an index-based plan will be picked.
Either way, bind peeking only gives good results if subsequent usage of the query is of the same kind as the first usage. I.e. in the first case it will be efficient if the query is always run for countries with big populations,
and in the second case if it's always run for countries with small populations.
This mechanism is called 'bind peeking' and it's one of the most common causes of performance problems. In 11g, there are more sophisticated mechanisms, such as cardinality feedback, but they don't always work as expected.
This mechanism is the most likely explanation for your issue. However, without proper diagnostic information we cannot be 100% sure.
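One way to gather that diagnostic information is to pull the actual child cursor and its peeked bind values out of the shared pool; a sketch (the LIKE pattern is illustrative; DBMS_XPLAN.DISPLAY_CURSOR with the PEEKED_BINDS format modifier is standard in 11g):

```sql
-- Locate the cursor for the problem statement.
SELECT sql_id, child_number, plan_hash_value, executions
FROM   v$sql
WHERE  sql_text LIKE 'select ccagentsta0_.LOGGED_IN%';

-- Show the plan actually used, including the bind values peeked at parse time.
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL +PEEKED_BINDS'));
```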
Best regards,
Nikolay -
Using case when statement in the select query to create physical table
Hello,
I have a requirement where in I have to execute a case when statement with a session variable while creating a physical table using a select query. let me explain with an example.
I have a physical table based on a select query, with one column.
SELECT 'VALUEOF(NQ_SESSION.NAME_PARAMETER)' AS NAME_PARAMETER FROM DUAL. Let me call this table as the NAME_PARAMETER table.
I also have a customer table.
On my dashboard, which has two pages, Page 1 contains a table based on the customer table, with column navigation to my second dashboard page.
In my second dashboard page I created a dashboard report based on NAME_PARAMETER table and a prompt based on customer table that sets the NAME_ PARAMETER request variable.
EXECUTION
When i click on a particular customer, the prompt sets the variable NAME_PARAMETER and the NAME_PARAMETER table shows the appropriate customer.
Everything works as expected. Yay!!
Now i created another table called NAME_PARAMETER1 with a little modification to the earlier table. the query is as follows.
SELECT CASE WHEN 'VALUEOF(NQ_SESSION.NAME_PARAMETER)'='Customer 1' THEN 'TEST_MART1' ELSE TEST_MART2' END AS NAME_PARAMETER
FROM DUAL
Now I pull in this table into the second dashboard page along with the NAME_PARAMETER table report.
Surprisingly, the NAME_PARAMETER table report executes as is, but the other report, based on the NAME_PARAMETER1 table, fails with the following error.
Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 16001] ODBC error state: S1000 code: 1756 message: [Oracle][ODBC][Ora]ORA-01756: quoted string not properly terminated. [nQSError: 16014] SQL statement preparation failed. (HY000)
SQL Issued: SET VARIABLE NAME_PARAMETER='Novartis';SELECT NAME_PARAMETER.NAME_PARAMETER saw_0 FROM POC_ONE_DOT_TWO ORDER BY saw_0
If anyone has any explanation to this error and how we can achieve the same, please help.
Thanks.
Hello,
Update :) sorry.. the error was a stupid one; I resolved it and then got stuck at my next step.
I am creating a physical table using a select query. But I am trying to obtain the name of the table dynamically.
Here is what I am trying to do. the select query of the physical table is as follows.
SELECT CUSTOMER_ID AS CUSTOMER_ID, CUSTOMER_NAME AS CUSTOMER_NAME FROM 'VALUEOF(NQ_SESSION.SCHEMA_NAME)'.CUSTOMER.
The idea behind this is to obtain the data from the same table in different schemas dynamically, based on a session variable. Please let me know if there is a way to achieve this; if not, please let me know whether this can be achieved by any other method in OBIEE.
Thanks. -
Best practice for a same query against 2 different tables
Hello all,
I want to extract info about tablespaces storage, both permanent and temporary. For that I use 2 different cursors that do exactly the same query but against a different table (dba_data_files and dba_temp_files).
CURSOR permanentTBSStorageInfo (tablespaceName VARCHAR2) IS
SELECT file_name, bytes, autoextensible, maxbytes, increment_by
FROM dba_data_files
WHERE tablespace_name = tablespaceName;
CURSOR temporaryTBSStorageInfo (tablespaceName VARCHAR2) IS
SELECT file_name, bytes, autoextensible, maxbytes, increment_by
FROM dba_temp_files
WHERE tablespace_name = tablespaceName;
First, I'm bothered that I have to use 2 cursors to execute the same query against 2 different tables. Is there no other way around this?
Then I fetch the results of these cursors in 2 different loops, because I didn't find a way to call the cursors dynamically. I am looking for best practice here, knowing that I will do the same parsing against the results of the 2 cursors.
Thank you,
Hi
Check whether the query below is helpful:
select fs.tablespace_name "Tablespace",
fs.tempspace "Temp MB",
df.totalspace "Total MB"
from
(select
tablespace_name,
round(sum(bytes) / 1048576) TotalSpace
from
dba_data_files
group by
tablespace_name
) df,
(select
tablespace_name,
round(sum(bytes) / 1048576) tempSpace
from
dba_temp_files
group by
tablespace_name
) fs
where
df.tablespace_name = fs.tablespace_name;
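On the original question of avoiding two cursors: since dba_data_files and dba_temp_files expose the same columns, a single cursor over a UNION ALL is one possibility; a sketch (the file_type literal is an illustrative addition so the loop can tell the two sources apart):

```sql
CURSOR tbsStorageInfo (tablespaceName VARCHAR2) IS
  SELECT 'PERMANENT' AS file_type, file_name, bytes,
         autoextensible, maxbytes, increment_by
  FROM   dba_data_files
  WHERE  tablespace_name = tablespaceName
  UNION ALL
  SELECT 'TEMPORARY' AS file_type, file_name, bytes,
         autoextensible, maxbytes, increment_by
  FROM   dba_temp_files
  WHERE  tablespace_name = tablespaceName;
```

With the sources combined, a single fetch loop can run the same parsing over both result sets.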
Thanks -
How can I display all the query items in a table?
How can I display all the query items in a table in a JSP file?
I always get an out of memory error. Anybody? Any idea?
Is it possible through configuration, or do I have to have the ABAPer write a program?
Biswa -
Compress nonclustered index on a compressed table
Hi all,
I've compressed a big table; its space shrank from 180GB to 20GB using page compression.
I've observed that this table also has 50GB of indexes; that space has remained the same.
1) Is it possible to compress a nonclustered index on an already compressed table?
2) Is it a best practice?
ALTER INDEX...
https://msdn.microsoft.com/en-us/library/ms188388.aspx
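For reference, rebuilding a nonclustered index with page compression looks roughly like this (the table and index names are illustrative):

```sql
-- Rebuild one nonclustered index with page compression.
ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable
REBUILD WITH (DATA_COMPRESSION = PAGE);
```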
You saved the disk space; that's fine. But now check whether there is any performance impact on the queries. Do you observe any improvement in terms of performance?
http://blogs.technet.com/b/swisssql/archive/2011/07/09/sql-server-database-compression-speed-up-your-applications-without-programming-and-complex-maintenance.aspx
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
-
Space reusage after deletion in compressed table
Hi,
Some sources say that free space after a DELETE in a compressed table is not reused.
For example, this http://www.trivadis.com/uploads/tx_cabagdownloadarea/table_compression2_0411EN.pdf
Is it true?
Unfortunately, I cannot reproduce it.
Unfortunately, the question is still open.
In Oracle 9i, space freed after a DELETE in a compressed block was not reused by subsequent inserts.
Isn't it?
I saw many evidences from other people. One link I gave above.
But in Oracle 10g I see different figures. After deleting rows in compressed blocks and then inserting into those blocks, the block is defragmented!
Please, if anyone knows of documentation about a change in this behavior, post links.
p.s.
in 10g:
1. CTAS compress. Block is full.
2. Then, deleted 4 out of every 5 rows.
avsp=0x3b
tosp=0x99e
0x24:pri[0] offs=0xeb0
0x26:pri[1] offs=0xea8 -- deleted
0x28:pri[2] offs=0xea0 -- deleted
0x2a:pri[3] offs=0xe98 -- deleted
0x2c:pri[4] offs=0xe90 -- deleted
0x2e:pri[5] offs=0xe88 -- live
0x30:pri[6] offs=0xe80 -- deleted
0x32:pri[7] offs=0xe78 -- deleted
0x34:pri[8] offs=0xe70 -- deleted
0x36:pri[9] offs=0xe68 -- deleted
0x38:pri[10] offs=0xe60 -- live
0x3a:pri[11] offs=0xe58 -- deleted
0x3c:pri[12] offs=0xe50 -- deleted
0x3e:pri[13] offs=0xe48 -- deleted
0x40:pri[14] offs=0xe40 -- deleted
0x42:pri[15] offs=0xe38 -- live
0x44:pri[16] offs=0xe30 -- deleted
0x46:pri[17] offs=0xe28 -- deleted
0x48:pri[18] offs=0xe20 -- deleted
0x4a:pri[19] offs=0xe18 -- deleted
0x4c:pri[20] offs=0xe10 -- live
...
3. insert into table t select from ... where rownum < 1000;
Inserted rows went into several blocks. The total number of non-empty blocks did not change. No row chaining occurred.
The block above now looks as follows:
avsp=0x7d
tosp=0x7d
0x24:pri[0] offs=0xeb0
0x26:pri[1] offs=0x776 - new
0x28:pri[2] offs=0x84b - new
0x2a:pri[3] offs=0x920 - new
0x2c:pri[4] offs=0x9f5 - new
0x2e:pri[5] offs=0xea8 - old
0x30:pri[6] offs=0xaca - new
0x32:pri[7] offs=0xb9f - new
0x34:pri[8] offs=0x34d - new
0x36:pri[9] offs=0x422 - new
0x38:pri[10] offs=0xea0 - old
0x3a:pri[11] offs=0x4f7 - new
0x3c:pri[12] offs=0x5cc - new
0x3e:pri[13] offs=0x6a1 - new
0x40:pri[14] sfll=16
0x42:pri[15] offs=0xe98 - old
0x44:pri[16] sfll=17
0x46:pri[17] sfll=18
0x48:pri[18] sfll=19
0x4a:pri[19] sfll=21
0x4c:pri[20] offs=0xe90 -- old
0x4e:pri[21] sfll=22
0x50:pri[22] sfll=23
0x52:pri[23] sfll=24
0x54:pri[24] sfll=26
As we see, the old rows were defragmented, repacked, and moved to the bottom of the block.
New rows (inserted after compressing the table) fill the remaining space.
So, deleted space was reused. -
1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins?
When multiple tables are involved and the actual number of rows returned is more than what the explain plan says, how can I find out what change is needed in the plan?
2. Do row source statistics give some kind of understanding of extended stats?
You can get row source statistics only *after* the SQL has been executed. An explain plan cannot give you row source statistics.
To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, or use the hint "gather_plan_statistics" in the SQL being executed.
Then use dbms_xplan.display_cursor
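A minimal end-to-end sketch of the hint-based approach (the table names are illustrative; the 'ALLSTATS LAST' format shows estimated vs. actual rows for the last execution):

```sql
-- Run the statement with row source statistics collection enabled.
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   orders o JOIN customers c ON c.customer_id = o.customer_id;

-- Fetch the actual plan with E-Rows vs. A-Rows for the last execution.
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```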
Hemant K Chitale -
Select Query failing on a table that has per sec heavy insertions.
Hi
Problem statement
1- We are using 11g as the database.
2- We have a table that is range-partitioned on the date.
3- The insertion rate is very high, i.e. several hundred records per second into the current partition.
4- The data continuously goes into the current partition as and when the buffer is full or the per-second timer expires.
5- We also have to run a select query on the same table, against the current partition, say for the latest 500 records.
6- Efficient indexes are also created on the table.
Solutions Tried.
1- After analyzing with tkprof, it is observed that select and execute work fine, but the fetch takes too much time to show the output: say, 1 hour.
2- Using the 11g SQL advisor and SPM, several baselines were created, but their success rate was also observed to be too low.
Please suggest any solution to this issue:
1- e.g. a redesign of the table.
2- Any better way to query, to fix the fetch issue.
3- Any Oracle settings or parameter changes to fix the fetch issue.
Thanks in advance.
Regards
Vishal Sharma
I am uploading the latest stats. Please let me know how I can improve this, as it is taking 25 minutes.
####TKPROF output#########
SQL ID : 2j5w6bv437cak
select almevttbl.AlmEvtId, almevttbl.AlmType, almevttbl.ComponentId,
almevttbl.TimeStamp, almevttbl.Severity, almevttbl.State,
almevttbl.Category, almevttbl.CauseCode, almevttbl.UnitType,
almevttbl.UnitId, almevttbl.UnitName, almevttbl.ServerName,
almevttbl.StrParam, almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2,
almevttbl.ExtraStrParam3, almevttbl.ParentCustId, almevttbl.ExtraParam1,
almevttbl.ExtraParam2, almevttbl.ExtraParam3,almevttbl.ExtraParam4,
almevttbl.ExtraParam5, almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,
almevttbl.SrcIPAddress12,almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
almevttbl.DestIPAddress14, almevttbl.DestPort, almevttbl.SrcPort,
almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24
FROM
AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT * FROM
( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where ((AlmEvtTbl.Customerid
= 0 or AlmEvtTbl.ParentCustId = 0)) ORDER BY AlmEvtTbl.TIMESTAMP DESC)
WHERE ROWNUM < 602) order by timestamp desc
call     count       cpu    elapsed       disk      query    current       rows
Parse        1      0.10       0.17          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch       42   1348.25    1521.24       1956   39029545          0        601
total       44   1348.35    1521.41       1956   39029545          0        601
Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 82
Rows Row Source Operation
601 PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11043 us cost=0 size=7426 card=1)
601 TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11030 us cost=0 size=7426 card=1)
601 INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=39029377 pr=1956 pw=1956 time=11183 us cost=0 size=0 card=1)(object id 72557)
601 FILTER (cr=39027139 pr=0 pw=0 time=0 us)
169965204 COUNT STOPKEY (cr=39027139 pr=0 pw=0 time=24859073 us)
169965204 VIEW (cr=39027139 pr=0 pw=0 time=17070717 us cost=0 size=13 card=1)
169965204 PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=13527031 us cost=0 size=48 card=1)
169965204 TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=10299895 us cost=0 size=48 card=1)
169965204 INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=1131414 pr=0 pw=0 time=3222624 us cost=0 size=0 card=1)(object id 72557)
Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait   Total Waited
----------------------------------------   ------------   ---------   ------------
SQL*Net message to client 42 0.00 0.00
SQL*Net message from client 42 11.54 133.54
db file sequential read 1956 0.20 28.00
latch free 21 0.00 0.01
latch: cache buffers chains 9 0.01 0.02
SQL ID : 0ushr863b7z39
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0)
FROM
(SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("PLAN_TABLE") FULL("PLAN_TABLE")
NO_PARALLEL_INDEX("PLAN_TABLE") */ 1 AS C1, CASE WHEN
"PLAN_TABLE"."STATEMENT_ID"=:B1 THEN 1 ELSE 0 END AS C2 FROM
"SYS"."PLAN_TABLE$" "PLAN_TABLE") SAMPLESUB
call     count       cpu    elapsed       disk      query    current       rows
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1      0.00       0.01          1          3          0          1
total        3      0.00       0.01          1          3          0          1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 82 (recursive depth: 1)
Rows Row Source Operation
1 SORT AGGREGATE (cr=3 pr=1 pw=1 time=0 us)
0 TABLE ACCESS FULL PLAN_TABLE$ (cr=3 pr=1 pw=1 time=0 us cost=29 size=138856 card=8168)
Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait   Total Waited
----------------------------------------   ------------   ---------   ------------
db file sequential read 1 0.01 0.01
SQL ID : bjkdb51at8dnb
EXPLAIN PLAN SET STATEMENT_ID='PLUS30350011' FOR select almevttbl.AlmEvtId,
almevttbl.AlmType, almevttbl.ComponentId, almevttbl.TimeStamp,
almevttbl.Severity, almevttbl.State, almevttbl.Category,
almevttbl.CauseCode, almevttbl.UnitType, almevttbl.UnitId,
almevttbl.UnitName, almevttbl.ServerName, almevttbl.StrParam,
almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2, almevttbl.ExtraStrParam3,
almevttbl.ParentCustId, almevttbl.ExtraParam1, almevttbl.ExtraParam2,
almevttbl.ExtraParam3,almevttbl.ExtraParam4,almevttbl.ExtraParam5,
almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,almevttbl.SrcIPAddress12,
almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
almevttbl.DestIPAddress14, almevttbl.DestPort, almevttbl.SrcPort,
almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24 FROM
AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT * FROM
( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where ((AlmEvtTbl.Customerid
= 0 or AlmEvtTbl.ParentCustId = 0)) ORDER BY AlmEvtTbl.TIMESTAMP DESC)
WHERE ROWNUM < 602) order by timestamp desc
call count cpu elapsed disk query current rows
Parse 1 0.28 0.26 0 0 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.29 0.27 0 0 0 0
Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 82
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 13 0.71 0.96 3 10 0 0
Execute 14 0.20 0.29 4 304 26 21
Fetch 92 2402.17 2714.85 3819 70033708 0 1255
total 119 2403.09 2716.10 3826 70034022 26 1276
Misses in library cache during parse: 10
Misses in library cache during execute: 6
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 49 0.00 0.00
SQL*Net message from client 48 29.88 163.43
db file sequential read 1966 0.20 28.10
latch free 21 0.00 0.01
latch: cache buffers chains 9 0.01 0.02
latch: session allocation 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 940 0.51 0.73 1 2 38 0
Execute 3263 1.93 2.62 7 1998 43 23
Fetch 6049 1.32 4.41 214 12858 36 13724
total 10252 3.78 7.77 222 14858 117 13747
Misses in library cache during parse: 172
Misses in library cache during execute: 168
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 88 0.04 0.62
latch: shared pool 8 0.00 0.00
latch: row cache objects 2 0.00 0.00
latch free 1 0.00 0.00
latch: session allocation 1 0.00 0.00
34 user SQL statements in session.
3125 internal SQL statements in session.
3159 SQL statements in session.
Trace file: ora11g_ora_2064.trc
Trace file compatibility: 11.01.00
Sort options: default
6 sessions in tracefile.
98 user SQL statements in trace file.
9111 internal SQL statements in trace file.
3159 SQL statements in trace file.
89 unique SQL statements in trace file.
30341 lines in trace file.
6810 elapsed seconds in trace file.
###################### AutoTrace Output ######################
Statistics
3901 recursive calls
0 db block gets
39030275 consistent gets
1970 physical reads
140 redo size
148739 bytes sent via SQL*Net to client
860 bytes received via SQL*Net from client
42 SQL*Net roundtrips to/from client
73 sorts (memory)
0 sorts (disk)
601 rows processed -
Query related to Internal Table
Hi,
I have a small query related to internal tables: can we dump millions of records into an internal table?
The actual requirement is that I need to develop a report on the BI side, where I have to dump records from PSA tables into an internal table without filtering.
Can we do so?
Or do we have any other option for dumping the data into an internal table?
Need some tips on the same.
Thanks,
Vinay.
Hello Vinay,
I believe the following extract will give you a brief idea of the size limitations of an internal table:
Internal tables are dynamic data objects, since they can contain any number of lines of a particular type. The only restriction on the number of lines an internal table may contain are the limits of your system installation. The maximum memory that can be occupied by an internal table (including its internal administration) is 2 gigabytes. A more realistic figure is up to 500 megabytes. An additional restriction for hashed tables is that they may not contain more than 2 million entries.
Hope it proved useful
Reward if helpful
Regards
Byju -
Query related to database tables
Hi,
I have a requirement wherein I would like to create a Z table whose only purpose is to supply its field list, not any values. With the help of the fields in this Z table I will develop some logic, and it should be dynamic based on the number of fields appended to the Z table.
To be clear: can I write a SELECT query that returns only the fields of the table but not the values? (There is no way I will get values, because I won't insert any records into it.)
I think I am clear from my end.
Thanks,
Rohith
To get the fields, you can write the query like this too:
TABLES: dd03l.
DATA: BEGIN OF itab OCCURS 0,
        fieldname LIKE dd03l-fieldname,
      END OF itab.
* Optionally make the table name an input:
* PARAMETERS: p_tab LIKE dd03l-tabname.
* DD03L holds the field list of every table in the ABAP Dictionary:
SELECT fieldname FROM dd03l INTO TABLE itab WHERE tabname = 'VBAK'.
LOOP AT itab.
  WRITE: / itab-fieldname.
ENDLOOP. -
To make the query more efficient (create table with select command)
Hi,
I have written this query to create another table, but it takes approximately two hours, even though both tables are indexed (891,353 and 769,023 rows respectively). I have used the following query:
create table source1 as select a.idx, a.source from tt a where a.idx not in (select b.idx from ttt b)
Thanks
Try this one if you're on Oracle 8i or older:
create table source1 as
select a.idx, a.source
from tt a
where not exists (select null from ttt b where a.idx = b.idx) -
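One caution worth adding to this rewrite (using the table names from the post): NOT IN and NOT EXISTS are not always interchangeable when the subquery column is nullable. A sketch of the trap and the guard:

```sql
-- If ttt.idx is nullable and contains even one NULL, this form
-- returns NO rows at all, because "a.idx NOT IN (..., NULL)" can
-- never evaluate to TRUE:
SELECT a.idx, a.source
  FROM tt a
 WHERE a.idx NOT IN (SELECT b.idx FROM ttt b);

-- Either guard the subquery explicitly, or prefer NOT EXISTS, which
-- has no NULL trap and lets the optimizer use an anti-join:
SELECT a.idx, a.source
  FROM tt a
 WHERE a.idx NOT IN (SELECT b.idx FROM ttt b WHERE b.idx IS NOT NULL);
```

If idx is declared NOT NULL in both tables, the two forms should return the same result, and on later releases the optimizer can usually anti-join either one.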
How to add a column with a default value to a compressed table
Hi,
While trying to add a column with a default value to a compressed table, I am getting an error.
I even tried the NOCOMPRESS command on the table, but it still gives an error that add/drop is not allowed on a compressed table.
Can anyone help me with this?
Thanks.
Aman wrote:
while trying to add column to compressed table with default value i am getting error.
This is clearly explained in the Oracle doc:
"You cannot add a column with a default value to a compressed table or to a partitioned table containing any compressed partition, unless you first disable compression for the table or partition"
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#sthref5163
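Following the restriction quoted above, the usual workaround is to disable compression, add the column, then recompress. A sketch, with placeholder table and index names:

```sql
-- 1. Decompress the table. A plain "ALTER TABLE ... NOCOMPRESS" only
--    affects future inserts; MOVE rewrites the existing blocks too:
ALTER TABLE my_compressed_tab MOVE NOCOMPRESS;

-- 2. Now the column with a default can be added:
ALTER TABLE my_compressed_tab ADD (status VARCHAR2(10) DEFAULT 'NEW');

-- 3. Recompress, then rebuild any indexes the MOVE left UNUSABLE:
ALTER TABLE my_compressed_tab MOVE COMPRESS;
ALTER INDEX my_compressed_tab_pk REBUILD;
```

For a partitioned table with compressed partitions, the same MOVE NOCOMPRESS / MOVE COMPRESS steps apply per partition. Note that each MOVE requires roughly double the segment's space while it runs.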
Nicolas.