High_value, low_value in dba_tab_columns
Hi!
How do I make them human-readable?
They are defined as RAW.
/Bjørn
BD, one way is to use the RAWTOHEX function:
col low format a20
col high format a20
select column_name, rawtohex(low_value) low, rawtohex(high_value) high
from dba_tab_columns
where table_name = 'ITEM_MASTER'
and (column_name = 'ACCOUNT_CLASS' or
column_name = 'QTY_ALLOC_MFG');
COLUMN_NAME     LOW        HIGH
ACCOUNT_CLASS   31         58
QTY_ALLOC_MFG   3D020266   C4084E5A2A4347
Note the values in this case are hex (base 16).
There is a UTL_RAW package that Oracle provides for working with RAW datatypes.
HTH -- Mark D Powell --
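If you want the actual values rather than hex, DBMS_STATS.CONVERT_RAW_VALUE (or the UTL_RAW cast functions) will decode them. A minimal sketch against the two columns above, assuming ACCOUNT_CLASS is a VARCHAR2 column and QTY_ALLOC_MFG is a NUMBER column:

```sql
set serveroutput on
declare
  v_num number;
begin
  -- NUMBER column: decode the raw HIGH_VALUE of QTY_ALLOC_MFG
  dbms_stats.convert_raw_value(hextoraw('C4084E5A2A4347'), v_num);
  dbms_output.put_line('QTY_ALLOC_MFG high = ' || v_num);
  -- VARCHAR2 column: the raw bytes are just character codes ('31' is the character 1)
  dbms_output.put_line('ACCOUNT_CLASS low = ' || utl_raw.cast_to_varchar2(hextoraw('31')));
end;
/
```

CONVERT_RAW_VALUE is overloaded for NUMBER, VARCHAR2 and DATE targets, so pick the overload matching the column's datatype.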
Similar Messages
-
Converting hex to number & date
Hi!
I'm having a problem converting the hex code to numbers & dates.
select low_value,high_value from ALL_TAB_COL_STATISTICS;
I've tried HEX_TO_CHAR and that works fine.
select column_name, UTL_I18N.RAW_TO_CHAR(low_value),UTL_I18N.RAW_TO_CHAR(high_value),low_value, UTL_I18N.RAW_TO_CHAR(high_value) from ALL_TAB_COL_STATISTICS where owner = 'user' and table_name = 'table_name';
If you run this statement you will see that numeric and date fields come back as garbage (¿) characters.
Any solution for this?
BR / S-A
Edited by: SweAnderline on 2008-nov-14 12:58
Hi William!
Need your help again.
I've taken your statement and altered it a bit.
SELECT c.table_name table_name, c.column_name, c.data_type
, c.histogram
, CASE c.data_type
WHEN 'NUMBER' THEN TO_CHAR(UTL_RAW.CAST_TO_NUMBER(c.low_value))
WHEN 'VARCHAR2' THEN UTL_RAW.CAST_TO_VARCHAR2(c.low_value)
WHEN 'DATE' THEN
RTRIM(TO_NUMBER(SUBSTR(c.low_value,1,2),'XX') -100 ||
LPAD(TO_NUMBER(SUBSTR(c.low_value,3,2),'XX') -100,2,'0') || '-' ||
LPAD(TO_NUMBER(SUBSTR(c.low_value,5,2),'XX'),2,'0') || '-' ||
LPAD(TO_NUMBER(SUBSTR(c.low_value,7,2),'XX'),2,'0') || ' ' ||
LPAD(TO_NUMBER(SUBSTR(c.low_value,9,2),'XX') -1,2,'0') || ':' ||
LPAD(TO_NUMBER(SUBSTR(c.low_value,11,2),'XX') -1,2,'0') || ':' ||
LPAD(TO_NUMBER(SUBSTR(c.low_value,13,2),'XX') -1,2,'0'),':- ')
END AS low_val
, CASE c.data_type
WHEN 'NUMBER' THEN TO_CHAR(UTL_RAW.CAST_TO_NUMBER(c.high_value))
WHEN 'VARCHAR2' THEN UTL_RAW.CAST_TO_VARCHAR2(c.high_value)
WHEN 'DATE' THEN
RTRIM(TO_NUMBER(SUBSTR(c.high_value,1,2),'XX') -100 ||
LPAD(TO_NUMBER(SUBSTR(c.high_value,3,2),'XX') -100,2,'0') || '-' ||
LPAD(TO_NUMBER(SUBSTR(c.high_value,5,2),'XX'),2,'0') || '-' ||
LPAD(TO_NUMBER(SUBSTR(c.high_value,7,2),'XX'),2,'0') || ' ' ||
LPAD(TO_NUMBER(SUBSTR(c.high_value,9,2),'XX') -1,2,'0') || ':' ||
LPAD(TO_NUMBER(SUBSTR(c.high_value,11,2),'XX') -1,2,'0') || ':' ||
LPAD(TO_NUMBER(SUBSTR(c.high_value,13,2),'XX') -1,2,'0'),':- ')
END AS high_val
, c.num_distinct
, c.avg_col_len
, ROUND(c.density,1) AS density
, c.num_nulls, c.nullable
, round(((c.num_nulls*100)/decode(nvl(d.num_rows,1),0,1))) || '%' "Null % of total Rows"
, c.last_analyzed
, d.num_rows "Number of rows in table"
FROM all_tab_columns c, dba_tables d
WHERE c.table_name = d.table_name and c.owner like 'MDB%'
AND c.owner = d.owner
ORDER BY c.table_name asc;
What I can't get to work is my
-- round(((c.num_nulls*100)/decode(nvl(d.num_rows,1),0,1))) || '%' "Null %"
When I execute the statement I get "divisor is equal to zero", so I tried to add an NVL: if it's NULL then 1, and if num_rows is 0 then 1.
I've tried multiple ways to do this but I can't get it to calculate. With the current statement I don't get any results.
I can get it to work if I create a view with this statement and add a specific table_name. But I need this to work across the whole schema/user without entering a specific table.
Any other ideas?
Thanks in advance
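For anyone hitting the same error: DECODE with no default argument returns NULL for every non-matching value, which is probably why the expression never produces a result. A hedged sketch of a fix, supplying num_rows itself as the default:

```sql
-- decode(nvl(d.num_rows,1), 0, 1) evaluates to NULL whenever num_rows is non-zero,
-- because no default is supplied -- so the whole "Null %" expression silently becomes NULL.
-- Supplying num_rows as the DECODE default keeps the divide-by-zero guard and the value:
round( (c.num_nulls * 100) / decode(nvl(d.num_rows, 0), 0, 1, d.num_rows) ) || '%' "Null % of total Rows"
```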
Cheers -
How does the CBO calculate the selectivity for range predicates on ROWID ?
Hi all,
I'm wondering how the CBO estimate the selectivity for range predicates based on ROWID columns.
For example, for the following query the CBO estimates there's going to be 35004 rows returned instead of 7:
SQL> SELECT count(*)
FROM intsfi i
WHERE
ROWID>='AAADxyAAWAAHDLIAAB' AND ROWID<='AAADxyAAWAAHDLIAAH';
COUNT(*)
7
Elapsed: 00:00:02.31
SQL> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
PLAN_TABLE_OUTPUT
SQL_ID aqbdu2p2t6w0z, child number 1
SELECT count(*) FROM intsfi i WHERE ROWID>='AAADxyAAWAAHDLIAAB' AND
ROWID<='AAADxyAAWAAHDLIAAH'
Plan hash value: 1610739540
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:02.31 | 68351 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:02.31 | 68351 |
|* 2 | INDEX FAST FULL SCAN| INTSFI2 | 1 | 35004 | 7 |00:00:02.31 | 68351 |
Predicate Information (identified by operation id):
2 - filter((ROWID>='AAADxyAAWAAHDLIAAB' AND ROWID<='AAADxyAAWAAHDLIAAH'))
According to Jonathan Lewis' book, for a normal column the selectivity would have been:
(value_column1-value_column2)/(high_value-low_value)+1/num_distinct+1/num_distinct
But here with the ROWID column, how does the CBO make its computation ?
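One observation that may answer this (assuming the CBO falls back on its default 5% guess per bound when it has no usable column statistics for ROWID):

```sql
-- two 5% default guesses, one per ROWID bound, ANDed together:
-- 14,001,681 rows * 0.05 * 0.05 = 35,004.2
-- which matches "Computed: 35004.20" in the 10053 trace below
```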
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for INTSFI[I]
Table: INTSFI Alias: I
Card: Original: 14001681.000000 Rounded: 35004 Computed: 35004.20 Non Adjusted: 35004.20
Hi Jonathan,
Some Clarifications
=============
DELETE /*+ ROWID(I) */ FROM INTSFI I WHERE
(I.DAVAL<=TO_DATE('12032008','DDMMYYYY') AND (EXISTS(SELECT 1 FROM
INTSFI S WHERE S.COINT=I.COINT AND S.NUCPT=I.NUCPT AND S.CTSIT=I.CTSIT
AND NVL(S.RGCID,-1)=NVL(I.RGCID,-1) AND S.CODEV=I.CODEV AND
S.COMAR=I.COMAR AND S.DAVAL>I.DAVAL) AND I.COMAR IN (SELECT P.COMAR
FROM PURMAR P WHERE P.NUPUR=1))) AND ROWID>='AAADxyAAWAAHDLIAAB' AND
ROWID<='AAADxyAAWAAHDLIAAH'
Plan hash value: 1677274993
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 0 | DELETE STATEMENT | | 1 | | 0 |00:00:05.94 | 53247 | | | |
| 1 | DELETE | INTSFI | 1 | | 0 |00:00:05.94 | 53247 | | | |
|* 2 | HASH JOIN SEMI | | 1 | 9226 | 7 |00:00:05.94 | 53180 | 783K| 783K| 471K (0)|
| 3 | NESTED LOOPS | | 1 | 9226 | 7 |00:00:00.01 | 10 | | | |
|* 4 | TABLE ACCESS BY ROWID RANGE| INTSFI | 1 | 9226 | 7 |00:00:00.01 | 6 | | | |
|* 5 | INDEX UNIQUE SCAN | PURMAR1 | 7 | 1 | 7 |00:00:00.01 | 4 | | | |
| 6 | INDEX FAST FULL SCAN | INTSFI1 | 1 | 14M| 7543K|00:00:01.73 | 53170 | | | |
Predicate Information (identified by operation id):
2 - access("S"."COINT"="I"."COINT" AND "S"."NUCPT"="I"."NUCPT" AND "S"."CTSIT"="I"."CTSIT" AND
NVL("S"."RGCID",(-1))=NVL("I"."RGCID",(-1)) AND "S"."CODEV"="I"."CODEV" AND "S"."COMAR"="I"."COMAR")
filter("S"."DAVAL">"I"."DAVAL")
4 - access(ROWID>='AAADxyAAWAAHDLIAAB' AND ROWID<='AAADxyAAWAAHDLIAAH')
filter("I"."DAVAL"<=TO_DATE(' 2008-03-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
5 - access("P"."NUPUR"=1 AND "I"."COMAR"="P"."COMAR")
When I force the NESTED LOOP SEMI JOIN the query runs faster:
DELETE /*+ ROWID(I) */ FROM INTSFI I WHERE
(I.DAVAL<=TO_DATE('12032008','DDMMYYYY') AND (EXISTS(SELECT /*+ NL_SJ
*/ 1 FROM INTSFI S WHERE S.COINT=I.COINT AND S.NUCPT=I.NUCPT AND
S.CTSIT=I.CTSIT AND NVL(S.RGCID,-1)=NVL(I.RGCID,-1) AND S.CODEV=I.CODEV
AND S.COMAR=I.COMAR AND S.DAVAL>I.DAVAL) AND I.COMAR IN (SELECT P.COMAR
FROM PURMAR P WHERE P.NUPUR=1))) AND ROWID>='AAADxyAAWAAHDLIAAB' AND
ROWID<='AAADxyAAWAAHDLIAAH'
Plan hash value: 2031485112
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 0 | DELETE STATEMENT | | 1 | | 0 |00:00:00.01 | 94 |
| 1 | DELETE | INTSFI | 1 | | 0 |00:00:00.01 | 94 |
| 2 | NESTED LOOPS SEMI | | 1 | 9226 | 7 |00:00:00.01 | 27 |
| 3 | NESTED LOOPS | | 1 | 9226 | 7 |00:00:00.01 | 9 |
|* 4 | TABLE ACCESS BY ROWID RANGE| INTSFI | 1 | 9226 | 7 |00:00:00.01 | 5 |
|* 5 | INDEX UNIQUE SCAN | PURMAR1 | 7 | 1 | 7 |00:00:00.01 | 4 |
|* 6 | INDEX RANGE SCAN | INTSFI1 | 7 | 14M| 7 |00:00:00.01 | 18 |
Predicate Information (identified by operation id):
4 - access(ROWID>='AAADxyAAWAAHDLIAAB' AND ROWID<='AAADxyAAWAAHDLIAAH')
filter("I"."DAVAL"<=TO_DATE(' 2008-03-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
5 - access("P"."NUPUR"=1 AND "I"."COMAR"="P"."COMAR")
6 - access("S"."COINT"="I"."COINT" AND "S"."NUCPT"="I"."NUCPT" AND
"S"."CTSIT"="I"."CTSIT" AND "S"."CODEV"="I"."CODEV" AND "S"."COMAR"="I"."COMAR" AND
"S"."DAVAL">"I"."DAVAL")
filter(NVL("S"."RGCID",(-1))=NVL("I"."RGCID",(-1)))
The above post is from Ahmed AANGOUR.
Case 1 - If you check Plan hash value: 1677274993
=====
TABLE ACCESS BY ROWID RANGE| INTSFI - for every row accessed from INTSFI, it fetches a record via INDEX UNIQUE SCAN | PURMAR1.
If we check E-Rows = 9226:
9226 * 7 = 64582 requests across the table - perhaps with the ROWID hint it fetches the exact rows from PURMAR1.
In this case I think going for a hash join with ORDERED hints (Jonathan, as you suggest, LEADING hints instead of ORDERED) - from INTSFI to PURMAR1 - instead of the IN clause would get the rows that satisfy ("P"."NUPUR"=1 AND "I"."COMAR"="P"."COMAR")
|* 2 | HASH JOIN SEMI | | 1 | 9226 | 7 |00:00:05.94 | 53180 | 783K| 783K| 471K (0)|
| 3 | NESTED LOOPS | | 1 | 9226 | 7 |00:00:00.01 | 10 | | | |
|* 4 | TABLE ACCESS BY ROWID RANGE| INTSFI | 1 | 9226 | 7 |00:00:00.01 | 6 | | | |
|* 5 | INDEX UNIQUE SCAN | PURMAR1 | 7 | 1 | 7 |00:00:00.01 | 4 | | | |
My understanding is that the above plan would change to
HASH JOIN
TABLE ACCESS BY ROWID RANGE| INTSFI
INDEX UNIQUE SCAN | PURMAR1
HASH JOIN
INDEX FAST FULL SCAN | INTSFI1
Which might be feasible.
2 .
DELETE /*+ ROWID(I) */ FROM INTSFI I WHERE
(I.DAVAL<=TO_DATE('12032008','DDMMYYYY') AND (EXISTS(SELECT /*+ NL_SJ
*/ 1 FROM INTSFI S WHERE S.COINT=I.COINT AND S.NUCPT=I.NUCPT AND
S.CTSIT=I.CTSIT AND NVL(S.RGCID,-1)=NVL(I.RGCID,-1) AND S.CODEV=I.CODEV
AND S.COMAR=I.COMAR AND S.DAVAL>I.DAVAL) AND I.COMAR IN (SELECT P.COMAR
FROM PURMAR P WHERE P.NUPUR=1))) AND ROWID>='AAADxyAAWAAHDLIAAB' AND
ROWID<='AAADxyAAWAAHDLIAAH'
Ahmed AANGOUR modified the query with the /*+ NL_SJ */ hint. Instead of that, removing most of the rows as we join the tables using the subquery - I still doubt it.
Or going for PUSH_PRED hints - I still doubt it.
Jonathan, your comments are most valuable on the above two cases.
Looking forward to clarifying my understanding of the concepts of indexes for the above test cases.
- Pavan Kumar N -
10.2.0.4 CBO behavior without histograms and binds/literals
Hello,
I have a question about the CBO and the collected statistic values LOW_VALUE and HIGH_VALUE. I have seen the following on an Oracle 10.2.0.4 database.
The CBO decides on a different execution plan if we use bind variables (without bind peeking) versus literals - no histograms exist on the table columns.
Unfortunately I didn't export the statistics to reproduce this behaviour on my test database, but it was "something" like this.
Environment:
- Oracle 10g 10.2.0.4
- Bind peeking disabled (_optim_peek_user_binds=FALSE)
- No histograms
- No partitioned table/indexes
The table (TAB) has 2 indexes on it:
- One index (INDEX A1) has included the date (which was a NUMBER column) and the values in this columns spread from 0 (LOW_VALUE) up to 99991231000000 (HIGH_VALUE).
- One index (INDEX A2) has included the article number which was very selective (distinct keys nearly the same as num rows)
Now the query looks something like this:
SELECT * FROM TAB WHERE DATE BETWEEN :DATE1 AND :DATE2 AND ARTICLENR = :ARTNR;
And the CBO calculated that the best execution plan would be an index range scan on both indexes with a btree-to-bitmap conversion: compare the returned row-ids of both indexes and then access the table TAB with that.
What the CBO didn't know (because of the disabled bind peeking) was that the user had entered DATE1 (=0) and DATE2 (=99991231000000), so the index access on index A1 doesn't make any sense.
Now I executed the query with literals just for the DATE, so the query looks something like this:
SELECT * FROM TAB WHERE DATE BETWEEN 0 AND 99991231000000 AND ARTICLENR = :ARTNR;
And then the CBO did the right thing: it just accessed index A2, which was very selective, and then accessed the table TAB by ROWID.
The query was much faster (factor 4 to 5) and the user was happy.
As I already mentioned that there were no histograms, I was very amazed that the execution plan changed because of using literals.
Does anybody know in which cases the CBO includes the values in LOW_VALUE and HIGH_VALUE in its execution plan calculation?
Until now I thought that these values would only be used in the case of histograms.
Thanks and Regards
oraS wrote:
As I already mentioned that there were no histograms, I was very amazed that the execution plan changed because of using literals.
Does anybody know in which cases the CBO includes the values in LOW_VALUE and HIGH_VALUE in its execution plan calculation?
Until now I thought that these values would only be used in the case of histograms.
I don't have any references in front of me to confirm, but my estimation is that LOW_VALUE and HIGH_VALUE are used whenever there is a range-based predicate, be it BETWEEN or any one of the <, >, <=, >= operators. Generally speaking, the selectivity formula is the range defined in the query over the HIGH_VALUE/LOW_VALUE range. There are some specific variations of this due to including the boundaries (<= vs <) and NULL values. This makes sense to use when the literal values are known or the binds are being peeked at.
However, when bind peeking is disabled, Oracle has no way to use the general formula above for an estimation of the rows, so it most likely uses the 5% rule. Since your query has a BETWEEN clause, the estimated selectivity becomes 5%*5%, which equals 0.0025. This estimated cardinality could be what made the CBO decide to use the index path versus ignoring it completely.
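As a rough worked sketch of that arithmetic (the 5% figures are the commonly quoted defaults, not something cited from the docs here):

```sql
-- binds, no peeking:  DATE BETWEEN :D1 AND :D2  ->  0.05 * 0.05 = 0.0025 (0.25% of rows)
-- literals:           DATE BETWEEN 0 AND 99991231000000
--   selectivity ~ (99991231000000 - 0) / (HIGH_VALUE - LOW_VALUE) ~ 1  (the whole column range)
--   -> index A1 filters out nothing, so the CBO rightly ignores it and uses A2 alone
```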
If you can post some sample data to reproduce this test case we can confirm.
Just a follow-up question. Why is a date being stored as a number?
HTH! -
Hi,
My Oracle Version is 10.2.0.4.
How can I see LOW_VALUE and HIGH_VALUE from the USER_TAB_COL_STATISTICS view in a readable format? I am getting the values in RAW. How would I get these values for CHAR datatype columns as CHAR, NUMBER columns as NUMBER, and DATE columns as DATE?
See the example given below.
swamy@VSFTRAC1> DESC employee_attendance
Name Null? Type
EMPID NOT NULL VARCHAR2(10)
ACCESS_TIME NOT NULL DATE
ENAME VARCHAR2(50)
FLOOR VARCHAR2(10)
DOOR VARCHAR2(10)
INOUT VARCHAR2(3)
ACCESS_RESULT VARCHAR2(50)
swamy@VSFTRAC1> SELECT column_name, density, num_distinct, num_nulls, low_value, high_value, avg_col_len FROM user_tab_col_statistics WHERE table_name='EMPLOYEE_ATTENDANCE';
COLUMN_NAME DENSITY NUM_DISTINCT NUM_NULLS LOW_VALUE HIGH_VALUE AVG_COL_LEN
EMPID .008333333 120 0 30303031303830 3031313633 7
ACCESS_TIME .000259538 3853 0 786E0101031121 786E0106121B01 8
ENAME .008333333 120 0 414248494A49542050415449 57494E53544F4E2053414D55454C2052414A552050 16
FLOOR .5 2 0 5345434F4E44 5448495244 7
DOOR .5 2 0 454E5452414E4345 535441495243415345 10
INOUT .5 2 0 494E 4F5554 4
ACCESS_RESULT 1 1 0 414343455353204752414E544544 414343455353204752414E544544 15
7 rows selected.
swamy@VSFTRAC1>
Hi,
You can use "dbms_stats.convert_raw_value" to convert the value to a readable format.
Refer to the following for the example
http://structureddata.org/2007/10/16/how-to-display-high_valuelow_value-columns-from-user_tab_col_statistics/
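For example, a minimal sketch of the DATE overload against the ACCESS_TIME LOW_VALUE shown above (internally a DATE raw is stored as century+100, year+100, month, day, hour+1, minute+1, second+1):

```sql
set serveroutput on
declare
  v_d date;
begin
  dbms_stats.convert_raw_value(hextoraw('786E0101031121'), v_d);
  -- bytes 78 6E 01 01 03 11 21 decode to 2010-01-01 02:16:32
  dbms_output.put_line(to_char(v_d, 'YYYY-MM-DD HH24:MI:SS'));
end;
/
```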
Hope that helps with your requirement.
- Pavan Kumar N
Oracle 9i/10g - OCP
http://oracleinternals.blogspot.com/ -
Is this really a "bug" in dba_tab_columns?
Hi Guys, it's Xev.
I have been struggling with this for weeks now. No matter what I do, I cannot get my procedure to see dba_tab_columns inside of my procedure at run-time. It just blows up and says
"cannot see table or view", that generic answer.
Is there really a bug or limitation in how I am using this DBA view? Is there something else I can do? It simply will not recognize the other custom users either.
I have two basic lines of defense here, but both of them do not what I need them to do.
Can someone please help me on this.
Ok, this is the first method that I have tried to use. It works great, but only for one user at a time. I cannot get it to search all the rest of the user schemas. It finds what I tell it to, but only in the schema, tables and fields of the user that is executing it. I've tried to swap out user_tab_columns for dba_tab_columns, but it blows up and tells me that it "cannot see the table or view".
This is my preferred method. If you can alter this to make it find other users in other schemas throughout the entire database, I would do anything for you!!!
create or replace procedure find_str
authid current_user
as
l_query long;
l_case long;
l_runquery boolean;
l_tname varchar2(30);
l_cname varchar2(4000);
l_refcur sys_refcursor;
z_str varchar2(4000);
begin
z_str := '^[0-9]{9}$';
dbms_output.enable (buffer_size => NULL);
dbms_application_info.set_client_info (z_str);
dbms_output.put_line ('Searchword Table Column/Value');
dbms_output.put_line ('---------------------------- ------------------------------ --------------------------------------------------');
for x in (select distinct table_name from all_tables
where owner not in ('SYS','SYSTEM','MDSYS','OUTLN','CTXSYS','OLAPSYS','OWBSYS','FLOWS_FILES','EXFSYS','SCOTT',
'APEX_030200','DBSNMP','ORDSYS','SYSMAN','APPQOSSYS','XDB','ORDDATA','WMSYS'))
loop
l_query := 'select ''' || x.table_name || ''', $$
from ' || x.table_name || '
where 1 = 1 and ( 1=0 ';
l_case := 'case ';
l_runquery := FALSE;
for y in ( select *
from user_tab_columns
where table_name = x.table_name
and data_type in ( 'VARCHAR2', 'CHAR' ))
loop
l_runquery := TRUE;
l_query := l_query || ' or regexp_like (' ||
y.column_name || ', userenv(''client_info'')) ';
l_case := l_case || ' when regexp_like (' ||
y.column_name || ', userenv(''client_info'')) then ' ||
'''<' || y.column_name || '>''||' || y.column_name || '||''</' || y.column_name || '>''';
end loop;
if ( l_runquery )
then
l_case := l_case || ' else NULL end';
l_query := replace( l_query, '$$', l_case ) || ')';
begin
open l_refcur for l_query;
loop
fetch l_refcur into l_tname, l_cname;
exit when l_refcur%notfound;
dbms_output.put_line
(rpad (z_str, 29) ||
rpad (l_tname, 31) ||
rpad (l_cname, 50));
end loop;
exception
when no_data_found then null;
end;
end if;
end loop;
end find_str;
NOW,
This is the second method. It also does a good job finding what I want it to, but it still doesn't search the other users and other schemas. If you can alter this to make it find other users and other schemas I'll go crazy! LOL!
For test data, simply create a table in your schema and put a nine-digit number anywhere in the fields, and both of these procedures will find them, but only for that USER.
AND, that's my problem: I have too many custom users to go onto the instances and create procedures for each and every user. It's just not practical.
I really need you guys on this, Happy New Year!
create or replace PROCEDURE find_string
--(p_search_string IN VARCHAR2 DEFAULT '^[0-9]{3}-[0-9]{2}-[0-9]{4}$')
(p_search_string IN VARCHAR2 DEFAULT '^[0-9]{9}$')
IS
e_error_in_xml_processing EXCEPTION;
e_table_not_exist EXCEPTION;
PRAGMA EXCEPTION_INIT (e_error_in_xml_processing, -19202);
PRAGMA EXCEPTION_INIT (e_table_not_exist, -942);
BEGIN
DBMS_OUTPUT.PUT_LINE ('Searchword Table Column/Value');
DBMS_OUTPUT.PUT_LINE ('---------------------------- ------------------------------ --------------------------------------------------');
FOR r1 IN
(SELECT table_name, column_name
FROM dba_tab_cols
WHERE table_name IN (select distinct table_name from dba_tab_cols
where owner not in ('MDSYS','OUTLN','CTXSYS','OLAPSYS','FLOWS_FILES','OWBSYS','SYSTEM','EXFSYS','APEX_030200','SCOTT','DBSNMP','ORDSYS','SYSMAN','
APPQOSSYS','XDB','ORDDATA','SYS','WMSYS'))
--WHERE table_name = 'FIND_TEST'
ORDER BY table_name, column_name)
LOOP
BEGIN
FOR r2 IN
(SELECT DISTINCT SUBSTR (p_search_string, 1, 28) "Searchword",
SUBSTR (r1.table_name, 1, 30) "Table",
SUBSTR (t.column_value.getstringval (), 1, 50) "Column/Value"
FROM TABLE
(XMLSEQUENCE
(DBMS_XMLGEN.GETXMLTYPE
( 'SELECT "' || r1.column_name ||
'" FROM "' || r1.table_name ||
'" WHERE REGEXP_LIKE
("' || r1.column_name || '",'''
|| p_search_string || ''')'
).extract ('ROWSET/ROW/*'))) t)
LOOP
DBMS_OUTPUT.PUT_LINE
(RPAD (r2."Searchword", 29) ||
RPAD (r2."Table", 31) ||
RPAD (r2."Column/Value", 50));
END LOOP;
EXCEPTION
WHEN e_error_in_xml_processing THEN NULL;
WHEN e_table_not_exist THEN NULL;
WHEN OTHERS THEN RAISE;
END;
END LOOP;
END find_string;
Happy New Year, if you can get this to find other users!!! GOOD LUCK!
Hi Solomon,
Ok, I understand the first 2 grants, but just to make sure they are what I think they are - I don't understand the third grant, so can you supply the actual grant statement?
Here are the first 2 grant statements. The users name is "directgrant1".
SQL> GRANT SELECT ANY TABLE TO DIRECTGRANT1;
SQL> GRANT SELECT ON SYS.DBA_TAB_COLS TO DIRECTGRANT1;
Can you please provide the third grant statement?
Now, this stored procedure code below actually works, but it only finds the tables and fields within its own schema. I have over 28 custom schemas to search for this nine-digit number, so I need this stored procedure to search all of the schemas/tables/fields in the entire database.
This stored procedure compiles and executes with no problem, but I need to use dba_tab_cols so it can find all the other users, right? Or is there a better way to do this, Solomon?
As you can see, this stored procedure code uses "user_tab_cols" and only finds one user's tables/fields. If I use dba_tab_cols in its place, will it then find all the other users in the entire database? That is correct, right, Solomon? Also, when I ran this procedure last night in my test database, it opened 3 cursors. Do we have to tell it somewhere in the code to close the sys_refcursor, or does Oracle close it itself, since it's "implicit"?
Also, the tables that this will be searching actually have 40 million plus records in them. Could this procedure cause the database to crash? Are 40 million plus records too much for something like what we have below?
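On the ref cursor question: Oracle does not close an explicitly opened ref cursor just because the fetch loop exits; each one stays open until the session ends (or the count hits OPEN_CURSORS). A sketch of the relevant portion of the loop with an explicit CLOSE added:

```sql
open l_refcur for l_query;
loop
  fetch l_refcur into l_tname, l_cname;
  exit when l_refcur%notfound;
  dbms_output.put_line(rpad(p_str, 29) || rpad(l_tname, 31) || rpad(l_cname, 50));
end loop;
close l_refcur;  -- explicit close; exiting the loop alone leaves the cursor open
```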
create or replace
procedure find_sting_nine_digits
authid current_user
as
l_query long;
l_case long;
l_runquery boolean;
l_tname varchar2(30);
l_cname varchar2(4000);
l_refcur sys_refcursor;
p_str varchar2(4000);
begin
p_str := '^[0-9]{9}$';
dbms_output.enable(buffer_size => NULL);
dbms_application_info.set_client_info (p_str);
dbms_output.put_line ('Searchword Table Column/Value');
dbms_output.put_line ('---------------------------- ------------------------------ --------------------------------------------------');
for x in (select * from all_tables
where table_name not in ('SMEG_WITH_RI','SMEGCITIES','SMEG_WITHOUT_RI'))
loop
l_query := 'select ''' || x.table_name || ''', $$
from ' || x.table_name || '
where 1 = 1 and ( 1=0 ';
l_case := 'case ';
l_runquery := FALSE;
for y in ( select *
from user_tab_cols
where table_name = x.table_name
and data_type in ( 'VARCHAR2', 'CHAR' ))
loop
l_runquery := TRUE;
l_query := l_query || ' or regexp_like (' ||
y.column_name || ', userenv(''client_info'')) ';
l_case := l_case || ' when regexp_like (' ||
y.column_name || ', userenv(''client_info'')) then ' ||
'''<' || y.column_name || '>''||' || y.column_name || '||''</' || y.column_name || '>''';
end loop;
if ( l_runquery )
then
l_case := l_case || ' else NULL end';
l_query := replace( l_query, '$$', l_case ) || ')';
begin
open l_refcur for l_query;
loop
fetch l_refcur into l_tname, l_cname;
exit when l_refcur%notfound;
dbms_output.put_line
(rpad (p_str, 29) ||
rpad (l_tname, 31) ||
rpad (l_cname, 50));
end loop;
exception
when no_data_found then null;
end;
end if;
end loop;
end find_sting_nine_digits; -
Dba_tab_columns and all_tab_columns
HI
AIX 5.3
oracle 10.2.0.3
Which view should be accessible to developers, dba_tab_columns or all_tab_columns, and why?
also dba_* and all_*
Thanks,
vishal
ALL_* - displays all the information accessible to the current user, including information from the current user's schema as well as information from objects in other schemas, if the current user has access to those objects by way of grants of privileges or roles.
DBA_ views display all relevant information in the entire database. DBA_ views are intended only for administrators. They can be accessed only by users with the SELECT ANY TABLE privilege. (This privilege is assigned to the DBA role when the system is initially installed.)
USER_ views display all the information from the schema of the current user. No special privileges are required to query these views.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_1001.htm#i1572007
http://download-uk.oracle.com/docs/cd/A97630_01/server.920/a96536/ch2.htm
HTH
-Anantha -
Raw high_value in the all_tab_columns table
There are some columns defined as FLOAT(126) in my database. I need to find out the highest values held in these columns. I am using the high_value column (defined as RAW) in the all_tab_columns view. Is there a way to convert the raw high_value to a number/decimal?
oracle version - 8i
Please read about [url http://download-west.oracle.com/docs/cd/A87860_01/doc/appdev.817/a76936/utl_raw2.htm]UTL_RAW in the manual.
Regards,
Rob. -
How to convert high_value to date data type on dba_tab_partitions
I just want to query and get the max date available on the dba_tab_partitions for the high_value field.
high_value data type is LONG it has the value for Partition bound value expression.
For example on my partition table the expression is like TO_DATE(' 2012-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
I just want to convert the high_value to a date data type.
We had the same problem and we used a function for that purpose. I am not going to share our function here, but Tom has all the gear you need : )
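The usual trick (and roughly what the AskTom thread shows) is to evaluate the stored expression with dynamic SQL. A minimal sketch - the function name is mine, and MAXVALUE partitions and error handling would need a guard:

```sql
create or replace function partition_high_date(
  p_owner varchar2, p_table varchar2, p_partition varchar2
) return date
is
  l_expr varchar2(4000);
  l_date date;
begin
  -- HIGH_VALUE is a LONG, but it converts to VARCHAR2 on SELECT ... INTO in PL/SQL
  select high_value into l_expr
    from dba_tab_partitions
   where table_owner = p_owner
     and table_name = p_table
     and partition_name = p_partition;
  -- evaluate the stored TO_DATE(...) bound expression
  execute immediate 'select ' || l_expr || ' from dual' into l_date;
  return l_date;
end;
/
```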
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:665224430110 -
DBA_TAB_PARTITIONS HIGH_VALUE - LONG to Number?
Hi all,
I need to write a procedure that will delete table partitions where the timestamp (number(38)) is less than the parameter passed in. What I want to do is compare the timestamp passed with the HIGH_VALUE of the partition. If the parameters value is > HIGH_VALUE, I want to drop the partition. I just found out that the HIGH_VALUE is a LONG. Any ideas how I can convert it to a number?
Trying something like:
for p in ( select table_name, partition_name
from dba_tab_partitions
where partition_name <> 'PRIOR_INTERVALS'
and table_name = 'FOO'
and high_value <= end_timestamp ) loop
causing:
and high_value <= ending ) loop
ERROR at line 21:
ORA-06550: line 21, column 20:
PL/SQL: ORA-00997: illegal use of LONG datatype
ORA-06550: line 17, column 14:
PL/SQL: SQL Statement ignored
ORA-06550: line 23, column 33:
PLS-00364: loop index variable 'P' use is invalid
ORA-06550: line 23, column 10:
PL/SQL: Statement ignored
Any ideas? Thanks
Running Oracle 11.2.0.1 btw on RHEL5
918006 wrote:
Is :1 a bind variable replaced by a number variable? Why did it have to be done this way? The HIGH_VALUE column stores the text of the expression used to get the high value, not the value itself. Why? Since the partitioning column can be of any supported datatype, Oracle had to use a string to store it. And don't forget about the MAXVALUE placeholder. That's why you see HIGH_VALUE values such as:
TO_DATE(' 1998-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
TIMESTAMP' 2000-07-01 00:00:00'
MAXVALUE
So in general we have to use dynamic SQL to calculate the HIGH_VALUE expression's value. But you are right: since in your case the high value is a number, the text of the expression is the same as the value and you do not need dynamic SQL:
SQL> CREATE TABLE range_sales
2 ( prod_id NUMBER(6)
3 , cust_id NUMBER
4 , time_id NUMBER
5 , channel_id CHAR(1)
6 , promo_id NUMBER(6)
7 , quantity_sold NUMBER(3)
8 , amount_sold NUMBER(10,2)
9 )
10 PARTITION BY RANGE (time_id)
11 (PARTITION SALES_Q1_1998 VALUES LESS THAN (19980401),
12 PARTITION SALES_Q2_1998 VALUES LESS THAN (19980701),
13 PARTITION SALES_Q3_1998 VALUES LESS THAN (19981001),
14 PARTITION SALES_Q4_1998 VALUES LESS THAN (19990101),
15 PARTITION SALES_Q1_1999 VALUES LESS THAN (19990401),
16 PARTITION SALES_Q2_1999 VALUES LESS THAN (19990701),
17 PARTITION SALES_Q3_1999 VALUES LESS THAN (19991001),
18 PARTITION SALES_Q4_1999 VALUES LESS THAN (20000101),
19 PARTITION SALES_Q1_2000 VALUES LESS THAN (20000401),
20 PARTITION SALES_Q2_2000 VALUES LESS THAN (20000701),
21 PARTITION SALES_Q3_2000 VALUES LESS THAN (20001001),
22 PARTITION SALES_Q4_2000 VALUES LESS THAN (MAXVALUE))
23 /
Table created.
SQL> create or replace
2 procedure truncate_partition(
3 p_owner varchar2,
4 p_tbl varchar2,
5 p_num number
6 )
7 is
8 begin
9 for v_rec in (select partition_name,high_value from dba_tab_partitions where table_owner = p_owner and table_name = p_tbl) loop
10 if v_rec.high_value != 'MAXVALUE'
11 then
12 if to_number(v_rec.high_value) < p_num
13 then
14 dbms_output.put_line('ALTER TABLE ' || p_owner || '.' || p_tbl || ' TRUNCATE PARTITION ' || v_rec.partition_name);
15 execute immediate 'ALTER TABLE ' || p_owner || '.' || p_tbl || ' TRUNCATE PARTITION ' || v_rec.partition_name;
16 end if;
17 end if;
18 end loop;
19 end;
20 /
Procedure created.
SQL> set serveroutput on
SQL> exec truncate_partition(user,'RANGE_SALES',19990701);
ALTER TABLE SCOTT.RANGE_SALES TRUNCATE PARTITION SALES_Q1_1998
ALTER TABLE SCOTT.RANGE_SALES TRUNCATE PARTITION SALES_Q2_1998
ALTER TABLE SCOTT.RANGE_SALES TRUNCATE PARTITION SALES_Q3_1998
ALTER TABLE SCOTT.RANGE_SALES TRUNCATE PARTITION SALES_Q4_1998
ALTER TABLE SCOTT.RANGE_SALES TRUNCATE PARTITION SALES_Q1_1999
PL/SQL procedure successfully completed.
SQL> SY. -
Significant slowness in data transfer from source DB to target DB
Hi DB Wizards,
My customer is noticing significant slowness in Data copy from the Source DB to the Target DB. The copy process itself is using PL/SQL code along with cursors. The process is to copy across about 7M records from the source DB to the target DB as part of a complicated Data Migration process (this will be a onetime Go-Live process). I have also attached the AWR reports generated during the Data Migration process. Are there any recommendations to help improve the performance of the Data transfer process.
Thanks in advance,
Nitin
multiple COMMIT will take longer to complete the task than a single COMMIT at the end!
Let's check how much longer it is:
create table T1 as
select OWNER,TABLE_NAME,COLUMN_NAME,DATA_TYPE,DATA_TYPE_MOD,DATA_TYPE_OWNER,DATA_LENGTH,DATA_PRECISION,DATA_SCALE,NULLABLE,COLUMN_ID,DEFAULT_LENGTH,NUM_DISTINCT,LOW_VALUE,HIGH_VALUE,DENSITY,NUM_NULLS,NUM_BUCKETS,LAST_ANALYZED,SAMPLE_SIZE,CHARACTER_SET_NAME,CHAR_COL_DECL_LENGTH,GLOBAL_STATS,USER_STATS,AVG_COL_LEN,CHAR_LENGTH,CHAR_USED,V80_FMT_IMAGE,DATA_UPGRADED,HISTOGRAM
from DBA_TAB_COLUMNS;
insert /*+APPEND*/ into T1 select *from T1;
commit;
-- repeat until it is >7Mln rows
select count(*) from T1;
9233824
create table T2 as select * from T1;
set autotrace on
set timing on
truncate table t2;
declare r number:=0;
begin
for t in (select * from t1) loop
insert into t2 values ( t.OWNER,t.TABLE_NAME,t.COLUMN_NAME,t.DATA_TYPE,t.DATA_TYPE_MOD,t.DATA_TYPE_OWNER,t.DATA_LENGTH,t.DATA_PRECISION,t.DATA_SCALE,t.NULLABLE,t.COLUMN_ID,t.DEFAULT_LENGTH,t.NUM_DISTINCT,t.LOW_VALUE,t.HIGH_VALUE,t.DENSITY,t.NUM_NULLS,t.NUM_BUCKETS,t.LAST_ANALYZED,t.SAMPLE_SIZE,t.CHARACTER_SET_NAME,t.CHAR_COL_DECL_LENGTH,t.GLOBAL_STATS,t.USER_STATS,t.AVG_COL_LEN,t.CHAR_LENGTH,t.CHAR_USED,t.V80_FMT_IMAGE,t.DATA_UPGRADED,t.HISTOGRAM );
r:=r+1;
if mod(r,10000)=0 then commit; end if;
end loop;
commit;
end;
--call that a couple of times with and without the "if mod(r,10000)=0 then commit; end if;" line commented out.
Results:
One commit
anonymous block completed
Elapsed: 00:11:07.683
Statistics
18474603 recursive calls
0 spare statistic 4
0 ges messages sent
0 db block gets direct
0 calls to kcmgrs
0 PX remote messages recv'd
0 buffer is pinned count
1737 buffer is not pinned count
2 workarea executions - optimal
0 workarea executions - onepass
10000 rows commit
anonymous block completed
Elapsed: 00:10:54.789
Statistics
18475806 recursive calls
0 spare statistic 4
0 ges messages sent
0 db block gets direct
0 calls to kcmgrs
0 PX remote messages recv'd
0 buffer is pinned count
1033 buffer is not pinned count
2 workarea executions - optimal
0 workarea executions - onepass
one commit
anonymous block completed
Elapsed: 00:10:39.139
Statistics
18474228 recursive calls
0 spare statistic 4
0 ges messages sent
0 db block gets direct
0 calls to kcmgrs
0 PX remote messages recv'd
0 buffer is pinned count
1123 buffer is not pinned count
2 workarea executions - optimal
0 workarea executions - onepass
10000 rows commit
anonymous block completed
Elapsed: 00:11:46.259
Statistics
18475707 recursive calls
0 spare statistic 4
0 ges messages sent
0 db block gets direct
0 calls to kcmgrs
0 PX remote messages recv'd
0 buffer is pinned count
1000 buffer is not pinned count
2 workarea executions - optimal
0 workarea executions - onepass
What we've got?
Single commit at the end, avg elapsed: 10:53.4s
Commit every 10000 rows (923 times), avg elapsed: 11:20.5s
Difference: 00:27.1s 3.98%
Multiple commits are just 4% slower, but safer regarding the undo consumed. -
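For the original 7M-row copy, commit frequency is a minor factor; bulk binds usually help far more than commit placement, and a set-based INSERT ... SELECT over a database link is faster still when it applies. A minimal sketch using the T1/T2 test tables above as stand-ins for the real source/target (batch size of 10000 is an assumption to tune):

```sql
-- fetch and insert in batches instead of row by row
declare
  cursor c is select * from t1;
  type t_tab is table of t1%rowtype index by pls_integer;
  buf t_tab;
begin
  open c;
  loop
    fetch c bulk collect into buf limit 10000;  -- one round trip per batch
    exit when buf.count = 0;
    forall i in 1 .. buf.count                  -- one bulk DML per batch
      insert into t2 values buf(i);
    commit;
  end loop;
  close c;
end;
/
```

The FORALL/BULK COLLECT pair removes most of the per-row SQL engine context switches that make the cursor loop above slow.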
Hello guys, I am trying to create a view from the result set of a SQL query. The SQL, when run independently, works fine, but when I try to create a view on it, it throws an error saying that the table does not exist. Below is the CREATE VIEW I am using.
create view ops$oracle.pci_view as SELECT
(select name from v$database@dwdev) database_name,
(select host_name from v$instance) host_name,
(select version from v$instance) dbversion,
o.owner,
o.object_name table_name,
o.CREATED table_created,
o.last_ddl_time table_last_ddl_time,
t.tablespace_name,
t.last_analyzed,
t.partitioned,
t.num_rows,
T.COMPRESSION,
t.compress_for,
t.read_only,
tb.status tablespace_status,
tb.encrypted tablespace_encrypted,
tb.compress_for tablespace_compress,
tb.contents,
TC.COLUMN_NAME,
tc.data_type,
TC.DATA_LENGTH,
tc.data_precision,
tc.data_scale,
tc.nullable,
tc.column_id,
tc.num_distinct,
tc.avg_col_len,
tc.density,
tc.num_nulls,
tc.low_value,
tc.high_value,
tc.last_analyzed col_last_analyzed,
ec.encryption_alg,
ec.salt,
ec.integrity_alg,
(SELECT tcm.comments
FROM dba_tab_comments tcm
WHERE tcm.owner = T.OWNER AND tcm.table_name = t.table_name)
table_comments,
(SELECT ccm.comments column_comments
FROM dba_col_comments ccm
WHERE ccm.owner = TC.OWNER
AND ccm.table_name = tc.table_name
AND ccm.column_name = tc.column_name)
column_comments
FROM dba_objects o,
dba_tables T,
dba_tablespaces tb,
dba_tab_columns tc
LEFT JOIN
dba_encrypted_columns ec ***********************************************
ON ec.owner = TC.OWNER
AND ec.table_name = tc.table_name
AND ec.column_name = tc.column_name
WHERE o.owner NOT IN
('APPQOSSYS',
'DBSNMP',
'EXFSYS',
'GGAPP',
'OPS$ORACLE',
'ORACLE_OCM',
'OUTLN',
'SYS',
'SYSTEM',
'WMSYS',
'XDB')
AND o.object_type = 'TABLE'
AND NOT EXISTS
(SELECT 1
FROM dba_mviews mv
WHERE mv.owner = o.owner
AND mv.container_name = o.object_name)
AND t.owner = o.owner
AND t.table_name = o.object_name
AND tb.tablespace_name = t.tablespace_name
AND tc.owner = o.owner
AND tc.table_name = o.object_name
AND tc.owner = t.owner
AND tc.table_name = t.table_name
AND tc.data_length > 15
AND tc.data_type NOT LIKE ('TIMESTAMP%')
AND (tc.data_precision IS NULL OR tc.data_precision > 15);
(The line marked with the string of asterisks is the table that the error says does not exist.)
Can someone help me see where I am going wrong?
Thanks

969224 wrote:
what if I create the view in SYS for a few minutes until I can complete the task I was assigned, and after that drop the view I created in the SYS schema, would that have any effect on the database?

Uh, yeah .. SYS isn't any better an option than SYSTEM.
Those schemas are "off limits" to us ... ignore them .. pretend they do not exist. (seriously ..)
Sounds like you need a new schema to store your application objects. -
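As for the ORA-00942 itself, a common cause worth checking: privileges received only through a role (e.g. SELECT_CATALOG_ROLE or DBA) are not visible when compiling a view, so the DBA_* tables "do not exist" inside CREATE VIEW even though the plain SELECT works. A hedged sketch of the usual fix (run as a DBA; the grantee name comes from the post, and the grant may be needed for each DBA_* view referenced):

```sql
-- role-based privileges are ignored during view compilation,
-- so grant SELECT on the dictionary view directly to the owner
grant select on sys.dba_encrypted_columns to ops$oracle;
```

With direct grants in place, the view can live in the application schema and SYS/SYSTEM stay untouched.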
How to retrieve view column type?
I know how to retrieve column names and comments from ALL_COL_COMMENTS, but I don't know how to retrieve a view column's type. If somebody knows, please help.

ALL_TAB_COLUMNS
ALL_TAB_COLUMNS describes the columns of the tables, views, and clusters accessible to the current user. To gather statistics for this view, use the SQL ANALYZE statement or the DBMS_STATS package.
Related Views
DBA_TAB_COLUMNS describes the columns of all tables, views, and clusters in the database.
USER_TAB_COLUMNS describes the columns of the tables, views, and clusters owned by the current user. This view does not display the OWNER column.
Column (datatype, nullability): description
OWNER (VARCHAR2(30), NOT NULL): Owner of the table, view, or cluster
TABLE_NAME (VARCHAR2(30), NOT NULL): Name of the table, view, or cluster
COLUMN_NAME (VARCHAR2(30), NOT NULL): Column name
DATA_TYPE (VARCHAR2(30)): Datatype of the column
DATA_TYPE_MOD (VARCHAR2(3)): Datatype modifier of the column
DATA_TYPE_OWNER (VARCHAR2(30)): Owner of the datatype of the column
DATA_LENGTH (NUMBER, NOT NULL): Length of the column in bytes
DATA_PRECISION (NUMBER): Decimal precision for NUMBER datatype; binary precision for FLOAT datatype; null for all other datatypes
DATA_SCALE (NUMBER): Digits to the right of the decimal point in a number
NULLABLE (VARCHAR2(1)): Specifies whether a column allows NULLs. Value is N if there is a NOT NULL constraint on the column or if the column is part of a PRIMARY KEY.
COLUMN_ID (NUMBER, NOT NULL): Sequence number of the column as created
DEFAULT_LENGTH (NUMBER): Length of the default value for the column
DATA_DEFAULT (LONG): Default value for the column
NUM_DISTINCT (NUMBER), LOW_VALUE (RAW(32)), HIGH_VALUE (RAW(32)), DENSITY (NUMBER): These columns remain for backward compatibility with Oracle7. This information is now in the TAB_COL_STATISTICS views. This view now picks up these values from HIST_HEAD$ rather than COL$.
NUM_NULLS (NUMBER): Number of nulls in the column
NUM_BUCKETS (NUMBER): Number of buckets in the histogram for the column. Note: the number of buckets in a histogram is specified in the SIZE parameter of the SQL statement ANALYZE. However, Oracle does not create a histogram with more buckets than the number of rows in the sample. Also, if the sample contains any values that are very repetitious, Oracle creates the specified number of buckets, but the value indicated by this column may be smaller because of an internal compression algorithm.
LAST_ANALYZED (DATE): Date on which this column was most recently analyzed
SAMPLE_SIZE: Sample size used in analyzing this column
CHARACTER_SET_NAME (VARCHAR2(44)): Name of the character set: CHAR_CS or NCHAR_CS
CHAR_COL_DECL_LENGTH (NUMBER): Declaration length of the character type column
GLOBAL_STATS (VARCHAR2(3)): For partitioned tables, indicates whether column statistics were collected for the table as a whole (YES) or were estimated from statistics on underlying partitions and subpartitions (NO)
USER_STATS (VARCHAR2(3)): Were the statistics entered directly by the user?
AVG_COL_LEN (NUMBER): Average length of the column (in bytes)
CHAR_LENGTH (NUMBER): Length of the column in characters; this value applies only to the CHAR, VARCHAR2, NCHAR, and NVARCHAR2 datatypes
CHAR_USED (VARCHAR2(1)): B | C. B indicates that the column uses BYTE length semantics; C indicates that the column uses CHAR length semantics. NULL indicates the datatype is not CHAR, VARCHAR2, NCHAR, or NVARCHAR2.
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96536/ch2143.htm#1302694
Joel Pérez -
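In other words, the same dictionary view answers the question for views directly; a small sketch (MY_VIEW is a hypothetical view name, dictionary names are stored in upper case):

```sql
-- column names and datatypes of a view, in declaration order
select column_name, data_type, data_length, nullable
  from all_tab_columns
 where table_name = 'MY_VIEW'
 order by column_id;
```

Join to ALL_COL_COMMENTS on owner, table_name, and column_name to pick up the comments alongside the types.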
Feature Request | Allow custom metadata per table/column to be stored
Someone please correct me if there's already a feature that allows this...
I'd like to see a feature where you can define a set of metadata that can be stored per table / column, and perhaps a trigger that updates that metadata.
As a use case, it is sometimes necessary to find out how many records exist in a table, and the typical way of doing this is by running a statement like select count(*) from example_table;. With a large table, this statement might take a long time though, and certainly has overhead associated with it. If this is something that's done on a regular basis, like maybe even once every minute, wouldn't it be much better to store this number as metadata for the table that can be updated on inserts and deletes and then can be queried? It might involve extra overhead on an insert or delete statement to add to or subtract from this number, but then for some applications the benefit of getting the count quickly might outweigh the extra overhead.
Another use case is finding a minimum or maximum out of a table. Say you store a date and you need to find the max value for some feature in your application; with a large table, and especially if it's a date with accuracy to the millisecond where an index wouldn't help much because most values are unique, it can take quite a bit of time and overhead to find that max value. If you could define for that column that you'd like to store the max value in metadata, and could query it, it would be very quick to get the info. The added overhead in this scenario would be on insert, update, or especially delete, where the value would have to be updated. But in some applications, if you don't expect a lot of deletes or updates on this column, it might be worth the added overhead to be able to quickly find the max value for this column.
I know you could probably make a separate table to store such info, and write triggers to keep it up to date, but why not have a built in feature in Oracle that manages it all for you? When you create a table, you could define with the column definition something like 'METADATA MAX' and it will store the max value of that column in metadata for you, etc.
I know that the overhead of this feature wouldn't be good for most circumstances, but there certainly are those cases where it would be hugely beneficial and the overhead wouldn't matter so much.
Any thoughts?
Can this be submitted as a feature request? Am I asking in the right place?
(p.s. while you're at it, make a feature to mimic IDENTITY columns from SQL Server!)

I don't think what you mentioned is exactly what I was talking about. There's no min_value or max_value in the dba_tab_columns table; there's only high_value and low_value, and they are stored in binary. And I believe that, to be accurate in the use cases I suggested, you would have to analyze the table after every insert/update/delete. So no, that's not the same feature I've asked for, although I appreciate the feedback.
Also, the num_rows in dba_tables relies on the table being analyzed too. For a table that stores temporary data to be processed, where you want to know the size of the queue every few seconds, it wouldn't make sense to analyze the whole table every few seconds when all you want is a count of the records. It's also inefficient to use the COUNT function with every query when it would be much faster to store the count in some metadata form that is updated with every insert or delete (adding to and subtracting from a stored count with each insert/delete is WAY faster than analyzing the table and letting it literally recount the entire table every time).
So again, while I appreciate the feedback, I don't think what you mentioned addresses either of the use cases I gave. I'm talking about a different kind of user defined metadata that could be stored per table/column with rules to govern how it is updated. Not you standard metadata that requires an analyze and isn't real time. I also only gave a few use cases, but the feature I'm really looking for is the ability for users to define many different types of custom metadata even maybe based on their own logic.
Again, this feature could be implemented right now by creating a USERMETADATA table for every standard table you have, and then using triggers to populate the info you want at the table level and column level, but why do that when it could be built in?
Also, I don't really agree that having to create a trigger/sequence for every table instead of setting a column as IDENTITY is better. It's cumbersome. Why not build these commonly used features in? It can create a trigger/sequence behind the scenes for all I care, but why not at least let someone mark a column as IDENTITY (or use whatever other term you want) at the time of table creation and let it do everything for them. But that's off-topic; I meant it for more of a side comment, but should really have a separate post about it. -
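As the post notes, the row-count case can be hand-rolled today with a summary table plus a trigger; a minimal sketch (all names hypothetical, and note that every session serializes on the single counter row, which is exactly the overhead a built-in feature would have to manage):

```sql
-- hypothetical counter table, seeded once from the real count
create table row_counts (
  table_name varchar2(30) primary key,
  cnt        number not null
);
insert into row_counts
  values ('EXAMPLE_TABLE', (select count(*) from example_table));

-- keep the counter current on every insert/delete
create or replace trigger example_table_cnt
after insert or delete on example_table
for each row
begin
  update row_counts
     set cnt = cnt + case when inserting then 1 else -1 end
   where table_name = 'EXAMPLE_TABLE';
end;
/
```

Reading the count is then a single-row primary-key lookup instead of a full scan.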
Reg. Removing the Records
Hi Friends,
i want to remove some lakhs of records from the table, and while removing them it
must not write the transaction to the redo log files. Is this possible? Pls tell.
Thanks in advance
Rgds.
Vedavathi.E

Collect statistics on your schema, then try this query:
select * from DBA_TAB_COL_STATISTICS
where TABLE_NAME=<100 columns table> and OWNER=<schema name>
order by NUM_NULLS desc;

You can also use DBA_TAB_COLUMNS.
DBA_TAB_COL_STATISTICS
Columns of the user's tables, views, and clusters:
OWNER: Table, view, or cluster owner
TABLE_NAME: Table, view, or cluster name
COLUMN_NAME: Column name
NUM_DISTINCT: The number of distinct values in the column
LOW_VALUE: The low value in the column
HIGH_VALUE: The high value in the column
DENSITY: The density of the column
NUM_NULLS: The number of nulls in the column
NUM_BUCKETS: The number of buckets in the histogram for the column
LAST_ANALYZED: The date of the most recent time this column was analyzed
SAMPLE_SIZE: The sample size used in analyzing this column
GLOBAL_STATS: Are the statistics calculated without merging underlying partitions?
USER_STATS: Were the statistics entered directly by the user?
AVG_COL_LEN: The average length of the column in bytes
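As for the original redo question: an ordinary DELETE always generates redo (and undo). When most of the rows are being removed and downtime is acceptable, the usual workaround is to keep the survivors instead of deleting the rest, a hedged sketch (table name and predicate are placeholders; indexes, grants, triggers, and constraints must be recreated afterwards, and NOLOGGING is still logged if the database is in FORCE LOGGING mode):

```sql
-- copy only the rows to keep, with minimal redo
create table t_keep nologging as
  select * from big_table
   where keep_flag = 'Y';   -- placeholder predicate

drop table big_table;
rename t_keep to big_table;
```

If the unwanted rows align with partitions, ALTER TABLE ... DROP PARTITION avoids even the copy.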