Encrypt LOB using SecureFiles
Dear all,
From whom is the encrypted LOB protected? Suppose I create a table that contains an encrypted LOB and insert a value into it:
CREATE TABLE t1 (a CLOB ENCRYPT IDENTIFIED BY foo)
LOB(a) STORE AS SECUREFILE (CACHE);
SQL> insert into t1 values ('dada');
1 row created.
SQL> commit;
Commit complete.
When I access the table from another schema that has SELECT privilege on it, no password is requested. On the other hand, a schema that doesn't have the privilege can't access the table at all; it gets an insufficient-privileges error.
Is there any way to require a user accessing the table to supply the password ("foo")?
best regards,
Val
Edited by: Valerie Debonair on Oct 6, 2011 11:30 PM
I think you'll find it's more to do with protecting the underlying data at the operating-system level. The datafiles can't simply be copied and plugged into another database so that someone can create a similar table over that data and read it without knowing the password required to decrypt it.
It's expected that if you grant SELECT on that table to another user, they will be able to select from it.
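In other words, TDE/SecureFile encryption is transparent to any user who holds SELECT privilege; it protects data at rest. If you need the caller to supply a key at access time, the encryption has to happen at the application level instead. A minimal sketch using the documented DBMS_CRYPTO package (the table t2, the key value, and the key-handling approach are all made up for illustration; in practice the key must be managed securely, never hard-coded):

```sql
-- Hypothetical table for application-level encryption (BLOB, since
-- DBMS_CRYPTO.ENCRYPT returns RAW). DBMS_CRYPTO needs an EXECUTE grant from SYS.
CREATE TABLE t2 (a BLOB);

-- Only callers who know the key can produce (or later decrypt) the data
INSERT INTO t2 VALUES (
  DBMS_CRYPTO.ENCRYPT(
    src => UTL_RAW.CAST_TO_RAW('dada'),
    typ => DBMS_CRYPTO.ENCRYPT_AES256 + DBMS_CRYPTO.CHAIN_CBC + DBMS_CRYPTO.PAD_PKCS5,
    key => UTL_RAW.CAST_TO_RAW('0123456789abcdef0123456789abcdef')));
```

Reading the plaintext back then requires DBMS_CRYPTO.DECRYPT with the same key, which is exactly the "user must enter the password" behaviour asked about - at the cost of losing transparent SQL access to the column.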
Similar Messages
-
Use securefile for new partitions made by interval partitioning
I am using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production.
I have a range-partitioned table whose LOBs are stored as BASICFILE. Due to storage issues and other business constraints, it has been decided not to convert the existing LOBs to SECUREFILE.
However, we want new LOBs to be SECUREFILE, and to alter the table to use interval partitioning.
While researching, I found SQL to add a range partition with a SECUREFILE LOB:
alter table t1 add partition t1_p2 values less than (10000) lob (col3) store as securefile (tablespace tbs_sf1)
Please advise how to do the same in the case of interval partitioning.
Many thanks for assistance.
> Can we modify the default LOB attribute to STORE AS SECUREFILE for a partitioned table?
Yes - that is what I meant in my reply. But it seems I may have been wrong, since after further testing I was able to find syntax that appears to work for you. Please test this and post the results.
The line
LOB(CLOB_DATA) store as securefile
should store interval partitions as SECUREFILE. But Oracle also accepted syntax to store the predefined partitions as BASICFILE:
PARTITION P0 VALUES LESS THAN
(TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOB(CLOB_DATA) store as basicfile,
I don't have time to test with data until next week. But if this works, you could predefine all of the partitions you want BASICFILE for, and then use interval partitions for the ones you want SECUREFILE for. That sounded like what you were trying to do.
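One way to verify the outcome of such a test: the data dictionary records the storage type per LOB partition. A hedged sketch (assumes the INTERVAL_SALES1 table below and access to USER_LOB_PARTITIONS, which exposes a SECUREFILE YES/NO column):

```sql
-- Check, per partition, whether the LOB is SECUREFILE or BASICFILE
SELECT table_name, partition_name, column_name, securefile
FROM   user_lob_partitions
WHERE  table_name = 'INTERVAL_SALES1'
ORDER  BY partition_position;
```

After inserting rows past the highest predefined boundary, the automatically created interval partitions should show up here with their own SECUREFILE setting.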
DROP TABLE INTERVAL_SALES1 CASCADE CONSTRAINTS;
CREATE TABLE INTERVAL_SALES1 (
  PROD_ID       NUMBER(6),
  CUST_ID       NUMBER,
  TIME_ID       DATE,
  CHANNEL_ID    CHAR(1 BYTE),
  PROMO_ID      NUMBER(6),
  QUANTITY_SOLD NUMBER(3),
  AMOUNT_SOLD   NUMBER(10,2),
  CLOB_DATA     CLOB
)
LOB(CLOB_DATA) STORE AS SECUREFILE
PARTITION BY RANGE (TIME_ID)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
(
  PARTITION P0 VALUES LESS THAN
    (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOB(CLOB_DATA) STORE AS BASICFILE,
  PARTITION P1 VALUES LESS THAN (TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
  PARTITION P2 VALUES LESS THAN (TO_DATE(' 2009-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
  PARTITION P3 VALUES LESS THAN (TO_DATE(' 2010-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
);
Post the results of any testing you do. -
Insert into oracle lob using php odbc
Is there a way to insert a string > 4000 characters into an Oracle LOB using PHP without using the oci8 extension? If so, can somebody post some sample code to do it?
Perhaps you could externally invoke Oracle's SQL Loader utility.
What are you trying to achieve?
-- cj -
Using SECUREFILE in STORAGE preference?
I know I can set BASIC_STORAGE preferences for a CONTEXT index. For r_table_clause, the documentation states that The default clause is: 'LOB(DATA) STORE AS (CACHE)'.
If I'm using 11g, does it make sense to specify this as: 'LOB(DATA) STORE AS SECUREFILE (CACHE)'?
If so, do any of the other SECURFILE options (like COMPRESS or DEDUPLICATE) make sense?
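For concreteness, specifying that would look something like this (a sketch using the documented CTX_DDL preference API; 'my_store' is a made-up preference name):

```sql
BEGIN
  -- Create a storage preference and override the $R table clause
  ctx_ddl.create_preference('my_store', 'BASIC_STORAGE');
  ctx_ddl.set_attribute('my_store', 'R_TABLE_CLAUSE',
      'lob(data) store as securefile (cache compress)');
END;
/
-- then reference it at index creation time:
-- CREATE INDEX my_idx ON my_table(my_col)
--   INDEXTYPE IS CTXSYS.CONTEXT PARAMETERS ('storage my_store');
```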
Thanks!
As far as I understand, the SECUREFILE (CACHE) option won't make any difference for either the base table or the $R table, IF
- all your queries are using the specified index
Put the other way round: if all your queries always use the index, then the required data can be fetched from $R itself, and the base table need not be accessed for LOB data.
But for other options like COMPRESS it very much makes sense.
If you use the COMPRESS option only on the $R table, it compresses only the $R table's data, and your base table still uses a large amount of space.
So, for big LOB data, you should definitely use COMPRESS for the base table too (keeping in mind the performance impact while syncing the index, etc.). -
Want to use securefile feature to store .pdf files at OS Level
Can Data Guard copy the pdf to the file system of a standby database?
I think not, but have no idea.
Thanks
Hi,
Data Guard cannot, but you can do this with ACFS (Oracle Cloud File System) replication in addition to Data Guard, to keep the files (BFILEs) and the database in sync.
Alternatively, you could put the PDF into DBFS (Database File System), so that the file is stored inside the database and hence replicated via the normal Data Guard mechanism.
www.oracle.com/goto/asm
Regards
Sebastian -
Buffer busy waits after cnanging lob storage to oracle securefiles
Hi Everyone
I need help resolving a problem with buffer busy waits on a LOB segment that uses SecureFiles storage.
During the load, the application inserts a record into a table with the LOB segment and then updates the record, populating the LOB data. The block size of the tablespace holding the LOB is 8 KB, and the chunk size of the LOB segment is set to 8 KB. The average size of a LOB record is 6 KB and the minimum size is 4.03 KB. The problem occurs only when running a job with a large number of relatively small inserts (4.03 KB) into the LOB column. The table definition allows in-row storage and PCTFREE is set to 10%. The same job runs without problems when using BasicFiles storage for the LOB column.
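One thing worth confirming up front is whether those LOBs can actually be stored in-row. A hedged sketch against the data dictionary (assumes the EAG50NSJ owner seen in the AWR extract below):

```sql
-- IN_ROW = 'YES' only means LOBs up to roughly 4000 bytes can live in the row;
-- with a 4.03 KB minimum, every LOB here likely goes out-of-line anyway.
SELECT table_name, column_name, securefile, in_row, chunk, cache
FROM   dba_lobs
WHERE  owner = 'EAG50NSJ';
```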
According to the [oracle white paper|http://www.oracle.com/technetwork/database/options/compression/overview/securefiles-131281.pdf], SecureFiles have a number of performance enhancements. I was particularly interested in testing the Write Gather Cache, as our application does a lot of relatively small inserts into a LOB segment.
Below is a fragment from the AWR report. It looks like all the buffer busy waits belong to the free list class. The LOB segment is located in an ASSM tablespace, so I cannot increase FREELISTS.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning option
Host Name Platform CPUs Cores Sockets Memory(GB)
DB5 Microsoft Windows x86 64-bit 8 2 31.99
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 1259 01-Apr-11 14:40:45 135 5.5
End Snap: 1260 01-Apr-11 15:08:59 155 12.0
Elapsed: 28.25 (mins)
DB Time: 281.55 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 2,496M 2,832M Std Block Size: 8K
Shared Pool Size: 1,488M 1,488M Log Buffer: 11,888K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 10.0 0.1 0.01 0.00
DB CPU(s): 2.8 0.0 0.00 0.00
Redo size: 1,429,862.3 9,390.5
Logical reads: 472,459.0 3,102.8
Block changes: 9,849.7 64.7
Physical reads: 61.1 0.4
Physical writes: 98.6 0.7
User calls: 2,718.8 17.9
Parses: 669.8 4.4
Hard parses: 2.2 0.0
W/A MB processed: 1.1 0.0
Logons: 0.1 0.0
Executes: 1,461.0 9.6
Rollbacks: 0.0 0.0
Transactions: 152.3
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
buffer busy waits 1,002,549 8,951 9 53.0 Concurrenc
DB CPU 4,724 28.0
latch: cache buffers chains 11,927,297 1,396 0 8.3 Concurrenc
direct path read 121,767 863 7 5.1 User I/O
enq: DW - contention 209,278 627 3 3.7 Other
Host CPU (CPUs: 8 Cores: 2 Sockets: )
~~~~~~~~ Load Average
Begin End %User %System %WIO %Idle
38.7 3.5 57.9
Instance CPU
~~~~~~~~~~~~
% of total CPU for Instance: 40.1
% of busy CPU for Instance: 95.2
%DB time waiting for CPU - Resource Mgr: 0.0
Memory Statistics
~~~~~~~~~~~~~~~~~ Begin End
Host Mem (MB): 32,762.6 32,762.6
SGA use (MB): 4,656.0 4,992.0
PGA use (MB): 318.4 413.5
% Host Mem used for SGA+PGA: 15.18 16.50
Avg
%Time Total Wait wait Waits % DB
Event Waits -outs Time (s) (ms) /txn time
buffer busy waits 1,002,549 0 8,951 9 3.9 53.0
latch: cache buffers chain 11,927,297 0 1,396 0 46.2 8.3
direct path read 121,767 0 863 7 0.5 5.1
enq: DW - contention 209,278 0 627 3 0.8 3.7
log file sync 288,785 0 118 0 1.1 .7
SQL*Net more data from cli 1,176,770 0 103 0 4.6 .6
Buffer Wait Statistics DB/Inst: ORA11G/ora11g Snaps: 1259-1260
-> ordered by wait time desc, waits desc
Class Waits Total Wait Time (s) Avg Time (ms)
free list 818,606 8,780 11
undo header 512,358 141 0
2nd level bmb 105,816 29 0
-> Total Logical Reads: 800,688,490
-> Captured Segments account for 19.8% of Total
Tablespace Subobject Obj. Logical
Owner Name Object Name Name Type Reads %Total
EAG50NSJ EAG50NSJ SYS_LOB0000082335C00 LOB 127,182,208 15.88
SYS SYSTEM TS$ TABLE 7,641,808 .95
Segments by Physical Reads DB/Inst: ORA11G/ora11g Snaps: 1259-1260
-> Total Physical Reads: 103,481
-> Captured Segments account for 224.4% of Total
Tablespace Subobject Obj. Physical
Owner Name Object Name Name Type Reads %Total
EAG50NSJ EAG50NSJ SYS_LOB0000082335C00 LOB 218,858 211.50
...
Best regards
Yuri Kogun
Hi Jonathan,
I was puzzled by the number of logical reads as well. This didn't happen when the LOB was stored as a BasicFile, and I had assumed the database would be able to store the records in-row when we switched to SecureFiles. With regard to ASSM, according to the documentation it is the only option when using SecureFiles.
We did have a high number of HW-enqueue waits in the database when running the test with BasicFiles, and had to set event 44951:
alter system set EVENTS '44951 TRACE NAME CONTEXT FOREVER, LEVEL 1024'
There are 2 application servers running 16 jobs each, so we should not have more than 32 sessions inserting data at the same time, but I need to check whether jobs can be broken into smaller pieces. In that case the number of concurrent sessions may be bigger. Each session is configured with a bundle size of 30, so it issues a commit every 30 inserts.
I am not sure exactly how the code does the insert; I've been told it should be a straight insert and update. I will be able to check this on Monday.
Below is the extract from the AWR report with the top SQL. I could not find any SQL related to the TS$ table in the report. The query against V$SEGMENT_STATISTICS was executed by me during the job run.
SQL ordered by Elapsed Time DB/Inst: ORA11G/ora11g Snaps: 1259-1260
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
-> %Total - Elapsed Time as a percentage of Total DB time
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 91.3% of Total DB Time (s): 16,893
-> Captured PL/SQL account for 0.1% of Total DB Time (s): 16,893
Elapsed Elapsed Time
Time (s) Executions per Exec (s) %Total %CPU %IO SQL Id
7,837.5 119,351 0.07 46.4 28.3 .7 2zrh6mw372asz
Module: JDBC Thin Client
update JS_CHANNELDESTS set CHANNELID=:1, DESTID=:2, CHANNELDESTSTATUSDATE=:3, ST
ATUS=:4, BINOFFSET=:5, BINNAME=:6, PAGECOUNT=:7, DATA=:8, SORTORDER=:9, PRINTFOR
MAT=:10, ENVELOPEID=:11, DOCID=:12, CEENVELOPEID=:13, CHANNELTYPE=:14 where ID=:
15
7,119.0 115,997 0.06 42.1 23.1 .2 3vjx93vur4dw1
Module: JDBC Thin Client
insert into JS_CHANNELDESTS (CHANNELID, DESTID, CHANNELDESTSTATUSDATE, STATUS, B
INOFFSET, BINNAME, PAGECOUNT, DATA, SORTORDER, PRINTFORMAT, ENVELOPEID, DOCID, C
EENVELOPEID, CHANNELTYPE, ID) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :
11, :12, :13, :14, :15)
85.6 2 42.80 .5 98.3 .0 cc19qha9pxsa4
Module: SQL Developer
select object_name, statistic_name, value from V$SEGMENT_STATISTICS
where object_name = 'SYS_LOB0000082335C00011$$'
35.0 111,900 0.00 .2 74.3 7.6 c5q15mpnbc43w
Module: JDBC Thin Client
insert into JS_ENVELOPES (BATCHID, TRANSACTIONNO, SPOOLID, JOBSETUPID, JOBSETUPN
AME, SPOOLNAME, STEPNO, MASTERCHANNELJOBID, SORTKEY1, SORTKEY2, SORTKEY3, ID) va
lues (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12)
34.9 111,902 0.00 .2 63.0 2.6 a0hmmbjwgwh1k
Module: JDBC Thin Client
insert into JS_CHANNELJOBPROPERTIES (NAME, VALUE, CHANNELJOBID, ID) values (:1,
:2, :3, :4)
29.2 950 0.03 .2 95.9 .1 du0hgjbn9vw0v
Module: JDBC Thin Client
SELECT * FROM JS_BATCHOVERVIEW WHERE BATCHID = :1
SQL ordered by Executions DB/Inst: ORA11G/ora11g Snaps: 1259-1260
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Executions: 2,476,038
-> Captured SQL account for 96.0% of Total
Elapsed
Executions Rows Processed Rows per Exec Time (s) %CPU %IO SQL Id
223,581 223,540 1.0 22.4 63.7 .0 gz7n75pf57c
Module: JDBC Thin Client
SELECT SQ_CHANNELJOBPROPERTIES.NEXTVAL FROM DUAL
120,624 120,616 1.0 8.1 99.0 .0 6y3ayqzubcb
Module: JDBC Thin Client
select batch0_.BATCHID as BATCHID0_0_, batch0_.BATCHNAME as BATCHNAME0_0_, batch
0_.STARTDATE as STARTDATE0_0_, batch0_.PARFINDATE as PARFINDATE0_0_, batch0_.PRO
CCOMPDATE as PROCCOMP5_0_0_, batch0_.BATCHSTATUS as BATCHSTA6_0_0_, batch0_.DATA
FILE as DATAFILE0_0_, batch0_.BATCHCFG as BATCHCFG0_0_, batch0_.FINDATE as FINDA
119,351 227,878 1.9 7,837.5 28.3 .7 2zrh6mw372a
Module: JDBC Thin Client
update JS_CHANNELDESTS set CHANNELID=:1, DESTID=:2, CHANNELDESTSTATUSDATE=:3, ST
ATUS=:4, BINOFFSET=:5, BINNAME=:6, PAGECOUNT=:7, DATA=:8, SORTORDER=:9, PRINTFOR
MAT=:10, ENVELOPEID=:11, DOCID=:12, CEENVELOPEID=:13, CHANNELTYPE=:14 where ID=:
15
116,033 223,892 1.9 8.0 92.2 .0 406wh6gd9nk
Module: JDBC Thin Client
select m_jobprope0_.CHANNELJOBID as CHANNELJ4_1_, m_jobprope0_.ID as ID1_, m_job
prope0_.NAME as formula0_1_, m_jobprope0_.ID as ID4_0_, m_jobprope0_.NAME as NAM
E4_0_, m_jobprope0_.VALUE as VALUE4_0_, m_jobprope0_.CHANNELJOBID as CHANNELJ4_4
_0_ from JS_CHANNELJOBPROPERTIES m_jobprope0_ where m_jobprope0_.CHANNELJOBID=:1
115,997 115,996 1.0 7,119.0 23.1 .2 3vjx93vur4d
Module: JDBC Thin Client
insert into JS_CHANNELDESTS (CHANNELID, DESTID, CHANNELDESTSTATUSDATE, STATUS, B
INOFFSET, BINNAME, PAGECOUNT, DATA, SORTORDER, PRINTFORMAT, ENVELOPEID, DOCID, C
EENVELOPEID, CHANNELTYPE, ID) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :
11, :12, :13, :14, :15)
115,996 115,996 1.0 15.9 75.0 4.5 3h58syyk145
Module: JDBC Thin Client
insert into JS_DOCJOBS (CREATEDATE, EFFDATE, JURIST, LANG, IDIOM, DD, DDVID, USE
RKEY1, USERKEY2, USERKEY3, USERKEY4, USERKEY5, USERKEY6, USERKEY7, USERKEY8, USE
RKEY9, USERKEY10, USERKEY11, USERKEY12, USERKEY13, USERKEY14, USERKEY15, USERKEY
16, USERKEY17, USERKEY18, USERKEY19, USERKEY20, REVIEWCASEID, ID) values (:1, :2
115,440 115,422 1.0 11.5 63.3 .0 2vn581q83s6
Module: JDBC Thin Client
SELECT SQ_CHANNELDESTS.NEXTVAL FROM DUAL
...
The tablespace holding the LOB segment uses system extent allocation, and the number of blocks in the LOB segment is roughly the same as the number of blocks in the allocated extents.
select segment_name, blocks, count (*)
from dba_extents where segment_name = 'SYS_LOB0000082335C00011$$'
group by segment_name, blocks
order by blocks
SEGMENT_NAME BLOCKS COUNT(*)
SYS_LOB0000082335C00011$$ 8 1
SYS_LOB0000082335C00011$$ 16 1
SYS_LOB0000082335C00011$$ 128 158
SYS_LOB0000082335C00011$$ 256 1
SYS_LOB0000082335C00011$$ 1024 120
SYS_LOB0000082335C00011$$ 2688 1
SYS_LOB0000082335C00011$$ 8192 117
SELECT
sum(ceil(dbms_lob.getlength(data)/8000))
from EAG50NSJ.JS_CHANNELDESTS
SUM(CEIL(DBMS_LOB.GETLENGTH(DATA)/8000))
993216
select sum (blocks) from dba_extents where segment_name = 'SYS_LOB0000082335C00011$$'
SUM(BLOCKS)
1104536
Below are the instance activity stats related to SecureFiles from the AWR report:
Statistic Total per Second per Trans
securefile allocation bytes 3,719,995,392 2,195,042.4 14,415.7
securefile allocation chunks 380,299 224.4 1.5
securefile bytes non-transformed 2,270,735,265 1,339,883.4 8,799.6
securefile direct read bytes 1,274,585,088 752,089.2 4,939.3
securefile direct read ops 119,725 70.7 0.5
securefile direct write bytes 3,719,995,392 2,195,042.4 14,415.7
securefile direct write ops 380,269 224.4 1.5
securefile number of non-transfo 343,918 202.9 1.3
Best regards
Yuri
Edited by: ykogun on 02-Apr-2011 13:33 -
LOB/CLOB/RAW/LONG RAW to SecureFiles conversion
Hello Guys,
I am planning to upgrade Oracle 10.2.0.4 to Oracle 11g (11.2), and in our database we have lots of tables with LOB/CLOB/RAW/LONG RAW datatypes. I want to use the SecureFiles feature for these columns.
The question is how to carry out this conversion, and after converting, can we do online reorganization of the SecureFile datatype?
Regards,
V.Singh
Please read the doc: http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28393/adlob_smart.htm#BABDIEGE
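That doc describes online redefinition as the supported migration path. A hedged sketch of the flow (table and column names are made up; the key point is that the interim table declares the LOB as SECUREFILE, and in a real run the dependents/constraints must also be copied, e.g. with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS):

```sql
-- Interim table: same shape as the original, LOB stored as SECUREFILE
CREATE TABLE v.t_interim (id NUMBER, doc CLOB)
  LOB(doc) STORE AS SECUREFILE;

BEGIN
  -- Start redefinition: data is copied online from V.T into V.T_INTERIM
  DBMS_REDEFINITION.START_REDEF_TABLE('V', 'T', 'T_INTERIM');
  -- ... copy dependents, sync as needed ...
  -- Swap the tables; V.T now uses SecureFiles
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('V', 'T', 'T_INTERIM');
END;
/
```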
-
Using EXECUTE IMMEDIATE with XML
Database version: 10.2.0.3.0 - 64bit
Hi All,
I have an XML document stored in a table, in an XMLType column.
I have target insert tables whose column names and XML node names are the same.
Using the ALL_TAB_COLUMNS view I generate the columns to be passed to XMLTABLE.
I store all this in variables and finally pass them to XMLTABLE as below.
I just want to know: is EXECUTE IMMEDIATE good to use with XML?
SQL_STMT := 'insert into '||table_name||' ('||V_COLUMN_NAME||')';
SQL_STMT := SQL_STMT||' SELECT '||V_XTAB_COLUMN_NAME||
            ' FROM TO_XML,
              XMLTABLE('||v_xpath||
            ' PASSING XML_VALUE
              COLUMNS '||V_COLUMNS_DATA_TYPE||
            ') XTAB
            WHERE Seq_NO = '||P_SEQUENCE_NO;
EXECUTE IMMEDIATE SQL_STMT;
Thanks and Regards,
Rubu
1) Is it OK? As I stated above, it can be made to work. It would not be my first choice, but none of us here know the full details as well as you do, so maybe there is a compelling reason to use dynamic SQL.
Here is the documentation for [url http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/executeimmediate_statement.htm#LNPLS01317]EXECUTE IMMEDIATE.
Actually now I finally realize your XML resides in table TO_XML so that means you won't be putting the actual XML into the shared pool via an incorrectly written dynamic SQL statement at least. That is what Odie and I were first concerned about with dynamic SQL usage, that the XML would be hard-coded into your SQL_STMT variable. You are simply changing the columns (in 3 locations). With that setup, you have no need for (nor can use) bind variables. The overall issue of dynamic SQL being slightly slower than static SQL still exists as the SQL statement will first have to be parsed and validated.
A larger issue in terms of performance is how 10.2 handles XMLTypes. If the underlying XML is large, XMLTable performance degrades quickly. Options around this are to parse the XML in PL/SQL or to upgrade to some version of 11 and use SECUREFILE BINARY XML as the underlying storage structure for the TO_XML.XML_VALUE column. -
Regarding lobs...
I am not that familiar with LOBs, and was hoping someone could shed some light for me.
I am running Oracle 11.2.0.2 EE, and have made an interesting discovery in this new database that I am responsible for.
First, I found that I have a table that is about 7.4G, but it has two LOB columns that, when I query DBA_LOBS, turn out to contain 365G of LOBs, while the table itself has 22G of LOBs - I'm not sure what the difference is.
SQL> 1 select segment_name, round(sum(bytes)/1024/1024/1024,1) as "SIZE" , segment_type
2 from dba_segments where owner = 'ARADMIN'
3 group by segment_name, segment_type
4 having round(sum(bytes)/1024/1024/1024,1) > 1
5* order by 2
SEGMENT_NAME SIZE SEGMENT_TYPE
SYS_LOB0000077517C00027$$ 4.2 LOBSEGMENT
SYS_LOB0000210343C00029$$ 4.4 LOBSEGMENT
SYS_LOB0000077480C00002$$ 4.6 LOBSEGMENT
T465 5 TABLE
T2052 8.3 TABLE
T2115 12.4 TABLE
T2444 13.4 TABLE
T2179 14.8 TABLE
T2192 21.8 TABLE
SYS_LOB0000077549C00015$$ 182 LOBSEGMENT <=== (related to table T2192)
SYS_LOB0000077549C00016$$ 184.4 LOBSEGMENT <=== (related to table T2192)
30 rows selected.
Now, let's look at which tables these LOBs belong to...
SQL> select table_name, column_name, segment_name
2 from dba_lobs
3 where segment_name in (
4 select segment_name from dba_segments where owner = 'ARADMIN'
5 having round(sum(bytes)/1024/1024/1024,1) > 1
6 group by segment_name
7 )
8 /
TABLE_NAME COLUMN_NAME SEGMENT_NAME
B1947C536880923 C536880923 SYS_LOB0000077310C00002$$
T2051 C536870998 SYS_LOB0000077426C00041$$
T2052 C536870987 SYS_LOB0000077440C00063$$
T2115 C536870913 SYS_LOB0000077463C00009$$
B2125C536880912 C536880912 SYS_LOB0000077480C00002$$
B2125C536880913 C536880913 SYS_LOB0000077483C00002$$
T2179 C536870936 SYS_LOB0000077517C00027$$
T2192 C456 SYS_LOB0000077549C00015$$ <====
T2192 C459 SYS_LOB0000077549C00016$$ <====
T2444 C536870936 SYS_LOB0000210343C00029$$
T1990 C536870937 SYS_LOB0000250271C00026$$
11 rows selected.
So, from the above, I noticed in the first query that the table T2192 itself contains 21.8G of LOBs, and that the columns C456 and C459 of the same table contain a total of (181.7+183.9) = 365.6G.
The first question is: how can the table be only 21.8G, while the LOB segments of the table's columns hold 365.6G of LOBs?
It seems some LOBs must be stored out-of-line, while others are part of the actual table.
Next, I am wondering: if a row is deleted from the table, would the LOBs associated with that row, referenced by columns C456 and C459, also be deleted?
Discussing this with our Sr. Developer, he says the table is purged of rows older than 6 months, but my question is whether the LOBs are actually purged along with the rows.
Any ideas?
Edited by: 974632 on Dec 27, 2012 8:05 AM
Hi John,
Reading note 386341.1, this is pretty messed up about lobs.
First, the UNDO data for a LOB segment is kept within the LOB segment space, e.g., when lobs are deleted, etc. Yuck!
So, you are right about the space eventually being returned to the database, but surely we can do better than that!
Then, when we check for the size of the lobs using dbms_lob.getlength, (since we are using AL32UTF8), it returns it in the number of characters instead of bytes.
So, then we have to convert - ref. note 790886.1. An enhancement request via Bug 7156454 has been filed to get this functionality and is under consideration by development.
So, how does one (safely) clean up lobs that have been deleted in the database?
It seems that doing an alter table... 'move lob' might work, and also an alter table ... modify lob (...) (shrink space [cascade]);
But with this being production, I'm very concerned about all the related bugs, even though I am on 11.2.0.2.
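Spelled out, the two commands under consideration look something like this (a sketch only, using the table and column from the thread; the target tablespace name is made up, and SHRINK SPACE applies to BASICFILE LOBs):

```sql
-- Option 1: rebuild the LOB segment elsewhere (locks the table and
-- needs enough free space for a full copy of the segment)
ALTER TABLE aradmin.t2192 MOVE LOB (c456)
  STORE AS (TABLESPACE new_lob_ts);

-- Option 2: shrink the LOB segment in place (BASICFILE only)
ALTER TABLE aradmin.t2192 MODIFY LOB (c456) (SHRINK SPACE CASCADE);
```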
WARNING : shrinking / reorganizing BASICFILE lobs can cause performance problems due to "enq: HW contention" waits
Serious LOB corruption can occur after an
ALTER TABLE <table> MODIFY LOB ("<lob_column>") (STORAGE (freelists <n>));
has been issued on a LOB column which resides in a manual space managed tablespace. Subsequent use of the LOB can fail with various internal errors such as:
ORA-600 [ktsbvmap1]
ORA-600 [25012]
For more information, please refer to bug 4450606.
#2. Be aware of the following bug before using the SHRINK option in releases which are <=10.2.0.3:
Bug: 5636728 LOB corruption / ORA-1555 when reading LOBs after a SHRINK operation
Please check:
Note.5636728.8 Ext/Pub Bug 5636728 - LOB corruption / ORA-1555 when reading LOBs after a SHRINK operation
for details on it.
#3. Be aware that, sometimes, it could be needed to perform the shrink operation twice, in order to avoid the:
Bug:5565887 SHRINK SPACE IS REQUIRED TWICE FOR RELEASING SPACE.
is fixed in 10.2.
From looking at note 1451124.1, it seems the best options are:
1) alter table move (locks the table, and requires additional space of at least double the size of the table).
2) do an export / drop the table / and reimport - again downtime required.
Neither option is possible in our environment. -
Hi, I'm trying to do an initial load and I keep getting errors like these:
ERROR OGG-01192 Oracle GoldenGate Capture for Oracle, ext1.prm: Trying to use RMTTASK on data types which may be written as LOB chunks (Table: 'TESTDB.BLOBTABLE').
ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, ext1.prm: PROCESS ABENDING.
The table looks like this:
COLUMN_NAME|DATA_TYPE|NULLABLE|DATA_DEFAULT|COLUMN_ID|COMMENTS
UUID VARCHAR2(32 BYTE) No 1
DESCRIPTION VARCHAR2(2000 BYTE) Yes 2
CONTENT BLOB Yes 3
I've checked, and the source database does contain data in the BLOB table, and both databases have the same tables, so now I have no idea what can be wrong. =/
For initial loads with LOBs, use a RMTFILE and a normal replicat. There are a number of things that are not supported with RMTTASK. A RMTFILE is basically the same format as a RMTTRAIL file, but is specifically for initial loads or other captured data that is not a continuous stream. Also make sure you have a newer build of GoldenGate (either v11 or the latest 10.4 from the support site).
The 'extract' would look something like this:
ggsci> add extract e1aa, sourceIsTable
ggsci> edit param e1aa
extract e1aa
userid ggs, password ggs
-- either local or remote
-- extFile dirdat/aa, maxFiles 999999, megabytes 100
rmtFile dirdat/aa, maxFiles 999999, megabytes 100
Table myschema1.*;
Table myschema2.*;
Then on the target, use a normal 'replicat' to read the "files".
Note that if the source and target are both oracle, this is not the most efficient way to instantiate the target. Using export/import or backup/restore (or any other mechanism) would usually be preferable. -
Moving Large amount of data using IMPDP with network link
Hi Guru,
Here we have a requirement to move 2TB of data from production to non-prod using the NETWORK_LINK parameter. What is the process to make it fast?
Previously we did it, but it took 7 days to import the data and indexes.
I have an idea; please tell me whether it is a good way to make the import faster:
Step 1) Import only metadata.
Step 2) Import only table data using TABLE_EXISTS_ACTION=APPEND or TRUNCATE. (The indexes are already created in step 1, so the import should be fast, as per my plan.)
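For reference, those two steps would look roughly like this as impdp invocations (a sketch; the connect identifier, database link name, and schema are made up):

```shell
# Step 1: metadata only, pulled over the database link
impdp system@nonprod NETWORK_LINK=prod_link SCHEMAS=app CONTENT=METADATA_ONLY

# Step 2: data only, appending into the pre-created tables
impdp system@nonprod NETWORK_LINK=prod_link SCHEMAS=app CONTENT=DATA_ONLY \
      TABLE_EXISTS_ACTION=APPEND
```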
Please help me the better way if we can.
Thanks & Regards,
Venkata Poorna Prasad.S
You might want to check these as well:
DataPump Import (IMPDP) Over NETWORK_LINK Is Sometimes Very Slow (Doc ID 1439691.1)
DataPump Import Via NETWORK_LINK Is Slow With CURSOR_SHARING=FORCE (Doc ID 421441.1)
Performance Problems When Transferring LOBs Using IMPDP With NETWORK_LINK (Doc ID 1488229.1) -
Has anybody had any experience using either mySQL or postgreSQL?
I am currently looking at a project requiring database integration and in addition to the usual suspects (Oracle, SQL Server) I am also interested in mySQL and postgreSQL. Advantages? Disadvantages? Has anybody used these with LV--especially to store BLOBS?
Mike...
Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion
"... after all, He's not a tame lion..."
Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps
Mike,
Here are the main differences...
MySQL is licensed under the GPL, but for about $400 you can buy a commercial license.
PostgreSQL is licensed under the BSD license, which means you don't ever have to pay to use it.
Both have ODBC drivers (myODBC and psqlODBC), so talking to them from LabVIEW is a no-brainer.
PostgreSQL has more features, like stored procedures and subqueries, so if you have complex data processing you can handle it on the server side.
MySQL has fewer features and is probably simpler to administer, but I _think_ that the next version might have subquery capability. It's also faster for simple commands.
As far as BLOBs are concerned, here's what I dug up:
In Postgres, Large Objects are very special beasties. You need to create them using the lo_create function and store the result of the function - an OID - in a regular table. Later you can manipulate the LOB using the OID and other functions - lo_read/lo_write, etc. Large object support is broken in Postgres - pg_dump cannot dump LOBs; you need to develop your own backup mechanism. The team is working on implementing large rows; this will replace the current LOB support.
In MySQL, text and binary LOBs are just fields in the table. Nothing special - just INSERT, UPDATE, SELECT and DELETE it the way you like. There are some limitations on indexing and applying functions to these fields.
I hope this is enough to get you started. I suggest you STFW (Search the Fine Web). You will get many results from "MySQL vs PostgreSQL".
Here are a couple that I found.
http://phd.pp.ru/Software/SQL/PostgreSQL-vs-MySQL.html
http://www.webtechniques.com/archives/2001/09/jepson/ -
What is the encryption algorithm used in cwallet.sso
HI,
I am using the WebCenter External Application feature to store passwords. I would like to know more about the default file-based credential store.
What encryption methodology is used in cwallet.sso, or in any other file used for creating a credential store map in WebCenter?
This information is needed for our client security team. I appreciate your help.
Thanks
Sam
Before you can answer that question, you have to answer the question "what is the zip format?" It is a trick question. There has never been such a thing. Zip was invented by the proverbial "some dude" (Phil Katz) who was good at coding. The Zip format was never standardized, so there is really no such thing as a zip file. There are many zip variants that use different forms of encryption.
The original zip encryption was a homemade algorithm written by Roger Schlafly. He has a PhD in math, but this was an early attempt before the field was mature and people started giving names to individual algorithms. Here is some information about the original zip encryption: http://cs.sjsu.edu/~stamp/crypto/PowerPoint_PDF/8_PKZIP.pdf
And here is a paper about one particular Zip variant: http://eprint.iacr.org/2004/078.pdf -
Buffer busy wait, 1st level bmb
Hi All !
OS: Linux redhat 5
DB: 11gr2
Block size: 8K
In an application we use I can see high buffer busy waits over a various periods.
I collect some info during this event.
SQL_HASH_VALUE FILE# BLOCK# REASON
769132182 6 17512 8
3983195767 6 17512 8
769132182 6 17512 8
3240261994 6 17512 8
3240261994 6 17512 8
3240261994 6 17512 8
769132182 6 17512 8
... I have total 35 sessions
File6 / block 17512 =
TABLESPACE_NAME SEGMENT_TYPE OWNER SEGMENT_NAME
GBSLOB LOBSEGMENT GBSASP SYS_LOB0000017961C00006$$
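(For reference, a file#/block# pair from V$SESSION_WAIT can be mapped to its owning segment with a query like this against DBA_EXTENTS - a sketch using the values above:)

```sql
-- Which segment owns block 17512 of file 6?
SELECT tablespace_name, segment_type, owner, segment_name
FROM   dba_extents
WHERE  file_id = 6
AND    17512 BETWEEN block_id AND block_id + blocks - 1;
```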
The sql are both inserts and updates to the same large table, blobs are involved (insert/update)
blobs using securefile
AWR reports this for a short period
Buffer busy waits is the top wait event
Buffer Wait Statistics DB/Inst: GGBISP01/ggbisp01 Snaps: 20925-20926
-> ordered by wait time desc, waits desc
Class Waits Total Wait Time (s) Avg Time (ms)
1st level bmb 574,636 17,118 30
free list 20,538 70 3
undo header 41,150 7 0
data block 263 1 3
undo block 18 0 0
I'm trying to find more details about this wait event, I believe it is related to usage of ASSM.
Can anyone explain more about when 1st level bmb waits are seen?
Thank you !
Best regards
Magnus Johansson
MaJo wrote:
SQL_HASH_VALUE FILE# BLOCK# REASON
769132182 6 17512 8
3983195767 6 17512 8
769132182 6 17512 8
3240261994 6 17512 8
3240261994 6 17512 8
3240261994 6 17512 8
769132182 6 17512 8
... I have total 35 sessions
File6 / block 17512 =
TABLESPACE_NAME SEGMENT_TYPE OWNER SEGMENT_NAME
GBSLOB LOBSEGMENT GBSASP SYS_LOB0000017961C00006$$
The sql are both inserts and updates to the same large table, blobs are involved (insert/update)
blobs using securefile
AWR reports this for a short period
Buffer busy waits is the top wait event
Buffer Wait Statistics DB/Inst: GGBISP01/ggbisp01 Snaps: 20925-20926
-> ordered by wait time desc, waits desc
Class Waits Total Wait Time (s) Avg Time (ms)
1st level bmb 574,636 17,118 30
free list 20,538 70 3
undo header 41,150 7 0
data block 263 1 3
undo block 18 0 0
-------------------------------------------------------------
I'm trying to find more details about this wait event, I believe it is related to usage of ASSM.
Can anyone explain more when 1st level bmb is seen ?
Your AWR shows an interesting mix of ASSM and freelist group blocks - are you running ASSM ?
1st level bitmap blocks (bmb) are the blocks in a segment (usually the first one or two of each extent) that show the availability of free space in the other data blocks. Each bitmap block can identify up to 256 other blocks (the last time I checked), although you have to have a fairly large data segment before you reach this level of mapping.
If you have a high rate of concurrent inserts and updates on a LOB column then you may be running into code that frequently updates bitmap blocks to show that data blocks have changed from empty to full. It's also possible that you've run into one of the many bugs that appeared when you mixed ASSM with LOB segments - you haven't given the exact version of 11.2, but you might want to check the latest versions and any bug reports for patches to your version.
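As a quick check on the ASSM question, the tablespace's segment space management setting can be inspected directly (a sketch; GBSLOB is the tablespace named earlier in the thread):

```sql
-- Sketch: AUTO means ASSM (bitmap-managed free space),
-- MANUAL means freelist-managed.
SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
WHERE  tablespace_name = 'GBSLOB';
```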
Regards
Jonathan Lewis -
Can't reclaim space in tablespace after deleting records
Oracle 11gR1 RHEL5 64bit
Hi.
I am having trouble reclaiming space from a tablespace after having deleted all (thousands) of the records from a table (which resides in that tablespace). I have tried the following options to no avail:
- Alter table <table_name> shrink
- purge tablespace
- purge recyclebin
This table has several LOB columns and uses SecureFiles. I don't know if that has something to do with it or not. The tablespace is locally managed and segment space management is set to AUTO. Below is the create table command:
CREATE TABLE IIQ.DICOM_OBJECT
DICOM_OBJECT_RID NUMBER CONSTRAINT NN_DICOM_OBJECT_DICOM_OBJ_RID NOT NULL,
SUBMISSION_RID NUMBER,
SUBMISSION_ITEM_RID NUMBER,
DICOM ORDSYS.ORDDICOM,
IMAGETHUMB ORDSYS.ORDIMAGE,
ANONDICOM ORDSYS.ORDDICOM,
ACTIVE_FLAG VARCHAR2(1 CHAR) DEFAULT 'Y' CONSTRAINT NN_DICOM_OBJECT_ACTIVE_FLAG NOT NULL,
CREATED_TIMESTAMP TIMESTAMP(6) WITH LOCAL TIME ZONE DEFAULT SYSTIMESTAMP CONSTRAINT NN_DICOM_OBJECT_TIMESTAMP NOT NULL,
SOURCE_DESCRIPTION VARCHAR2(100 CHAR) CONSTRAINT NN_DICOM_OBJECT_SOURCE NOT NULL,
OP_CONFORMANCE_FLAG VARCHAR2(1 CHAR)
COLUMN IMAGETHUMB NOT SUBSTITUTABLE AT ALL LEVELS
TABLESPACE IIQDCMDAT01
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 80K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
LOGGING
NOCOMPRESS
LOB ("DICOM"."EXTENSION") STORE AS SECUREFILE
( TABLESPACE IIQDCMLOB01
DISABLE STORAGE IN ROW
CHUNK 16384
RETENTION
NOCACHE
INDEX (
TABLESPACE IIQDCMLOB01
STORAGE (
INITIAL 80K
NEXT 1
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
STORAGE (
INITIAL 208K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
LOB (SYS_NC00050$) STORE AS
( TABLESPACE IIQDCMDAT01
ENABLE STORAGE IN ROW
CHUNK 16384
PCTVERSION 10
NOCACHE
INDEX (
TABLESPACE IIQDCMDAT01
STORAGE (
INITIAL 80K
NEXT 1
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
STORAGE (
INITIAL 80K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
LOB ("DICOM"."SOURCE"."LOCALDATA") STORE AS SECUREFILE
( TABLESPACE IIQDCMLOB01
DISABLE STORAGE IN ROW
CHUNK 16384
RETENTION
NOCACHE
INDEX (
TABLESPACE IIQDCMLOB01
STORAGE (
INITIAL 80K
NEXT 1
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
STORAGE (
INITIAL 208K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
LOB ("ANONDICOM"."SOURCE"."LOCALDATA") STORE AS SECUREFILE
( TABLESPACE IIQDCMLOB01
DISABLE STORAGE IN ROW
CHUNK 16384
RETENTION
NOCACHE
INDEX (
TABLESPACE IIQDCMLOB01
STORAGE (
INITIAL 80K
NEXT 1
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
STORAGE (
INITIAL 208K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
XMLTYPE SYS_NC00017$ STORE AS CLOB
( TABLESPACE IIQDCMLOB01
DISABLE STORAGE IN ROW
CHUNK 16384
RETENTION
CACHE READS
INDEX (
TABLESPACE IIQDCMLOB01
STORAGE (
INITIAL 80K
NEXT 1
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
STORAGE (
INITIAL 208K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
LOB ("IMAGETHUMB"."SOURCE"."LOCALDATA") STORE AS SECUREFILE
( TABLESPACE IIQDCMLOB01
DISABLE STORAGE IN ROW
CHUNK 16384
RETENTION
NOCACHE
INDEX (
TABLESPACE IIQDCMLOB01
STORAGE (
INITIAL 80K
NEXT 1
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
STORAGE (
INITIAL 208K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
LOB ("ANONDICOM"."EXTENSION") STORE AS SECUREFILE
( TABLESPACE IIQDCMLOB01
DISABLE STORAGE IN ROW
CHUNK 16384
RETENTION
NOCACHE
INDEX (
TABLESPACE IIQDCMLOB01
STORAGE (
INITIAL 80K
NEXT 1
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
STORAGE (
INITIAL 208K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
NOCACHE
NOPARALLEL
MONITORING
ENABLE ROW MOVEMENT;
Thank you all.

Justin Cave wrote:
OK, so you did a SHRINK SPACE CASCADE? Not just a SHRINK SPACE?
That is correct.
What makes you believe that there is more space that can be reclaimed?
Well, what I don't understand is this: the table (and only that table) was assigned to a specific tablespace, and even though its data was completely removed, the tablespace still shows as if the data is there. If all the rows of a table are removed, shouldn't the tablespace usage go down? There was 95 GB of data in that tablespace, all from that one table, which was completely emptied. However, it still shows the tablespace as being 95 GB full.
Can you post the size of the table segment and the LOB segments as well as the size of the actual data in the table and the LOBs?
Can you tell me which views you would like to see the data from? dba_lobs, dba_segments, etc... I want to make sure I have the right query for you.
Here is some info... not sure if this is what you want (formatting is off):
select owner, segment_name, segment_type, tablespace_name, bytes
from dba_segments
where owner = 'IIQ'
and tablespace_name = 'IIQDCMLOB01'
and segment_type = 'LOBSEGMENT';
OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME BYTES
IIQ SYS_LOB0000651630C00012$$ LOBSEGMENT IIQDCMLOB01 9.8416E+10
IIQ SYS_LOB0000651630C00018$$ LOBSEGMENT IIQDCMLOB01 755236864
IIQ SYS_LOB0000651630C00021$$ LOBSEGMENT IIQDCMLOB01 755236864
IIQ SYS_LOB0000651630C00023$$ LOBSEGMENT IIQDCMLOB01 262144
IIQ SYS_LOB0000651630C00044$$ LOBSEGMENT IIQDCMLOB01 262144
IIQ SYS_LOB0000651630C00053$$ LOBSEGMENT IIQDCMLOB01 262144
OWNER TABLE_NAME COLUMN_NAME SEGMENT_NAME TABLESPACE_NAME INDEX_NAME CHUNK PCTVERSION RETENTION FREEPOOLS CACHE LOGGING ENCR COMPRE DEDUPLICATION IN_ FORMAT PAR SEC
IIQ DICOM_OBJECT
"DICOM"."SOURCE"."LOCALDATA"
SYS_LOB0000651630C00012$$ IIQDCMLOB01 SYS_IL0000651630C00012$$ 16384 10800 NO YES NO NO NO NO NOT APPLICABLE NO YES
IIQ DICOM_OBJECT
SYS_NC00018$
SYS_LOB0000651630C00018$$ IIQDCMLOB01 SYS_IL0000651630C00018$$ 16384 10800 CACHEREADS YES NO NO NO NO ENDIAN NEUTRAL NO YES
IIQ DICOM_OBJECT
"DICOM"."EXTENSION"
SYS_LOB0000651630C00021$$ IIQDCMLOB01 SYS_IL0000651630C00021$$ 16384 10800 NO YES NO NO NO NO NOT APPLICABLE NO YES
IIQ DICOM_OBJECT
"IMAGETHUMB"."SOURCE"."LOCALDATA"
SYS_LOB0000651630C00023$$ IIQDCMLOB01 SYS_IL0000651630C00023$$ 16384 10800 NO YES NO NO NO NO NOT APPLICABLE NO YES
IIQ DICOM_OBJECT
"ANONDICOM"."SOURCE"."LOCALDATA"
SYS_LOB0000651630C00044$$ IIQDCMLOB01 SYS_IL0000651630C00044$$ 16384 10800 NO YES NO NO NO NO NOT APPLICABLE NO YES
IIQ DICOM_OBJECT
"ANONDICOM"."EXTENSION"
SYS_LOB0000651630C00053$$ IIQDCMLOB01 SYS_IL0000651630C00053$$ 16384 10800 NO YES NO NO NO NO NOT APPLICABLE NO YES
Thanks.
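For what it's worth, with SecureFile LOBs the space freed by deletes generally has to be released from the LOB segments themselves before the tablespace reflects it; two commonly suggested approaches are sketched below. Object names are taken from this thread, `lob_column` is a hypothetical placeholder, and note that on some releases SHRINK SPACE is not supported for SecureFile LOB segments (which may be why it had no effect here), making the MOVE route the more reliable one. RETENTION settings can also delay reuse of freed space.

```sql
-- Sketch 1: shrink the table together with its dependent segments
-- (requires ENABLE ROW MOVEMENT, which the DDL above already sets).
ALTER TABLE iiq.dicom_object SHRINK SPACE CASCADE;

-- Sketch 2: rebuild a LOB segment by moving it, which releases its extents
-- back to the tablespace. Shown for a hypothetical top-level LOB column;
-- nested object-type LOB attributes (e.g. "DICOM"."EXTENSION") may need
-- different handling.
ALTER TABLE iiq.dicom_object MOVE
  LOB (lob_column) STORE AS SECUREFILE (TABLESPACE iiqdcmlob01);
```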