Dbwr and writing undo data block
hi guys,
If dbwr were to write a dirty block to disk, would it also have to write the dirty undo block that contains the before image of our database block to disk too?
Thanks
OracleGuy777 wrote:
hi guys,
If dbwr were to write a dirty block to disk, would it also have to write the dirty undo block that contains the before image of our database block to disk too?
The first thing to remember is that recovery depends on the REDO, not the UNDO, so technically it doesn't matter whether dbwr manages to write neither, one, or both of the two blocks to disc as far as crash recovery is concerned. This should give you a clue that dbwr doesn't HAVE to write the undo block at the same time as the data block.
In practice, it is possible that both blocks will be written virtually simultaneously, because the two blocks will have been modified at the same instant and could therefore be written during the same 3-second timeout write (continuous checkpoint). It is equally possible, though, for the undo block to be written some time before the data block, or some time after it - it all depends on what other activity has been going on at about the same time.
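The ordering rule that does matter is the log-ahead rule: the redo describing a change must be on disk before DBWR writes the buffer that change touched. A toy sketch of that idea (illustrative Python with invented names, not Oracle code) shows why the undo block and the data block can then be written independently:

```python
class MiniInstance:
    """Toy model of the log-ahead rule: the only ordering constraint is
    'flush the redo before writing a dirty buffer', and it applies to an
    undo block exactly as it does to a table block."""

    def __init__(self):
        self.redo = []         # in-memory redo entries: (change#, block)
        self.flushed_upto = 0  # highest change# durably on disk
        self.disk = {}         # block -> last change# written by DBWR

    def change(self, block, change_no):
        self.redo.append((change_no, block))   # buffer dirtied, redo generated

    def lgwr_flush(self):
        if self.redo:
            self.flushed_upto = max(c for c, _ in self.redo)

    def dbwr_write(self, block):
        last = max(c for c, b in self.redo if b == block)
        # log-ahead rule: redo for this buffer must already be on disk
        assert last <= self.flushed_upto, "log-ahead rule violated"
        self.disk[block] = last

inst = MiniInstance()
inst.change("undo#5", 101)  # before-image recorded in an undo block
inst.change("data#9", 102)  # the change to the data block itself
inst.lgwr_flush()
inst.dbwr_write("data#9")   # legal on its own - the undo block can stay dirty
```

Writing "data#9" without "undo#5" breaks nothing here, which is the point: crash recovery replays redo, so no write ordering between the two buffers is required.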
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
Similar Messages
-
Data Concurrency and Consistency ( SCN , DATA block)
Hi guys, i am getting very very very confused about how oracle implements consistency / multiversioning with regard to the SCN in a data block and the transaction list in the data block..
I will list out what i know so you guys can gauge me on where i am..
When a SELECT statement is issued, the SCN for the select query is determined. Then blocks with a higher SCN are rebuilt from the RBS.
Q1) The SCN in the block implied here - is it different from the SCNs in the transaction list of the block? where is this SCN stored? where is the transaction list stored? how is the SCN of the block related to the SCNs in the transaction list of the block?
Q2) can someone tell me what happen to the BLOCK SCN and the transaction list
of the BLOCK when a transaction start to update to a row in the block occurs.
Q3) If the BLOCK SCN reflects the latest change made to the block, and if the SCN of the block is higher than the SCN of the SELECT query, it means that the block has changed since the start of the SELECT query, but it DOESN'T mean that the row (data) that the SELECT query requires has changed.
Therefore why cant ORACLE just check to see whether the row has changed and if it has, rebuilt a block from the RBS ?
Q4) when ORACLE compares the BLOCK SCN, does it only SCAN for the BLOCK SCN or does it also SEARCH through the TRANSACTION LIST ? or it does both ? and why ?
Q5) is the transaction SCN the same as the transaction ID? which is stored in the RBS, the transaction SCN or the ID?
Q6) in short i am confused by the relationship between the BLOCK SCN and the transaction list SCNs, their location, their usage, the relationship of the BLOCK SCN and transaction list when doing a SELECT, and their link with the RBS..
any gurus care to give me a clearer view of what is actually happening?
Hi Aman
Hmm, agreed. So when commit is issued, what happens at that time?
Simply put:
- The SCN for the transaction is determined.
- The transaction is marked as committed in the undo header (the commit SCN is also stored in the undo header).
- If fast cleanout takes place, the commit SCN is also stored in the ITL. If not, the ITL (i.e. the modified data blocks) are not modified.
So at commit, Oracle will replace the begin SCN in the ITL with this SCN, and this will tell that the block is finally committed, is it?
The ITL does not contain the begin SCN. The undo header (specifically the transaction table) contains it.
I am lost here. In the ITL, is the SCN the transaction SCN or the commit SCN?
As I just wrote, the ITL contains (if the cleanout occurred) the commit SCN.
This sounds like high RBA information?
What is RBA?
Commit SCN
This is the SCN associated with a committed transaction.
Begin SCN
This is the SCN at which a transaction started.
Transaction SCN
As I wrote, IMO, this is the same as the commit SCN.
Also, please explain what exactly the ITL stores?
If you print an ITL slot, you see the following information:
BBED> print ktbbhitl[0]
struct ktbbhitl[0], 24 bytes @44
struct ktbitxid, 8 bytes @44
ub2 kxidusn @44 0x0009
ub2 kxidslt @46 0x002e
ub4 kxidsqn @48 0x0000fe77
struct ktbituba, 8 bytes @52
ub4 kubadba @52 0x00800249
ub2 kubaseq @56 0x3ed6
ub1 kubarec @58 0x4e
ub2 ktbitflg @60 0x2045 (KTBFUPB)
union _ktbitun, 2 bytes @62
b2 _ktbitfsc @62 0
ub2 _ktbitwrp @62 0x0000
ub4 ktbitbas @64 0x06f4c2a3
- ktbitxid --> XID, the transaction holding the ITL slot
- ktbituba --> UBA, used to locate the undo information
- ktbitflg --> flags (active, committed, cleaned out, ...)
- _ktbitfsc --> free space generated by this transaction in this block
- _ktbitwrp+ktbitbas --> commit SCN
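The slot layout above can be decoded mechanically. A hedged sketch (Python; it assumes a little-endian platform, one pad byte after kubarec, and collapses the _ktbitfsc/_ktbitwrp union to its SCN-wrap use - the offsets come from the BBED output, everything else is an illustration, not a documented Oracle structure):

```python
import struct

# 24-byte ITL slot: xid(usn,slt,sqn) + uba(dba,seq,rec,pad) + flg + wrp + bas
ITL_FMT = "<HHIIHBxHHI"

def parse_itl(raw: bytes) -> dict:
    usn, slt, sqn, dba, seq, rec, flg, wrp, bas = struct.unpack(ITL_FMT, raw)
    return {
        "xid": (usn, slt, sqn),   # transaction holding the slot
        "uba": (dba, seq, rec),   # where its undo lives
        "flg": flg,               # state flags (active, committed, ...)
        "scn": (wrp << 32) | bas, # commit SCN (wrap + base)
    }

# rebuild the slot printed above and decode it
raw = struct.pack(ITL_FMT, 0x0009, 0x002E, 0x0000FE77,
                  0x00800249, 0x3ED6, 0x4E, 0x2045, 0x0000, 0x06F4C2A3)
print(parse_itl(raw))
```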
HTH
Chris -
Ora-01113 and ora-01110 -- Data Block Corruption
Running 10g with no backup and in noarchivelog mode.
I put the datafile offline so I can bring up the database. Can anyone help me figure out how to fix the Bad datafile?
Thank You,
TRACE FILE INFORMATION
SQL> select pa.value || '/' || i.instance_name || '_ora_'
2 || pr.spid || '.trc' as trace_file
3 from v$session s, v$process pr, v$parameter pa, v$instance i
4 where s.username = user and s.paddr = pr.addr
5* and pa.name='user_dump_dest';
TRACE_FILE
/oracle/admin/ora9i/udump/ora9i_ora_25199.trc
DUMPING A TABLE BLOCK
SQL> select file_id,block_id,bytes,blocks
2 from dba_extents
3 where owner='P' and segment_name='EMP';
FILE_ID BLOCK_ID BYTES BLOCKS
3 9 65,536 8
Next is to find out the tablespace name and the datafile...
SQL> select tablespace_name,file_name from dba_data_files
2 where relative_fno = 3;
TABLESPACE_NAME FILE_NAME
USER_DATA /oradata3/ora9i/user_data01.dbf
Now that we know which file and blocks hold our table, let's dump a sample block of the table. This is done as follows:
SQL> alter system dump datafile 3 block 10;
System altered.
Let’s now look at the contents of dumping one block.
Start dump data blocks tsn: 3 file#: 3 minblk 10 maxblk 10
buffer tsn: 3 rdba: 0x00c0000a (3/10)
scn: 0x0000.00046911 seq: 0x02 flg: 0x04 tail: 0x69110602
frmt: 0x02 chkval: 0x579d type: 0x06=trans data
Block header dump: 0x00c0000a
Object id on Block? Y
seg/obj: 0x6d9c csc: 0x00.46911 itc: 2 flg: O typ: 1 - DATA
fsl: 0 fnx: 0x0 ver: 0x01
Itl Xid Uba Flag Lck Scn/Fsc
0x01 xid: 0x0005.02f.0000010c uba: 0x00806f10.00ca.28 C--- 0 scn 0x0000.00046900
0x02 xid: 0x0003.01c.00000101 uba: 0x00800033.0099.04 C--- 0 scn 0x0000.00046906
This is the beginning of the data block dump. The first line tells us that we are dumping file#3, starting at block# 10 (minblk), and finishing with block# 10 (maxblk). Had we dumped more than one data block, these values would represent a range. The relative data block address (rdba) is 0x00c0000a. For more information on the rdba, refer to a later section in this paper. At the end of this line, we can see in parentheses that the rdba corresponds to file# 3, block# 10 (3/10).
The third line describes the SCN of the data block. In our case, the SCN is 0x0000.00046911. The tail of the data block is composed of the last two bytes of the SCN (6911) appended with the type (06) and the sequence (02). If the decomposition of the tail does not match these three values, then the system knows that the block is inconsistent and needs to be recovered. While this tail value shows up at the beginning of the block dump, it is physically stored at the end of the data block.
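That tail check is easy to verify by hand - a small illustrative sketch (Python) using the values from the dump above:

```python
def block_tail(scn_base: int, block_type: int, seq: int) -> int:
    """Lower two bytes of the SCN base, then the block type, then the seq."""
    return ((scn_base & 0xFFFF) << 16) | ((block_type & 0xFF) << 8) | (seq & 0xFF)

# scn 0x0000.00046911, type 0x06, seq 0x02  ->  tail 0x69110602
print(hex(block_tail(0x00046911, 0x06, 0x02)))
```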
The block type shows up on the fourth line. Some of the valid types correspond to the following table:
Type Meaning
0x02 undo block
0x06 table or index data block
0x0e undo segment header
0x10 data segment header block
0x17 bitmapped data segment header
i hope it will help... -
Question around UTL_FILE and writing unicode data to a file.
Database version : 11.2.0.3.0
NLS_CHARACTERSET : AL32UTF8
OS : Red Hat Enterprise Linux Server release 6.3 (Santiago)
I did not work with multiple language characters and manipulating them. So, the basic idea is to write UTF8 data as Unicode file using UTL_FILE. This is fairly an empty database and does not have any rows in at least the tables I am working on. First I inserted a row with English characters in the columns.
I used utl_file.fopen_nchar to open the file and utl_file.put_line_nchar to write it to the file on the Linux box. I open the file and I still see English characters (say "02CANMLKR001").
Now, I updated the row with some columns having Chinese characters and ran the same script. It wrote the file. Now when I "vi" the file, I see "02CANè¹æ001" (some Unicode symbols in place of the Chinese characters, plus the regular English).
When I FTP the file to windows and open it using notepad/notepad++ it still shows the Chinese characters. Using textpad, it shows ? in place of Chinese characters and the file properties say that the file is of type UNIX/UTF-8.
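What vi showed is classic mojibake: valid UTF-8 bytes rendered by a terminal that assumes a single-byte encoding such as Latin-1, while UTF-8-aware editors show the real characters. A quick sketch (Python, with a made-up sample value) reproduces the effect:

```python
text = "02CAN中文001"               # hypothetical row value with Chinese characters
utf8_bytes = text.encode("utf-8")   # the bytes actually written to the file
print(utf8_bytes.decode("latin-1")) # what a Latin-1 terminal shows: mojibake
print(utf8_bytes.decode("utf-8"))   # what a UTF-8-aware editor shows: the original
```

So garbage in vi alone does not prove the file is wrong; check the raw bytes or view the file under a UTF-8 locale.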
My question : "Is my code working and writing the file in unicode? If not what are the required changes?" -- I know the question is little vague, but any questions/suggestions towards answering my question would really help.
sample code:
{pre}
DECLARE
l_file_handle UTL_FILE.file_type;
l_file_name VARCHAR2 (50) := 'test.dat';
l_rec VARCHAR2 (250);
BEGIN
l_file_handle := UTL_FILE.fopen_nchar ('OUTPUT_DIR', l_file_name, 'W');
SELECT col1 || col2 || col3 INTO l_rec FROM table_name;
UTL_FILE.put_line_nchar (l_file_handle, l_rec);
UTL_FILE.fclose (l_file_handle);
END;
{/pre}
Regardless of what you think of my reply, I'm trying to help you.
I think you need to reread my reply because I can't find ANY relation at all between what I said and what you responded with.
I wish things were the way you mentioned and followed textbooks.
Nothing in my reply is related to 'text books' or some 'academic' approach to development. Strictly based on real-world experience of 25+ years.
Unfortunately lot of real world projects kick off without complete information.
No disagreement here - but totally irrelevant to anything I said.
Till we get the complete information, it's better to work on the idea rather than wasting project hours. I don't think it can work that way. All we do is lay the groundwork and toy around with multiple options for the actual coding even when we do not have the exact requirements.
No disagreement here - but totally irrelevant to anything I said.
And I think it's a good practice rather than waiting for complete information and pushing others.
You can't just wait. But you also can't just go ahead on your own. You have to IMMEDIATELY 'push others' as soon as you discover any issues affecting your team's (or your) ability to meet the requirements. As I said above:
Your problems are likely:
1. lack of adequate requirements as to what the vendor really requires in terms of format and content
2. lack of appropriate sample data - either you don't have the skill set to create it yourself or you haven't gotten any from someone else.
3. lack of knowledge of the character sets involved to be able to create/conduct the proper tests
If you discover something missing with the requirements (what character sets need to be used, what file format to use, whether BOMs are required in the file, etc) you simply MUST bring that to your manager's attention as soon as you suspect it might be an issue.
It is your manager's job, not yours, to make sure you have the tools needed to do the job. One of those tools is the proper requirements. If there is ANYTHING wrong, or if you even THINK something is wrong with those requirements it is YOUR responsibility to notify your manager ASAP.
Send them an email, leave a yellow-sticky on their desk but notify them. Nothing in what I just said says, or implies, that you should then just sit back and WAIT until that issue is resolved.
If you know you will need sample data you MUST make sure the project plan includes SOME means to obtain sample data within the timeline needed by your project. As I repeated above, if you don't have the skill set to create it yourself someone else will need to do it.
I did not work with multiple language characters and manipulating them.
Does your manager know that? If the project requires 'work with multiple language characters and manipulating them' someone on the project needs to have experience doing that. If your manager knows you don't have that experience but wants you to proceed anyway and/or won't provide any other resource that does have that experience that is ok. But that is the manager's responsibility and that needs to be documented. At a minimum you need to advise your manager (I prefer to do it with an email) along the following lines:
Hey - manager person - As you know I have little or no experience to 'work with multiple language characters and manipulating them' and those skills are needed to properly implement and test that the requirements are met. Please let me know if such a resource can be made available.
And I'm serious about that. Sometimes you have to make you manager do their job. That means you ALWAYS need to keep them advised of ANY issue that might affect the project. Once your manager is made aware of an issue it is then THEIR responsibility to deal with it. They may choose to ignore it, pretend they never heard about it or actually deal with it. But you will always be able to show that you notified them about it.
Now, I updated the row with some columns having Chinese characters and ran the same script.
Great - as long as you actually know Chinese that is; and how to work with Chinese characters in the context of a database character set, querying, creating files, etc.
If you don't know Chinese or haven't actually worked with Chinese characters in that context then the project still needs a resource that does know it.
You can't just try to bluff your way through something like character sets and code conversions. You either know what a BOM (byte order mark) is or you don't. You have either learned when BOMs are needed or you haven't.
That said, we are in the process of getting the information and sample data that we require.
Good!
Now make sure you have notified your manager of any 'holes' in the requirements and keep them up to date with any other issues that arise.
NONE of the above suggests, or implies, that you should just sit back and wait until that is done. But any advice offered on the forums about specifics of your issue (such as whether you need to even worry about BOMs) is premature until the vendor or the requirements actually document the precise character set and file format needed. -
Hierarchical tree and interacting with data block
I have a hierarchical tree that is used to pick records that are shown in another data block. I have a list item in the data block that changes the arrangement of nodes in the hierarchy. My problem is that after the update, I select the node, but the wrong data shows up in the other data block (e.g. node x is selected, x becomes a child of y, and the data for y still appears in the data block even though node x is selected in the tree). There seems to be something wrong with my triggers, but I can't figure out what.
Here are the triggers that I am using.
In the tree block, tree item:
-WHEN-TREE-NODE-SELECTED
set_block_property(v_data_block, default_where, 'global_id = ' || v_org);
go_block(v_data_block);
execute_query;
In the data block, list item
-WHEN-LIST-CHANGED:
-- find the tree node and select it
node := Ftree.Find_Tree_Node(...)
Ftree.set_tree_selection(v_tree,node,Ftree.SELECT_ON);
-- update the data block to show data from the selected node
set_block_property('GREENPAGES_DIR', default_where, 'global_id = ' || :greenpages_dir.global_id);
go_block('GREENPAGES_DIR');
execute_query;
Thanks for your help.
hi
i didn't try doing it.. i assume the reasons can be:
1. check the value of the variable 'v_org', because the 'w-t-n-s' trigger fires twice..
2. the 'w-t-n-s' trigger is not firing even though u select the node through ur code in the list item trigger.. if it is not firing, use execute_trigger('w-t-n-s') and remove the last set of code (update section) from the list item trigger.. (assuming 'GREENPAGES_DIR' = v_data_block)
hope this helps u
ajith -
Insert and update a data block which is based on view--urgent help required
Hi experts,
I created a view (A_VIEW) which is based on a union select. I have created a data block A_VIW_BLOCK which is based on this view. I need to insert/update one of the base tables for A_VIEW through this data block. I also need to be able to query through all the fields in the view.
The questions are:
1.Can it be done at all?
2. What properties need to be set?
3. If can't be done, what the best approach to achieve this?
Thanks in advance!!
Michael
hi
try something like this.
CREATE TABLE demo_tab (
person_id NUMBER(3),
first_name VARCHAR2(20),
last_name VARCHAR2(20));
CREATE OR REPLACE VIEW upd_view AS
SELECT * FROM demo_tab;
INSERT INTO demo_tab
(person_id, first_name, last_name)
VALUES
(1, 'Daniel', 'Morgan');
INSERT INTO demo_tab
(person_id, first_name, last_name)
VALUES
(2, 'Helen', 'Lofstrom');
COMMIT;
SELECT * FROM upd_view;
UPDATE upd_view
SET person_id = person_id * 10;
SELECT * FROM upd_view;
desc user_updatable_columns
SELECT table_name, column_name, updatable, insertable, deletable
FROM user_updatable_columns
WHERE table_name IN (
SELECT view_name
FROM user_views);
SQL> create table dummy (f1 number);
Table created.
SQL> create view dummy_v
2 as
3 select f1 from dummy
4 union all
5 select f1 from dummy;
View created.
SQL> create trigger dummy_v_it
2 instead of insert
3 on dummy_v
4 for each row
5 begin
6 insert into dummy values (:NEW.f1);
7 end;
8 /
Trigger created.
SQL> insert into dummy_v values (1);
1 row created.
SQL> select * from dummy_v;
F1
1
1
SQL> select *
2 from user_updatable_columns
3 where table_name = 'DUMMY_V';
OWNER TABLE_NAME COLUMN_NAME UPD INS DEL
FORBESC DUMMY_V F1 NO NO NO
forms settings:
Enforce Primary Key - No
Query Allowed - Yes
Query datasource Name - V_TSFDETAIL
Insert Allowed - Yes
Update Allowed - Yes
Delete Allowed - Yes
Locking Mode - Automatic
Key Mode - Automatic
do not forget to create synonyms.
hope this helps.
sarah -
Loading and Writing Budget Data FDM vs ERPi
Say one wants to load data from a GL (EBS Suite) into Planning, and then write this budget data back to the EBS. Also need drill through functionality off planning web forms. What is the best way to do this?
I was thinking load data using FDM (so users can get the drill through) and then write back to EBS using ERPi. Does this sound right?
Thanks a lot for your response as always. Two more questions:
1)Can data be loaded using erpi? Or is it only for writing back?
2)If data can be loaded using erpi, then do I need fdm? If I load using erpi can i still get my drill through? -
Problems with parport and writing to data adresse 0x037a
Hi,
I am currently trying to control a DC motor, and I need to change the strobe signal, which is on the control port. I can't write to address 0x037a. I am running userport on WinXP, and I can access and turn the data pins at 0x0378 on and off.
I get this error message:
EXCEPTION_PRIV_INSTRUCTION (0xc0000096) at pc=0x1000107b, pid=3584, tid=1064
Code:
import parport.ParallelPort;
class SimpleIO {
    public static void main(String[] args) {
        //ParallelPort lpt1 = new ParallelPort(0x378); // 0x378 is normally the base address for the LPT1 port
        //int aByte;
        //aByte = lpt1.read(); // read a byte from the port's STATUS pins
        //System.out.println("Input from parallel port: " + aByte);
        //int a = 0xFF;
        //lpt1.write(a); // write a byte to the port's DATA pins
        ParallelPort lpt2 = new ParallelPort(0x03bc);
        //lpt2.write(0x00);
        int c = 0x0379;
        //System.out.println("Output to port: " + aByte);
        //ParallelPort lpt2 = new ParallelPort(0x37A);
        //int b = 0x00;
        //lpt2.write(b);
    }
}
Hello!!!
I was wondering where you got that parport.ParallelPort API???
I also want to use the parallel port to control a stepper motor, but I can't get it to work. I downloaded the javax.comm API and everything seems fine, except that I get nothing at the port output. I think it's because WinXP closes the ports - do you know how to open them???
My code is the following:
import java.io.*;
import javax.comm.*;
public class TestPuertoParalelo {
    public static void main(String[] args) {
        CommPortIdentifier portId;
        ParallelPort paralelo = null;
        InputStream entrada = null;
        OutputStream salida = null;
        try {
            portId = CommPortIdentifier.getPortIdentifier("LPT1");
            paralelo = (ParallelPort) portId.open("ParallelPort", 5000);
            paralelo.setMode(ParallelPort.LPT_MODE_SPP); // SPP seems fine; the other modes don't work for me
            salida = paralelo.getOutputStream();
            System.out.println("writing 8");
            salida.write(8); // it hangs here and I receive nothing at the port
            System.out.println("Received correctly");
        } catch (NoSuchPortException e) {
            System.out.println("Port id error");
        } catch (PortInUseException e) {
            System.out.println("Error: port in use");
        } catch (UnsupportedCommOperationException e) {
            System.out.println("Mode error");
        } catch (IOException e) {
            System.out.println("Stream connection error");
        }
    }
} -
Hot Data Block with concurrent read and write
Hi,
This is from ADDM Report.
FINDING 8: 2% impact (159 seconds)
A hot data block with concurrent read and write activity was found. The block
belongs to segment "SIEBEL.S_SRM_REQUEST" and is block 8138 in file 7.
RECOMMENDATION 1: Application Analysis, 2% benefit (159 seconds)
ACTION: Investigate application logic to find the cause of high
concurrent read and write activity to the data present in this block.
RELEVANT OBJECT: database block with object# 73759, file# 7 and
block# 8138
RATIONALE: The SQL statement with SQL_ID "f1dhpm6pnmmzq" spent
significant time on "buffer busy" waits for the hot block.
RELEVANT OBJECT: SQL statement with SQL_ID f1dhpm6pnmmzq
DELETE FROM SIEBEL.S_SRM_REQUEST WHERE ROW_ID = :B1
RECOMMENDATION 2: Schema, 2% benefit (159 seconds)
ACTION: Consider rebuilding the TABLE "SIEBEL.S_SRM_REQUEST" with object
id 73759 using a higher value for PCTFREE.
RELEVANT OBJECT: database object with id 73759
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Wait class "Concurrency" was consuming significant database
time. (4% impact [322 seconds])
What does it mean by a hot block with concurrent read and write?
Does rebuilding the table solve the problem, as per the ADDM report?
Hi,
You must suffer from buffer busy waits.
When a buffer is updated, the buffer will be latched, and other sessions cannot read it or write it.
You must have multiple sessions reading and writing that one block.
Recommendation 2 results in fewer records per block, so there is less chance that multiple sessions are modifying and reading one block. It will also result in a bigger table.
Note that in tablespaces with automatic segment space management it is pctused, not pctfree, that no longer applies; pctfree is still honoured, so the recommendation is valid for those tablespaces too.
Buffer busy waits will also occur if the blocksize of your database is set too high.
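The effect of Recommendation 2 is easy to see numerically: a higher PCTFREE leaves fewer rows in each block, so fewer sessions collide on any one block, at the cost of a bigger table. A rough sketch (Python; the 110-byte block overhead and the row size are assumed round figures, not exact Oracle constants):

```python
def rows_per_block(block_size: int, pctfree: int, avg_row_len: int,
                   overhead: int = 110) -> int:
    """Rough capacity estimate: usable space after overhead and PCTFREE."""
    usable = (block_size - overhead) * (1 - pctfree / 100.0)
    return int(usable // avg_row_len)

# 8K blocks, ~100-byte rows: raising PCTFREE from 10 to 40 spreads the rows out
print(rows_per_block(8192, 10, 100), rows_per_block(8192, 40, 100))
```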
Sybrand Bakker
Senior Oracle DBA -
What does redo log buffer holds, changed value or data block?
Hello Everyone,
i am new to the database side and have one query. as i know the redo log buffer contains change information; my doubt is, does it store only the changed value or the changed data block? because, as we can see, the data buffer cache size is larger as it holds data blocks, and the redo log buffer size is smaller.
The Redo Log buffer contains OpCodes that represent the SQL commands, the "address" (file, block, row) where the change is to be made, and the nature of the change.
It does NOT contain the data block.
(the one exception is when you run a User Managed Backup with ALTER DATABASE BEGIN BACKUP or ALTER TABLESPACE BEGIN BACKUP : The first time a block is modified when in BEGIN BACKUP mode, the whole block is written to the redo stream).
The log buffer can be, and is, deliberately smaller than the block buffer cache. Entries in the redo log buffer are quickly written to disk (at commits, when it is one-third or 1MB full, every 3 seconds, and before DBWR writes a modified data block).
Hemant K Chitale -
Reading the Blob and writing it to an external file in an xml tree format
Hi,
We have a table by the name clarity_response_log, and the content of the column (Response_file) is a BLOB holding an xml file (xml content). The table may have more than 5 records, and hence we need to read each corresponding blob content and write it to an external file.
CREATE TABLE CLARITY_RESPONSE_LOG
REQUEST_CODE NUMBER,
RESPONSE_FILE BLOB,
DATE_CRATED DATE NOT NULL,
CREATED_BY NUMBER NOT NULL,
UPDATED_BY NUMBER DEFAULT 1,
DATE_UPDATED VARCHAR2(20 BYTE) DEFAULT SYSDATE
)
The xml content in the insert statements is very small because the real content cannot be made public; in reality we have a very big xml file stored in the BLOB (Response_File) column.
Insert into CLARITY_RESPONSE_LOG
(REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
Values
(5, '<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</tracking-number></xml-response>', TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
Insert into CLARITY_RESPONSE_LOG
(REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
Values
(6, '<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</tracking-number></xml-response>', TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
Insert into CLARITY_RESPONSE_LOG
(REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
Values
(7, '<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</tracking-number></xml-response>', TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
Insert into CLARITY_RESPONSE_LOG
(REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
Values
(8, '<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</tracking-number></xml-response>', TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
Insert into CLARITY_RESPONSE_LOG
(REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
Values
(9, '<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</tracking-number></xml-response>', TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
The corresponding proc for reading the data and writing it to an external file goes something like this:
SET serveroutput ON
DECLARE
vstart NUMBER := 1;
bytelen NUMBER := 32000;
len NUMBER;
my_vr RAW (32000);
x NUMBER;
l_output UTL_FILE.FILE_TYPE;
BEGIN
-- define output directory
l_output :=
UTL_FILE.FOPEN ('CWFSTORE_RESPONCE_XML', 'extract500.txt', 'wb', 32760);
vstart := 1;
bytelen := 32000;
---get the Blob locator
FOR rec IN (SELECT response_file vblob
FROM clarity_response_log
WHERE TRUNC (date_crated) = TRUNC (SYSDATE - 1))
LOOP
--get length of the blob
len := DBMS_LOB.getlength (rec.vblob);
DBMS_OUTPUT.PUT_LINE (len);
x := len;
---- If small enough for a single write
IF len < 32760
THEN
UTL_FILE.put_raw (l_output, rec.vblob);
UTL_FILE.FFLUSH (l_output);
ELSE
-------- write in pieces
vstart := 1;
bytelen := 32000;  -- reset the chunk size; it may have been reduced while writing the previous LOB
WHILE vstart < len AND bytelen > 0
LOOP
DBMS_LOB.READ (rec.vblob, bytelen, vstart, my_vr);
UTL_FILE.put_raw (l_output, my_vr);
UTL_FILE.FFLUSH (l_output);
---------------- set the start position for the next cut
vstart := vstart + bytelen;
---------- set the end position if less than 32000 bytes
x := x - bytelen;
IF x < 32000
THEN
bytelen := x;
END IF;
UTL_FILE.NEW_LINE (l_output);
END LOOP;
----------------- --- UTL_FILE.NEW_LINE(l_output);
END IF;
END LOOP;
UTL_FILE.FCLOSE (l_output);
END;
The above code works well, and all the records (xml contents) are written adjacent to each other, but each record must be written on a new line, or there must be a blank line between any two records.
the output which I get is as follows; all the xml data comes on a single line:
<?xml version="1.0" encoding="ISO-8859-1"?><emp><empno>7369</empno><ename>James</ename><job>Manager</job><salary>1000</salary></emp><?xml version="1.0" encoding="ISO-8859-1"?><emp><empno>7370</empno><ename>charles</ename><job>President</job><salary>500</salary></emp>
But the content written to the external file has to be something like this:
<?xml version="1.0" encoding="ISO-8859-1"?>
<emp>
<empno>7369</empno>
<ename>James</ename>
<job>Manager</job>
<salary>1000</salary>
</emp>
<?xml version="1.0" encoding="ISO-8859-1"?>
<emp>
<empno>7370</empno>
<ename>charles</ename>
<job>President</job>
<salary>500</salary>
</emp>
Please advice.
What was wrong with the previous answers given on your other thread:
Export Blob data to text file(-29285-ORA-29285: file write error)
If there's a continuing issue, stay with the same thread; don't just ask the same question again and again - it really Pi**es people off and causes confusion, as not everyone will be familiar with what answers you've already had. You're just wasting people's time by doing that.
As already mentioned before, convert your BLOB to a CLOB and then to XMLTYPE where it can be treated as XML and written out to file in a variety of ways including the way I showed you on the other thread.
You really seem to be struggling to get the worst possible way to work. -
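For what it's worth, the line-separation half of the original question is just pretty-printing: once each record is parsed as XML it can be re-serialised with indentation and newlines. A language-neutral illustration (Python's xml.dom.minidom, with made-up records; in PL/SQL the equivalent route is through XMLTYPE, as suggested above):

```python
import xml.dom.minidom

def pretty(record: str) -> str:
    # parse one XML record and re-serialise it, one element per line
    return xml.dom.minidom.parseString(record).toprettyxml(indent="  ")

records = [
    '<?xml version="1.0"?><emp><empno>7369</empno><ename>James</ename></emp>',
    '<?xml version="1.0"?><emp><empno>7370</empno><ename>charles</ename></emp>',
]
for rec in records:
    print(pretty(rec))  # each record now spans several lines instead of one
```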
Updating record in a data block based on view in oracle forms
hi all ,
We have two data blocks in our custom oracle form. The first data block is for search criteria, provided with buttons 'GO' and 'ADD ROW', and the second data block is based on a view that fetches records when the user clicks GO, based on the criteria specified in the above block. The below block contains one SAVE button too.
We have a requirement when GO button is pressed and corresponding records are shown in the below block, user should be able to edit the record. Want to know how to make it editable?
Help appreciated....!!!
Your view is based on how many tables, and does it include all NOT NULL fields from all tables?
-
Problem In Update Statement In Multiple Record Data Block
Hi Friends,
I have a problem with the update statement for updating records in a multiple-record data block.
I have two data blocks: the master block is a single-record block and the 2nd data block is a multiple-record data block.
I am inserting fields like category and post_no for a particular job in the single-record block.
Now in the second multiple-record data block, i am inserting multiple records for the above fields, like the no. of employees working in the position.
There is no problem with the INSERT statement as it is inserting all records. But whenever i want to update a particular record (in the multiple block) of an employee for that category and post_no,
then it updates all the records.
my code is Bellow,
IF v_count <> 0 THEN
LOOP
IF :SYSTEM.last_record <> 'TRUE' THEN
UPDATE post_history
SET idcode = :POST_HISTORY_MULTIPLE.idcode,
joining_post_dt = :POST_HISTORY_MULTIPLE.joining_post_dt,
leaving_post_dt = :POST_HISTORY_MULTIPLE.leaving_post_dt,
entry_gp_stage = :POST_HISTORY_MULTIPLE.entry_gp_stage
WHERE post_no = :POST_HISTORY_SINGLE.post_no
AND category = :POST_HISTORY_SINGLE.category
AND roster_no = :POST_HISTORY_SINGLE.roster_no;
AND idcode = :POST_HISTORY_MULTIPLE.idcode;
IF SQL%NOTFOUND THEN
INSERT INTO post_history(post_no,roster_no,category,idcode,joining_post_dt,leaving_post_dt,entry_gp_stage)
VALUES(g_post_no, g_roster_no, g_category, :POST_HISTORY_MULTIPLE.idcode, :POST_HISTORY_MULTIPLE.joining_post_dt,
:POST_HISTORY_MULTIPLE.leaving_post_dt,:POST_HISTORY_MULTIPLE.entry_gp_stage);
END IF;
next_record;
ELSIF :SYSTEM.last_record = 'TRUE' THEN
UPDATE post_history
SET idcode = :POST_HISTORY_MULTIPLE.idcode,
joining_post_dt = :POST_HISTORY_MULTIPLE.joining_post_dt,
leaving_post_dt = :POST_HISTORY_MULTIPLE.leaving_post_dt,
entry_gp_stage = :POST_HISTORY_MULTIPLE.entry_gp_stage
WHERE post_no = :POST_HISTORY_SINGLE.post_no
AND category = :POST_HISTORY_SINGLE.category
AND roster_no = :POST_HISTORY_SINGLE.roster_no;
AND idcode = :POST_HISTORY_MULTIPLE.idcode;
IF SQL%NOTFOUND THEN
INSERT INTO post_history(post_no,roster_no,category,idcode,joining_post_dt,leaving_post_dt,entry_gp_stage)
VALUES (g_post_no,g_roster_no,g_category,:POST_HISTORY_MULTIPLE.idcode,
:POST_HISTORY_MULTIPLE.joining_post_dt,:POST_HISTORY_MULTIPLE.leaving_post_dt,:POST_HISTORY_MULTIPLE.entry_gp_stage);
END IF;
EXIT;
END IF;
END LOOP;
SET_ALERT_PROPERTY('user_alert', ALERT_MESSAGE_TEXT, 'Record updated successfully');
v_button_no := SHOW_ALERT('user_alert');
FORMS_DDL('COMMIT');
CLEAR_FORM(no_validate);
Please Guide me
Thanks in advance.
UPDATE post_history
SET idcode = :POST_HISTORY_MULTIPLE.idcode,
joining_post_dt = :POST_HISTORY_MULTIPLE.joining_post_dt,
leaving_post_dt = :POST_HISTORY_MULTIPLE.leaving_post_dt,
entry_gp_stage = :POST_HISTORY_MULTIPLE.entry_gp_stage
WHERE post_no = :POST_HISTORY_SINGLE.post_no
AND category = :POST_HISTORY_SINGLE.category
AND roster_no = :POST_HISTORY_SINGLE.roster_no;
AND idcode = :POST_HISTORY_MULTIPLE.idcode;
Look at the stray semicolon after the roster_no condition: it terminates the UPDATE statement right there, so the "AND idcode = ..." predicate never becomes part of the WHERE clause (as posted, the dangling AND would not even compile). The statement therefore matches every employee row for that post_no, category and roster_no, which is why all the records get updated. Remove the semicolon after the roster_no line in both UPDATE statements so that idcode is included in the WHERE clause.
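A corrected version of the statement, as a sketch (column names taken from the post; the stray semicolon is removed so the idcode predicate is actually part of the WHERE clause, and the SET of idcode is dropped since the same value is already used to locate the row):

```sql
UPDATE post_history
   SET joining_post_dt = :POST_HISTORY_MULTIPLE.joining_post_dt,
       leaving_post_dt = :POST_HISTORY_MULTIPLE.leaving_post_dt,
       entry_gp_stage  = :POST_HISTORY_MULTIPLE.entry_gp_stage
 WHERE post_no   = :POST_HISTORY_SINGLE.post_no
   AND category  = :POST_HISTORY_SINGLE.category
   AND roster_no = :POST_HISTORY_SINGLE.roster_no
   AND idcode    = :POST_HISTORY_MULTIPLE.idcode;  -- now restricts to one employee
```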
If it is specific to Oracle Forms, you may get better help in the Forms section. -
We have a legacy Forms and Reports application(~750 items combined). I suspect the applications were developed initially under 6i, but they work quite well under 10GR2, and is testing well under 11GR2 (which we plan on migrating to soon). We ran into an issue the other day for Data Query Blocks, and wanted your expert opinion on it.
Due to a change in business requirements, we recently had to increase the length of a database field (from 20 to 30). We went into Forms Developer, made the appropriate changes, and recompiled them all successfully. Everything *seems* to work fine, but we noticed we get failures on any item that tries to utilize that extra 10 characters. Apparently this is because the Query Data Block seems to use a cached/stale copy of the database field sizes, and still expects a size of 20! What we realized now is that we have to go into each of those forms, and REFRESH the Data Block (through the Data Block Wizard) for every one of them.
My question is this: is there any way to force the Form to refresh those Query Data Blocks without going into each one and REFRESHING it? We've tried compiling through Forms Developer and through batch compiles, and nothing seems to get those cached copies to update. I'd really prefer to tackle this issue once and for all since we are prepping for migration testing. Frankly, I suspect the tables were tweaked a good bit over the years since development (before I ever got here), and if those Data Blocks weren't REFRESHed properly, a great many of the Forms will be suffering this same cached/stale view problem.
Appreciate your thoughts,
Dave
Q_Stephenson,
That's an excellent suggestion. I have seen vague hints and allegations of Forms manipulation using that method, but never felt comfortable with my overall knowledge(*) of Forms to look at it that deep. I think it's time to clear some space and dig into the topic for a better look.
Thank you,
Dave
(*) The majority of my programming background is in Java; it's only been recently that I've had to really dig deep into Forms. -
How to Save Multiple Records In Data Block
Hi All,
I Have Two Blocks --> Control Block,Database Block
Any idea how to save multiple records in the data block when the user changes data in the control block?
Thanks For Your Help
Sa
Now I have to use each record of the control block (ctl_blk) as a WHERE condition in the database block (dat_blk) and display 10 records from the database table.>
Do you want this coordination to be automatic/synchronized, or only when the user clicks a button or something else to signal the coordination? Your answer here will dictate which trigger to put your code in.
As to the coordination part: as the user selects a record in the Control Block (CB), you will need to take the key information and modify the Data Block's (DB) DEFAULT_WHERE block property. The logical place to put this code is the CB When-New-Record-Instance (WNRI) trigger. Your code will look something like the following:
/* Sample WNRI trigger */
/* This sample assumes you do not have a default value in the block's WHERE property */
DECLARE
v_tmp_dw VARCHAR2(250);
BEGIN
v_tmp_dw := ' DB_Key_Column1 = '||:CONTROL.Key_Column1||' AND DB_Key_Column2 = '||:CONTROL.Key_Column_2;
Set_Block_Property('DATA_BLOCK', DEFAULT_WHERE, v_tmp_dw);
/* If you want auto coordination to occur, do the following */
Go_Block('DATA_BLOCK');
Execute_Query;
/* Now, return to the Control Block */
Go_Block('CONTROL_BLOCK');
END;
The Control block items are assigned values at form level (Key-Exeqry). If your data block is populated from a single table, it would be better to create a Master-Detail relationship (as Abdetu describes).
Hope this helps,
Craig B-)
If someone's response is helpful or correct, please mark it accordingly.