Which CKM is used for moving data from Oracle to a delimited file?
Hi All,
Please let me know which CKM is used for moving data from Oracle to a delimited file.
Also, is there a need to define each column beforehand in the target datastore? Can't ODI take it from the Oracle table itself?
Addy,
A CKM is a Check KM, which is used to validate data and log errors. It is not going to assist you in data movement. You will need an LKM SQL to File Append, as answered in another thread.
Assuming that you have a one-to-one mapping, to make things simpler you can duplicate the Oracle-based model and create a file-based model from the copy. This will carry over all the column definitions from the Oracle-based model.
Alternatively, you can also use the ODI tool OdiSqlUnload to dump the data to a file, for example as sketched below.
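A rough OdiSqlUnload invocation (as used in an ODI package or procedure step) might look like this; the connection details, file path and query are placeholders, -PASS expects an ODI-encoded password, and the exact option names should be checked against the ODI Tools Reference for your release:
OdiSqlUnload -FILE=/tmp/emp.csv -DRIVER=oracle.jdbc.driver.OracleDriver
-URL=jdbc:oracle:thin:@myhost:1521:orcl -USER=scott -PASS=<encoded_password>
-FILE_FORMAT=VARIABLE -FIELD_SEP=, -DATE_FORMAT=yyyy-MM-dd
select empno, ename, hiredate from scott.emp
The SELECT statement on the last line is the query whose result set is written to the file.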
HTH
Similar Messages
-
Which adapter to use to download data from site dynamically except file type
hi,
I need to download data dynamically from a secure website to the PI server, without using the file/FTP adapter, and send it to a folder using PI.
(At the receiver end I will use the file adapter; please suggest something for the sender side.)
I will pass a user name and password to that site through PI.
Which adapter should I use on the sender side?
At the same time, I need to call a predefined RFC to trigger a function module in the SAP system.
How to do that?
Please help me out.
Response will be appreciated.
Thanks,
Jaideep
Hi,
add "&sap-client=<client>&sap-user=<loginId>&sap-password=<password>" to your wsdl
like http://server:8000/sap/bc/srt/rfc/sap/webservice _ name?wsdl&sap-client=100&sap-user=andre&sap-password=123456
Regards,
Chetan Ahuja -
Acquire data from a tab delimited file using a popup dialog object on a stamp
I am trying to import data from a tab-delimited file using a popup dialog object on a stamp. I have purchased the book by Thom Parker, All About PDF Stamps in Acrobat and Paperless Workflows, and have been working through the examples in the appendix.
My problem is understanding how to bring the data into the dialog object from the file.
I don't want to plagiarize his book, so I am electing at this time not to show my code. The script is reading the file, just not bringing in the records from the file so that I can select which line to import into the stamp.
I have typed in the code exactly as the book describes, but when the popup dialog object is selected, there is nothing in the drop-down. When I click OK, the first record is put on the stamp, except for the fields that I want to appear in the dialog object popup box.
I have searched the forums and also the JavaScript reference. There are examples of the popup dialog object, but none of them show how to import the data from a file; they only show the list items being typed in directly.
Any help would be greatly appreciated! I have been trying to work on this for several months now.
Karl
Thank you for getting back with me!
In answer to your questions:
1. Your trusted function is not a trusted function. Did you put this function into a folder-level script so that it will get executed at system startup?
Yes--I saved the script as a .js file and put it in the following path (I have Acrobat XI Pro for Windows):
C:\Documents and Settings\tjohnson\Application Data\Adobe\Acrobat\Privileged\11.0\JavaScripts\GetTabData.js
2. The script cannot find your tab-delimited data file, or it cannot extract the data. Did you add the data file in the correct location? The location from the script in the book would be c:\mydata\Contacts.txt.
Yes--the file is in the same path as in the book.
Below is my code that references the file:
var cPath = "/c/mydata/Contacts.txt";
The slashes in the book go in the direction of the text above--should they go in the direction that you have in your question?
Also, the name and email address need to be separated by one Tab character.
They are.
3. The fields need to be named the same way as the columns in the data file (the two names are in the first line of the file).
My headings are RevByFromTab and EmailFromTab--which match the names of the two fields on the stamp.
So, check that you are not getting any errors in the JavaScript console (Ctrl-J or Cmd-J), and verify that the tab-delimited file is in the correct location.
When I run the script on the stamp from the JavaScript console, it says:
TypeError: event.source is null
17:Console:Exec
undefined
When I place the stamp on the page, the popup box appears, but when I click on the down arrow, there is nothing listed. When I click OK, RevByFromTab is populated with the first item in the file, but the EmailFromTab field says undefined.
Thank you
Message was edited by: tdjohnson7700 -
Extract data from Oracle into an Excel file
Hi,
I have a requirement wherein I need to extract data from Oracle into an Excel file, and the Excel worksheet name should be "Data".
For example: Excel file name "AR Data_DDMMYY" and Excel worksheet name "Data".
I have used the UTL_FILE API to extract the tab-delimited data, which can be opened in Excel, but it is not exactly an Excel file, as the worksheet name is the same as the file name.
I tried using utl_file.fcopy and frename.
Is there any way to do this using PLSQL?
select * from v$version;
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
"CORE 10.2.0.5.0 Production"
TNS for HPUX: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
Sample Code:
declare
cursor c is
select * from scott.emp;
v varchar2(100);
f utl_file.file_type;
file_name varchar2(100) := 'AR Data.xls';
dir varchar2(50) := 'CESDIR191710';
--select * from dba_directories
begin
f := utl_file.fopen(dir, file_name, 'W');
v := 'EMPNO'||chr(9)||'ENAME'||chr(9)||'JOB'||chr(9)||'SAL'||chr(9)||'HIREDATE'||chr(9)||'DEPTNO';
utl_file.put_line(f, v);
for i in c
loop
v := i.empno||chr(9)||i.ename||chr(9)||i.job||chr(9)||i.sal||chr(9)||i.hiredate||chr(9)||i.deptno;
utl_file.put_line(f, v);
end loop;
utl_file.fclose(f);
--utl_file.frename(dir, file_name, dir, replace(file_name, '.xls', '_')||to_char(sysdate, 'MMDDYY')||'.xls', false);
utl_file.fcopy(dir, file_name, dir, replace(file_name, '.xls', '_')||to_char(sysdate, 'MMDDYY')||'.xls');
end;
Thanks
Imran
Imran Soudagar wrote:
Hi,
I was able to generate an Excel 2007 file with the data using the package from the link below:
http://technology.amis.nl/2011/02/19/create-an-excel-file-with-plsql/
but the requirement is to generate an Excel 2003 file.
I tried changing the .xlsx to .xls, and it gives a note while opening the file: "The file you are trying to open, abc.xls, is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now?"
Then you have three options:
1) Stop using Anton's package and find another one that supports the old, deprecated Excel 2003 format.
2) Write your own package to produce an Excel file.
3) Upgrade your version of Excel to a recent version.
I tried the programs from other links on the forum, but I am still getting this message. The client does not want this message to be displayed, as the Excel file works as an input to another system.
Can anyone help me with the issue?
Also, is it true that the programmatically generated Excel file is actually an XML file which is renamed to .xls, and hence it shows the message while opening such files?
Yes, Excel supports several different formats for its files. By default, if you save an XLS file from Excel, it writes it out in a Microsoft proprietary binary format, which you would be hard pushed to replicate easily from PL/SQL. Excel also has the ability to save its files in XML format, which is more readable and easier to produce programmatically, whilst still allowing you to have multiple sheets, formulas and formatting included. That's the format that most people who need formatting and multiple sheets opt for when programmatically generating their data as Excel workbooks. (There's also a SYLK format that people used to use before that, but it's not as flexible.)
If you want to write your own, the easiest thing to do is to start with a blank workbook in Excel, put in your basic requirements (e.g. a couple of named sheets, some data in different formats such as number, date and text, and different formatting options), save that file in XML format from Excel, and then open up the file using Notepad/WordPad to look at the structure. There'll be a whole load of redundant rubbish Microsoft puts in there, but you should be able to figure out the basic structure of XML required to give you what you want.
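To make that concrete, here is a minimal, untested sketch that writes an Excel 2003 SpreadsheetML file with a worksheet named "Data" via UTL_FILE (it reuses the directory object from the sample code above; the column list is trimmed for brevity). Note that Excel 2007 and later may still show the extension warning when XML content is saved with an .xls extension:
declare
  f utl_file.file_type;
begin
  f := utl_file.fopen('CESDIR191710', 'AR Data.xls', 'W');
  utl_file.put_line(f, '<?xml version="1.0"?>');
  utl_file.put_line(f, '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"');
  utl_file.put_line(f, ' xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">');
  utl_file.put_line(f, '<Worksheet ss:Name="Data">');  -- the named sheet
  utl_file.put_line(f, '<Table>');
  for i in (select empno, ename, sal from scott.emp) loop
    utl_file.put_line(f, '<Row>');
    utl_file.put_line(f, '<Cell><Data ss:Type="Number">'||i.empno||'</Data></Cell>');
    utl_file.put_line(f, '<Cell><Data ss:Type="String">'||i.ename||'</Data></Cell>');
    utl_file.put_line(f, '<Cell><Data ss:Type="Number">'||i.sal||'</Data></Cell>');
    utl_file.put_line(f, '</Row>');
  end loop;
  utl_file.put_line(f, '</Table>');
  utl_file.put_line(f, '</Worksheet>');
  utl_file.put_line(f, '</Workbook>');
  utl_file.fclose(f);
end;
-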
Program to upload data from a tab-delimited file ...
I have to upload data from a tab-delimited file with the following fields into a database table (ZCBU) with the same fields:
CBU (parent)
KUNNR (child)
ERDAT (effective from)
MANDT (client)
SFID (salesforce ID)
AEDAT (effective to)
AENAM (assigned by).
This file can be of type PC (txt) or UNIX.
Please tell me how to do this for both types of files.
Hi,
DATA: bdcdata LIKE bdcdata OCCURS 0 WITH HEADER LINE.
DATA: xfile TYPE string.
DATA: BEGIN OF itab OCCURS 0,
empno TYPE zmemp-empno,
name TYPE zmemp-first_name,
last TYPE zmemp-last_name,
comp TYPE zmemp-comp,
place TYPE zmemp-place,
END OF itab.
PARAMETERS: p_file TYPE rlgrap-filename OBLIGATORY.
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
* Form to get the file path of legacy data stored on the presentation server
PERFORM get_file_path.
START-OF-SELECTION.
MOVE p_file TO xfile.
* to get the data from the excel sheet into an internal table
PERFORM get_data.
LOOP AT itab .
REFRESH bdcdata.
PERFORM bdc_dynpro USING 'ZM_EMPLOYEE' '9001'.
PERFORM bdc_field USING 'BDC_CURSOR'
'S9001_EMPNO'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=CREA'.
PERFORM bdc_field USING 'S9001_EMPNO'
itab-empno.
PERFORM bdc_dynpro USING 'ZM_EMPLOYEE' '9002'.
PERFORM bdc_field USING 'BDC_CURSOR'
'S9002_PLACE'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=SAVE'.
PERFORM bdc_field USING 'S9002_EMPNO'
itab-empno.
PERFORM bdc_field USING 'S9002_FIRST_NAME'
itab-name.
PERFORM bdc_field USING 'S9002_LAST_NAME'
itab-last.
PERFORM bdc_field USING 'S9002_COMP'
itab-comp.
PERFORM bdc_field USING 'S9002_PLACE'
itab-place.
PERFORM bdc_dynpro USING 'ZM_EMPLOYEE' '9001'.
PERFORM bdc_field USING 'BDC_CURSOR'
'S9001_EMPNO'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=BACK'.
CALL TRANSACTION 'ZMEMP'
USING bdcdata
UPDATE 'A'
MODE 'N'.
ENDLOOP.
* Start new screen *
FORM bdc_dynpro USING program dynpro.
CLEAR bdcdata.
bdcdata-program = program.
bdcdata-dynpro = dynpro.
bdcdata-dynbegin = 'X'.
APPEND bdcdata.
ENDFORM. "BDC_DYNPRO
* Insert field *
FORM bdc_field USING fnam fval.
CLEAR bdcdata.
bdcdata-fnam = fnam.
bdcdata-fval = fval.
APPEND bdcdata.
ENDFORM. "BDC_FIELD
*& Form get_file_path
FORM get_file_path .
CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
CHANGING
file_name = p_file.
ENDFORM. " get_file_path
*& Form get_data
FORM get_data .
DATA : lines1 TYPE i.
MOVE p_file TO xfile.
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = xfile
filetype = 'ASC'
has_field_separator = 'X'
TABLES
data_tab = itab.
DESCRIBE TABLE itab LINES lines1.
WRITE : / lines1 , 'REcords uploaded' .
ENDFORM. " get_data
Regards,
Nihar Swain, -
SOS!!!!----Error for Loading data from Oracle 11g to Essbase using ODI
Hi all.
I want to load data from an Oracle database to Essbase using ODI.
I configured the physical and logical Hyperion Essbase connections successfully in Topology Manager, and got the Essbase structure of the BASIC app DEMO.
The problems are:
1. When I try to view data by right-clicking on the Essbase table, I get:
java.sql.SQLException: Driver must be specified
at com.sunopsis.sql.SnpsConnection.a(SnpsConnection.java)
at com.sunopsis.sql.SnpsConnection.testConnection(SnpsConnection.java)
at com.sunopsis.sql.SnpsConnection.testConnection(SnpsConnection.java)
at com.sunopsis.graphical.frame.b.jc.bE(jc.java)
at com.sunopsis.graphical.frame.bo.bA(bo.java)
at com.sunopsis.graphical.frame.b.ja.dl(ja.java)
at com.sunopsis.graphical.frame.b.ja.<init>(ja.java)
at com.sunopsis.graphical.frame.b.jc.<init>(jc.java)
I got an answer from an Oracle supporter: it's OK, just ignore it. Then the second problem appeared.
2. I created an interface between the Oracle database and Essbase, checked the option "Staging area different from target" (meaning the staging area is created on the Oracle database), and used IKM SQL to Hyperion Essbase (METADATA). Executing this interface gives:
org.apache.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 61, in ?
com.hyperion.odi.essbase.ODIEssbaseException: Invalid value specified [RULES_FILE] for Load option [null]
at com.hyperion.odi.essbase.ODIEssbaseMetaWriter.validateLoadOptions(Unknown Source)
at com.hyperion.odi.essbase.AbstractEssbaseWriter.beginLoad(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
at org.python.core.PyMethod.__call__(PyMethod.java)
at org.python.core.PyObject.__call__(PyObject.java)
at org.python.core.PyInstance.invoke(PyInstance.java)
at org.python.pycode._pyx1.f$0(<string>:61)
at org.python.pycode._pyx1.call_function(<string>)
at org.python.core.PyTableCode.call(PyTableCode.java)
at org.python.core.PyCode.call(PyCode.java)
at org.python.core.Py.runCode(Py.java)
at org.python.core.Py.exec(Py.java)
I'm very confused by it. Can anybody give me a solution or related docs?
Ethan.
Hi Ethan,
You always need a driver to be present inside your <ODI installation directory>\drivers folder, for both the target and the source. Because ojdbc14.jar, the driver for Oracle, is already present inside the drivers folder, we don't have to do anything for it. But for Essbase you need a driver. If you haven't already done this, try it.
Hope it helps...
Regards
Susane -
Error loading data from Oracle to a flat file in Oracle Data Integrator
Hi All,
I am trying to load data from an Oracle database to a flat file, but I am getting an error at the "Insert new row" (integration) step.
The error message is:
7000 : null : com.sunopsis.jdbc.driver.file.b.i
com.sunopsis.jdbc.driver.file.b.i
at com.sunopsis.jdbc.driver.file.b.d.setDate(d.java)
at com.sunopsis.sql.SnpsQuery.updateExecStatement(SnpsQuery.java)
at com.sunopsis.sql.SnpsQuery.addBatch(SnpsQuery.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
at com.sunopsis.dwg.cmd.e.i(e.java)
at com.sunopsis.dwg.cmd.g.y(g.java)
at com.sunopsis.dwg.cmd.e.run(e.java)
at java.lang.Thread.run(Unknown Source)
I am not able to find out the exact reason. Can anyone help me with that?
Yes... I found the mistake I was making. Actually, the target file data types should be String, and I had defined them differently for each column.
When I changed the data types to String, the data got loaded to the target.
Thanks for your reply. -
I am a z/OS system programmer; our company uses IMS as its main OLTP database. We are investigating moving data off the mainframe for data warehousing and online fraud detection. One option is using IBM InfoSphere CDC and DB2; another option is using IMS Connect and writing our own program. I am wondering what the Oracle solution for this kind of issue is.
I am not an Oracle technician, but I googled and found that Oracle has some products like Oracle Legacy Adapters, OracleAS CDC Adapter and Oracle Connect on z/OS; however, I didn't find them on the Oracle site (https://edelivery.oracle.com/), so I don't know whether these products are deprecated or not.
I would very much appreciate any help or guidance you are able to give me.
Thank you for responding.
I've considered dumping the data into a flat file and using SQL*Loader to import as you suggest but this would require some scripting on a per-table basis. Again: all I want to do is to copy the contents of a table from one database to another. I do not think I should have to resort to creating my own dump and load scripts in order to do that. However I agree with you that this type of solution may be my final solution.
I've tried the db link solution. It was just as slow as the 'imp' solution for some reason; I don't know why. The tables are rather large (3 tables of a few GB each) and therefore require intermediate commits when loaded; otherwise the rollback segment will run out of space. So the 'db link solution' is really a PL/SQL script with a commit for each x records, as sketched below.
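For reference, that commit-batching pattern looks roughly like this (a minimal sketch only; the table and database link names are invented, and the record-style INSERT needs Oracle 9.2 or later):
declare
  cnt pls_integer := 0;
begin
  for r in (select * from big_table@source_db) loop  -- pull rows over the db link
    insert into big_table values r;
    cnt := cnt + 1;
    if mod(cnt, 10000) = 0 then
      commit;  -- intermediate commit keeps rollback/undo usage bounded
    end if;
  end loop;
  commit;
end;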
I think Oracle is making it a bit difficult for me to copy the contents of a table from one database to another and to do it efficiently. Perhaps I'm missing something here? -
Sample pgm for moving data from table control to internal table
Hi Experts,
I am a newbie to ABAP. I don't have good material on table controls. I would appreciate it if you could direct me to a good source of knowledge on table controls.
The problem at hand: I am trying to move data from a table control (in Screen Painter, input and output mode) to an ITAB, but couldn't. A sample program, if possible:
<b>Modify ITAB index TC-Current_Line .</b>
The above statement is not inserting new lines to ITAB . Help me!
Thanks for your timehi,
do like this...
<b>PROCESS AFTER INPUT.</b>
*&SPWIZARD: PAI FLOW LOGIC FOR TABLECONTROL 'TAB1'
LOOP AT itab_det.
CHAIN.
FIELD itab_det-comp_code.
FIELD itab_det-bill_no.
FIELD itab_det-bill_date.
FIELD itab_det-vend_cust_code.
FIELD itab_det-bill_amt.
MODULE <b>tab1_modify</b> ON CHAIN-REQUEST.
ENDCHAIN.
FIELD itab_det-mark
MODULE tab1_mark ON REQUEST.
ENDLOOP.
<b>MODULE tab1_modify INPUT.</b>
APPEND itab_det.
<b>ENDMODULE. "TAB1_MODIFY INPUT</b> -
How to see data for particular date from a alert log file
Hi Experts,
I would like to know how I can see data for a particular date from alert_db.log in a Unix environment. I'm using Oracle 9i on Unix.
Right now I'm using tail -500 alert_db.log > alert.txt and then viewing the whole thing. But is there an easier way to look at a particular date or time?
Thanks
Shaan
Hi Jaffar,
Here I have to pass the exact date and time; is there any way to see records for, let's say, Nov 23 2007? Because when I used this:
tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
it's not working. Here is the sample log file:
Mon Nov 26 21:42:43 2007
Thread 1 advanced to log sequence 138
Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
Mon Nov 26 21:42:43 2007
ARCH: Evaluating archive log 1 thread 1 sequence 137
Mon Nov 26 21:42:43 2007
ARC1: Evaluating archive log 1 thread 1 sequence 137
ARC1: Unable to archive log 1 thread 1 sequence 137
Log actively being archived by another process
Mon Nov 26 21:42:43 2007
ARCH: Beginning to archive log 1 thread 1 sequence 137
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137
.dbf'
ARCH: Completed archiving log 1 thread 1 sequence 137
Mon Nov 26 21:42:44 2007
Thread 1 advanced to log sequence 139
Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
Mon Nov 26 21:42:44 2007
ARC0: Evaluating archive log 3 thread 1 sequence 138
ARC0: Beginning to archive log 3 thread 1 sequence 138
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138
.dbf'
Mon Nov 26 21:42:44 2007
ARCH: Evaluating archive log 3 thread 1 sequence 138
ARCH: Unable to archive log 3 thread 1 sequence 138
Log actively being archived by another process
Mon Nov 26 21:42:45 2007
ARC0: Completed archiving log 3 thread 1 sequence 138
Mon Nov 26 21:45:12 2007
Starting control autobackup
Mon Nov 26 21:45:56 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0033'
handle 'c-2861328927-20071126-01'
Clearing standby activation ID 2873610446 (0xab47d0ce)
The primary database controlfile was created using the
'MAXLOGFILES 5' clause.
The resulting standby controlfile will not have enough
available logfile entries to support an adequate number
of standby redo logfiles. Consider re-creating the
primary controlfile using 'MAXLOGFILES 8' (or larger).
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
Tue Nov 27 21:23:50 2007
Starting control autobackup
Tue Nov 27 21:30:49 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0280'
handle 'c-2861328927-20071127-00'
Tue Nov 27 21:30:57 2007
ARC1: Evaluating archive log 2 thread 1 sequence 139
ARC1: Beginning to archive log 2 thread 1 sequence 139
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139
.dbf'
Tue Nov 27 21:30:57 2007
Thread 1 advanced to log sequence 140
Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
Tue Nov 27 21:30:57 2007
ARCH: Evaluating archive log 2 thread 1 sequence 139
ARCH: Unable to archive log 2 thread 1 sequence 139
Log actively being archived by another process
Tue Nov 27 21:30:58 2007
ARC1: Completed archiving log 2 thread 1 sequence 139
Tue Nov 27 21:30:58 2007
Thread 1 advanced to log sequence 141
Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
Tue Nov 27 21:30:58 2007
ARCH: Evaluating archive log 1 thread 1 sequence 140
ARCH: Beginning to archive log 1 thread 1 sequence 140
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140
.dbf'
Tue Nov 27 21:30:58 2007
ARC1: Evaluating archive log 1 thread 1 sequence 140
ARC1: Unable to archive log 1 thread 1 sequence 140
Log actively being archived by another process
Tue Nov 27 21:30:58 2007
ARCH: Completed archiving log 1 thread 1 sequence 140
Tue Nov 27 21:33:16 2007
Starting control autobackup
Tue Nov 27 21:34:29 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0205'
handle 'c-2861328927-20071127-01'
Clearing standby activation ID 2873610446 (0xab47d0ce)
The primary database controlfile was created using the
'MAXLOGFILES 5' clause.
The resulting standby controlfile will not have enough
available logfile entries to support an adequate number
of standby redo logfiles. Consider re-creating the
primary controlfile using 'MAXLOGFILES 8' (or larger).
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
Wed Nov 28 21:43:31 2007
Starting control autobackup
Wed Nov 28 21:43:59 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0202'
handle 'c-2861328927-20071128-00'
Wed Nov 28 21:44:08 2007
Thread 1 advanced to log sequence 142
Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
Wed Nov 28 21:44:08 2007
ARCH: Evaluating archive log 3 thread 1 sequence 141
ARCH: Beginning to archive log 3 thread 1 sequence 141
Wed Nov 28 21:44:08 2007
ARC1: Evaluating archive log 3 thread 1 sequence 141
ARC1: Unable to archive log 3 thread 1 sequence 141
Log actively being archived by another process
Wed Nov 28 21:44:08 2007
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141
.dbf'
Wed Nov 28 21:44:08 2007
ARC0: Evaluating archive log 3 thread 1 sequence 141
ARC0: Unable to archive log 3 thread 1 sequence 141
Log actively being archived by another process
Wed Nov 28 21:44:08 2007
ARCH: Completed archiving log 3 thread 1 sequence 141
Wed Nov 28 21:44:09 2007
Thread 1 advanced to log sequence 143
Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
Wed Nov 28 21:44:09 2007
ARCH: Evaluating archive log 2 thread 1 sequence 142
ARCH: Beginning to archive log 2 thread 1 sequence 142
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142
.dbf'
Wed Nov 28 21:44:09 2007
ARC0: Evaluating archive log 2 thread 1 sequence 142
ARC0: Unable to archive log 2 thread 1 sequence 142
Log actively being archived by another process
Wed Nov 28 21:44:09 2007
ARCH: Completed archiving log 2 thread 1 sequence 142
Wed Nov 28 21:44:36 2007
Starting control autobackup
Wed Nov 28 21:45:00 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0202'
handle 'c-2861328927-20071128-01'
Clearing standby activation ID 2873610446 (0xab47d0ce)
The primary database controlfile was created using the
'MAXLOGFILES 5' clause.
The resulting standby controlfile will not have enough
available logfile entries to support an adequate number
of standby redo logfiles. Consider re-creating the
primary controlfile using 'MAXLOGFILES 8' (or larger).
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
Thu Nov 29 21:36:44 2007
Starting control autobackup
Thu Nov 29 21:42:53 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0206'
handle 'c-2861328927-20071129-00'
Thu Nov 29 21:43:01 2007
Thread 1 advanced to log sequence 144
Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
Thu Nov 29 21:43:01 2007
ARCH: Evaluating archive log 1 thread 1 sequence 143
ARCH: Beginning to archive log 1 thread 1 sequence 143
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143
.dbf'
Thu Nov 29 21:43:01 2007
ARC1: Evaluating archive log 1 thread 1 sequence 143
ARC1: Unable to archive log 1 thread 1 sequence 143
Log actively being archived by another process
Thu Nov 29 21:43:02 2007
ARCH: Completed archiving log 1 thread 1 sequence 143
Thu Nov 29 21:43:03 2007
Thread 1 advanced to log sequence 145
Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
Thu Nov 29 21:43:03 2007
ARCH: Evaluating archive log 3 thread 1 sequence 144
ARCH: Beginning to archive log 3 thread 1 sequence 144
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144
.dbf'
Thu Nov 29 21:43:03 2007
ARC0: Evaluating archive log 3 thread 1 sequence 144
ARC0: Unable to archive log 3 thread 1 sequence 144
Log actively being archived by another process
Thu Nov 29 21:43:03 2007
ARCH: Completed archiving log 3 thread 1 sequence 144
Thu Nov 29 21:49:00 2007
Starting control autobackup
Thu Nov 29 21:50:14 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0280'
handle 'c-2861328927-20071129-01'
Thanks
Shaan -
Looking for best design approach for moving data from one db to another.
We have a very simple requirement to keep two tables synched up that live in two different databases. There can be up to 20K rows of data we need to synch up (nightly).
The current design:
The BPEL process queries the Source DB, puts the results into memory and inserts into the Target DB. An out-of-memory exception occurs (no surprise).
I am proposing a design change to get the data in 1000-row chunks, something like this:
1. Get next 1000 records from Source DB. (managed through query)
2. Put into memory (OR save to file).
3. Read from memory (OR from a file).
4. Save into Target DB.
Questions:
1. Is this a good approach, and if so, does SOA have any built-in mechanisms to handle this? I would think so, since I believe this is a common problem; we don't want to reinvent the wheel.
2. Is it better to put records into memory or to write them to a file before inserting into the Target DB?
The implementation team told me this would have to be done with Java code, but I would think this would be out-of-the-box functionality. Is that correct?
I am a SOA newby, so please let me know if there is a better approach.
Thank you very much for your valued input.
wildeman
Hi,
After going through your question, the first thing that came to my mind is what the size of the 20K records would be.
If this is going to be huge, then even the 1000-row logic might take significant time to do the transfer, and I think even writing it to a file will not be efficient enough.
If the size is not huge, then your solution might work. But I think you will need to decide on the chunk size based on how well your BPEL process works. Possibly you can try different sizes and test the performance to arrive at an optimal value.
But in case the size is going to be huge, you might want to consider using ETL implementations. Oracle ODI does provide such features out of the box, with high performance.
On the other hand, implementing the logic using the DBAdapter should be more efficient than Java code.
Hope this helps. Please do share your thoughts/suggestions.
Thanks,
Patrick -
Which trigger to use to insert data into a db table in Forms
Hi,
My form currently has a database block based on a table. When I enter data into the form fields and click on the save button, the record is automatically inserted into the database table.
I want to make this a manual insert. I changed the data block to a non-database block. Where should I write the INSERT statement in order to insert data into the table?
Is it the Key-Commit trigger at form level?
Please advise.
Thanks,
Yuvaraaj.
Hi Yuvaraaj,
Insert should happen when we click on the save which is built into the form. In this case, where should I write the insert statement?
Forms' built-in save commits the form data only where the block is based on a database table, not a non-database block.
@2nd reply
You are right. The reason I changed the database block to non-database: currently I have a database block on a form-field canvas which inserts only one record into the table when we click on the standard save button. The requirement was to add a field called CHANNEL which should have multiple values displayed. (I created this CHANNEL field in a separate, non-database block and used the same canvas.) When we insert data in all fields (a single record) and select multiple channels (say A, B and C), then on save, three records should be inserted into the table, looping over the values for each channel. This was the actual requirement, and this is the reason why I am changing the block to a non-database block.
You are talking about two blocks: 1. the master block and 2. the details block named CHANNEL.
You are inserting one record in the master block, then inserting three records named A, B, C for that master record.
Now you want the master record inserted for each A, B, C record. Meaning:
'how are you' -- master record
and you want
'A' - 'how are you'
'B' - 'how are you'
'C' - 'how are you'
or something else?
OK. If you want the master record saved in the database and then want to save the non-database (channel) data into the database, use a POST-INSERT trigger at block level and do the rest there, along the lines of the sketch below.
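As an illustration only, a block-level POST-INSERT trigger along these lines could fan the master record out per channel (the table, block and item names here are invented, and it assumes the selected channels arrive as a comma-separated string):
declare
  v_list varchar2(200) := :ctrl.channels;  -- e.g. 'A,B,C' (hypothetical item)
  v_chan varchar2(30);
begin
  while v_list is not null loop
    v_chan := substr(v_list, 1, instr(v_list || ',', ',') - 1);  -- next channel
    v_list := substr(v_list, instr(v_list || ',', ',') + 1);     -- rest of list
    insert into my_table (master_col, channel)
    values (:blk_master.master_col, v_chan);  -- one row per selected channel
  end loop;
end;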
Hope this helps...
Hamid
Mark correct/helpful to help others get the right answer(s).
Edited by: HamidHelal on Jan 26, 2013 1:20 AM -
ESB not moving data from database to flat file
Using ESB, I created a database adapter/routing service (polling for changed records) and a file adapter to write all records from a database table to a flat file as a series of XML entries. I only get the last record from the database into the flat file. I am using JDev 10.1.3.2 and SOA Suite 10.1.3. I have a 'status' column in the source database table that is set to 'PROCESSED' when a record has been processed. After I get the one XML element, representing the last record in the database, I look at the database table and ALL rows have their 'status' column set to PROCESSED; the field was null before the table was processed.
How do I get ALL of the records into the flat file?
Thanks - Casey
Hi.
Check this:
ESB File Adapter: Append to log file not supported???
Denis -
Which LKM is to be used for data from oracle to file
Hi All
I am new to ODI. Please help me in understanding this tool.
I want to know which LKM is to be used for moving data from Oracle to a comma-separated file, etc.
Hi John,
Thanks a ton... I was able to move ahead. I executed it thinking all the parameters were correct.
But it did not go through, and gave me this error:
936 : 42000 : java.sql.SQLException: ORA-00936: missing expression
java.sql.SQLException: ORA-00936: missing expression
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:639)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:185)
at oracle.jdbc.driver.T4CPreparedStatement.execute_for_describe(T4CPreparedStatement.java:503)
at oracle.jdbc.driver.OracleStatement.execute_maybe_describe(OracleStatement.java:965)
at oracle.jdbc.driver.T4CPreparedStatement.execute_maybe_describe(T4CPreparedStatement.java:535)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1051)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:2984)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3026)
at com.sunopsis.sql.SnpsQuery.executeQuery(SnpsQuery.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
at com.sunopsis.dwg.cmd.e.i(e.java)
at com.sunopsis.dwg.cmd.g.y(g.java)
at com.sunopsis.dwg.cmd.e.run(e.java)
at java.lang.Thread.run(Unknown Source) -
Extracting data from Oracle to a flat file
I'm looking for options to extract data, a table or view at a time, to a text file. Short of writing the SQL to do it, are there any other ideas? I'm dealing with large volumes of data and would like a bulk copy routine.
Thanks in advance for your help.
Is there any script which I can use for pulling data from tables to a flat file and then importing that data into other DBs?
A flat file is adequate only for VARCHAR2, NUMBER, and DATE datatypes.
Other datatypes present a challenge within a text file.
A more robust solution is to use export/import to move data between Oracle DBs. If a plain text unload is enough, see the SQL*Plus sketch below.
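If plain SQL*Plus is acceptable, spooling is the usual quick bulk-unload route (a minimal sketch; the table, columns and file path below are placeholders):
set pagesize 0 linesize 1000 trimspool on feedback off heading off
spool /tmp/emp.csv
select empno || ',' || ename || ',' || to_char(hiredate, 'YYYY-MM-DD') from scott.emp;
spool off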