RE: Table-to-table data load in ODI 11g
All,
I am getting the error below while executing the interface. It is an error, not a warning, as seen in the Operator:
ODI-1228: Task EAM (Integration) fails on the target ORACLE connection DEV2DWH.
Caused By: java.sql.SQLSyntaxErrorException: ORA-00936: missing expression
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:91)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1035)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:953)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1224)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3386)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3467)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1350)
at com.sunopsis.sql.SnpsQuery.executeUpdate(SnpsQuery.java:665)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.executeUpdate(SnpSessTaskSql.java:3218)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execStdOrders(SnpSessTaskSql.java:1785)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java:2805)
at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java:68)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2515)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:534)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:449)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1954)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:322)
at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:224)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:246)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:237)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:794)
at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:114)
at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
at java.lang.Thread.run(Thread.java:662)
Thanks in advance
Hi,
There is probably a syntax error or a missing parameter in the SQL generated against the Oracle tables used in the ODI interface. See below:
ERROR: ORA-00936: missing expression
CAUSE: A required part of a clause or expression has been omitted. For example, a SELECT statement may have been entered without a list of columns or expressions or with an incomplete expression. This message is also issued in cases where a reserved word is misused, as in SELECT TABLE.
ACTION: Check the statement syntax and specify the missing component.
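For illustration, here is a minimal pair of statements showing how the error arises and the fix (table and column names are hypothetical; in ODI the failing SQL is generated from the interface, so an empty or malformed mapping expression on a target column is a common cause):

```sql
-- Raises ORA-00936: the SELECT list is missing its expressions
SELECT FROM emp WHERE deptno = 10;

-- Fixed: supply the missing column list
SELECT empno, ename FROM emp WHERE deptno = 10;
```

In ODI, open the failing task in Operator and inspect the generated code to see exactly where the statement has a hole in it.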
Similar Messages
-
How to update existing table using Data Load from spreadsheet option?
Hi there,
I need to update an existing table, but in the Data Load application, when you select a csv file to upload, it inserts all the data, replacing the existing data. How can I change this?
Let me know,
Thank you.
A.B.A.
And how do you expect your database server to access a local file on your machine?
Is the file accessible from outside your machine say inside a webserver folder so that some DB process can poll on the file ?
Or, is your DB server in the same machine where you have the text file ?
You will have to figure out the file access part before automating user interaction or even auto-refreshing. -
External table and data load order
I'm using the Oracle 10g external table feature to load a text file into the database.
1- What is the default order of loading the text file into the Oracle table?
2- How to ensure or change this default behavior?

1- What is the default order of loading the text file into the Oracle table?
Top (row #1) to bottom.
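As a sketch of that answer (ext_orders and its order_id column are hypothetical names), the load order can be made explicit by reading the external table with an ORDER BY:

```sql
-- By default rows arrive in file order (top to bottom); an ORDER BY
-- in the INSERT ... SELECT makes the load order explicit instead.
INSERT INTO orders
SELECT * FROM ext_orders
ORDER BY order_id;
```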
2- How to ensure or change this default behavior?
Use an ORDER BY clause. -
Issue:
I have SAP BW system and SAP HANA System
SAP BW to SAP HANA connecting through a DB Connection (named HANA)
Whenever I create any Open Hub destination as a DB Table via the DB Connection, the table is created at the HANA schema level ( L_F50800_D ).
I executed the Open Hub service without checking the 'Deleting Data from Table' option.
Data loaded 16 records from BW to HANA, the same.
The second time I executed it again from BW to HANA, 32 records came (it is appending).
Then I executed the Open Hub service with the 'Deleting Data from Table' option checked.
Now I am getting a short dump: DBIF_RSQL_TABLE_KNOWN.
If I run SAP BW system to SAP BW system, it works fine.
Is this option supported through a DB Connection or not?
Please see the attachment along with this discussion and help me resolve this.
From
Santhosh Kumar
Hi Ramanjaneyulu,
First of all thanks for the reply ,
Here the issue is at the OH level (definition level: DESTINATION tab and FIELD DEFINITION).
There is a check box there that I have already selected; that is exactly my issue: even though it is selected,
the deletion is not performed at the target level.
SAP BW - to SAP HANA via DBC connection
1. First time from BW, suppose 16 records: DTP executed, loaded to HANA, 16 the same.
2. Second time executed again from BW: now the HANA side appended, meaning 16+16 = 32.
3. So I selected the check box at the OH level, 'Deleting data from table'.
4. Now I executed the DTP and it throws a short dump: DBIF_RSQL_TABLE_KNOWN.
Now please tell me how to resolve this. Is this option, deleting data from table, applicable for HANA?
Thanks
Santhosh Kumar -
QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES
What are the query performance issues we need to take care of? Please explain and let me know the T-codes. Urgent.
What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes. Urgent.
Will reward full points.
Regards,
Guru
BW back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
Hope it Helps
Chetan
@CP.. -
Data loading: formatting data for timestamp column
Hi All,
I have a table with a timestamp column named created_date. I want to upload data to that table using the data loading page, but there is a problem while uploading: in my csv file the created_date column data is in two different formats, as follows:
09/03/2013 03:33am
09/02/2013 03:24pm
The above data throws an error: ORA-01821: date format not recognized.
In the Data / Table Mapping page, I tried MM/DD/YYYY HH12:MI:SS AM. What format should I use for am and pm?
Please help me to solve....
Thanks in advance
Lakshmi
I solved it by using the format MM/DD/YYYY HH:MIAM.
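That mask can be checked against both sample values directly in SQL before re-running the data load:

```sql
-- The single mask MM/DD/YYYY HH:MIAM parses both the am and the pm rows
SELECT TO_TIMESTAMP('09/03/2013 03:33am', 'MM/DD/YYYY HH:MIAM') FROM dual;
SELECT TO_TIMESTAMP('09/02/2013 03:24pm', 'MM/DD/YYYY HH:MIAM') FROM dual;
```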
Thanks
Lakshmi -
Loading MS Access Table and Data into Oracle
Hi,
I have a few tables in MS Access. I want to create the same table layout in Oracle and populate the data from the MS Access tables into the Oracle tables.
Please let me know if there is a way by which I can create the tables and load the data automatically (through some option or script).
I have Oracle 10g database and its clients.
Thanks in advance,
Rajeev.
You can use Oracle Migration Workbench:
Loading MS Access Table and Data into Oracle
It's very easy to use and good for importing.
regards,
Felipe -
How can I load data into table with SQL*LOADER
How can I load data into a table with SQL*Loader when the column data length is more than 255 bytes?
When a column exceeds 255 bytes, the data cannot be inserted into the table by SQL*Loader.
CREATE TABLE A (
A VARCHAR2 ( 10 ) ,
B VARCHAR2 ( 10 ) ,
C VARCHAR2 ( 10 ) ,
E VARCHAR2 ( 2000 ) );
control file:
load data
append into table A
fields terminated by X'09'
(A , B , C , E )
SQL*LOADER command:
sqlldr test/test control=A_ctl.txt data=A.xls log=b.log
datafile:
column E is more than 255bytes
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
Check this out:
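For reference, SQL*Loader's default datatype for a field is CHAR with a maximum length of 255 bytes, which is why this load fails; declaring the field length explicitly in the control file avoids it. A sketch against the table above:

```sql
load data
append into table A
fields terminated by X'09'
(A, B, C,
 E CHAR(2000)  -- override SQL*Loader's default CHAR(255) field length
)
```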
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96652/ch06.htm#1006961 -
Is it possible to upload few column in table through Apex Data Loading
Hi All,
I have to do an upload into the table from a csv file. I have to fill the table's primary key; the rest I have to load from the user's uploaded file. Is it possible to load only the required columns of the table via data loading and fill the other columns from the backend? Or is there any other way to do this?
Hi,
Your query is not really clear.
>
Is it possible to do the data loading to the table only to required columns and fill the other columns from backend. Or is there any other way to do this?
>
How do you plan to "link" the rows from these 2 sets of data in the "backend"? There has to be a way to have a relation between them.
Regards, -
Comparison of Data Loading techniques - Sql Loader & External Tables
Below are two techniques by which data can be loaded from flat files into Oracle tables.
1) SQL Loader:
a. Place the flat file( .txt or .csv) on the desired Location.
b. Create a control file, e.g. load_data.ctl:
LOAD DATA
INFILE 'mytextfile.txt' -- file containing the table data; specify paths correctly, it could be .csv as well
APPEND -- or TRUNCATE, based on requirement
INTO TABLE oracle_tablename
FIELDS TERMINATED BY ',' -- or the delimiter used in the input file
OPTIONALLY ENCLOSED BY '"'
(field1, field2, field3)
c. Now run the sqlldr utility of Oracle at the OS command prompt:
sqlldr username/password control=load_data.ctl
d. The data can be verified by selecting it from the table:
SELECT * FROM oracle_tablename;
2) External Table:
a. Place the flat file (.txt or .csv) on the desired location.
abc.csv
1,one,first
2,two,second
3,three,third
4,four,fourth
b. Create a directory
create or replace directory ext_dir as '/home/rene/ext_dir'; -- path where the source file is kept
c. After granting appropriate permissions to the user, we can create the external table like below:
create table ext_table_csv (
  i number,
  n varchar2(20),
  m varchar2(20)
)
organization external (
  type oracle_loader
  default directory ext_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
    missing field values are null
  )
  location ('abc.csv')
)
reject limit unlimited;
d. Verify data by selecting it from the external table now
select * from ext_table_csv;
External tables feature is a complement to existing SQL*Loader functionality.
It allows you to –
• Access data in external sources as if it were in a table in the database.
• Merge a flat file with an existing table in one statement.
• Sort a flat file on the way into a table you want compressed nicely
• Do a parallel direct path load -- without splitting up the input file.
Shortcomings:
• External tables are read-only.
• No data manipulation language (DML) operations or index creation is allowed on an external table.
Using Sql Loader You can –
• Load the data from a stored procedure or trigger (insert is not sqlldr)
• Do multi-table inserts
• Flow the data through a pipelined plsql function for cleansing/transformation
Comparison for data loading
To make the loading operation faster, the degree of parallelism can be set to any number, e.g. 4.
So, when you created the external table, the database will divide the file to be read by four processes running in parallel. This parallelism happens automatically, with no additional effort on your part, and is really quite convenient. To parallelize this load using SQL*Loader, you would have had to manually divide your input file into multiple smaller files.
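As a sketch of that parallel load (target_table is a hypothetical name; ext_table_csv is the external table created earlier):

```sql
-- Let the database read the flat file with four parallel processes
ALTER TABLE ext_table_csv PARALLEL 4;

-- Parallel, direct-path insert from the external table into the target
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM ext_table_csv;
COMMIT;
```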
Conclusion:
SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table. However, we can always copy the data from external tables to Oracle tables using DB links.
Please let me know your views on this.
-
Main table data load - UNSPSC field is not loading
I am new to SAP MDM
I have the main table data that includes UNSPSC field. UNSPSC (hierarchy) table is already loaded.
It works fine when I use import manager with field mapping and value mapping. (UNSPSC field value mapping is done).
When I use the import server with the same map to load the main table data with the UNSPSC field (in this case the UNSPSC field value is different, but the UNSPSC lookup table has that value), the UNSPSC field is not loaded, but all other fields are loaded, including images and PDFs, with the new values.
If I go to the import manager, do the value mapping again for the UNSPSC field with the new value, save the map, and then use the import server to load the data, it loads correctly.
My question: when we use the import server, the main table data's UNSPSC code value will be different each time, and it doesn't make sense to go to the import manager, do the value mapping, and save the import map before loading the data again.
What am I missing here? Can anyone help me?
Could anyone clarify this?
Issue: UNSPSC field values should be mapped automatically by the import server while loading the main table.
This issue was resolved yesterday and still works fine with the remote system MDC UNSPSC.
Are there any settings in 'Set MDIS Unmapped Value Handling'? (Right-click on the Product Hierarchy field on the destination side.) By default it is set to 'Add' for both the working remote system and the non-working remote system.
SAP MDM 5.5 SP6 and I am using the standard Product Master repository
I tried this in a different remote system, MDC R/3 & ERP, and it worked sometimes and didn't work later. When it works, during the UNSPSC code field mapping it automatically maps the values also.
On the destination side, the main table is Products, and the [Remote Key] field is displayed.
In the source file I have only 3 fields: Product No, Product Name, and UNSPSC Category. UNSPSC Category is mapped to the destination Product Hierarchy field (lookup hierarchy).
Do I have to map any field, or clone any field and map it to the [Remote Key] field on the destination side? If yes, which field do I have to clone and map to the Remote Key field? Are any other settings necessary? I am not using any matching with this field or any other field.
Steve.
Hi All,
I am facing an issue with APEX 4.2.4, using the Data Load Table concept. In the lookup I used the Where Clause option, but this where clause seems not to be working. Please help me with this.
Hi all,
It looks like this where clause does not filter out the 'N' data. Please help me solve this.
Where Clause in Table Lookups for Data Load
Hello,
In Shared Components I created in Data Load Table. In this Data Load Table I added a Table Lookup. On the page to edit the Table Lookup, there is a field called Where Clause. I tried to add a Where Clause to my Table Lookup in this field but it seems that it has no effect on the Data Load process.
Does someone know how to use this Where Clause field?
Thanks,
Seb
Hi,
I'm having the same problem with the where clause being ignored in the table lookup. Is this a bug, and if so, is there a workaround?
Thanks in advance -
Insert data file name into table from sql loader
Hi All,
I have a requirement to insert the data file name dynamically into the table using SQL*Loader.
Example:
sqlldr userid=username/passwword@host_string control=test_ctl.ctl data=test_data.dat
test_ctl.ctl
LOAD DATA
FIELDS TERMINATED BY ','
INTO TABLE test
(empid number,
ename varchar2(20),
file_name varchar2(20) ---------- This should be the data file name, which can be dynamic (coming from a parameter)
)
test_data.dat
1,test
2,hello
3,world
4,end
Please help..
Thanks in advance.
Regards
Anuj
You'll probably have to write your control file on the fly, using a .bat or .sh file:
rem ===== file : test.bat ========
rem
rem ============== in pseudo speak =============
rem
rem
echo LOAD DATA > test.ctl
echo FIELDS TERMINATED BY ',' >> test.ctl
echo INTO TABLE test >> test.ctl
echo (empid number, >> test.ctl
echo ename varchar2(20), >> test.ctl
echo file_name constant '%1' >> test.ctl
echo ) >> test.ctl
rem
rem
rem
sqlldr userid=username/passwword@host_string control=test.ctl data=test_data.dat
rem =============== end of file test.bat =======================
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_field_list.htm#i1008664 -
Blank Row in table during Master Data Load
I am having some success with my master data loads, but when I maintain the master data I have noticed that every table has had a blank row inserted.
Does anybody know why, and what I should do with the row (i.e. delete it)?
This blank row is created by default and there is no way to delete it. Even if you delete it, a new row with blank values will be appended. This is required for technical reasons when reading the table within ABAP programs.
This is applicable only for SAP tables and may not be required for custom developed ones unless you want to use this in screen programs.
Regards,
Raj