Index used or not for selecting data from ODS in a start routine
Dear friends,
In the start routine of the update rules to a cube, I am reading data from an ODS into an internal table.
The ODS is indexed, but I am not sure whether the index is actually used by the SELECT statement (which reads the data from the ODS into the internal table in the start routine) while loading data to the cube.
Any help is highly appreciated.
regards,
atlaj
Hi Atlaj
You can find out by displaying the execution plan for the SQL statement in DB02.
Go to DB02, and under Diagnostics you will find Explain. Select that and enter your query. Make sure that everything here is in capitals. Below is a sample query which I have entered.
SELECT "CRM_SALORG" "SALESORG" FROM "/BI0/QORGUNIT"
WHERE "SALESORG" = ? AND "OBJVERS" = ? AND "DATETO" >= ?
AND "DATEFROM" <= ?
The select parameters should be inside quotes and in caps, and so should the table in the FROM clause. Once you enter your query in this format, click on Explain. It will show an index scan if an index is used. The output for my query looks like this:
0 SELECT STATEMENT ( Estimated Costs = 1,348E+01 [timerons] )
1 (COOR) RETURN
2 ( 0) TQ
3 ( 0) FETCH /BI0/QORGUNIT
4 ( 0) IXSCAN /BI0/QORGUNIT~Z1
where the last statement (line 4) shows an index scan, and the name of the index read is Z1.
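The same kind of check can be reproduced with any SQL engine that exposes its plan. Here is a minimal sketch with SQLite in Python (the table and index names are made up for illustration, not taken from the BW system):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE qorgunit (salesorg TEXT, objvers TEXT, "
             "datefrom TEXT, dateto TEXT)")
conn.execute("CREATE INDEX z1 ON qorgunit (salesorg, objvers)")

# a query whose WHERE columns match the index -> index scan
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM qorgunit "
    "WHERE salesorg = ? AND objvers = ?", ("1000", "A")).fetchall()
print(plan[0][-1])   # detail text mentions "USING INDEX z1"

# a query on a column not in the index -> full table scan
plan2 = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM qorgunit "
    "WHERE dateto >= ?", ("20240101",)).fetchall()
print(plan2[0][-1])  # detail text mentions a table SCAN
```

If the plan for your start-routine SELECT shows a full scan instead of the index, the usual reason is that the WHERE clause does not cover the leading fields of the index.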
Hope this helps.
Please let me know if you have any problems entering the query in the specified format or if you get any errors.
Regards
Sriram
Similar Messages
-
Short dump problem when loading data from ODS to InfoCube
Hi,
I am trying to load data from an ODS to an InfoCube, but I get the following error:
Short dump in the Warehouse
Diagnosis
The data update was not completed. A short dump has probably been logged in BW providing information about the error.
<b>System response
"Caller 70" is missing.
Further analysis:
Search in the BW short dump overview for the short dump belonging to the request. Pay attention to the correct time and date on the selection screen.
You get a short dump list using the Wizard or via the menu path "Environment -> Short dump -> In the Data Warehouse".
Error correction:
Follow the instructions in the short dump.</b>
I looked at the short dump overview, but it says there is no short dump for that particular date selection.
Please tell me what I have to do.
I will assign the points.
Bye,
Rizwan
Hi Rizwan,
Why does this error occur?
This error normally occurs whenever BW encounters an error and is not able to classify it. There could be multiple reasons:
o Whenever we load master data for the first time, it creates SIDs. If the system is unable to create SIDs for the records in the data packet, we can get this error message.
o If the indexes of the cube are not deleted, the system may give the caller 70 error.
o Whenever we load transactional data that has master data as one of its characteristics and the value does not exist in the master data table, we get this error. The system can have difficulty creating SIDs for the master data while also loading the transactional data.
o If an ODS activation is taking place and another ODS activation is running in parallel, the system may classify the error as caller 70, as there were no processes free for that ODS activation.
o It also occurs whenever there is a read/write conflict on the active data table of the ODS. For example, if activation is happening for an ODS and data loading to the same ODS is taking place at the same time, the system may classify the error as caller 70.
o It is a system error which can be seen under the Status tab in the Job Overview.
What happens when this error occurs ?
The exact error message is System response "Caller 70" is missing.
It may happen that it may also log a short dump in the system. It can be checked at "Environment -> Short dump -> In the Data Warehouse".
What can be the possible actions to be carried out ?
If the master data is being loaded for the first time, we can reduce the data package size and load the InfoPackage again. Processing sometimes depends on the size of the data package, so reducing it and reloading can help. We can also try to split the load into several separate data loads.
If the error occurs in the cube load then we can try to delete the indexes of the cube and then reload the data again.
If we are trying to load the transactional and master data together and this error occurs, we can reduce the size of the data package and try reloading, as the system may be finding it difficult to create SIDs and load data at the same time. Or we can load the master data first and then the transactional data.
If the error happens during ODS activation because no processes are free or available for it, we can define more processes in transaction RSCUSTA2.
If the error is occurring due to a read/write conflict on the ODS, we need to change the scheduled time of the data loading.
Once we are sure that the data has not been extracted completely, we can then go ahead and delete the red request from the manage tab in the InfoProvider. Re-trigger the InfoPackage again.
Monitor the load for successful completion, and complete the further loads if any in the Process Chain.
(From Re: caller 70 missing).
Also check links:
Caller 70 is missing
Re: Deadlock - error
"Caller 70 Missing" Error
Caller 70 missing.
Bye
Dinesh -
How to use BULK INSERT for data from a cursor?
Oracle 10g Enterprise Edition.
I tried to bulk insert the data returned from a cursor, but it returns this error:
PLS-00302: component 'LAST' must be declared
I need some help using BULK INSERT here. Can anyone tell me what mistake I have made?
CREATE OR REPLACE PROCEDURE HOT_ADMIN.get_search_keyword_stats_prc
IS
CURSOR c_get_scenarios
IS
SELECT a.*,ROWNUM rnum
FROM (
SELECT TRUNC(r.search_date) sdate,
r.search_hits hits,
r.search_type stype,
r.search_qualification qual,
r.search_location loc,
r.search_town stown,
r.search_postcode pcode,
r.search_college college,
r.search_colname colname,
r.search_text text,
r.affiliate_id affiliate,
r.search_study_mode smode,
r.location_hint hint,
r.search_posttown ptown,
COUNT(1) cnt
FROM w_search_headers r
WHERE search_text IS NOT NULL
AND NVL(search_type,' ') <> 'C'
AND TRUNC(search_date)= TO_DATE(TO_CHAR(SYSDATE-1,'DD-MON-RRRR'))
GROUP BY TRUNC(r.search_date),
r.search_hits,
r.search_type,
r.search_qualification,
r.search_location,
r.search_town,
r.search_postcode,
r.search_college,
r.search_colname,
r.search_text,
r.affiliate_id,
r.search_study_mode,
r.location_hint,
r.search_posttown
ORDER BY cnt desc
) a
WHERE ROWNUM <=1000;
lc_get_data c_get_scenarios%ROWTYPE;
BEGIN
OPEN c_get_scenarios;
FETCH c_get_scenarios into lc_get_data;
CLOSE c_get_scenarios;
FORALL i IN 1..lc_get_data.last
INSERT INTO W_SEARCH_SCENARIO_STATS VALUES ( i.sdate,
i.hits,
i.stype,
i.qual,
i.loc,
i.stown,
i.pcode,
i.college,
i.colname,
i.text,
i.affiliate,
i.smode,
i.hint,
i.ptown,
i.cnt );
COMMIT;
END;
This isn't what you asked, but I've generally found it helpful to list the columns in an INSERT statement before the values. It is of course optional, but useful for reference when looking at the statement later.
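As an aside, the pattern the poster is reaching for, fetch a batch of rows and then insert the whole batch in one call instead of row by row (BULK COLLECT plus FORALL in PL/SQL terms), exists in most database APIs. A minimal illustration with SQLite in Python, with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE search_stats (sdate TEXT, hits INTEGER, cnt INTEGER)")

# pretend this batch came from a cursor fetch (the BULK COLLECT step)
batch = [("2010-01-01", 5, 2), ("2010-01-02", 7, 3)]

# one call for the whole batch, analogous to FORALL ... INSERT
conn.executemany(
    "INSERT INTO search_stats (sdate, hits, cnt) VALUES (?, ?, ?)", batch)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM search_stats").fetchone()[0]
print(count)  # 2
```

The key point either way: the loop variable indexes a collection of fetched rows, not a single %ROWTYPE record, which is why `.LAST` was undeclared in the original code.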
-
Method: Want to use a Z-table for accessing data
Dear All,
I am new to BADIs. I have implemented one HR payroll BADI.
This BADI has one method, and in it I want to use a Z-table to access data from
that table, but I am not able to define the table in the method.
Kindly tell me how to do it.
Thanking you in Advance
Siladitya
Hello Siladitya,
I assume your problem is the definition of an internal table for selecting data from your Z-table. In classes you have to use table types and work areas, e.g.:
METHOD name_of_interface_method.
DATA:
lt_itab TYPE TABLE OF <name of z-table>,
ls_record TYPE <name of z-table>.
SELECT * FROM <name of z-table> INTO TABLE lt_itab.
LOOP AT lt_itab INTO ls_record.
ENDLOOP.
ENDMETHOD.
Regards
Uwe -
Problem using the BCP utility for writing data to a file
Hi all,
I have a batch file in which I use the bcp command to read data from MS SQL and write it to a delimited file. Some values in MS SQL contain newline characters; whenever bcp encounters one while writing to the file, it switches to the next line and writes the rest of the data there.
Could you help me get rid of this problem? I want to replace the newline character with a space.
Thanks and regards
Nitin
Hi Dilip,
Before going for any other table:
As KALNR is only one of the primary keys of table KEKO, you can try creating a secondary index on KEKO, which might help improve your report's performance.
Also, you can add more conditions to the WHERE clause if possible, which will also help performance.
Thanks,
Archana -
Performance issue in selecting data from a view because of not in condition
Hi experts,
I have a requirement to select data in a view that is not available in a fact table, with certain join conditions. But the fact table contains 2 crore (20 million) rows of data, so this view is not working at all; it runs for a very long time. I am pasting the query here; please help me tune it. The whole query, except for the NOT IN part, executes in 15 minutes, but when I add the NOT IN condition it runs for many hours, as the second table has millions of records.
CREATE OR REPLACE FORCE VIEW EDWOWN.MEDW_V_GIEA_SERVICE_LEVEL11
SYS_ENT_ID,
SERVICE_LEVEL_NO,
CUSTOMER_NO,
BILL_TO_LOCATION,
PART_NO,
SRCE_SYS_ID,
BUS_AREA_ID,
CONTRACT,
WAREHOUSE,
ORDER_NO,
LINE_NO,
REL_NO,
REVISED_DUE_DATE,
REVISED_QTY_DUE,
QTY_RESERVED,
QTY_PICKED,
QTY_SHIPPED,
ABBREVIATION,
ACCT_WEEK,
ACCT_MONTH,
ACCT_YEAR,
UPDATED_FLAG,
CREATE_DATE,
RECORD_DATE,
BASE_WAREHOUSE,
EARLIEST_SHIP_DATE,
LATEST_SHIP_DATE,
SERVICE_DATE,
SHIP_PCT,
ALLOC_PCT,
WHSE_PCT,
ABC_CLASS,
LOCATION_ID,
RELEASE_COMP,
WAREHOUSE_DESC,
MAKE_TO_FLAG,
SOURCE_CREATE_DATE,
SOURCE_UPDATE_DATE,
SOURCE_CREATED_BY,
SOURCE_UPDATED_BY,
ENTITY_CODE,
RECORD_ID,
SRC_SYS_ENT_ID,
BSS_HIERARCHY_KEY,
SERVICE_LVL_FLAG
AS
SELECT SL.SYS_ENT_ID,
SL.ENTITY_CODE
|| '-'
|| SL.order_no
|| '-'
|| SL.LINE_NO
|| '-'
|| SL.REL_NO
SERVICE_LEVEL_NO,
SL.CUSTOMER_NO,
SL.BILL_TO_LOCATION,
SL.PART_NO,
SL.SRCE_SYS_ID,
SL.BUS_AREA_ID,
SL.CONTRACT,
SL.WAREHOUSE,
SL.ORDER_NO,
SL.LINE_NO,
SL.REL_NO,
SL.REVISED_DUE_DATE,
SL.REVISED_QTY_DUE,
NULL QTY_RESERVED,
NULL QTY_PICKED,
SL.QTY_SHIPPED,
SL.ABBREVIATION,
NULL ACCT_WEEK,
NULL ACCT_MONTH,
NULL ACCT_YEAR,
NULL UPDATED_FLAG,
SL.CREATE_DATE,
SL.RECORD_DATE,
SL.BASE_WAREHOUSE,
SL.EARLIEST_SHIP_DATE,
SL.LATEST_SHIP_DATE,
SL.SERVICE_DATE,
SL.SHIP_PCT,
0 ALLOC_PCT,
0 WHSE_PCT,
SL.ABC_CLASS,
SL.LOCATION_ID,
NULL RELEASE_COMP,
SL.WAREHOUSE_DESC,
SL.MAKE_TO_FLAG,
SL.source_create_date,
SL.source_update_date,
SL.source_created_by,
SL.source_updated_by,
SL.ENTITY_CODE,
SL.RECORD_ID,
SL.SRC_SYS_ENT_ID,
SL.BSS_HIERARCHY_KEY,
'Y' SERVICE_LVL_FLAG
FROM ( SELECT SL_INT.SYS_ENT_ID,
SL_INT.CUSTOMER_NO,
SL_INT.BILL_TO_LOCATION,
SL_INT.PART_NO,
SL_INT.SRCE_SYS_ID,
SL_INT.BUS_AREA_ID,
SL_INT.CONTRACT,
SL_INT.WAREHOUSE,
SL_INT.ORDER_NO,
SL_INT.LINE_NO,
MAX (SL_INT.REL_NO) REL_NO,
SL_INT.REVISED_DUE_DATE,
SUM (SL_INT.REVISED_QTY_DUE) REVISED_QTY_DUE,
SUM (SL_INT.QTY_SHIPPED) QTY_SHIPPED,
SL_INT.ABBREVIATION,
MAX (SL_INT.CREATE_DATE) CREATE_DATE,
MAX (SL_INT.RECORD_DATE) RECORD_DATE,
SL_INT.BASE_WAREHOUSE,
MAX (SL_INT.LAST_SHIPMENT_DATE) LAST_SHIPMENT_DATE,
MAX (SL_INT.EARLIEST_SHIP_DATE) EARLIEST_SHIP_DATE,
MAX (SL_INT.LATEST_SHIP_DATE) LATEST_SHIP_DATE,
MAX (
CASE
WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
TRUNC (SL_INT.LATEST_SHIP_DATE)
THEN
TRUNC (SL_INT.LAST_SHIPMENT_DATE)
ELSE
TRUNC (SL_INT.LATEST_SHIP_DATE)
END)
SERVICE_DATE,
MIN (
CASE
WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) >=
TRUNC (SL_INT.EARLIEST_SHIP_DATE)
AND TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
TRUNC (SL_INT.LATEST_SHIP_DATE)
AND SL_INT.QTY_SHIPPED = SL_INT.REVISED_QTY_DUE
THEN
100
ELSE
0
END)
SHIP_PCT,
SL_INT.ABC_CLASS,
SL_INT.LOCATION_ID,
SL_INT.WAREHOUSE_DESC,
SL_INT.MAKE_TO_FLAG,
MAX (SL_INT.source_create_date) source_create_date,
MAX (SL_INT.source_update_date) source_update_date,
SL_INT.source_created_by,
SL_INT.source_updated_by,
SL_INT.ENTITY_CODE,
SL_INT.RECORD_ID,
SL_INT.SRC_SYS_ENT_ID,
SL_INT.BSS_HIERARCHY_KEY
FROM (SELECT SL_UNADJ.*,
DECODE (
TRIM (TIMA.DAY_DESC),
'saturday', SL_UNADJ.REVISED_DUE_DATE
- 1
- early_ship_days,
'sunday', SL_UNADJ.REVISED_DUE_DATE
- 2
- early_ship_days,
SL_UNADJ.REVISED_DUE_DATE - early_ship_days)
EARLIEST_SHIP_DATE,
DECODE (
TRIM (TIMB.DAY_DESC),
'saturday', SL_UNADJ.REVISED_DUE_DATE
+ 2
+ LATE_SHIP_DAYS,
'sunday', SL_UNADJ.REVISED_DUE_DATE
+ 1
+ LATE_SHIP_DAYS,
SL_UNADJ.REVISED_DUE_DATE + LATE_SHIP_DAYS)
LATEST_SHIP_DATE
FROM (SELECT NVL (s2.sys_ent_id, '00') SYS_ENT_ID,
cust.customer_no CUSTOMER_NO,
cust.bill_to_loc BILL_TO_LOCATION,
cust.early_ship_days,
CUST.LATE_SHIP_DAYS,
ord.PART_NO,
ord.SRCE_SYS_ID,
ord.BUS_AREA_ID,
ord.BUS_AREA_ID CONTRACT,
NVL (WAREHOUSE, ord.entity_code) WAREHOUSE,
ORDER_NO,
ORDER_LINE_NO LINE_NO,
ORDER_REL_NO REL_NO,
TRUNC (REVISED_DUE_DATE) REVISED_DUE_DATE,
REVISED_ORDER_QTY REVISED_QTY_DUE,
-- NULL QTY_RESERVED,
-- NULL QTY_PICKED,
SHIPPED_QTY QTY_SHIPPED,
sold_to_abbreviation ABBREVIATION,
-- NULL ACCT_WEEK,
-- NULL ACCT_MONTH,
-- NULL ACCT_YEAR,
-- NULL UPDATED_FLAG,
ord.CREATE_DATE CREATE_DATE,
ord.CREATE_DATE RECORD_DATE,
NVL (WAREHOUSE, ord.entity_code)
BASE_WAREHOUSE,
LAST_SHIPMENT_DATE,
TRUNC (REVISED_DUE_DATE)
- cust.early_ship_days
EARLIEST_SHIP_DATE_UnAdj,
TRUNC (REVISED_DUE_DATE)
+ CUST.LATE_SHIP_DAYS
LATEST_SHIP_DATE_UnAdj,
--0 ALLOC_PCT,
--0 WHSE_PCT,
ABC_CLASS,
NVL (LOCATION_ID, '000') LOCATION_ID,
--NULL RELEASE_COMP,
WAREHOUSE_DESC,
NVL (
DECODE (MAKE_TO_FLAG,
'S', 0,
'O', 1,
'', -1),
-1)
MAKE_TO_FLAG,
ord.CREATE_DATE source_create_date,
ord.UPDATE_DATE source_update_date,
ord.CREATED_BY source_created_by,
ord.UPDATED_BY source_updated_by,
ord.ENTITY_CODE,
ord.RECORD_ID,
src.SYS_ENT_ID SRC_SYS_ENT_ID,
ord.BSS_HIERARCHY_KEY
FROM EDW_DTL_ORDER_FACT ord,
edw_v_maxv_cust_dim cust,
edw_v_maxv_part_dim part,
EDW_WAREHOUSE_LKP war,
EDW_SOURCE_LKP src,
MEDW_PLANT_LKP s2,
edw_v_incr_refresh_ctl incr
WHERE ord.BSS_HIERARCHY_KEY =
cust.BSS_HIERARCHY_KEY(+)
AND ord.record_id = part.record_id(+)
AND ord.part_no = part.part_no(+)
AND NVL (ord.WAREHOUSE, ord.entity_code) =
war.WAREHOUSE_code(+)
AND ord.entity_code = war.entity_code(+)
AND ord.record_id = src.record_id
AND src.calculate_back_order_flag = 'Y'
AND NVL (cancel_order_flag, 'N') != 'Y'
AND UPPER (part.source_plant) =
UPPER (s2.location_code1(+))
AND mapping_name = 'MEDW_MAP_GIEA_MTOS_STG'
-- AND NVL (ord.UPDATE_DATE, SYSDATE) >=
-- MAX_SOURCE_UPDATE_DATE
AND UPPER (
NVL (ord.order_status, 'BOOKED')) NOT IN
('ENTERED', 'CANCELLED')
AND TRUNC (REVISED_DUE_DATE) <= SYSDATE) SL_UNADJ,
EDW_TIME_DIM TIMA,
EDW_TIME_DIM TIMB
WHERE TRUNC (SL_UNADJ.EARLIEST_SHIP_DATE_UnAdj) =
TIMA.ACCOUNT_DATE
AND TRUNC (SL_UNADJ.LATEST_SHIP_DATE_Unadj) =
TIMB.ACCOUNT_DATE) SL_INT
WHERE TRUNC (LATEST_SHIP_DATE) <= TRUNC (SYSDATE)
GROUP BY SL_INT.SYS_ENT_ID,
SL_INT.CUSTOMER_NO,
SL_INT.BILL_TO_LOCATION,
SL_INT.PART_NO,
SL_INT.SRCE_SYS_ID,
SL_INT.BUS_AREA_ID,
SL_INT.CONTRACT,
SL_INT.WAREHOUSE,
SL_INT.ORDER_NO,
SL_INT.LINE_NO,
SL_INT.REVISED_DUE_DATE,
SL_INT.ABBREVIATION,
SL_INT.BASE_WAREHOUSE,
SL_INT.ABC_CLASS,
SL_INT.LOCATION_ID,
SL_INT.WAREHOUSE_DESC,
SL_INT.MAKE_TO_FLAG,
SL_INT.source_created_by,
SL_INT.source_updated_by,
SL_INT.ENTITY_CODE,
SL_INT.RECORD_ID,
SL_INT.SRC_SYS_ENT_ID,
SL_INT.BSS_HIERARCHY_KEY) SL
WHERE (SL.BSS_HIERARCHY_KEY,
SL.ORDER_NO,
Sl.line_no,
sl.Revised_due_date,
SL.PART_NO,
sl.sys_ent_id) NOT IN
(SELECT BSS_HIERARCHY_KEY,
ORDER_NO,
line_no,
revised_due_date,
part_no,
src_sys_ent_id
FROM MEDW_MTOS_DTL_FACT
WHERE service_lvl_flag = 'Y');
thanks
asn
Also, 'NOT IN' plus nullable columns can be an expensive combination, and may not give the expected results. For example, compare these:
with test1 as ( select 1 as key1 from dual )
, test2 as ( select null as key2 from dual )
select * from test1
where key1 not in
( select key2 from test2 );
no rows selected
with test1 as ( select 1 as key1 from dual )
, test2 as ( select null as key2 from dual )
select * from test1
where key1 not in
( select key2 from test2
where key2 is not null );
KEY1
1
1 row selected.
Even if the columns do contain values, if they are nullable Oracle has to perform a resource-intensive filter operation in case they are null. An EXISTS construction is not concerned with NULL values and can therefore use a more efficient execution plan, which leads people to think it is inherently faster in general. -
Selecting data from a RDBMS table using connector framework, JDBC-Connector
Hi Experts,
I'm trying to select data from a MySQL database that is connected via the BI JDBC system in the portal (EP 7 SP 11).
The connection (using the Connector Gateway Service) is set up, but selecting data fails.
Can you please give me a code example of how to select data from a table using INativeQuery or IOperation / IExecution? IQuery is deprecated...
Here is my piece of code; the method connection.newNativeQuery() throws the exception "BICapabilityNotSupportedException: Operation is not supported":
IConnection connection = null;
public void connectToJDBCSystem(String jdbcSystem) {
    // open a connection
    try { // get the Connector Gateway Service
        Object connectorservice =
            PortalRuntime.getRuntimeResources().getService(IConnectorGatewayService.KEY);
        IConnectorGatewayService cgService =
            (IConnectorGatewayService) connectorservice;
        if (cgService == null) {
            response.write("Error in get Connector Gateway Service <br>");
        }
        ConnectionProperties prop =
            new ConnectionProperties(request.getLocale(), request.getUser());
        connection = cgService.getConnection(jdbcSystem, prop);
    } catch (Exception e) {
        response.write("Connection to JDBC system " + jdbcSystem + " failed <br>");
        response.write((String) e.getMessage() + "<br>");
        e.printStackTrace();
    }
    if (connection == null) {
        response.write("No connection to JDBC system " + jdbcSystem + "<br>");
    } else {
        response.write("Connection to JDBC system " + jdbcSystem + " successful <br>");
    }
}
// up to this point it works fine...
// build query using an INativeQuery object obtained from the Connection object
response.write("prepare query 'SELECT idArt, art FROM art' <br>");
String qstr = "SELECT idArt, art FROM art";
INativeQuery query = null;
try {
    query = connection.newNativeQuery();
    response.write("execute query...<br>");
    ResultSet rs = (ResultSet) query.execute(qstr);
    while (rs.next()) {
        response.write("<th>" + rs.getString(1) + "</th>");
    }
} catch (Exception e) {
    response.write("connection.newNativeQuery() failed <br>");
    response.write((String) e.getMessage() + "<br>");
    e.printStackTrace();
} finally {
    if (connection != null) {
        try { connection.close(); }
        catch (Exception ee) {}
    }
}
Many thanks in advance, Monika
Edited by: Monika Verwohlt on Jan 26, 2010 9:49 AM
However, this doesn't affect the XML encoding of data retrieved from XMLType using PL/SQL or programs (but it does work OK for a SQL*Plus SELECT). When performing an explicit serialization, with the getClobVal method or the XMLSerialize function, the database character set is used:
NLS_LANG = FRENCH_FRANCE.WE8MSWIN1252
NLS_CHARACTERSET = AL32UTF8
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> create table test_nls of xmltype;
Table créée.
SQL> insert into test_nls values(xmltype('<?xml version="1.0" encoding="UTF-8"?><root/>'));
1 ligne créée.
SQL> select * from test_nls;
SYS_NC_ROWINFO$
<?xml version="1.0" encoding="WINDOWS-1252"?>
<root/>
SQL> select t.object_value.getclobval() from test_nls t;
T.OBJECT_VALUE.GETCLOBVAL()
<?xml version="1.0" encoding="UTF-8"?><root/>
SQL> select xmlserialize(document object_value as clob) from test_nls;
XMLSERIALIZE(DOCUMENTOBJECT_VALUEASCLOB)
<?xml version="1.0" encoding="UTF-8"?><root/> -
Why is data found when selecting from a function directly (SE37), but not when the function is used in a program?
Why does selecting data from the function directly (SE37) find data, while using the function in a program does not?
I use the function CS_BOM_EXPL_MAT_V2.
When I run the function directly in SE37, I find data,
but when I use the same function in a program, the system finds nothing.
Please see my attachment.
Help me please.
[http://www.quickfilepost.com/download.do?get=c974356a498b3a4d369aa0c50622e50b]
http://www.quickfilepost.com/download.do?get=c974356a498b3a4d369aa0c50622e50b
I know why you get empty data.
In the program you should follow these rules:
You had better declare a variable typed with the function parameter's type.
For example, in your program the parameter is stlal = '1'.
You had better do it as follows:
DATA l_stlal TYPE STKO-STLAL.
l_stlal = '01'. " Attention: '1' <> '01'.
stlal = l_stlal.
This way you will make fewer mistakes.
As for the parameter MTNRV,
you should use the material conversion exit CONVERSION_EXIT_MATN1_INPUT to convert the material number into its internal format before you pass it to the function's parameter.
Why does SE37 have no problem? Because in SE37, the data you filled in is converted before use.
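The '1' versus '01' point can be illustrated outside ABAP. In the Python sketch below, pad_key is a hypothetical stand-in for what a conversion exit like CONVERSION_EXIT_MATN1_INPUT does: left-padding numeric keys to the fixed internal width so that lookups match what is stored.

```python
def pad_key(value: str, width: int = 18) -> str:
    """Left-pad purely numeric keys with zeros, mimicking internal format."""
    return value.zfill(width) if value.isdigit() else value

# the table stores its keys in internal (padded) format
bom_data = {pad_key("1"): "exploded BOM"}

print("1" in bom_data)            # False: raw external value does not match
print(pad_key("01") in bom_data)  # True: normalized value matches
```

This is why the same input works in SE37 (which converts it for you) but returns nothing from a program that passes the raw external value.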
How to select data from a table using a date field in the where condition?
How to select data from a table using a date field in the where condition?
For eg:
data itab like equk occurs 0 with header line.
select * from equk into table itab where werks = 'C001'
and bdatu = '31129999'.
thanks.
Hi Ramesh,
Specify the date in the format YYYYMMDD in the WHERE condition.
Dates are stored internally in SAP as YYYYMMDD only.
Change your date format in the WHERE condition as follows:
data itab like equk occurs 0 with header line.
select * from equk into table itab where werks = 'C001'
and bdatu = <b>'99991231'.</b>
Please also check your database table EQUK for the existence of data on this date.
Otherwise, just change the condition on BDATU as below to see all entries prior to this date:
data itab like equk occurs 0 with header line.
select * from equk into table itab where werks = 'C001'
and <b> bdatu <= '99991231'.</b>
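The internal YYYYMMDD format also means that date comparisons behave like plain string comparisons. A quick check in Python (illustrative only; ABAP handles this natively for fields of type DATS):

```python
from datetime import date

def to_internal(d: date) -> str:
    """Render a date the way SAP stores it internally: YYYYMMDD."""
    return d.strftime("%Y%m%d")

print(to_internal(date(9999, 12, 31)))  # 99991231

# lexicographic order on YYYYMMDD strings matches chronological order,
# which is why a condition like bdatu <= '99991231' behaves as expected
print(to_internal(date(2007, 5, 1)) <= to_internal(date(9999, 12, 31)))  # True
```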
Thanks,
Vinay -
Which CKM is used for moving data from Oracle to a delimited file?
Hi All
Please let me know which CKM is used for moving data from Oracle to a delimited file.
Also, is there a need to define each column beforehand in the target datastore? Can't ODI take it from the Oracle table itself?
Addy,
A CKM is a Check KM, which is used to validate data and log errors. It is not going to assist you in data movement. You will need LKM SQL to File Append, as answered in another thread.
Assuming that you have a one to one mapping, to make things simpler you can duplicate the Oracle based model and create a file based model. This will take all the column definitions from the Oracle based model.
Alternatively, you can also use an ODI tool odiSQLUnload to dump the data to a file
HTH -
Select data from BKPF for a date range
Hi,
I want to select data from BKPF depending upon the date range given in the selection screen.
SELECTION-SCREEN : BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
" Selection Criteria
SELECT-OPTIONS : S_DATE FOR SY-DATUM OBLIGATORY.
SELECTION-SCREEN : END OF BLOCK B1.
How do I do that? Please help.
Thank You,
SB.
Hi SB,
Create indexes and select the data, i.e.:
SELECTION-SCREEN : BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
" Selection Criteria
<b>SELECT-OPTIONS : S_DATE FOR BKPF-BLDAT OBLIGATORY.</b>
SELECTION-SCREEN : END OF BLOCK B1.
<b> SELECT * FROM BKPF WHERE BLDAT IN s_date.</b>
Regards,
Santosh P
Message was edited by: Santosh Kumar Patha -
Select data from table not in another table
Hi,
I want to select data from table A which is not in table B.
Currently I am doing:
select
snoA,
nameA,
dobA
from A
where snoA not in
(select snoB from A, B
where snoA = snoB
and nameA = nameB)
But above is very slow.
Can I do something like:
select
snoA,
nameA,
dobA
from A, B
where
EXCLUDE ( snoA = snoB and nameA = nameB)
Please note that I need the where condition on both the columns.
any help will be appreciated.
-- Harvey
What are the approximate data volumes in A and B?
What is "very slow"?
What version of Oracle?
What is the query plan?
Without knowing anything about your system, my first thought would be to see if a NOT EXISTS happened to be faster for your data
SELECT snoA,
nameA,
dobA
FROM a
WHERE NOT EXISTS (
SELECT 1
FROM b
WHERE a.snoA = b.snoB
AND a.nameA = b.nameB )
Of course, I'm not sure why you are joining A & B in your NOT IN subquery. It would seem like you would just need a correlated subquery, i.e.
SELECT snoA,
nameA,
dobA
FROM a
WHERE snoA NOT IN (
SELECT snoB
FROM b
WHERE a.snoA = b.snoB
AND a.nameA = b.nameB )
That should be more efficient than the original query. The NOT EXISTS version may or may not be more efficient than the NOT IN, depending on data volumes.
Justin -
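The NULL pitfall and the NOT EXISTS rewrite discussed in this thread are easy to reproduce with SQLite in Python (toy tables, names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (sno INTEGER, name TEXT);
    CREATE TABLE b (sno INTEGER, name TEXT);
    INSERT INTO a VALUES (1, 'x'), (2, 'y');
    INSERT INTO b VALUES (1, 'x'), (NULL, NULL);  -- note the NULL row
""")

# NOT IN: the NULL in the subquery makes every comparison UNKNOWN,
# so no row of A survives
not_in = conn.execute(
    "SELECT sno FROM a WHERE sno NOT IN (SELECT sno FROM b)").fetchall()
print(not_in)       # []

# NOT EXISTS: NULLs in B simply fail the correlation, so the anti-join
# returns the rows of A that have no match
not_exists = conn.execute(
    "SELECT sno FROM a WHERE NOT EXISTS "
    "(SELECT 1 FROM b WHERE b.sno = a.sno AND b.name = a.name)").fetchall()
print(not_exists)   # [(2,)]
```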
Hi All,
I am new to TestStand. Still in the process of learning it.
What are Parameters? How are they different from Variables? Why can't we use variables for passing data from one sequence to another? What is the advantage of using Parameters instead of Variables?
Thanks in advance,
LaVIEWan
Solved!
Go to Solution.
Hi,
Using Parameters is the correct method to pass data into and out of a subsequence. You assign the data to be passed into or out of a sequence in the Edit Sequence Call dialog, in the Sequence Parameters list.
Regards
Ray Farmer -
How to select data from the 3rd row of Excel to insert into a SQL Server table using SSIS
Hi,
I have Excel files with headers in the first two rows. I want to skip those two rows and select data from the 3rd row onward to insert into a SQL Server table using SSIS. The 3rd row has the column names.
CUSTOMER DETAILS
REGION
COL1 COL2 COL3 COL4 COL5 COL6 COL7
COL8 COL9 COL10 COL11
1 XXX yyyy zzzz
2 XXX yyyy zzzzz
3 XXX yyyy zzzzz
4 XXX yyyy zzzzz
The first two rows have merged cells with headings in Excel. I want to skip the first two rows, select the data from the 3rd row, and insert it into SQL Server using SSIS.
Set the range within the Excel command as per below.
See
http://www.joellipman.com/articles/microsoft/sql-server/ssis/646-ssis-skip-rows-in-excel-source-file.html
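For what it's worth, the "skip the first N rows" idea is the same in any tabular reader. Here is a Python sketch with a CSV stand-in for the spreadsheet (a real .xls/.xlsx file would need an Excel reader; in SSIS itself you set the OpenRowset range instead):

```python
import csv
import io
from itertools import islice

# stand-in for the spreadsheet: two heading rows, then the real header
raw = io.StringIO(
    "CUSTOMER DETAILS\n"
    "REGION\n"
    "COL1,COL2,COL3\n"
    "1,XXX,yyyy\n"
    "2,XXX,zzzz\n"
)

reader = csv.reader(islice(raw, 2, None))  # drop the first two rows
header = next(reader)                      # row 3 carries the column names
rows = list(reader)

print(header)  # ['COL1', 'COL2', 'COL3']
print(rows)    # [['1', 'XXX', 'yyyy'], ['2', 'XXX', 'zzzz']]
```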
Visakh -
Deleting selected data from an Infocube in BW using ABAP program?
Hi Everybody,
I have to create an ABAP program in SE38 which, on execution, will delete selected data from a cube in the BW module. How do I achieve this? Is there any function module that can do so? Eagerly waiting for your suggestions.
Regards,
Pulokesh
Select the records from the cube or ODS into an internal table,
and then you can delete those records.