Process huge number of records in XI
Hello
I have a simple scenario where I send details from SAP to a legacy system on a weekly basis, with an SAP ABAP proxy as the sender and a file as the receiver.
My problem is that when the data is huge (around 600,000 records) I get an RFC timeout error on the SAP ECC side. I tried sending a limited number of records per submit to XI, but that creates multiple files on the receiving FTP server.
Instead of that, can I use the collect pattern of BPM to collect the messages in BPM? Or would using BPM be too much overhead in this case, as the data is so huge?
If someone has already tackled this problem, please respond.
Thanks in advance.
Regards
Rajeev
Hi Rajeev,
don't use BPM to solve this; BPM itself still has a lot of performance problems.
If possible, split the 600,000 records on the R/3 side into several proxy requests. If required, use the "Append" (add lines) write mode of the file adapter to collect the messages into one file again. You should also think about sending the messages "Exactly Once In Order" to avoid processing more than one message at the same time.
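A minimal sketch of the splitting idea on the sender side (the record list and chunk size here are hypothetical; the real split would be ABAP filling several proxy calls):

```python
def chunks(records, size):
    """Yield successive blocks of at most `size` records."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

# With 600,000 records and 50,000 per proxy call, 12 requests are sent;
# the receiver file adapter in "Append" mode joins them into one file.
records = list(range(600_000))
blocks = list(chunks(records, 50_000))
print(len(blocks))      # 12
print(len(blocks[0]))   # 50000
```

Sending the blocks EOIO then guarantees they are appended in the original order.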
Regards,
Udo
Similar Messages
-
Query using system parameter LEVEL returns incorrect huge number of records
We migrated our database from Oracle 9.2.0.6 to 11.2.0.1.
The query below throws "ORA-01788: CONNECT BY clause required in this query block".
select * from (
select a.BOARD_ID, code, description, is_displayable, order_seq, board_parent_id, short_description, IS_SUB_BOARD_DISPLAYABLE, LEVEL child_level, sp_board.get_parent_id(a.board_id) top_parent_id, is_top_selected isTopSelected
from boards a, ALERT_MESSAGE_BOARD_TARGETS b
where a.board_id = b.board_id and is_displayable = 'Y' and alert_message_id = 5202) temp
start with board_parent_id = 0
connect by prior board_id = board_parent_id
ORDER SIBLINGS BY order_seq;
Based on online resources, we modified the hidden parameter _allow_level_without_connect_by by executing the statement:
alter system set "_allow_level_without_connect_by"=true scope=spfile;
After performing the above, ORA-01788 is resolved.
The new issue is that the same query returns 9,015,853 records in 11g, but in 9i it returns 64 records. 9i returns the correct number of records, and the cause of 11g returning a greater number of records is the LEVEL pseudocolumn used in the query.
Why is 11g returning an incorrect, huge number of records?
Any assistance to address this is greatly appreciated. Thanks!
The problem lies in the query.
Oracle's LEVEL pseudocolumn should not be used inside a subquery here. After LEVEL is moved to the main query (the block containing CONNECT BY), the number of returned records is the same as in 9i.
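The behavioural difference makes sense once you recall that LEVEL is assigned while the hierarchy is walked, i.e. by the query block that owns CONNECT BY. A rough Python sketch of that walk (hypothetical parent/child rows):

```python
# Each row: (board_id, board_parent_id)
rows = [(1, 0), (2, 1), (3, 1), (4, 2)]

def connect_by(rows, start_parent=0):
    """Depth-first walk: LEVEL is the depth assigned *during* the walk,
    which is why it belongs in the query block that owns CONNECT BY."""
    children = {}
    for board_id, parent_id in rows:
        children.setdefault(parent_id, []).append(board_id)
    out = []
    def walk(parent, level):
        for b in children.get(parent, []):
            out.append((b, level))       # (board_id, LEVEL)
            walk(b, level + 1)
    walk(start_parent, 1)
    return out

print(connect_by(rows))  # [(1, 1), (2, 2), (4, 3), (3, 2)]
```

A LEVEL evaluated in an inner query block has no such walk to refer to, so the result is undefined relative to the outer CONNECT BY.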
select c.BOARD_ID, c.code, c.description, c.is_displayable, c.order_seq, c.board_parent_id, c.short_description, c.IS_SUB_BOARD_DISPLAYABLE, LEVEL child_level, c.top_parent_id, c.isTopSelected
from (
select a.BOARD_ID, code, description, is_displayable, order_seq, board_parent_id, short_description, IS_SUB_BOARD_DISPLAYABLE, sp_board.get_parent_id(a.board_id) top_parent_id, is_top_selected isTopSelected
from boards a, ALERT_MESSAGE_BOARD_TARGETS b
where a.board_id = b.board_id and is_displayable = 'Y' and alert_message_id = 5202
) c
start with c.board_parent_id = 0
connect by prior c.board_id = c.board_parent_id
ORDER SIBLINGS BY c.order_seq -
Process Chains - number of records loaded in various targets
Hello
I have been trying to figure out how to access the metadata in the process chain log to extract information on the number of records loaded into various targets (cubes/ODS/InfoObjects).
I have seen a few tables like RSMONICTAB, RSPCPROCESSLOG and RSREQICODS through various threads posted; however, I would like to know whether there is any data model (relationship structure between these and other standard tables) so that I could seamlessly traverse through them to get the information I need.
In traditional ETL tools I would approach the problem :
> Load a particular sequence(in our case = BW Process chain) name into the program
> Extract the start time and end time information.
> Traverse through all the objects of the sequence (BW Process chain)
> Check if the object is a data target
> If yes scan through the logs to extract the number of records loaded.
Could I have a list of similar tables which I could traverse through ABAP code to extract such information?
Thanks in advance
Hi Richa,
Please check these tables which may be very useful for you.
rsseldone,
rsdodso,
rsstatmanpsa,
rsds,
rsis,
rsdcube
I have got some ABAP code where you can get all the information for a particular request.
If you need more information, go to ST13, select BI TOOLS and execute.
Hope this helps.
Regards,
Ravi Kanth
Edited by: Ravi kanth on May 15, 2009 11:08 AM -
Program Process chain-- Number of records per day.
Hi Friends,
i need to develop a program based upon the below template.
Chain Name Start Date Start Time End Date End Time Total Run Time Number of Records Total number of records
IP1/IP2/IP3
Imagine that I have a process chain with 3 InfoPackages.
Please help us with the code, as I do not know ABAP and the program /SSA/BWT will not serve my purpose.
Regards,
Siddhu
Hi Sid,
I think while creating the ABAP code you need to take the help of the following tables:
Chain Name: RSPCLOGCHAIN. Take the chain ID for the corresponding chain name and navigate to table RSPCPROCESSLOG.
Start Date RSPCPROCESSLOG
Start Time RSPCPROCESSLOG
End Date RSPCPROCESSLOG
End Time RSPCPROCESSLOG
Total Run Time can be calculated using ENDSTAMP and STARTTIME.
Number of Records RSMONICDP
Total number of records RSMONICDP
You have to hit these tables and take the latest data in your ABAP code.
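As a sketch of the run-time calculation mentioned above, assuming the log stores timestamps in the usual SAP YYYYMMDDHHMMSS form (the format is an assumption; check the actual field types in your system):

```python
from datetime import datetime

def run_time_seconds(start_stamp: str, end_stamp: str) -> int:
    """Total run time between two SAP-style YYYYMMDDHHMMSS timestamps."""
    fmt = "%Y%m%d%H%M%S"
    start = datetime.strptime(start_stamp, fmt)
    end = datetime.strptime(end_stamp, fmt)
    return int((end - start).total_seconds())

# A chain step that started 11:00:00 and ended 11:08:30 ran 510 seconds.
print(run_time_seconds("20090515110000", "20090515110830"))  # 510
```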
thanks -
Time Limit exceeded Error while updating huge number of records in MARC
Hi experts,
I have an interface requirement in which a third-party system will send a big file, say 3 to 4 MB, into SAP. In the proxy we
use the BAPI BAPI_MATERIAL_SAVEDATA to save the material/plant data. Now, because of the huge amount of data, the SAP queues are
getting blocked, causing the time limit exceeded issues. As the BAPI can update a single material at a time, it is called once for each material
we want to update.
Below is the part of code in my proxy
* Call the BAPI to update the safety stock value.
CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
  EXPORTING
    headdata    = gs_headdata
*   clientdata  =
*   clientdatax =
    plantdata   = gs_plantdata
    plantdatax  = gs_plantdatax
  IMPORTING
    return      = ls_return.
IF ls_return-type <> 'S'.
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
  MOVE ls_return-message TO lv_message.
* Populate the error table and process the next record.
  CALL METHOD me->populate_error
    EXPORTING
      message = lv_message.
  CONTINUE.
ENDIF.
Can anyone please let me know what the best possible approach for this issue would be?
Thanks in Advance,
Jitender
Hi Raju,
Use the following routine to get fiscal year/period using calday.
*Data definition:
DATA: l_Arg1 TYPE RSFISCPER ,
l_Arg2 TYPE RSFO_DATE ,
l_Arg3 TYPE T009B-PERIV .
*Calculation:
l_Arg2 = TRAN_STRUCTURE-POST_DATE. " This is the date that you have to supply
l_Arg3 = 'V3'.
CALL METHOD CL_RSAR_FUNCTION=>DATE_FISCPER(
EXPORTING I_DATE = l_Arg2
I_PER = l_Arg3
IMPORTING E_FISCPER = l_Arg1 ).
RESULT = l_Arg1 .
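For orientation only, here is roughly what such a date-to-fiscal-period mapping does, assuming the fiscal year variant runs April to March (an assumption; the real result depends on the T009B customizing behind 'V3'):

```python
def date_to_fiscper(date_yyyymmdd: str) -> str:
    """Map a YYYYMMDD calendar date to a YYYYPPP fiscal period, assuming
    an April-to-March fiscal year variant (check T009B in your system)."""
    year = int(date_yyyymmdd[:4])
    month = int(date_yyyymmdd[4:6])
    if month >= 4:                       # April..December -> periods 1..9
        fisc_year, period = year, month - 3
    else:                                # January..March -> periods 10..12
        fisc_year, period = year - 1, month + 9
    return f"{fisc_year}{period:03d}"

print(date_to_fiscper("20090515"))  # 2009002
print(date_to_fiscper("20100115"))  # 2009010
```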
Hope it will solve your problem!
Please assign points.
Best Regards,
SG -
Performance issue fetching huge number of records with "FOR ALL ENTRIES"
Hello,
We need to extract a huge amount of data (about 1,000,000 records) from the VBEP table, whose overall size is about 120 million records.
We currently use this statement:
CHECK NOT ( it_massive_vbep[] IS INITIAL ).
SELECT (list of fields) FROM vbep JOIN vbap
    ON vbep~vbeln = vbap~vbeln AND
       vbep~posnr = vbap~posnr
  INTO CORRESPONDING FIELDS OF w_sched
  FOR ALL ENTRIES IN it_massive_vbep
  WHERE vbep~vbeln = it_massive_vbep-tabkey-vbeln
    AND vbep~posnr = it_massive_vbep-tabkey-posnr
    AND vbep~etenr = it_massive_vbep-tabkey-etenr.
Notice that the internal table it_massive_vbep always contains records with a fully specified key.
Do you think this query could be further optimized?
many thanks,
-Enrico
There are 2 options to improve performance:
+ you should work in blocks of 10,000 to 50,000 records
+ you should check archiving options; does keeping all of this data really make sense?
> VBEP table, which overall dimension is about 120 milions records.
* Copy a block of it_massive_vbep into a smaller table it_vbep_2, then:
CHECK NOT ( it_vbep_2[] IS INITIAL ).
GET RUN TIME FIELD start.
SELECT (list of fields)
  INTO CORRESPONDING FIELDS OF TABLE w_sched
  FROM vbep JOIN vbap
    ON vbep~vbeln = vbap~vbeln AND
       vbep~posnr = vbap~posnr
  FOR ALL ENTRIES IN it_vbep_2
  WHERE vbep~vbeln = it_vbep_2-vbeln
    AND vbep~posnr = it_vbep_2-posnr
    AND vbep~etenr = it_vbep_2-etenr.
GET RUN TIME FIELD stop.
t = stop - start.
WRITE: / t.
Be aware that even 10,000 entries will take some time.
Another question: how did you get the 1,000,000 records into it_massive_vbep? They were not typed in, but somehow selected.
Change that FOR ALL ENTRIES into a JOIN as well and it will be much faster.
Siegfried -
Huge number of records got failed in psa
Hi,
I got an error in the PSA: around 1,000 records failed for the ZQCUSTNM InfoObject due to an alpha conversion problem. I know I can manually edit them one by one, but that will take a long time, so I want a quick solution; if there is anything, please tell me.
thanks
reddy
Go to the transfer rules for this data load's InfoSource.
Go to the 'transfer rules' tab there.
For this InfoObject, scroll to the right and tick the last checkbox, 'apply conversion automatically'. This will automatically apply alpha conversion to all values of that field which come in the source records.
Save the transfer rules and reactivate them.
Then reload.
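For reference, the ALPHA input conversion being switched on here essentially zero-pads purely numeric values to the field length and left-justifies everything else; a rough sketch (the 10-character length is illustrative):

```python
def alpha_input(value: str, length: int = 10) -> str:
    """Rough equivalent of ABAP's ALPHA input conversion: numeric values
    are right-aligned and zero-padded, other values are left-aligned."""
    value = value.strip()
    if value.isdigit():
        return value.rjust(length, "0")
    return value.ljust(length)

print(alpha_input("4711"))    # '0000004711'
print(alpha_input("CUST01"))  # 'CUST01    '
```

Records that were loaded without this normalization fail the InfoObject's check, which is the alpha conversion error seen in the PSA.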
cheers,
Vishvesh -
Rows to column for huge number of records
My database version is 10gR2.
I want to transpose rows to columns. I have seen examples for a small number of records, but how can it be done if there are more than 1000 records in a table?
Here is the sample data that I would like to change to columns:
SQL> /
NE RAISED CLEARED RTTS_NO RING
10100000-1LU 22-FEB-2011 22:01:04/28-FEB-20 22-FEB-2011 22:12:27/28-FEB-20 SR-10/ ER-16/ CR-25/ CR-29/ CR-26/ RIDM-1/ NER5/ CR-31/ RiC600-1
11 01:25:22/ 11 02:40:06/
10100000-2LU 01-FEB-2011 12:15:58/06-FEB-20 05-FEB-2011 10:05:48/06-FEB-20 RIMESH/ RiC342-1/ 101/10R#10/ RiC558-1/ RiC608-1
11 07:00:53/18-FEB-2011 22:04: 11 10:49:18/18-FEB-2011 22:15:
56/19-FEB-2011 10:36:12/19-FEB 17/19-FEB-2011 10:41:35/19-FEB
-2011 11:03:13/19-FEB-2011 11: -2011 11:08:18/19-FEB-2011 11:
16:14/28-FEB-2011 01:25:22/ 21:35/28-FEB-2011 02:40:13/
10100000-3LU 19-FEB-2011 20:18:31/22-FEB-20 19-FEB-2011 20:19:32/22-FEB-20 INR-1/ ISR-1
11 21:37:32/22-FEB-2011 22:01: 11 21:48:06/22-FEB-2011 22:12:
35/22-FEB-2011 22:20:03/28-FEB 05/22-FEB-2011 22:25:14/28-FEB
-2011 01:25:23/ -2011 02:40:20/
10100000/10MU 06-FEB-2011 07:00:23/19-FEB-20 06-FEB-2011 10:47:13/19-FEB-20 101/IR#10
11 11:01:50/19-FEB-2011 11:17: 11 11:07:33/19-FEB-2011 11:21:
58/28-FEB-2011 02:39:11/01-FEB 30/28-FEB-2011 04:10:56/05-FEB
-2011 12:16:21/18-FEB-2011 22: -2011 10:06:10/18-FEB-2011 22:
03:27/ 13:50/
10100000/11MU 01-FEB-2011 08:48:45/22-FEB-20 02-FEB-2011 13:15:17/22-FEB-20 1456129/ 101IR11 RIMESH
11 21:59:28/22-FEB-2011 22:21: 11 22:08:49/22-FEB-2011 22:24:
52/01-FEB-2011 08:35:46/ 27/01-FEB-2011 08:38:42/
10100000/12MU 22-FEB-2011 21:35:34/22-FEB-20 22-FEB-2011 21:45:00/22-FEB-20 101IR12 KuSMW4-1
11 22:00:04/22-FEB-2011 22:21: 11 22:08:21/22-FEB-2011 22:22:
23/28-FEB-2011 02:39:53/ 26/28-FEB-2011 02:41:07/
10100000/13MU 22-FEB-2011 21:35:54/22-FEB-20 22-FEB-2011 21:42:58/22-FEB-20 LD MESH
11 22:21:55/22-FEB-2011 22:00: 11 22:24:52/22-FEB-2011 22:10:
Could you do something like this?
with t as (select '10100000-1LU' NE, '22-FEB-2011 22:01:04/28-FEB-2011 01:25:22/' raised , '22-FEB-2011 22:12:27/28-FEB-2011 02:40:06/' cleared from dual union
select '10100000-2LU', '01-FEB-2011 12:15:58/06-FEB-2011 07:00:53/18-FEB-2011 22:04:56/19-FEB-2011 10:36:12/19-FEB-2011 11:03:13/19-FEB-2011 11:16:14/28-FEB-2011 01:25:22/',
'05-FEB-2011 10:05:48/06-FEB-2011 10:49:18/18-FEB-2011 22:15:17/19-FEB-2011 10:41:35/19-FEB-2011 11:08:18/19-FEB-2011 11:21:35/28-FEB-2011 02:40:13/' from dual)
select * from(
select NE, regexp_substr( raised,'[^/]+',1,1) raised, regexp_substr( cleared,'[^/]+',1,1) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,2) , regexp_substr( cleared,'[^/]+',1,2) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,3) , regexp_substr( cleared,'[^/]+',1,3) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,4) , regexp_substr( cleared,'[^/]+',1,4) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,5) , regexp_substr( cleared,'[^/]+',1,5) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,6) , regexp_substr( cleared,'[^/]+',1,6) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,7) , regexp_substr( cleared,'[^/]+',1,7) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,8) , regexp_substr( cleared,'[^/]+',1,8) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,9) , regexp_substr( cleared,'[^/]+',1,9) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,10) , regexp_substr( cleared,'[^/]+',1,10) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,11) , regexp_substr( cleared,'[^/]+',1,11) cleared from t
) where nvl(raised,cleared) is not null
order by ne
NE RAISED CLEARED
10100000-1LU 28-FEB-2011 01:25:22 28-FEB-2011 02:40:06
10100000-1LU 22-FEB-2011 22:01:04 22-FEB-2011 22:12:27
10100000-2LU 28-FEB-2011 01:25:22 28-FEB-2011 02:40:13
10100000-2LU 19-FEB-2011 10:36:12 19-FEB-2011 10:41:35
10100000-2LU 19-FEB-2011 11:03:13 19-FEB-2011 11:08:18
10100000-2LU 19-FEB-2011 11:16:14 19-FEB-2011 11:21:35
10100000-2LU 06-FEB-2011 07:00:53 06-FEB-2011 10:49:18
10100000-2LU 01-FEB-2011 12:15:58 05-FEB-2011 10:05:48
10100000-2LU 18-FEB-2011 22:04:56 18-FEB-2011 22:15:17
You should be able to do it without all those unions using a CONNECT BY, but I can't quite get it to work.
The following doesn't work, but maybe someone can answer:
select NE, regexp_substr( raised,'[^/]+',1,level) raised, regexp_substr( cleared,'[^/]+',1,level) cleared from t
connect by prior NE = NE and regexp_substr( raised,'[^/]+',1,level) = prior regexp_substr( raised,'[^/]+',1,level + 1)
Edited by: pollywog on Mar 29, 2011 9:38 AM
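Whichever SQL variant is used (the UNIONs, CONNECT BY LEVEL, or the MODEL clause below), the underlying operation is just pairing the n-th slash-separated token of `raised` with the n-th token of `cleared`; sketched outside SQL:

```python
from itertools import zip_longest

def split_events(ne: str, raised: str, cleared: str):
    """Expand slash-separated raised/cleared lists into one row per pair,
    mirroring what regexp_substr(col, '[^/]+', 1, n) extracts."""
    r = [t for t in raised.split("/") if t.strip()]
    c = [t for t in cleared.split("/") if t.strip()]
    return [(ne, a, b) for a, b in zip_longest(r, c)]

rows = split_events(
    "10100000-1LU",
    "22-FEB-2011 22:01:04/28-FEB-2011 01:25:22/",
    "22-FEB-2011 22:12:27/28-FEB-2011 02:40:06/",
)
print(len(rows))  # 2
print(rows[0])    # ('10100000-1LU', '22-FEB-2011 22:01:04', '22-FEB-2011 22:12:27')
```

zip_longest keeps a row even when one list has more tokens than the other, matching the `nvl(raised, cleared) is not null` filter above.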
here it is with the model clause which gets rid of all the unions.
WITH t
AS (SELECT '10100000-1LU' NE,
'22-FEB-2011 22:01:04/28-FEB-2011 01:25:22/' raised,
'22-FEB-2011 22:12:27/28-FEB-2011 02:40:06/' cleared
FROM DUAL
UNION
SELECT '10100000-2LU',
'01-FEB-2011 12:15:58/06-FEB-2011 07:00:53/18-FEB-2011 22:04:56/19-FEB-2011 10:36:12/19-FEB-2011 11:03:13/19-FEB-2011 11:16:14/28-FEB-2011 01:25:22/',
'05-FEB-2011 10:05:48/06-FEB-2011 10:49:18/18-FEB-2011 22:15:17/19-FEB-2011 10:41:35/19-FEB-2011 11:08:18/19-FEB-2011 11:21:35/28-FEB-2011 02:40:13/'
FROM DUAL)
SELECT *
FROM (SELECT NE, raised, cleared
FROM t
MODEL RETURN UPDATED ROWS
PARTITION BY (NE)
DIMENSION BY (0 d)
MEASURES (raised, cleared)
RULES
ITERATE (1000) UNTIL raised[ITERATION_NUMBER] IS NULL
(raised [ITERATION_NUMBER + 1] =
REGEXP_SUBSTR (raised[0],
'[^/]+',
1,
ITERATION_NUMBER + 1),
cleared [ITERATION_NUMBER + 1] =
REGEXP_SUBSTR (cleared[0],
'[^/]+',
1,
ITERATION_NUMBER + 1)))
WHERE raised IS NOT NULL
ORDER BY NE
Edited by: pollywog on Mar 29, 2011 10:34 AM -
Max number of records in MDM workflow
Hi All
Need urgent recommendations.
We have a scenario where we need to launch a workflow upon import of records. The challenge is that the source file contains 80k records and it is always a FULL load (on a daily basis) in MDM. Do we have any limitation in MDM workflow on the max number of records? Will there be significant performance issues if we have a workflow with such a huge number of records in MDM?
Please share your inputs.
Thanks, Ravi
Hi Ravi,
Yes, it can cause performance overhead, and you will also have to optimise the MDIS parameters for this.
Regarding the WF, I think normally it is 100 records per WF. I think you can set a particular threshold of records after which the WF will autolaunch.
It is difficult to say what the optimum number of records per WF should be, so I would suggest a test run including 100/1000 records per WF. The Import Manager guide says there are several performance implications of importing records within a WF, so it is better to try different ranges.
Thanks,
Ravi -
Hi All,
I am asked to create a file-to-IDoc scenario in PI. The problem is that the file will have around 200,000 records (96 MB). That means I have to get the 200,000 records from the file and create 200,000 PO IDocs at once. I know this is not practical. Does anyone have this experience? How did you solve the problem?
Thanks a lot!
Charles
There are a few ways to implement this.
Though the file has a huge number of records, you can tweak or control the number of IDocs created on the receiver side.
Refer to Michal's blog on editing the occurrence of the target IDoc structure to send the number of IDocs as per your need.
https://wiki.sdn.sap.com/wiki/display/XI/File%20to%20Multiple%20IDOC%20Splitting%20without%20BPM
If your sender side is a flat file, then in the content conversion set the parameter 'Recordsets per Message' to 100 or so, so that you create 100 IDocs each time from the sender message structure. Refer to the SDN forum for FCC parameters and sender FCC adapter scenarios.
Refer this thread
Recordsets per Message in File adapter -
SQ02 InfoSet Get Count of Total Number of Records that will be processed
I am developing a query (SQ01) and am currently working on building an InfoSet (SQ02).
The InfoSet was set up using a 'Direct read of table'. Next, I'm adding various fields and then going to Extras and trying to define some code to get the total number of records that my query will be processing. I'm not sure whether SAP pulls a filtered result set into a temporary table by default (if so, how could I reference it?) or just pulls in a row at a time in the record-processing code, but my question is about getting a count of how many records are returned in my result set PRIOR TO going through all of the records.
Overall, I'd like to be able to have a field that says Record X of Y. I can get the X part for each line, but cannot get Y until the very end. Any help or ideas would be much appreciated. I've looked around a lot, but haven't found anything like what I'm requesting.
Query Output would look something like:
Record X1 of Y | Title1 | Amount1
Record X2 of Y | Title2 | Amount2
Hi Subin,
I have tossed this idea around in my head, but am trying to figure out how to get the values and selection options from the query screen to incorporate into my SELECT statement within my InfoSet. The problem I'm running into is that the user enters a group of account numbers and an ending date that have to be passed from the SQ01 query screen to the SQ02 InfoSet code. I've looked around for examples of pulling data from the query screen, but have been unsuccessful so far. Say, for instance, the user enters 15 specific accounts and wants any records that were submitted prior to the end of the current month and after the start of the business year.
On my query screen they would enter something like:
Business Year: 2011
Reporting End Date: <= 31.03.2011 (Which equates to all records between 01.01.2011 AND 31.03.2011)
Account #s: 0000, 0001, 0003, 0005, ..., 9999 (These are a variable amount of accounts entered and could include options such as not equal to or even between ranges etc)
In my START-OF-SELECTION code I would need a SELECT like:
NOTE: This is just pseudocode, not checked for syntax here
SELECT count(*)
FROM TABLE
WHERE BusinessYear = '2011' AND
RecordDate Between 01.01.2011 AND 31.03.2011 AND
Accounts IN (0000, 0001, 0003, 0005, ..., 9999).
So in this SELECT I need to reference the values from SQ01. How would I reference the account numbers, and whether or not the user has entered an account number with 'not equal' on it, etc.? This SELECT statement would have to be built on the fly, since it is not guaranteed to be the same for each run.
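Independent of how the dynamic WHERE clause is built, the "Record X of Y" output itself is a two-pass job: count with the same conditions first, then label each row on the second pass. A sketch with hypothetical data standing in for the query result:

```python
def number_rows(result_set):
    """Two-pass 'Record X of Y': count first, then label each row."""
    total = len(result_set)                 # pass 1: SELECT COUNT(*) equivalent
    labelled = []
    for i, row in enumerate(result_set, start=1):  # pass 2: detail rows
        labelled.append((f"Record {i} of {total}",) + row)
    return labelled

rows = [("Title1", 100), ("Title2", 200)]
print(number_rows(rows)[0])  # ('Record 1 of 2', 'Title1', 100)
```

The key point is that both passes must use exactly the same selection conditions, or X can exceed Y.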
Thanks,
Mark -
WLI problem when processing a high number of records - SQLException: Data e
Hi
I'm having some trouble with a process in WLI when processing a high number of records from a table. I'm using WLI 8.1.6 and Oracle 9.2.
The exception I'm getting is:
javax.ejb.EJBException: nested exception is: java.sql.SQLException: Data exception -- Data -- Input Data length 1.050.060 is greater from the length 1.048.576 specified in create table.
I think the problem is not with the table because it's pretty simple. I'll describe the steps in the JPD below.
1) A DBControl checks to see if the table has records with a specific value in a column.
select IND_PROCESSADO from VW_EAI_INET_ESTOQUE where IND_PROCESSADO = 'N'
2) If there are one or more records, we update the column to another value (in another DBControl):
update VW_EAI_INET_ESTOQUE set IND_PROCESSADO = 'E' where IND_PROCESSADO = 'N'
3) We then start a transaction with following steps:
3.1) A DBControl queries for records in a specific condition
select
COD_DEPOSITO AS codDeposito,
COD_SKU_INTERNO AS codSkuInterno,
QTD_ESTOQUE AS qtdEstoque,
IND_ESTOQUE_VIRTUAL AS indEstoqueVirtual,
IND_PRE_VENDA AS indPreVenda,
QTD_DIAS_ENTREGA AS qtdDiasEntrega,
DAT_EXPEDICAO_PRE_VENDA AS dataExpedicaoPreVenda,
DAT_INICIO AS dataInicio,
DAT_FIM AS dataFim,
IND_PROCESSADO AS indProcessado
from VW_EAI_INET_ESTOQUE
where IND_PROCESSADO = 'E'
3.2) We transform all the records found to and XML message (Xquery)
3.3) We again update the same column as in #2, to another value:
update VW_EAI_INET_ESTOQUE set IND_PROCESSADO = 'S' where IND_PROCESSADO = 'E'.
4) The process ends.
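The flag-driven flow above (N → E → S) can be sketched as a small state machine; claiming the rows in bounded batches (a suggestion, not part of the original JPD) is one way to keep any single payload below the store's size limit:

```python
def process_stock(rows, batch_size=5000):
    """Drive rows through the N -> E -> S status flow in bounded batches."""
    processed = 0
    while True:
        # Steps 1+2: claim a bounded batch of new rows (N -> E).
        batch = [r for r in rows if r["IND_PROCESSADO"] == "N"][:batch_size]
        if not batch:
            break
        for r in batch:
            r["IND_PROCESSADO"] = "E"
        # Step 3.2: transform the claimed rows (the XML build would go here).
        processed += len(batch)
        # Step 3.3: mark the batch done (E -> S).
        for r in batch:
            r["IND_PROCESSADO"] = "S"
    return processed

rows = [{"IND_PROCESSADO": "N"} for _ in range(12000)]
print(process_stock(rows))  # 12000
```

With 25,000 rows in one transaction, the serialized process state apparently exceeds the 1,048,576-byte column; batching bounds that state per transaction.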
When the table has few records under the specified condition, the process works fine. But when we test it with 25,000 records, the process fails with the exception below, sometimes in step 3.1 and other times in step 3.3.
Can someone help me please?
Exception:
<A message was unable to be delivered from a WLW Message Queue.
Attempting to deliver the onAsyncFailure event>
<23/07/2007 14h33min22s BRT> <Error> <EJB> <BEA-010026> <Exception occurred during commit of transaction
Xid=BEA1-00424A48977240214FD8(12106298),Status=Rolled back. [Reason=javax.ejb.EJBException: nested
exception is: java.sql.SQLException: Data exception -- Data -- Input Data length 1.050.060 is greater from the length 1.048.576 specified in create table.],numRepliesOwedMe=0,numRepliesOwedOthers= 0,seconds since begin=118,seconds left=59,XAServerResourceInfo[JMS_cgJMSStore]=(ServerResourceInfo[JMS_cgJMSStore]=(state=rolledback,assigned=cgServer),xar=JMS_cgJMSStore,re-Registered =
false),XAServ erResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=
(state=rolledback,assigned=cgServer),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl@d38a58,re-Registered =false),XAServerResourceInfo[CPCasaeVideoWISDesenv]=
(ServerResourceInfo[CPCasaeVideoWISDesenv]=(state=rolledback,assigned=cgServer),xar=CPCasaeVideoWISDesenv,re-Registered = false),SCInfo[integrationCV+cgServer]=(state=rolledback),
properties=({weblogic.jdbc=t3://10.15.81.48:7001, START_AND_END_THREAD_EQUAL=false}),
local properties=({weblogic.jdbc.jta.CPCasaeVideoWISDesenv=weblogic.jdbc.wrapper.TxInfo@9c7831, modifiedListeners=[weblogic.ejb20.internal.TxManager$TxListener@9c2dc7]}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=
(CoordinatorURL=cgServer+10.15.81.48:7001+integrationCV+t3+,
XAResources={JMS_FileStore, weblogic.jdbc.wrapper.JTSXAResourceImpl, JMS_cgJMSStore, CPCasaeVideoWISDesenv},NonXAResources={})],CoordinatorURL=cgServer+10.15.81.48:7001+integrationCV+t3+): javax.ejb.EJBException: nested exception is: java.sql.SQLException: Data exception -- Data -- Input Data length 1.050.060 is greater from the length 1.048.576 specified in create table.
at com.bea.wlw.runtime.core.bean.BMPContainerBean.ejbStore(BMPContainerBean.java:1844)
at com.bea.wli.bpm.runtime.ProcessContainerBean.ejbStore(ProcessContainerBean.java:227)
at com.bea.wli.bpm.runtime.ProcessContainerBean.ejbStore(ProcessContainerBean.java:197)
at com.bea.wlwgen.PersistentContainer_7e2d44_Impl.ejbStore(PersistentContainer_7e2d44_Impl.java:149)
at weblogic.ejb20.manager.ExclusiveEntityManager.beforeCompletion(ExclusiveEntityManager.java:593)
at weblogic.ejb20.internal.TxManager$TxListener.beforeCompletion(TxManager.java:744)
at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1069)
at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:118)
at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1202)
at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:2007)
at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:257)
at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:228)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:430)
at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:333)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:298)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2698)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2610)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
Caused by: javax.ejb.EJBException: nested exception is: java.sql.SQLException: Data exception -- Data -- Input Data length 1.050.060 is greater from the length 1.048.576 specified in create table.
at com.bea.wlw.runtime.core.bean.BMPContainerBean.doUpdate(BMPContainerBean.java:2021)
at com.bea.wlw.runtime.core.bean.BMPContainerBean.ejbStore(BMPContainerBean.java:1828)
... 18 more
Hi Lucas,
Following is the information regarding the issue you are getting and might help you to resolve the issue.
ADAPT00519195- Too many selected values (LOV0001) - Select Query Result operand
For XIR2 Fixed Details-Rejected as this is by design
I have found that this is a limitation by design and when the values exceed 18000 we get this error in BO.
There is no fix for this issue, as it's by design. The product has always behaved in this manner.
Also an ER (ADAPT00754295) for this issue has already been raised.
Unfortunately, we cannot confirm if and when this Enhancement Request will be taken on by the developers.
A dedicated team reviews all ERs on a regular basis for technical and commercial feasibility and whether or not the functionality is consistent with our product direction. Unfortunately we cannot presently advise on a timeframe for the inclusion of any ER to our product suite.
The product group will then review the request and determine whether or not the functionality/feature will be included in a future release.
Currently I can only suggest that you check the release notes in the ReadMe documents of future service packs, as it will be listed there once the ER has been included
The only workaround which I can suggest for now is:
Workaround 1:
Test the issue by keeping the value of the MAX_Inlist_values parameter at 256 at the designer level.
Workaround 2:
The best solution is to combine 'n' queries via a UNION. You should first highlight the first 99 or so entries from the LOV list box and then combine this query with a second one that selects the remaining LOV choices.
Using UNION between queries; which is the only possible workaround
Please do let me know if you have any queries related to the same.
Regards,
Sarbhjeet Kaur -
Huge number of unprocessed logging table records found
Hello Experts,
I am facing an issue where a huge number of unprocessed logging table records were found in the SLT system for one table. I have checked all settings and error logs but found no evidence of what is causing the unprocessed records. In the HANA system it also shows replicated status. Could you please suggest something other than replicating the same table again, as that option is not possible at the moment.
Hi Nilesh,
What are the performance impacts on the SAP ECC system when multiple large SAP tables like BSEG are replicated at the same time? Is there a guideline for a specific volume or kind of tables?
There is no explicit guideline, since aspects such as server performance and the change rate of the tables are also relevant. As a rule of thumb, one dedicated replication job per large table is recommended.
from SLT
How to enable parallel replication before DMIS 2011 SP6 do not ignore its for SP06 == go through
How to improve the initial load
Regards,
V Srinivasan -
Slow due to huge number of tables
Hi,
Unfortunately we have a really huge number of tables in the (Advantage Server) database.
About 18,000+ tables.
Firing the ActiveX preview through RDC, or just running a preview in the designer, slows down to a crawl.
Any hints? (Besides getting rid of that many tables.)
Thanks
Oskar
Hi Oskar
The performance of a report is related to:
External factors:
1. The amount of time the database server takes to process the SQL query.
(Crystal Reports sends the SQL query to the database; the database processes it and returns the data set to Crystal Reports.)
2. Network traffic.
3. Local computer processor speed.
(When Crystal Reports receives the data set, it generates a temp file to further filter the data when necessary, as well as to group, sort, process formulas, ...)
4. The number of records returned.
(If a SQL query returns a large number of records, it will take longer to format and display than if it were returning a smaller data set.)
Report design:
1. Where is the Record Selection evaluated.
Ensure your Record Selection Formula can be translated into SQL, so the data can be filtered down on the server; otherwise the filtering will be done in a temp file on the local machine, which will be much slower.
Many functions cannot be translated into SQL because there may be no standard SQL equivalent for them.
For example, a control structure like IF THEN ELSE cannot be translated into SQL; it will always be evaluated in Crystal Reports. If you use an IF THEN ELSE on a parameter, it will convert the result of the condition to SQL, but as soon as it uses database fields in the condition it will not be translated into SQL.
2. How many subreports the report contains and in which sections they are located.
Minimise the number of subreports used, or avoid using subreports if possible, because
subreports are reports within a report: if you have a subreport in a details section and the report returns 100 records, the subreport will be evaluated 100 times, so it will query the database 100 times. It is often the biggest factor in why a report takes a long time to preview.
3. How many records will be returned to the report.
A large number of records will slow down the preview of the report. Ensure you only return the necessary data to the report by creating a Record Selection Formula, or by basing your report
off a Stored Procedure or a Command Object that only returns the desired data set.
4. Whether you use the special fields "Page N of M" or "TotalPageCount".
When the special field "Page N of M" or "TotalPageCount" is used on a report, every page of the report has to be generated before the first page can be displayed, therefore it will take more time to display the first page of the report.
If you want to improve the speed of a report, remove the special field "Page N of M" or "Total Page Count", or any formula that uses the function "TotalPageCount". When those aren't used, the report only formats the page requested when you view it; it won't format the whole report.
5. Link tables on indexed fields whenever possible.
6. Remove unused tables, unused formulas, unused running totals from the report.
7. Suppress unnecessary sections.
8. For summaries, use conditional formulas instead of running totals when possible.
9. Whenever possible, limit records through selection, not suppression.
10. Use SQL expressions to convert fields to be used in record selection instead of using formula functions.
For example, if you need to concatenate 2 fields together, instead of doing it in a formula, you can create a SQL Expression Field. It will concatenate the fields on the database server, instead of doing it in Crystal Reports.
SQL Expression Fields are added to the SELECT clause of the SQL query sent to the database.
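As a rough sketch of the idea (the table and column names are made up, and SQLite stands in for whatever database the report uses): both the concatenation and the filter run inside the database engine, which is what a SQL Expression Field achieves.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (first_name TEXT, last_name TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?)",
                 [("Jane", "Doe"), ("John", "Smith")])

# The concatenation happens in the SELECT and WHERE clauses, i.e. on the
# database server, mirroring what a SQL Expression Field adds to the query.
rows = conn.execute(
    "SELECT first_name || ' ' || last_name AS full_name "
    "FROM customer WHERE first_name || ' ' || last_name = ?",
    ("Jane Doe",)).fetchall()
print(rows)  # [('Jane Doe',)]
```

Only the matching row crosses the wire; the client never has to fetch and filter the whole table.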
11. Using one Command as the data source can be faster if you return only the desired data set.
It can be faster if the SQL query you write returns only the desired data.
12. Perform grouping on the server.
This is only relevant if you need to return only the summary to your report, not the details. It will be faster because less data is returned to the report.
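A minimal sketch of the difference (hypothetical table, SQLite as a stand-in for the report's database): the grouped query ships one summary row per region instead of every detail row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 100.0), ("North", 50.0), ("South", 75.0)])

# Detail query: every row is returned to the report.
detail = conn.execute("SELECT region, amount FROM sales").fetchall()

# Server-side grouping: only one summary row per region is returned.
summary = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(len(detail), summary)  # 3 [('North', 150.0), ('South', 75.0)]
```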
Regards
Girish Bhosale
-
How to find total number of records in a BDoc?
Dear all,
I have replicated about 1088 BP records from ISU into the CRM system with block size 100. Technically, in SMW01, each successfully processed BDoc should contain 100 records (corresponding to the block size of 100). But due to some failed BDocs, not all "successful" BDocs will have 100 records each; some may have only 1 record inside... or 30... or 88, for example. So, may I know how to find, or is there a report I can look into that clearly shows, the total number of records for each of the successfully processed green-status BDocs?
Please help and points will be rewarded!!
Thank You
Best Regards,
CK
I am just showing how to get the rowcount along with the cursor, in case the program has a significant gap between verifying the count(*) and opening the cursor.
Justin actually covered this: he said Oracle would have to spend some resources to build this functionality. As it is not often required, it doesn't make much sense as a built-in feature. However, if we must see the rowcount when we open the cursor, here is a way, though it is a little bit expensive.
SQL> create table emp_crap as select * from emp where 1 = 2;
Table created.
SQL> declare
  v_cnt number := 0;
  zero_rows exception;
begin
  -- order by rownum descending so the first fetched row carries the total count
  for rec in (select * from (select rownum rn, e.ename from emp_crap e) order by 1 desc)
  loop
    if v_cnt = 0 then
      v_cnt := rec.rn;
    end if;
  end loop;
  if v_cnt = 0 then
    raise zero_rows;
  end if;
exception
  when zero_rows then
    dbms_output.put_line('No rows');
end;
/
No rows
PL/SQL procedure successfully completed.
-- Now, let us use the table, which has the data
SQL> declare
  v_cnt number := 0;
  zero_rows exception;
begin
  -- same trick against a table that actually has data
  for rec in (select * from
              (select rownum rn, e.ename from emp e)
              order by 1 desc)
  loop
    if v_cnt = 0 then
      v_cnt := rec.rn;
      dbms_output.put_line(v_cnt);
    end if;
  end loop;
  if v_cnt = 0 then
    raise zero_rows;
  end if;
exception
  when zero_rows then
    dbms_output.put_line('No rows');
end;
/
14
PL/SQL procedure successfully completed.
Thx,
Sri
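As a side note on the technique above: on databases that support analytic (window) functions, Oracle among them, `COUNT(*) OVER ()` attaches the total row count to every row of the cursor, so the first fetch already tells you how many rows are coming, without the ROWNUM-reversal trick. A small sketch using Python with SQLite as a stand-in (window functions require SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT)")
conn.executemany("INSERT INTO emp VALUES (?)",
                 [("SMITH",), ("ALLEN",), ("WARD",)])

# Each fetched row carries the total row count in its second column,
# so the count is available as soon as the first row arrives.
rows = conn.execute(
    "SELECT ename, COUNT(*) OVER () AS total FROM emp").fetchall()
print(rows[0][1])  # 3
```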