Post Goods Issue (VL06O) taking a long time (approximately 30 to 45 minutes)
Dear Sir,
While doing post goods issue against a delivery document, the system takes a very long time. This issue is extremely urgent; can anyone provide a suitable solution?
We create approximately 160 sales orders/deliveries every day and post goods issue against them using transaction code VL06O; the system takes a long time for the PGI.
Kindly provide a suitable solution.
Regards,
Vijay Sanguri
Hi
See Note 113048 (Collective note on delivery monitor) and search for related notes on performance.
Run a trace with transaction ST05 (ask a Basis consultant for help) and look for the bottleneck. Check possible sources of performance problems in user exits, enhancements, and so on.
I hope this helps you.
Regards
Eduardo
Similar Messages
-
PGI taking a long time (approximately 30 to 45 minutes)
Hello Vijay,
I've just found SAP Note 1459217, which refers directly to your issue. Please have a look at it (the relevant note text is below).
In case you have questions, let me know!
Best Regards,
Marcel Mizt
Symptom
Long runtimes occur when using transaction VL06G or VL06O in order to post goods issue (PGI) deliveries.
Poor response times occur when using transaction VL06G or VL06O in order to PGI deliveries.
Poor performance occurs with transaction VL06G / VL06O.
Performance issues occur with transaction VL06G / VL06O.
Environment
SAP R/3 All Release Levels
Reproducing the Issue
Execute transaction VL06O.
Choose "For Goods Issue" (transaction VL06G).
Long runtimes occur.
Cause
There are too many documents in the database that need to be accessed.
The customising settings in the activity "set updating of partner index" are not activated.
(IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index).
Resolution
If there are too many documents in the database to access, archiving them improves the performance of VL06G.
The customising settings in the activity "Set Updating of Partner Index" can be adjusted to improve the performance of VL06G (IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index). In this activity, check the entries for transaction group 6 (= delivery). The effect of these settings is that the table VLKPA (SD index: deliveries by partner functions) is only filled with entries for the partner functions listed (for example, WE = ship-to party). In transaction VL06O the system checks this customising in order to decide whether to access table VBPA or VLKPA.
If you change the settings of the activity "Set Updating of Partner Index", run the report RVV05IVB to reorganize the index, selecting only the partner index in the delivery section of the screen (see Note 128947).
Flag the checkbox "Display forwarding agent" (available in the display options section of the selection screen). When the list is generated, use the "Set filter" functionality (menu path: Edit -> Set Filter) to select the deliveries corresponding to one forwarding agent. -
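The partner-index mechanism described in the note boils down to this: only the partner functions you configure get index entries, so a lookup by ship-to party hits a small index instead of scanning every delivery partner record. A minimal Python sketch of the idea (the table and function names are stand-ins for VLKPA/VBPA, not SAP APIs):

```python
# Sketch: a partner index (like VLKPA) maintained only for configured
# partner functions; lookups fall back to a full scan (like reading
# VBPA) when the function is not indexed.
INDEXED_FUNCTIONS = {"WE"}  # e.g. only ship-to party, per customising

# "VBPA": every partner of every delivery
vbpa = [
    ("80001", "WE", "CUST1"), ("80001", "SP", "FWD1"),
    ("80002", "WE", "CUST2"), ("80002", "SP", "FWD1"),
    ("80003", "WE", "CUST1"),
]

# "VLKPA": index filled only for the configured functions
vlkpa = {}
for delivery, func, partner in vbpa:
    if func in INDEXED_FUNCTIONS:
        vlkpa.setdefault((func, partner), []).append(delivery)

def deliveries_for(func, partner):
    if func in INDEXED_FUNCTIONS:           # fast indexed access
        return vlkpa.get((func, partner), [])
    return [d for d, f, p in vbpa           # slow full scan
            if f == func and p == partner]

print(deliveries_for("WE", "CUST1"))  # prints ['80001', '80003']
print(deliveries_for("SP", "FWD1"))   # prints ['80001', '80002']
```

Indexing fewer partner functions keeps the index small and fast to maintain, which is exactly why the customising step restricts the entries for transaction group 6.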
Hi
I'm doing a delivery using VL01N, but at post goods issue it shows a runtime error. The error message details are given below.
Information on where terminated
Termination occurred in the ABAP program "SAPLMBWL" - in
"MB_POST_GOODS_MOVEMENT".
The main program was "SAPMV50A ".
In the source code you have the termination point in line 59
of the (Include) program "LMBWLU21".
Source Code Extract
Line SourceCde
29 * when a goods movement for an inbound or outbound delivery is posted
30 * directly from VL31N/ VL01N, XBLNR is not yet known when we call
31 * CKMV_AC_DOCUMENT_CREATE, but the number is supposed to be stored in
32 * BKPF as well. There is no other way to forward XBLNR to FI as not
33 * every document is posted by MB_CREATE -> a new function module in
34 * MBWL for transferring the information, called by FI, meant to load
35 * the complete function group for all MBxx postings when this isn't
36 * required (Performance). Would be the better way to transport the
37 * information after switching off MBxx in later release.
38 * corresponding IMPORT ... FROM MEMORY ... can be found in
39 * AC_DOCUMENT_POST (FORM FI_DOCUMENT_PREPARE (LFACIF5D))
40 l_mem_id = 'MKPF-XBLNR'. " 641365
41 EXPORT xblnr = xblnr_sd TO MEMORY ID l_mem_id. " 641365
42 ENDIF.
43 IF xmkpf-xabln IS INITIAL. "note 434093
44 CALL FUNCTION 'MB_XAB_NUMBER_GET'. "note 434093
45 ENDIF. "note 434093
46
47 ENHANCEMENT-POINT MB_POST_GOODS_MOVEMENTS_01 SPOTS ES_SAPLMBWL STATIC.
48
49 ENHANCEMENT-POINT MB_POST_GOODS_MOVEMENTS_02 SPOTS ES_SAPLMBWL.
50 CALL FUNCTION 'MB_CREATE_MATERIAL_DOCUMENT_UT'
51 EXCEPTIONS
52 error_message = 4.
53 * As soon as we have started to put things into UPDATE TASK, we must
54 * ensure that errors definitely terminate the transaction.
55 * MESSAGE A is not sufficient because it can be catched from
56 * external callers which COMMIT WORK afterwards, resulting in
57 * incomplete updates. Read note 385830 for the full story.
58 IF NOT sy-subrc IS INITIAL.
>>>>> MESSAGE ID sy-msgid TYPE x NUMBER sy-msgno WITH "385830
60 sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
61 * MESSAGE A263.
62 ENDIF.
Please tell me the configuration steps for post goods issue in SD, if applicable.
Hi Lakshmipati,
I have maintained movement type 601 in OMJJ. I have also added the field cost center there (before, it was not available).
I checked the G/L account in FS00; it already contains field status group G004, and the checkbox "Post automatically" is also activated. The cost element has also been assigned to a cost center. After assigning the cost center to the cost element, the error changed to: field selection for movement type 601 / account 300021 differs for cost center.
But I noticed one thing: if I change the field status group to G006, that error does not appear and I get a different error:
valuation area xxx2 (plant) not yet productive with material ledger. I have already posted this error but have not yet received a correct answer.
Kindly provide a solution for this.
Regards
Sabera -
CDP performance issue - taking more time to fetch data
Hi,
I'm working on Stellent 7.5.1.
For one of the portlets in the portal, it is taking a long time to fetch data. Can someone please help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
public void getManager(final HashMap binderMap)
    throws VistaInvalidInputException, VistaDataNotFoundException,
        DataException, ServiceException, VistaTemplateException {
    String collectionID =
        getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
    long firstStartTime = System.currentTimeMillis();
    HashMap resultSetMap = null;
    String isNonRecursive = getStringLocal(VistaFolderConstants
        .ISNONRECURSIVE_KEY);
    if (isNonRecursive != null
            && isNonRecursive.equalsIgnoreCase(
                VistaContentFetchHelperConstants.STRING_TRUE)) {
        VistaLibraryContentFetchManager libraryContentFetchManager =
            new VistaLibraryContentFetchManager(binderMap);
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        resultSetMap = libraryContentFetchManager
            .getFolderContentItems(m_workspace);
        // used to add the resultset to the binder.
        addResultSetToBinder(resultSetMap, true);
    } else {
        long startTime = System.currentTimeMillis();
        // isStandard is used to decide whether the call is for
        // Standard or Extended.
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        String isStandard = getTemplateInformation(binderMap);
        long endTimeTemplate = System.currentTimeMillis();
        binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
        long endTimebinderMap = System.currentTimeMillis();
        VistaContentFetchManager contentFetchManager =
            new VistaContentFetchManager(binderMap);
        long endTimeFetchManager = System.currentTimeMillis();
        resultSetMap = contentFetchManager
            .getAllFolderContentItems(m_workspace);
        long endTimeresultSetMap = System.currentTimeMillis();
        // used to add the resultset and the total no of content items
        // to the binder.
        addResultSetToBinder(resultSetMap, false);
        long endTime = System.currentTimeMillis();
        if (perfLogEnable.equalsIgnoreCase("true")) {
            Log.info("Time taken to execute " +
                "getTemplateInformation=" +
                (endTimeTemplate - startTime) +
                "ms binderMap=" +
                (endTimebinderMap - startTime) +
                "ms contentFetchManager=" +
                (endTimeFetchManager - startTime) +
                "ms resultSetMap=" +
                (endTimeresultSetMap - startTime) +
                "ms getManager:getAllFolderContentItems = " +
                (endTime - startTime) +
                "ms overallTime=" +
                (endTime - firstStartTime) +
                "ms folderID =" +
                collectionID);
        }
    }
}
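One thing worth noting about the logging in this method: every value is computed against the same startTime, so the logged figures are cumulative rather than per-step, which makes it harder to see which stage is slow. A small, generic Python sketch of per-step checkpoint timing (just the pattern, not the Stellent/Vista API) makes the slowest stage stand out directly:

```python
# Sketch: checkpoint timing that reports the duration of each step
# individually, by measuring against the previous checkpoint instead
# of the overall start time.
import time

class Checkpoints:
    def __init__(self):
        self.last = time.perf_counter()
        self.laps = []          # (label, elapsed milliseconds) pairs

    def mark(self, label):
        now = time.perf_counter()
        self.laps.append((label, (now - self.last) * 1000.0))
        self.last = now

cp = Checkpoints()
time.sleep(0.01)                        # stand-in for a fast step
cp.mark("getTemplateInformation")
time.sleep(0.03)                        # stand-in for the slow step
cp.mark("getAllFolderContentItems")
for label, ms in cp.laps:
    print(f"{label}: {ms:.1f} ms")
```

With per-step deltas, the step with the largest number is the bottleneck; with cumulative values you have to subtract adjacent log entries by hand.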
Edited by: 838623 on Feb 22, 2011 1:43 AM
Hi.
The SELECT statement accessing the MSEG table is slow in many cases.
To improve the performance of MSEG access:
1. Check for the relevant notes in the Service Marketplace if you are working on a CIN version.
2. Index the MSEG table.
3. Check and limit the columns in the SELECT statement.
Possible way:
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO CORRESPONDING FIELDS OF TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105'.
DELETE ITAB WHERE MBLNR EQ '5002361303'.
DELETE ITAB WHERE MBLNR EQ '5003501080'.
DELETE ITAB WHERE MBLNR EQ '5002996300'.
DELETE ITAB WHERE MBLNR EQ '5002996407'.
DELETE ITAB WHERE MBLNR EQ '5003587026'.
DELETE ITAB WHERE MBLNR EQ '5003493186'.
DELETE ITAB WHERE MBLNR EQ '5002720583'.
DELETE ITAB WHERE MBLNR EQ '5002928122'.
DELETE ITAB WHERE MBLNR EQ '5002628263'.
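The same idea, selecting only the needed columns and pushing the exclusions into the WHERE clause instead of deleting from the internal table afterwards, can be sketched in plain SQL. Here SQLite stands in for the database, and the MSEG-like column names are purely illustrative, not SAP code:

```python
# Sketch: filter in the database instead of post-hoc deletes.
# SQLite is a stand-in; the mseg-like table is illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mseg (mblnr TEXT, werks TEXT, bwart TEXT, menge REAL)")
conn.executemany("INSERT INTO mseg VALUES (?, ?, ?, ?)", [
    ("5002361303", "1000", "105", 10.0),  # on the exclusion list
    ("4900000001", "1000", "105", 5.0),   # the row we actually want
    ("4900000002", "1000", "101", 7.0),   # wrong movement type
    ("4900000003", "2000", "105", 3.0),   # wrong plant
])

excluded = ("5002361303", "5003501080", "5002996300")
placeholders = ",".join("?" * len(excluded))
rows = conn.execute(
    f"SELECT mblnr, menge FROM mseg "
    f"WHERE werks = ? AND bwart = '105' "
    f"AND mblnr NOT IN ({placeholders})",
    ("1000", *excluded),
).fetchall()
print(rows)  # prints [('4900000001', 5.0)]
```

Letting the database apply the exclusion list means the unwanted rows are never transferred into the result set at all, which is usually cheaper than fetching them and deleting afterwards.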
Regards
Bala.M
Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM -
Post goods issue at the time of complete shipment
Hi Experts,
Please let me know the configuration steps to carry out post goods issue at the completion of a shipment. I appreciate your help.
Thanks.
with regards,
Muthu.
Dear Muthu,
Please go through this IMG path and then make the configuration settings:
SPRO > Logistics Execution > Transportation > Shipments > Define and Assign Activity Profiles. Here, maintain a variant name in the "At completion" column, select your shipment document type, and click on the "Maintain" pushbutton; the system will take you to the variant maintenance screen. On that screen you will find the tab "Post goods issue for the deliveries of shipment"; under this tab, select the radio button to carry out goods issue posting during save, then save the variant.
Now, when you create a shipment document and save it, PGI will be carried out for the deliveries in that shipment document.
For more details, go through this link; it will help you:
http://help.sap.com/saphelp_47x200/helpdata/en/3d/42aa07074f11d2bf5d0000e8a7386f/frameset.htm
I hope it will help you,
Regards,
Murali. -
Issue with background job--taking more time
Hi,
We have a custom program which runs as a background job every 2 hours.
It's taking more time than expected on ECC6 SR2 & SR3 on Oracle 10.2.0.4. We found that it takes more time while executing native SQL on DBA_EXTENTS. When we tried to fetch a smaller number of records from DBA_EXTENTS, it worked fine.
But we need the program to fetch all the records.
It works fine on ECC5 on 10.2.0.2 & 10.2.0.4.
Here is the SQL statement:
EXEC SQL PERFORMING SAP_GET_EXT_PERF.
SELECT OWNER, SEGMENT_NAME, PARTITION_NAME,
SEGMENT_TYPE, TABLESPACE_NAME,
EXTENT_ID, FILE_ID, BLOCK_ID, BYTES
FROM SYS.DBA_EXTENTS
WHERE OWNER LIKE 'SAP%'
INTO
:EXTENTS_TBL-OWNER, :EXTENTS_TBL-SEGMENT_NAME,
:EXTENTS_TBL-PARTITION_NAME,
:EXTENTS_TBL-SEGMENT_TYPE , :EXTENTS_TBL-TABLESPACE_NAME,
:EXTENTS_TBL-EXTENT_ID, :EXTENTS_TBL-FILE_ID,
:EXTENTS_TBL-BLOCK_ID, :EXTENTS_TBL-BYTES
ENDEXEC.
Can somebody suggest what has to be done?
Has something changed in SAP 7 (with respect to background jobs, etc.), or do we need to fine-tune the SQL statement?
Regards,
Vivdha
Hi,
there was an issue with LMTs, but that was fixed in 10.2.0.4; besides that, check for missing system statistics.
But WHY do you collect this information every 2 hours? The DBA_EXTENTS view is based on really heavily used system tables.
Normally, you would run queries of this type against DBA_EXTENTS only occasionally, e.g. to identify corrupt blocks:
SELECT owner , segment_name , segment_type
FROM dba_extents
WHERE file_id = &AFN
AND &BLOCKNO BETWEEN block_id AND block_id + blocks -1
Not sure what you want to achieve with it.
There are monitoring tools (OEM ?) around that may cover your needs.
Bye
yk -
Profit Centre Document issue at the time of Post Goods Issue on SD.
Dear Experts,
While the SD person is posting goods issue, the system generates a profit centre document along with the other documents. We are facing an issue with the profit centre document: some profit centre documents populate the delivery order number in the respective field, while some do not. Ideally, all profit centre line items should show delivery order numbers.
Please suggest a solution to resolve this issue; it's urgent.
Thanks in Advance.
Regards,
Zain Bashir
Hi Zain
This is a tricky issue
Are you saying that the PCA document has the Delivery Number (VL01N) updated in some cases and not updated in some cases?
The Delivery Number is updated in PCA document when you create Delivery from VL01N and do PGI from VL02N
But, if you do PGI from VL01N itself, then the delivery number is not updated in the PCA document
SAP recommends to post PGI from VL02N
Br. Ajay M -
Error at the time of Post Goods Issue
Dear All,
I am getting an error when doing the post goods issue. The error says: field selection for movement type 601 differs for business area. We went into the field-selection comparison (report RM07CUFA) and rectified it, but what could be the cause of this error, and why does it happen?
Best Regards
Atul Keshav
Hi,
The reason for this error is that the field status settings in the movement type and in the G/L account differ.
You can check this as follows:
Go to transaction OMWB -> click on Simulation -> enter plant, material and movement type,
and click on Account Assignments.
Now click on Check Screen Layout.
Check whether, for business area under additional account assignment, the entry is set to required in the movement type but suppressed in the G/L account.
Then change the field status accordingly.
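The check described above boils down to comparing two field-status definitions: wherever one side requires a field that the other suppresses, posting fails. A rough Python sketch of that comparison (the statuses and field names are illustrative, not the RM07CUFA data model):

```python
# Sketch: detect conflicting field-status settings between a movement
# type and a G/L account. A field that is "required" on one side but
# "suppressed" on the other is the classic cause of
# "field selection ... differs" errors.
def conflicts(movement_type_fs, account_fs):
    bad = []
    for field in set(movement_type_fs) | set(account_fs):
        a = movement_type_fs.get(field, "optional")
        b = account_fs.get(field, "optional")
        if {a, b} == {"required", "suppressed"}:
            bad.append(field)
    return sorted(bad)

mvt_601 = {"business_area": "required", "cost_center": "optional"}
gl_account = {"business_area": "suppressed", "cost_center": "optional"}
print(conflicts(mvt_601, gl_account))  # prints ['business_area']
```

Making the two sides agree, either by relaxing the movement type or opening the field in the account's field status group, removes the conflict.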
That should solve the issue. -
Oracle 11g: Oracle insert/update operation is taking more time.
Hello All,
In Oracle 11g (Windows 2008, 32-bit environment) we are facing the following issue:
1) We are inserting/updating data in some tables (4-5 tables) and firing queries at a very high rate.
2) After some time (say 15 days under the same load), we observe that Oracle insert/update operations take more time.
Query 1: How do we find out where Oracle is actually spending more time in insert/update operations?
Query 2: How do we rectify the problem?
We are having multithread environment.
Thanks
With Regards
Hemant.
Liron Amitzi wrote:
Hi Nicolas,
Just a short explanation:
If you have a table with one column (let's say a number), the table is empty, and you have an index on the column:
When you insert a row, the value of the column is inserted into the index. Inserting one value into an index that holds 10 values is fast; inserting one value into an index that holds 1 million values takes longer.
My second example: suppose I always insert 10 rows and delete the previous 10 from the table. I always have 10 rows in the table, so the index should stay small. But this is not correct. If I insert values 1-10, then delete 1-10 and insert 11-20, then delete 11-20 and insert 21-30, and so on, then because the index is sorted, the spots where 1-10 were stored are now empty. Oracle will not fill them up, so the index becomes larger and larger as I insert more rows (even though I delete the old ones).
The solution here is simply to rebuild the index once in a while.
Hope it is clear.
Liron Amitzi
Senior DBA consultant
[www.dbsnaps.com]
[www.orbiumsoftware.com]
Hmmm, index space not reused? Rebuild the index once in a while? That is what I understood from your previous post, but nothing is less sure.
This is a misconception of how indexes are working.
I would suggest reading the following interesting document; it contains a lot of nice examples (including index space reuse) to help understand this, and in conclusion:
http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf
"Index Rebuild Summary
- The vast majority of indexes do not require rebuilding.
- "Oracle B-tree indexes can become 'unbalanced' and need to be rebuilt" is a myth.
- "Deleted space in an index is 'deadwood' and over time requires the index to be rebuilt" is a myth.
- "If an index reaches 'x' number of levels, it becomes inefficient and requires the index to be rebuilt" is a myth.
- "If an index has a poor clustering factor, the index needs to be rebuilt" is a myth.
- "To improve performance, indexes need to be regularly rebuilt" is a myth."
Good reading,
Nicolas. -
When I press the Post Goods Issue button for a delivery in SD, the following short dump appears.
Runtime Errors MESSAGE_TYPE_X
Date and Time 10/15/2008 06:38:52
Short dump has not been completely stored (too big)
Short text
The current application triggered a termination with a short dump.
What happened?
The current application program detected a situation which really
should not occur. Therefore, a termination with a short dump was
triggered on purpose by the key word MESSAGE (type X).
What can you do?
Note down which actions and inputs caused the error.
To process the problem further, contact you SAP system
administrator.
Using Transaction ST22 for ABAP Dump Analysis, you can look
at and manage termination messages, and you can also
keep them for a long time.
Error analysis
Short text of error message:
No RFC destination is defined for SAP Global Trade Services
Long text of error message:
Technical information about the message:
Message class....... "/SAPSLL/PLUGINR3"
Number.............. 002
Variable 1.......... " "
Variable 2.......... " "
Variable 3.......... " "
Variable 4.......... " "
How to correct the error
Probably the only way to eliminate the error is to correct the program.
If the error occures in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"MESSAGE_TYPE_X" " "
"SAPLMBWL" or "LMBWLU21"
"MB_POST_GOODS_MOVEMENT"
If you cannot solve the problem yourself and want to send an error
notification to SAP, include the following information:
1. The description of the current problem (short dump)
To save the description, choose "System->List->Save->Local File
(Unconverted)".
2. Corresponding system log
Display the system log by calling transaction SM21.
Restrict the time interval to 10 minutes before and five minutes
after the short dump. Then choose "System->List->Save->Local File
(Unconverted)".
3. If the problem occurs in a problem of your own or a modified SAP
program: The source code of the program
In the editor, choose "Utilities->More
Utilities->Upload/Download->Download".
4. Details about the conditions under which the error occurred or which
actions and input led to the error.
System environment
SAP-Release 700
Application server... "biw7sap"
Network address...... "68.88.249.38"
Operating system..... "Windows NT"
Release.............. "5.2"
Hardware type........ "4x Intel 801586"
Character length.... 16 Bits
Pointer length....... 32 Bits
Work process number.. 1
Shortdump setting.... "full"
Database server... "BIW7SAP"
Database type..... "ORACLE"
Database name..... "ECS"
Database user ID.. "SAPSR3"
Char.set.... "C"
SAP kernel....... 700
created (date)... "Mar 20 2007 00:45:21"
create on........ "NT 5.0 2195 Service Pack 4 x86 MS VC++ 13.10"
Database version. "OCI_10201_SHARE (10.2.0.1.0) "
Patch level. 102
Patch text.. " "
Database............. "ORACLE 9.2.0.., ORACLE 10.1.0.., ORACLE 10.2.0.."
SAP database version. 700
Operating system..... "Windows NT 5.0, Windows NT 5.1, Windows NT 5.2"
Memory consumption
Roll.... 8176
EM...... 26130600
Heap.... 0
Page.... 57344
MM Used. 19070624
MM Free. 785208
User and Transaction
Client.............. 800
User................ "CUSER14"
Language key........ "E"
Transaction......... "VL02N "
Program............. "SAPLMBWL"
Screen.............. "SAPMV50A 1000"
Screen line......... 39
Information on where terminated
Termination occurred in the ABAP program "SAPLMBWL" - in
"MB_POST_GOODS_MOVEMENT".
The main program was "SAPMV50A ".
In the source code you have the termination point in line 59
of the (Include) program "LMBWLU21".
Source Code Extract
Line SourceCde
29 * when a goods movement for an inbound or outbound delivery is posted
30 * directly from VL31N/ VL01N, XBLNR is not yet known when we call
31 * CKMV_AC_DOCUMENT_CREATE, but the number is supposed to be stored in
32 * BKPF as well. There is no other way to forward XBLNR to FI as not
33 * every document is posted by MB_CREATE -> a new function module in
34 * MBWL for transferring the information, called by FI, meant to load
35 * the complete function group for all MBxx postings when this isn't
36 * required (Performance). Would be the better way to transport the
37 * information after switching off MBxx in later release.
38 * corresponding IMPORT ... FROM MEMORY ... can be found in
39 * AC_DOCUMENT_POST (FORM FI_DOCUMENT_PREPARE (LFACIF5D))
40 l_mem_id = 'MKPF-XBLNR'. " 641365
41 EXPORT xblnr = xblnr_sd TO MEMORY ID l_mem_id. " 641365
42 ENDIF.
43 IF xmkpf-xabln IS INITIAL. "note 434093
44 CALL FUNCTION 'MB_XAB_NUMBER_GET'. "note 434093
45 ENDIF. "note 434093
46
47 ENHANCEMENT-POINT MB_POST_GOODS_MOVEMENTS_01 SPOTS ES_SAPLMBWL STATIC.
48
49 ENHANCEMENT-POINT MB_POST_GOODS_MOVEMENTS_02 SPOTS ES_SAPLMBWL.
50 CALL FUNCTION 'MB_CREATE_MATERIAL_DOCUMENT_UT'
51 EXCEPTIONS
52 error_message = 4.
53 * As soon as we have started to put things into UPDATE TASK, we must
54 * ensure that errors definitely terminate the transaction.
55 * MESSAGE A is not sufficient because it can be catched from
56 * external callers which COMMIT WORK afterwards, resulting in
57 * incomplete updates. Read note 385830 for the full story.
58 IF NOT sy-subrc IS INITIAL.
>>>>> MESSAGE ID sy-msgid TYPE x NUMBER sy-msgno WITH "385830
60 sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
61 * MESSAGE A263.
62 ENDIF.
63 * Optische Archivierung
64 * Spaete Erfassung mit Barcode
65 * Redesign of barcode handling -> note 780365
66 PERFORM barcode_update(sapmm07m) USING xmkpf-mblnr
67 xmkpf-mjahr
68 barcode.
69
70 MOVE-CORRESPONDING xmkpf TO emkpf.
71 CALL FUNCTION 'MB_MOVEMENTS_REFRESH'
72 EXCEPTIONS
73 error_message = 4.
74 MOVE-CORRESPONDING xmkpf TO emkpf.
75 CALL FUNCTION 'MB_MOVEMENTS_REFRESH'
76 EXCEPTIONS
77 error_message = 4.
78 IF NOT sy-subrc IS INITIAL.
Contents of system fields
Name Val.
SY-SUBRC 4
SY-INDEX 0
SY-TABIX 1
SY-DBCNT 1
SY-FDPOS 6
SY-LSIND 0
SY-PAGNO 0
SY-LINNO 1
SY-COLNO 1
SY-PFKEY W0
SY-UCOMM WABU_T
SY-TITLE Delivery 80015203 Change: Overview
SY-MSGTY X
SY-MSGID /SAPSLL/PLUGINR3
SY-MSGNO 002
SY-MSGV1
SY-MSGV2
SY-MSGV3
SY-MSGV4
SY-MODNO 0
SY-DATUM 20081015
SY-UZEIT 063852
SY-XPROG SAPLBPFC
SY-XFORM CONVERSION_EXIT
Active Calls/Events
No. Ty. Program Include Line Name
10 FUNCTION SAPLMBWL LMBWLU21 59 MB_POST_GOODS_MOVEMENT
9 FORM SAPMV50A FV50XF0B_BELEG_SICHERN 769 BELEG_SICHERN_POST
8 FORM SAPMV50A FV50XF0B_BELEG_SICHERN 86 BELEG_SICHERN_01
7 FORM SAPMV50A FV50XF0B_BELEG_SICHERN 16 BELEG_SICHERN
6 FORM SAPMV50A MV50AF0F_FCODE_SICH_OHNE_CHECK 10 FCODE_SICH_OHNE_CHECK
5 FORM SAPMV50A MV50AF0F_FCODE_WABU 11 FCODE_WABU
4 FORM SAPLV00F LV00FF0F 92 FCODE_BEARBEITEN
3 FUNCTION SAPLV00F LV00FU02 44 SCREEN_SEQUENCE_CONTROL
2 FORM SAPMV50A MV50AF0F_FCODE_BEARBEITEN 62 FCODE_BEARBEITEN
1 MODULE (PAI) SAPMV50A MV50AI0F 52 FCODE_BEARBEITEN
Chosen variables
Name Val.
No. 10 Ty. FUNCTION MB_POST_GOODS_MOVEMENT
XBLNR_SD 0080015203
EMKPF (initial structure)
XMSSA[] Table IT_589[0x196]
RRESWK (empty)
L_MEM_ID MKPF-XBLNR
RSJOBINFO (initial structure)
RNAME R
SCREEN BT_UALL
IT_RSTRUCT Table[initial]
XMKPF-XABLN (empty)
SYST-REPID SAPLMBWL
ODM07M[] Table[initial]
GT_GOCOMP (initial structure)
SY-SUBRC 4
L_ATPCB (empty)
XMKPF-MBLNR 4900035075
XMKPF-MJAHR 2008
BARCODE (empty)
T003 800WL49AMS XX XXXXXH
KNVV (initial structure)
T064B (empty)
VGMSEG[] Table[initial]
No. 9 Ty. FORM BELEG_SICHERN_POST
MAT_AUF_HINWEIS_COPY (empty)
XLIKP[] Table IT_392[1x2360]
XLIKP 8000080015203CUSER14 06580920081014 Z0011000LF X20081016200810152008101420081017200
CVBAP 8000000012132000020T-AS301 T-AS301 0201 Su
MAT_AUF_HINWEIS_GEPRUEFT (empty)
<%_L195>-UPDKZ D (UPDKZ_DELETE)
IVBPA52_PAGIND 0.0.1.
VBUK_KEIN_KREDITCHECK (empty)
XLIPS[] Table IT_83[1x3552]
VBUK_KREDIT_NEUAUFBAU (empty)
LT_INB_CIFEXT Table IT_2692[1x476]
(remaining hexadecimal memory listing omitted; the short dump is truncated here)
ext2 = 0x70E43294
>>>>> 2nd level extension part <<<<<
tabhBack = 0x70041E94
delta_head = 0000000000000000000000000000000000000000000000000000000000000000000000000000000
pb_func = 0x00000000
pb_handle = 0x00000000
V50AGL-DISPLAY_FROM_ARCHIVE
2
0
0
0
XLIKP-VBELN
0080015203
3333333333
0080015203
0000000000
0000000000
EMKPF
4900035075200800000000000000 ##
3333333333333333333333333333222222222200222222222222222222222222222222222222222222222222222222
4900035075200800000000000000000000000010000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
VBMUE
2222222222222222222
0000000000000000000
0000000000000000000
0000000000000000000
VBSK
0000000000000000000000#### ####
2222222222222222222222222233333333333333333333330000222000022222222222222222222222222222222222
000000000000000000000000000000000000000000000000000C000000C00000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
PACKDATEN_VERBUCHEN
2
0
0
0
No. 8 Ty. FORM
Name BELEG_SICHERN_01
IF_FINAL_CHECK
2
0
0
0
LF_ONLY_FINAL
2
0
0
0
LF_ONLY_PREPARE
2
0
0
0
LF_FLAG_DOCNUM_NEW
2
0
0
0
CF_SUBRC
0
0000
0000
T683V
22222222222222222222222222222222222
00000000000000000000000000000000000
00000000000000000000000000000000000
00000000000000000000000000000000000
SY-XPROG
SAPLBPFC
5454454422222222222222222222222222222222
310C206300000000000000000000000000000000
0000000000000000000000000000000000000000
0000000000000000000000000000000000000000
IF_RENUMBER
X
5
8
0
0
SPACE
2
0
0
0
XVBPA_FIRSTIND
0.0.1.
000
00C
SY-REPID
SAPMV50A
5454533422222222222222222222222222222222
310D650100000000000000000000000000000000
0000000000000000000000000000000000000000
0000000000000000000000000000000000000000
IF_POST
X
5
8
0
0
SYST-REPID
SAPMV50A
5454533422222222222222222222222222222222
310D650100000000000000000000000000000000
0000000000000000000000000000000000000000
0000000000000000000000000000000000000000
XVBPA_AKTIND
0.0.1.
000
00C
No. 7 Ty. FORM
Name BELEG_SICHERN
%_DUMMY$$
2222
0000
0000
0000
IVBPA2KEY
000000
22333333
00000000
00000000
00000000
IV_FINAL_CHECK_DURCHFUEHREN
2
0
0
0
CHARX
X
5
8
0
0
LF_SUBRC
0
0000
0000
No. 6 Ty. FORM
Name FCODE_SICH_OHNE_CHECK
IVBPA1KEY
000000
33333322
00000000
00000000
00000000
T180-AKTYP
V
5
6
0
0
AKTYP-CREATE
H
4
8
0
0
CVBFA
000000 000000 #### #### 00000000000000 0
2222222222222333333222222222233333320000222000022222222333333333333332222222222222222222222223
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000C000000C00000000000000000000000000000000000000000000000
XSDCTRLFLAG
22
00
00
00
SYST
| ######################################T#######################################µ#########XP####
000000000000000000000000000000800000005000000000000000000000000000000000000010B0001000AF550000
0000100000000000000010601000104000000040006050407000000000000000000000000000C0500010300F800300
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000AF000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000BF00000C
GC_FCODE_PODCANC
ABBR
4445222222222222222222222222222222222222222222222222222222222222222222
1222000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
XSDCTRLFLAG-PROTSAVE
2
0
0
0
YES
X
5
8
0
0
GC_FCODE_PODQUIT
PODQ
5445222222222222222222222222222222222222222222222222222222222222222222
0F41000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
PROTOCOLCALLER-SDL
SDL
5442
34C0
0000
0000
GC_FCODE_PODSTOR
PODS
5445222222222222222222222222222222222222222222222222222222222222222222
0F43000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
LIKP-VBELN
0080015203
3333333333
0080015203
0000000000
0000000000
LIPS-POSNR
000010
333333
000010
000000
000000
Hi
The dump quotes this comment from the program source:
"As soon as we have started to put things into UPDATE TASK, we must
ensure that errors definitely terminate the transaction.
MESSAGE A is not sufficient because it can be caught by
external callers which COMMIT WORK afterwards, resulting in
incomplete updates. Read note 385830 for the full story."
From the above message, I think you need to read Note 385830 in transaction SNOTE.
Thx. -
VL02N:Post Goods Issue Error: PXA_NO_FREE_SPACE
Hi gurus,
I am testing post goods issue under VL02N after creating and delivering a sales order of type OR. The 'PXA_NO_FREE_SPACE' error occurred after I clicked Post Goods Issue.
Please find the error log below; can anyone help solve it? TIA.
No PXA storage space available at the moment.
What happened?
The current ABAP/4 program had to be terminated because there
was no space available to load it.
Each ABAP/4 program to be executed is stored in a central
storage area that is divided between all users.
This area was too small to hold all currently active programs for all
users.
Resource bottleneck
The current program "SAPMV50A" had to be terminated because
a capacity limit has been reached.
What can you do?
Since this could have resulted in a temporary bottleneck, you should
try to restart the program.
Ask your system administrator to increase the size of the area (PXA)
used to store the ABAP/4 programs.
Note which actions and input led to the error.
For further help in handling the problem, contact your SAP administrator
You can use the ABAP dump analysis transaction ST22 to view and manage
termination messages, in particular for long term reference.
Error analysis
Unable to load a program of 1048576 bytes.
The PXA ('program execution area') was too small to hold all
currently active programs for all users.
At present, the size of the PXA is set at 144868 Kbytes.
The largest contiguous and unlocked memory chunk has 1014784 bytes.
How to correct the error
The current size of the PXA was set at 144868 kilobytes.
You can increase or decrease the PXA in the SAP profile. When
doing this, please refer to the relevant instructions in the
installation manual.
You can use the utility program 'ipclimits' to display the
available system resources.
If the error occurs in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"PXA_NO_FREE_SPACE" " "
"SAPMV50A" or "FV50XF0B_BELEG_SICHERN"
"BELEG_SICHERN_POST"
If you cannot solve the problem yourself and want to send an error
notification to SAP, include the following information:
1. The description of the current problem (short dump)
To save the description, choose "System->List->Save->Local File
(Unconverted)".
2. Corresponding system log
Display the system log by calling transaction SM21.
Restrict the time interval to 10 minutes before and five minutes
after the short dump. Then choose "System->List->Save->Local File
(Unconverted)".
3. If the problem occurs in a program of your own or a modified SAP
program: The source code of the program
In the editor, choose "Utilities->More
Utilities->Upload/Download->Download".
4. Details about the conditions under which the error occurred or which
actions and input led to the error.
any reponses will be awarded,
regards,
samson
Check these links where the same issue was discussed:
[PXA_NO_FREE_SPACE|http://www.sapfans.com/forums/viewtopic.php?f=12&t=304020&p=917783]
[PXA_NO_FREE_SPACE Error|http://sap.ittoolbox.com/groups/technical-functional/sap-basis/pxa_no_free_space-error-858115]
thanks
G. Lakshmipathi -
Log applying service is taking more time in phy. Standby
Hi Gurus,
My Database version as follows
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
We have a Data Guard setup as well. Huge volumes of archive logs are being generated on our primary database. The archive logs are shipped to the standby with no delay, but applying them on our physical standby database is taking a long time. Can you please help me understand why applying the archive logs (sync) on the standby takes so long? What could the possible reasons be?
Note: The standby redo logs are the same size as the primary's online redo log files, and there is one more standby redo log group than there are online redo log groups on the primary.
I also checked with the network team for a network issue; they said the network is good.
Please let me know if any other information is required. I need to report to my higher level what is causing the delay in applying the archive logs.
Thanks
No, we don't have the DELAY option set in log_archive_dest.
Here is the alert log:
Media Recovery Waiting for thread 1 sequence 42017 (in transit)
Thu Sep 19 09:00:09 2013
Recovery of Online Redo Log: Thread 1 Group 6 Seq 42017 Reading mem 0
Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0601.log
Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0601.log
Thu Sep 19 09:00:49 2013
RFS[1]: Successfully opened standby log 5: '/xyz/u002/oradata/xyz/stb_redo/redo0501.log'
Thu Sep 19 09:00:54 2013
Primary database is in MAXIMUM PERFORMANCE mode
RFS[2]: Successfully opened standby log 7: '/xyz/u002/oradata/xyz/stb_redo/redo0701.log'
Thu Sep 19 09:00:58 2013
Media Recovery Waiting for thread 1 sequence 42018 (in transit)
Thu Sep 19 09:00:58 2013
Recovery of Online Redo Log: Thread 1 Group 5 Seq 42018 Reading mem 0
Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0501.log
Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0501.log
Media Recovery Waiting for thread 1 sequence 42019 (in transit)
Thu Sep 19 09:01:08 2013
Recovery of Online Redo Log: Thread 1 Group 7 Seq 42019 Reading mem 0
Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0701.log
Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0701.log
Thu Sep 19 09:01:08 2013
RFS[1]: Successfully opened standby log 5: '/xyz/u002/oradata/xyz/stb_redo/redo0501.log'
Thu Sep 19 09:01:22 2013
Primary database is in MAXIMUM PERFORMANCE mode
RFS[2]: Successfully opened standby log 6: '/xyz/u002/oradata/xyz/stb_redo/redo0601.log'
Thu Sep 19 09:01:26 2013
RFS[1]: Successfully opened standby log 5: '/xyz/u002/oradata/xyz/stb_redo/redo0501.log'
Thu Sep 19 09:01:26 2013
Media Recovery Log /xyz/u002/oradata/xyz/arch/ARCH1_42020_821334023.LOG
Media Recovery Waiting for thread 1 sequence 42021 (in transit)
Thu Sep 19 09:01:30 2013
Recovery of Online Redo Log: Thread 1 Group 5 Seq 42021 Reading mem 0
Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0501.log
Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0501.log
Thu Sep 19 09:01:51 2013
Media Recovery Waiting for thread 1 sequence 42022 (in transit)
Thu Sep 19 09:01:51 2013
Recovery of Online Redo Log: Thread 1 Group 6 Seq 42022 Reading mem 0
Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0601.log
Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0601.log
Thu Sep 19 09:01:57 2013
Primary database is in MAXIMUM PERFORMANCE mode
RFS[2]: Successfully opened standby log 5: '/xyz/u002/oradata/xyz/stb_redo/redo0501.log'
Thu Sep 19 09:02:01 2013
Media Recovery Waiting for thread 1 sequence 42023 (in transit)
Thu Sep 19 09:02:01 2013
Recovery of Online Redo Log: Thread 1 Group 5 Seq 42023 Reading mem 0
Mem# 0: /xyz/u002/oradata/xyz/stb_redo/redo0501.log
Mem# 1: /xyz/u200/oradata/xyz/stb_redo/redo0501.log -
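One rough way to quantify where the time goes in an alert-log excerpt like the one above is to diff the timestamps between successive "Recovery of Online Redo Log" events; on a live system, querying V$DATAGUARD_STATS for the apply lag is the more direct route. A minimal sketch under the assumption that the log keeps the format shown above (timestamp line immediately before each recovery line):

```python
import re
from datetime import datetime

# Sample lines in the same format as the alert log excerpt above.
alert_excerpt = """\
Thu Sep 19 09:00:09 2013
Recovery of Online Redo Log: Thread 1 Group 6 Seq 42017 Reading mem 0
Thu Sep 19 09:00:58 2013
Recovery of Online Redo Log: Thread 1 Group 5 Seq 42018 Reading mem 0
Thu Sep 19 09:01:08 2013
Recovery of Online Redo Log: Thread 1 Group 7 Seq 42019 Reading mem 0
"""

TS_FMT = "%a %b %d %H:%M:%S %Y"
lines = alert_excerpt.splitlines()
events = []  # (timestamp, log sequence) for each recovery start
for i, line in enumerate(lines):
    m = re.search(r"Seq (\d+)", line)
    if m and i > 0:
        events.append((datetime.strptime(lines[i - 1], TS_FMT), int(m.group(1))))

# Time spent on each sequence = gap until the next recovery start.
for (t0, seq), (t1, _) in zip(events, events[1:]):
    print(f"seq {seq}: {(t1 - t0).total_seconds():.0f} s")
```

In this excerpt the gaps are well under a minute per sequence, so the question for the poster is whether such gaps, multiplied by the archive generation rate on the primary, add up to the perceived lag.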
Credit block in delivery document (not in posting goods issue)
Good afternoon!
When I try to deliver a sales order with an exceeded credit limit, the following occurs:
I enter tcode VL01N, input the order and press Enter; the system then issues the warning "Credit user check 1 unsuccessful" because some condition in the user exit LVKMPFZ1 was not satisfied. The problem is that at this point no document has been sent to VKM1, nor is one sent when I save the delivery.
It is only sent when I open the same delivery again and save it a second time, which is undesirable behavior. Important: the post goods issue doesn't work at all (which is correct).
Is there any way of sending the delivery document to VKM1 the first time I save the delivery document?
Best Regards,
Adriano Cardoso
Update query which taking more time
Hi
I am running an update query which is taking a long time; any help to make it run faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
Help will be appreciated.
thank you
Edited by: 960991 on Apr 16, 2013 7:11 AM
960991 wrote:
Hi
I am running an update query which takeing more time any help to run this fast.
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you
Updates with subqueries can be slow. Get an execution plan for the update to see what the SQL is doing.
Some things to look at ...
1. Are you sure you posted the right SQL? I could not "balance" the parentheses - 4 "(" and 3 ")"
2. Unnecessary "(" ")" in the subquery "(sum" are confusing
3. Updates with subqueries can be slow. The t.qtr5 value seems to evaluate to a constant. You might improve performance by computing the value beforehand and using a variable instead of the subquery
4. Subquery appears to be correlated - good! Make sure the subquery is properly indexed if it reads < 20% of the rows in the table (this figure depends on the version of Oracle)
5. Is t.qtr5 part of an index? It is a bad idea to update indexed columns -
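Point 3 above can be illustrated in miniature. A sketch using sqlite3 (table and column names follow the post; the rows are invented), showing the aggregate precomputed once into a temporary table and then applied, instead of re-running the correlated aggregate subquery for every target row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Toy versions of the two tables from the post (names from the post, data invented).
cur.execute("CREATE TABLE arm538e_tmp (vndr INT, cust_type INT, qtr5 REAL)")
cur.execute("CREATE TABLE mnthly_sales_actvty "
            "(vndr INT, cust_type_cd INT, yymm INT, net_sales_value REAL)")
cur.execute("INSERT INTO arm538e_tmp VALUES (1, 5, NULL), (2, 5, NULL)")
cur.executemany(
    "INSERT INTO mnthly_sales_actvty VALUES (?, ?, ?, ?)",
    [(1, 5, 201301, 4000.0), (1, 5, 201302, 6000.0), (2, 5, 201303, 2000.0)],
)

# Compute the aggregate once, up front...
cur.execute("""
    CREATE TEMP TABLE agg AS
    SELECT vndr, cust_type_cd,
           SUM(COALESCE(net_sales_value, 0)) / 1000.0 AS qtr5
    FROM mnthly_sales_actvty
    WHERE cust_type_cd <> 13 AND yymm BETWEEN 201301 AND 201303
    GROUP BY vndr, cust_type_cd
""")
# ...then the update only has to probe the small precomputed table.
cur.execute("""
    UPDATE arm538e_tmp
    SET qtr5 = (SELECT a.qtr5 FROM agg a
                WHERE a.vndr = arm538e_tmp.vndr
                  AND a.cust_type_cd = arm538e_tmp.cust_type)
""")
print(cur.execute("SELECT vndr, qtr5 FROM arm538e_tmp ORDER BY vndr").fetchall())
```

In Oracle itself the equivalent would more naturally be a MERGE from the aggregated query; the sketch only demonstrates the shape of the rewrite, not the poster's exact data.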
Query in timesten taking more time than query in oracle database
Hi,
Can anyone please explain why a query in TimesTen is taking more time
than the same query in the Oracle database?
Below I describe my settings and what I have done, step by step:
1.This is the table I created in Oracle datababase
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
CREATE TABLE student (
id NUMBER(9) PRIMARY KEY,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2.THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
declare
firstname varchar2(12);
lastname varchar2(12);
catt number(9);
begin
for cntr in 1..2599999 loop
firstname:=(cntr+8)||'f';
lastname:=(cntr+2)||'l';
if cntr like '%9999' then
dbms_output.put_line(cntr);
end if;
insert into student values(cntr,firstname, lastname);
end loop;
end;
3. MY DSN IS SET THE FOLLWING WAY..
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
I SET THE TIMING..
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen is taking more time
than the query in the Oracle database?
Message was edited by: Dipesh Majumdar
user542575
Message was edited by:
user542575
TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
where I uncomment one column at a time and rerun. I improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No. of Columns    ORACLE    TimesTen
1 1.05 0.94
2 1.07 1.47
3 2.04 1.8
4 2.06 2.08
5 2.09 2.4
6 3.01 2.67
7 4.02 3.06
8 4.03 3.37
9 4.04 3.62
10 4.06 4.02
11 4.08 4.31
12 4.09 4.61
13 5.01 4.76
14 5.02 5.06
15 5.04 5.25
16 5.05 5.48
17 5.08 5.84
18 6 6.21
19 6.02 6.34
20 6.04 6.75
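Reading the figures above, both engines scale roughly linearly with the number of summed columns, at around 0.26-0.31 s per additional aggregate, so after the BINARY_FLOAT retyping the remaining gap is small and fairly constant rather than a per-column penalty. A rough check (numbers copied from the table):

```python
# Timing columns copied from the results table above (seconds, 1..20 SUM columns).
oracle = [1.05, 1.07, 2.04, 2.06, 2.09, 3.01, 4.02, 4.03, 4.04, 4.06,
          4.08, 4.09, 5.01, 5.02, 5.04, 5.05, 5.08, 6.00, 6.02, 6.04]
timesten = [0.94, 1.47, 1.80, 2.08, 2.40, 2.67, 3.06, 3.37, 3.62, 4.02,
            4.31, 4.61, 4.76, 5.06, 5.25, 5.48, 5.84, 6.21, 6.34, 6.75]

# Average incremental cost per extra aggregated column, first row to last.
per_col_oracle = (oracle[-1] - oracle[0]) / (len(oracle) - 1)
per_col_tt = (timesten[-1] - timesten[0]) / (len(timesten) - 1)
print(f"Oracle:   {per_col_oracle:.3f} s per extra column")
print(f"TimesTen: {per_col_tt:.3f} s per extra column")
```

This suggests the full scan of ~2.2M wide rows dominates in both stores, which is why an in-memory database shows no dramatic advantage on this particular aggregation workload.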