Performance for table BKPF
Hi,
I would like to ask whether there is any way to improve the performance of reads on table BKPF from the ABAP code point of view.
We have a customised program that generates a report for the asset master listing.
One of its SELECT statements is shown below:
SELECT SINGLE * FROM BKPF WHERE BUKRS = ANEP-BUKRS
AND GJAHR = ANEP-GJAHR
AND AWKEY = AWKEYUS.
I would like to know how it differs from the SELECT statement below:
SELECT SINGLE * FROM BKPF INTO CORRESPONDING FIELDS OF T_BKPF
WHERE
BUKRS = ANEP-BUKRS
AND GJAHR = ANEP-GJAHR
AND AWKEY = AWKEYUS.
Which of the SELECT statements above would improve the report? We are currently facing quite a bad performance issue with it.
Can I post the ABAP code on this forum?
Hope someone can help me on this. Thank you.
Hi,
As far as possible, access BKPF via its primary key fields, which are BUKRS, BELNR and GJAHR (plus the client). Also, select only the fields and records that are actually needed, to improve performance. Please look at the code below:
DATA: lv_age_of_rec TYPE p.
FIELD-SYMBOLS: <fs_final> LIKE LINE OF it_final.
LOOP AT it_final ASSIGNING <fs_final>.
* get records from BKPF
SELECT SINGLE bukrs belnr gjahr budat bldat xblnr bktxt FROM bkpf
INTO (bkpf-bukrs, bkpf-belnr, bkpf-gjahr, <fs_final>-budat,
<fs_final>-bldat, <fs_final>-xblnr, <fs_final>-bktxt)
WHERE bukrs = <fs_final>-bukrs
AND belnr = <fs_final>-belnr
AND gjahr = <fs_final>-gjahr.
* if <fs_final>-shkzg = 'H', multiply dmbtr (amount in local currency)
* by negative 1
IF <fs_final>-shkzg = 'H'.
<fs_final>-dmbtr = <fs_final>-dmbtr * -1.
ENDIF.
* combine company code (bukrs), accounting document number (belnr),
* fiscal year (gjahr) and line item (buzei) to get long text
CONCATENATE: <fs_final>-bukrs <fs_final>-belnr
<fs_final>-gjahr <fs_final>-buzei
INTO it_thead-tdname.
CALL FUNCTION 'READ_TEXT'
EXPORTING
client = sy-mandt
id = '0001'
language = sy-langu
name = it_thead-tdname
object = 'DOC_ITEM'
*   ARCHIVE_HANDLE          = 0
*   LOCAL_CAT               = ' '
* IMPORTING
*   HEADER                  =
TABLES
lines = it_lines
EXCEPTIONS
id = 1
language = 2
name = 3
not_found = 4
object = 5
reference_check = 6
wrong_access_to_archive = 7
OTHERS = 8.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
* if successful, split long text into start and end date
IF sy-subrc = 0.
READ TABLE it_lines INDEX 1 TRANSPORTING tdline.
IF sy-subrc = 0.
SPLIT it_lines-tdline AT '-' INTO
<fs_final>-s_dat <fs_final>-e_dat.
ENDIF.
ENDIF.
* get vendor name from LFA1
SELECT SINGLE name1 FROM lfa1
INTO <fs_final>-name1
WHERE lifnr = <fs_final>-lifnr.
lv_age_of_rec = p_budat - <fs_final>-budat.
* condition for age of deposits
IF lv_age_of_rec <= 30.
<fs_final>-amount1 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 30 AND lv_age_of_rec <= 60.
<fs_final>-amount2 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 60 AND lv_age_of_rec <= 90.
<fs_final>-amount3 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 90 AND lv_age_of_rec <= 120.
<fs_final>-amount4 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 120 AND lv_age_of_rec <= 180.
<fs_final>-amount5 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 180.
<fs_final>-amount6 = <fs_final>-dmbtr.
ENDIF.
CLEAR: bkpf, it_lines-tdline, lv_age_of_rec.
ENDLOOP.
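If it_final has many rows, a further option is to avoid the SELECT SINGLE inside the loop altogether by reading BKPF once with FOR ALL ENTRIES and then doing an internal READ TABLE per row. The sketch below is only an illustration of that pattern, reusing it_final and <fs_final> from the code above; ty_bkpf, lt_bkpf and ls_bkpf are made-up names.
* Sketch only: prefetch the needed BKPF fields once, outside the loop.
TYPES: BEGIN OF ty_bkpf,
         bukrs TYPE bkpf-bukrs,
         belnr TYPE bkpf-belnr,
         gjahr TYPE bkpf-gjahr,
         budat TYPE bkpf-budat,
         bldat TYPE bkpf-bldat,
         xblnr TYPE bkpf-xblnr,
         bktxt TYPE bkpf-bktxt,
       END OF ty_bkpf.
DATA: lt_bkpf TYPE SORTED TABLE OF ty_bkpf
              WITH UNIQUE KEY bukrs belnr gjahr,
      ls_bkpf TYPE ty_bkpf.

IF it_final[] IS NOT INITIAL.
  SELECT bukrs belnr gjahr budat bldat xblnr bktxt
    FROM bkpf
    INTO TABLE lt_bkpf
    FOR ALL ENTRIES IN it_final
    WHERE bukrs = it_final-bukrs
      AND belnr = it_final-belnr
      AND gjahr = it_final-gjahr.
ENDIF.

LOOP AT it_final ASSIGNING <fs_final>.
* one internal read replaces the SELECT SINGLE per loop pass
  READ TABLE lt_bkpf INTO ls_bkpf
       WITH TABLE KEY bukrs = <fs_final>-bukrs
                      belnr = <fs_final>-belnr
                      gjahr = <fs_final>-gjahr.
  IF sy-subrc = 0.
    <fs_final>-budat = ls_bkpf-budat.
    <fs_final>-bldat = ls_bkpf-bldat.
    <fs_final>-xblnr = ls_bkpf-xblnr.
    <fs_final>-bktxt = ls_bkpf-bktxt.
  ENDIF.
* ... rest of the processing (READ_TEXT, LFA1, ageing) as in the loop above ...
ENDLOOP.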
Hope this helps...
P.S. Please award points for useful answers.
Similar Messages
-
HOW TO USE A SINGLE PERFORM FOR VARIOUS TABLES ?
perform test TABLES t_header.
select
KONH~KNUMH
konh~datab
konh~datbi
konp~kbetr
konp~konwa
konp~kpein
konp~kmein
KONP~KRECH
FROM konh INNER JOIN konp
ON konp~knumh = konh~knumh
into table itabxxx
"any temporary internal table
for all entries in t_header
where
konh~kschl = t_header-kschl
AND konh~knumh = t_header-knumh.
endform.
How can I use the above PERFORM for various internal tables of different line types that all have the fields KSCHL and KNUMH?
You can use a single PERFORM.
Just see this example; hope this is what you are expecting:
tables : pa0001.
parameters : p_pernr like pa0001-pernr.
data : itab1 like pa0001 occurs 0 with header line.
data : itab2 like pa0002 occurs 0 with header line.
perform get_data tables itab1 itab2.
if not itab1[] is initial.
loop at itab1.
write :/ itab1-pernr.
endloop.
endif.
if not itab2[] is initial.
loop at itab2.
write :/ itab2-pernr.
endloop.
endif.
*&---------------------------------------------------------------------*
*&      Form  get_data
*&---------------------------------------------------------------------*
*       text
*      -->P_ITAB1  text
*      -->P_ITAB2  text
*----------------------------------------------------------------------*
form get_data tables itab1 structure pa0001
itab2 structure pa0002.
select * from pa0001 into table itab1 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
select * from pa0002 into table itab2 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
endform. " get_data
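If the requirement really is a single FORM that works for internal tables of different line types which all contain the fields KSCHL and KNUMH, one further option is a generic TABLES parameter combined with ASSIGN COMPONENT. This is only a rough sketch; ty_key, gt_keys, p_any_tab and get_cond_keys are made-up names, and t_header is the table from the post above.
TYPES: BEGIN OF ty_key,
         kschl TYPE konh-kschl,
         knumh TYPE konh-knumh,
       END OF ty_key.
DATA: gt_keys TYPE STANDARD TABLE OF ty_key.

* works for t_header or any other table that has KSCHL and KNUMH
PERFORM get_cond_keys TABLES t_header CHANGING gt_keys.
* gt_keys can then be used as the FOR ALL ENTRIES driver table
* for the KONH/KONP join shown above.

*---------------------------------------------------------------------*
*  get_cond_keys: copy KSCHL/KNUMH from a table of any line type
*---------------------------------------------------------------------*
FORM get_cond_keys TABLES   p_any_tab                "generic line type
                   CHANGING pt_keys TYPE STANDARD TABLE.
  DATA: ls_key TYPE ty_key.
  FIELD-SYMBOLS: <fs_kschl> TYPE any,
                 <fs_knumh> TYPE any.

  LOOP AT p_any_tab.
    ASSIGN COMPONENT 'KSCHL' OF STRUCTURE p_any_tab TO <fs_kschl>.
    CHECK sy-subrc = 0.
    ASSIGN COMPONENT 'KNUMH' OF STRUCTURE p_any_tab TO <fs_knumh>.
    CHECK sy-subrc = 0.
    ls_key-kschl = <fs_kschl>.
    ls_key-knumh = <fs_knumh>.
    APPEND ls_key TO pt_keys.
  ENDLOOP.
ENDFORM.                    " get_cond_keys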
Regards
vasu -
Performance for ALTER TABLE statements
Hi,
I'd like to improve performance for scripts running several ALTER TABLE statements. I have two questions regarding this.
This is the original code:
ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
QTY_TO_INVOICE NUMBER NULL );
ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
QTY_INVOICED NUMBER NULL );
1. Would I gain any performance by making the following changes?
ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
QTY_TO_INVOICE NUMBER NULL,
QTY_INVOICED NUMBER NULL );
These columns are later on filled with values and then made NOT NULL.
2. Would I gain anything by making these columns NOT NULL with a DEFAULT value in the first statement and then later on inserting the values?
/Roland Bali
1. It would definitely increase the performance.
2. You can only add a NOT NULL column to a table that already contains rows if you also supply a DEFAULT value; otherwise the table has to be empty.
Naveen -
Hello folks,
I have successfully created a table, and I tried inserting some records into the table using table maintenance.
I received the following error.
'The maintenance dialog for table zxxx is incomplete or not defined'.
What does this mean?? And what should I do??
I am pasting the detail help over this error.
The maintenance dialog for ZTV_DBS_RANGE_NO is incomplete or not defined
Message no. SV 037
Diagnosis
The called function with the view/table ZTV_DBS_RANGE_NO needs a special maintenance dialog which, however, either does not exist at all, or does not exist completely.
System Response
The called function cannot be performed.
Procedure
Generate the required maintenance dialog.
Please help.
You have to create the maintenance dialog for the table you would like to use with transaction SM30 / SM31.
To do this, have a look at transaction SE55.
(or SE11, select the table, MODIFY, (menu) Utilities -> Maintain dialog ... )
Regards
Frédéric
Message was edited by: Frédéric Girod -
Error in creating Table Maintenance for TABLES: J_2IRG1BAL ,Dump error show
Dear Gurus
I have created a table maintenance dialog for table J_2IRG1BAL with the following settings:
1) Function group named the same as the table, i.e. J_2IRG1BAL
3) Authorization Group &NC&
4) Authorization object S_TABU_DIS
5) Function group J_2IRG1BAL
6) Package J1I5
7) Maintenance type one step.
8) Maint. Screen No. Overview screen 2009.
9) Recording routine: standard recording routine.
I first assigned single screen 1, but the system would not accept that screen, so I used overview screen 2009 instead. I then saved and activated the table maintenance generator, created and saved the function group, and the table was ready for entries.
We then went to SM30 and entered 4-5 records, after which a short dump occurred, which I have attached to this mail. Kindly help me out.
It also affects the production server.
I have found the error; it is given below and I have mentioned all the details. Kindly read this.
I created function group J_2IRG1BAL and assigned package J1I5 to it; J1I5 is already a function group, so a problem occurs and a short dump is displayed.
Kindly give me the right solution, as all the clients are affected:
110, 100, 150 and 250 are the affected development clients.
300 is the production client; it is also affected because I created the table maintenance generator and sent the request to production.
*Dump Error Shows in Production*
Runtime Errors SYNTAX_ERROR
Date and Time 11.12.2008 09:26:30
What happened?
Error in ABAP application program.
Error analysis
In program "SAPLJ1I5 ", the following syntax error occurred:
"The program "SAPLJ1I5" is not Unicode-compatible, according to its program attributes."
The current ABAP program "SAPLSVIM" had to be terminated because one of the
statements could not be executed.
This is probably due to an error in the ABAP program.
In program "SAPLJ1I5 ", the following syntax error occurred
in the Include "SAPLJ1I5 " in line 0:
"The program "SAPLJ1I5" is not Unicode-compatible, according to its program attributes."
Trigger Location of Runtime Error
Program SAPLSVIM
Include LSVIMU01
Row 107
Module type (FUNCTION)
Module Name VIEW_MAINTENANCE
Author and last person to change the Include are:
Author "TTLABAP2 "
Last changed by "TTLABAP2 "
Source code extract around the error (lines 105-110):
  105 *   Initialisierung des Abgleichsmandanten zum View *
  106
>>>>>     vim_default_upgr_clnt-viewname = x_header-viewname.
  108     vim_default_upgr_clnt-client = client_for_upgrade.
  109     PERFORM vim_set_global_field_value IN PROGRAM (fpool)
  110       USING 'VIM_DEFAULT_UPGR_CLNT' 'C' vim_default_upgr_clnt rc.
I have sent you all the details regarding the table maintenance generator;
the error is shown below.
Runtime Errors SYNTAX_ERROR
Date and Time 11.12.2008 09:26:30
ShrtText
Syntax error in program "SAPLJ1I5 ".
What happened?
Error in ABAP application program.
The current ABAP program "SAPLSVIM" had to be terminated because one of the
statements could not be executed.
This is probably due to an error in the ABAP program.
In program "SAPLJ1I5 ", the following syntax error occurred
in the Include "SAPLJ1I5 " in line 0:
"The program "SAPLJ1I5" is not Unicode-compatible, according to its pro"
"gram attributes."
Author and last person to change the Include are:
Author "TTLABAP2 "
Last changed by "TTLABAP2 "
Error analysis
In program "SAPLJ1I5 ", the following syntax error occurred:
"The program "SAPLJ1I5" is not Unicode-compatible, according to its pro"
"gram attributes."
Trigger Location of Runtime Error
Program SAPLSVIM
Include LSVIMU01
Row 107
Module type (FUNCTION)
Module Name VIEW_MAINTENANCE
Source Code Extract
Line  SourceCde
   77       TRANSPORTING NO FIELDS.
   78     IF sy-subrc NE 0.
   79       SELECT SINGLE * FROM tfdir WHERE funcname EQ <function_name>.
   80       IF sy-subrc NE 0.
   81         RAISE no_editor_function.
   82       ELSE.
   83         length = strlen( function_name1 ).
   84         ASSIGN function_name1(length) TO <function_name>.
   85         SELECT SINGLE * FROM tfdir WHERE funcname EQ <function_name>.
   86         IF sy-subrc NE 0.
   87           RAISE no_database_function.
   88         ENDIF.
   89       ENDIF.
   90       INSERT x_header-viewname INTO alr_checked_views INDEX sy-tabix.
   91     ELSE.
   92       length = strlen( function_name1 ).
   93       ASSIGN function_name1(length) TO <function_name>.
   94     ENDIF.
   95
   96 *   Initialisierung der RFC-Destination zum View *
   97
   98     FPOOL+4 = X_HEADER-AREA.
   99     fpool = x_header-fpoolname.
  100     vim_default_rfc_dest-viewname = x_header-viewname.
  101     vim_default_rfc_dest-rfcdest = rfc_destination_for_upgrade.
  102     PERFORM vim_set_global_field_value IN PROGRAM (fpool)
  103       USING 'VIM_DEFAULT_RFC_DEST' 'C' vim_default_rfc_dest rc.
  104
  105 *   Initialisierung des Abgleichsmandanten zum View *
  106
>>>>>     vim_default_upgr_clnt-viewname = x_header-viewname.
  108     vim_default_upgr_clnt-client = client_for_upgrade.
  109     PERFORM vim_set_global_field_value IN PROGRAM (fpool)
  110       USING 'VIM_DEFAULT_UPGR_CLNT' 'C' vim_default_upgr_clnt rc.
  111
  112 *   set flag if complex selection conditions in sellist *
  113
  114     IF complex_selconds_used NE space.
  115       READ TABLE dba_sellist INDEX 1.
  116       IF sy-subrc EQ 0 AND dba_sellist-cond_kind EQ space.
  117         dba_sellist-cond_kind = 'C'. MODIFY dba_sellist INDEX 1.
  118       ENDIF.
  119     ENDIF.
  120
  121 *   direkter Vergleich: Flagge setzen usw. *
  122
  123     IF view_action EQ vim_direct_upgrade.
  124       view_action = aendern.
  125       PERFORM vim_set_global_field_value IN PROGRAM (fpool)
  126         USING 'VIM_SPECIAL_MODE' 'C' vim_direct_upgrade rc. -
Syntax check for tables and function modules
Hi,
I am writing a program that performs a syntax check on objects such as executable programs, function modules and tables.
Syntax check works fine for programs, but not for tables.
How can I perform syntax check on my tables or structures?
I get my data from the table TADIR, but I don't get my function modules from there. What is the table for this?
Thanks for your replies.
Parvez
Hi,
Good question. Generally in SAP, while creating a table or structure we get the errors and resolve them at that point, but unlike reports it is not possible to check the syntax of a table or structure at runtime.
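To add to this: at runtime you can only syntax-check program objects, for example with the SYNTAX-CHECK statement; DDIC objects such as tables and structures are checked when they are activated, not by SYNTAX-CHECK. Function modules, for what it's worth, have their own directory table, TFDIR. Below is only a rough sketch of checking another program's source; ZTEST_PROG is a placeholder name.
* Rough sketch: syntax-check another program's source at runtime.
TYPES: ty_line(255) TYPE c.
DATA: lt_source    TYPE STANDARD TABLE OF ty_line,
      lv_mess(240) TYPE c,
      lv_line(10)  TYPE c,
      lv_word(72)  TYPE c.

READ REPORT 'ZTEST_PROG' INTO lt_source.       "placeholder program name
IF sy-subrc = 0.
  SYNTAX-CHECK FOR lt_source MESSAGE lv_mess LINE lv_line WORD lv_word.
  IF sy-subrc <> 0.
    WRITE: / 'Syntax error:', lv_mess, 'at line', lv_line.
  ELSE.
    WRITE: / 'No syntax errors found.'.
  ENDIF.
ENDIF.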
thanks
mrutyun^ -
Performance for the below code
Can anyone help me improve the performance of the code below?
FORM RETRIEVE_DATA .
CLEAR WA_TERRINFO.
CLEAR WA_KNA1.
CLEAR WA_ADRC.
CLEAR SORT2.
*To retrieve the territory information from ZPSDSALREP
SELECT ZZTERRMG
ZZSALESREP
NAME1
ZREP_PROFILE
ZTEAM
INTO TABLE GT_TERRINFO
FROM ZPSDSALREP.
*Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
LOOP AT GT_TERRINFO INTO WA_TERRINFO.
SELECT SINGLE * FROM KNA1 INTO WA_KNA1
WHERE KUNNR = WA_TERRINFO-SALESREP.
SELECT SINGLE * FROM ADRC INTO WA_ADRC
WHERE ADDRNUMBER = WA_KNA1-ADRNR.
IF NOT WA_ADRC-SORT2 IS INITIAL.
CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
MOVE SORT2 TO WA_TERRINFO-SORT2.
MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
APPEND WA_TERRINFO TO GT_TERRINFO1.
CLEAR WA_TERRINFO.
ENDIF.
CLEAR WA_KNA1.
CLEAR WA_ADRC.
ENDLOOP.
ENDFORM. " RETRIEVE_DATAHi
The code is easy so I don't think you can do nothing, only u can try to limit the reading of KNA1:
FORM RETRIEVE_DATA .
CLEAR WA_TERRINFO.
CLEAR WA_KNA1.
CLEAR WA_ADRC.
CLEAR SORT2.
*To retrieve the territory information from ZPSDSALREP
SELECT ZZTERRMG
ZZSALESREP
NAME1
ZREP_PROFILE
ZTEAM
INTO TABLE GT_TERRINFO
FROM ZPSDSALREP.
SORT GT_TERRINFO BY SALESREP.
*Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
LOOP AT GT_TERRINFO INTO WA_TERRINFO.
IF WA_TERRINFO-SALESREP <> WA_KNA1-KUNNR.
SELECT SINGLE * FROM KNA1 INTO WA_KNA1
WHERE KUNNR = WA_TERRINFO-SALESREP.
IF SY-SUBRC <> 0.
CLEAR: WA_KNA1, WA_ADRC.
ELSE.
SELECT SINGLE * FROM ADRC INTO WA_ADRC
WHERE ADDRNUMBER = WA_KNA1-ADRNR.
IF SY-SUBRC <> 0. CLEAR WA_ADRC. ENDIF.
ENDIF.
ENDIF.
IF NOT WA_ADRC-SORT2 IS INITIAL.
CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
MOVE SORT2 TO WA_TERRINFO-SORT2.
* MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
APPEND WA_TERRINFO TO GT_TERRINFO1.
CLEAR WA_TERRINFO.
ENDIF.
ENDLOOP.
ENDFORM. " RETRIEVE_DATA
If the program takes a long time to read the data from ZPSDSALREP, you can try to split the selection into several packages:
SELECT ZZTERRMG ZZSALESREP NAME1 ZREP_PROFILE ZTEAM
INTO TABLE GT_TERRINFO PACKAGE SIZE <...>
FROM ZPSDSALREP.
SORT GT_TERRINFO BY SALESREP.
*Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
LOOP AT GT_TERRINFO INTO WA_TERRINFO.
IF WA_TERRINFO-SALESREP <> WA_KNA1-KUNNR.
SELECT SINGLE * FROM KNA1 INTO WA_KNA1
WHERE KUNNR = WA_TERRINFO-SALESREP.
IF SY-SUBRC <> 0.
CLEAR: WA_KNA1, WA_ADRC.
ELSE.
SELECT SINGLE * FROM ADRC INTO WA_ADRC
WHERE ADDRNUMBER = WA_KNA1-ADRNR.
IF SY-SUBRC <> 0. CLEAR WA_ADRC. ENDIF.
ENDIF.
ENDIF.
IF NOT WA_ADRC-SORT2 IS INITIAL.
CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
MOVE SORT2 TO WA_TERRINFO-SORT2.
* MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
APPEND WA_TERRINFO TO GT_TERRINFO1.
CLEAR WA_TERRINFO.
ENDIF.
ENDLOOP.
ENDSELECT.
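Another hedged variant of the same idea, assuming GT_TERRINFO is filled first and that only ADRNR and SORT2 are needed from the lookups: read KNA1 and ADRC once with FOR ALL ENTRIES before the loop instead of a SELECT SINGLE per row. LT_KNA1, LT_ADRC and the other local names below are made up.
* Sketch only: prefetch KNA1 and ADRC outside the loop.
TYPES: BEGIN OF TY_KNA1,
         KUNNR TYPE KNA1-KUNNR,
         ADRNR TYPE KNA1-ADRNR,
       END OF TY_KNA1,
       BEGIN OF TY_ADRC,
         ADDRNUMBER TYPE ADRC-ADDRNUMBER,
         SORT2      TYPE ADRC-SORT2,
       END OF TY_ADRC.
DATA: LT_KNA1 TYPE SORTED TABLE OF TY_KNA1 WITH UNIQUE KEY KUNNR,
      LT_ADRC TYPE SORTED TABLE OF TY_ADRC WITH NON-UNIQUE KEY ADDRNUMBER,
      LS_KNA1 TYPE TY_KNA1,
      LS_ADRC TYPE TY_ADRC.

IF GT_TERRINFO[] IS NOT INITIAL.
  SELECT KUNNR ADRNR FROM KNA1
    INTO TABLE LT_KNA1
    FOR ALL ENTRIES IN GT_TERRINFO
    WHERE KUNNR = GT_TERRINFO-SALESREP.
ENDIF.
IF LT_KNA1[] IS NOT INITIAL.
  SELECT ADDRNUMBER SORT2 FROM ADRC
    INTO TABLE LT_ADRC
    FOR ALL ENTRIES IN LT_KNA1
    WHERE ADDRNUMBER = LT_KNA1-ADRNR.
ENDIF.

LOOP AT GT_TERRINFO INTO WA_TERRINFO.
* internal reads replace the two SELECT SINGLEs per loop pass
  READ TABLE LT_KNA1 INTO LS_KNA1
       WITH TABLE KEY KUNNR = WA_TERRINFO-SALESREP.
  CHECK SY-SUBRC = 0.
  READ TABLE LT_ADRC INTO LS_ADRC
       WITH TABLE KEY ADDRNUMBER = LS_KNA1-ADRNR.
  IF SY-SUBRC = 0 AND LS_ADRC-SORT2 IS NOT INITIAL.
    CONCATENATE 'U' LS_ADRC-SORT2 INTO SORT2.
    WA_TERRINFO-SORT2 = SORT2.
    APPEND WA_TERRINFO TO GT_TERRINFO1.
  ENDIF.
ENDLOOP.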
Max -
Import dumpfile with seperate tablespaces for table and index
Hi,
We have a schema whose tables are stored in one tablespace and whose indexes are stored in a different tablespace. We have taken a full schema export and now want to import it into another schema. I know that if the tablespace names differ we use the REMAP_TABLESPACE clause of the impdp command, but what about the separate tablespaces for tables and indexes? How would Oracle handle this?
Regards,
Abbasi
Hi,
I hope you created the same tablespace structure on the target side; if not, you have to use the remap_tablespace option to specify the different tablespaces. Oracle will take care of placing the data and the indexes. If an index is moved from one tablespace to another you have to rebuild it, and statistics are gathered only once it has been rebuilt; otherwise you might face some performance issues.
The better option is to keep the same tablespace structure in the source and target environments.
Best regards,
Rafi.
http://rafioracledba.blogspot.com
Edited by: Rafi (Oracle DBA) on May 9, 2011 7:07 AM -
Data service for table in Oracle 8.0.6
Hi,
Using WebLogic 8.1.4 and LiquidData 8.5 I am trying to create physical data services for tables in a DB in Oracle 8.0.6. I am aware that that Oracle version is not supported by Oracle anymore, but I need to work with that version anyway (you know how it is sometimes).
I managed to create a connection pool for this through the WebLogic Server Console by providing the JDBC driver for 8.0.6., but when I want to create a data source using the new connection pool and WebLogic tries to get the metadata, I get pop up windows with messages like:
"Bigger type length than maximum"
and
"OALL8 in an inconsistent state"
and
"Protocol violation"
One more thing to mention: I also added the Oracle 8.0.6 JDBC driver to the WebLogic Server classpath (Tools -> WebLogic Server -> Server Properties ... -> WebLogic Server: added classes12.zip to Server classpath additions) and restarted WebLogic Workshop and Server. Still I get those error messages.
Is there a special procedure how to provide/configure a specific driver for a DBMS that is not natively supported by WebLogic?
Any help is appreciated.
Thanks,
Wilko
Hi Mike,
Thanks for the quick reply. Below are the contents of the console window from starting Workshop and the Server. I'll try your next hint and let you know about the outcome. As far as I can see, the Server issued no errors while I tried to connect to Oracle 8.0.6 to upload metadata (I am not sure whether anything was printed out while I started the server). My address is w.eschebach at vsnlinternational dot com.
Thanks,
Wilko
This is how my workshop.cfg looks like:
C:\bea\weblogic81\workshop
C:\bea\jdk142_05\jre\bin\java.exe
-XX:-UseThreadPriorities -Xmx256m -Xms64m -Xss256k -client -Dsun.io.useCanonCaches=false -Dsun.java2d.noddraw=true -Dsun.java2d.d3d=false -Djava.system.class.loader="workshop.core.AppClassLoader" -cp "C:\bea\weblogic81\workshop\wlw-ide.jar" workshop.core.Workshop
Console output:
DEBUG: extensions=C:\bea\weblogic81\workshop\\extensions
INFO: Registering extension com.bea.portal.ide.CommonServices
INFO: Service com.bea.portal.ide.findrefs.FindRefsSvc registered
INFO: Handler for urn:com-bea-portal-ide:ref-finders registered
INFO: Registering extension workshop.control.ControlServices
INFO: Service com.bea.ide.control.ControlSvc registered
INFO: Registering extension com.crystaldecisions.integration.weblogic.workshop.report.Bootstrap
INFO: Registering extension workshop.debugger.DebuggerServices
INFO: Exit Handler found
INFO: Service com.bea.ide.debug.DebugSvc registered
INFO: Handler for urn:com-bea-ide:debugExpressionViews registered
INFO: Registering extension workshop.jspdesigner.JspDesignerServices
INFO: Service com.bea.ide.ui.browser.BrowserSvc registered
INFO: Service com.bea.ide.jspdesigner.PaletteActionSvc registered
INFO: Handler for urn:com-bea-ide-jspdesigner:tags registered
INFO: Registering extension workshop.liquiddata.LiquidDataExtension
INFO: Registering extension workshop.pageflow.services.PageFlowServices
INFO: Exit Handler found
INFO: Service workshop.pageflow.services.PageFlowSvc registered
INFO: Service com.bea.ide.ui.palette.DataPaletteSvc registered
INFO: Handler for urn:workshop-pageflow-wizard:extension registered
INFO: Registering extension com.bea.portal.ide.portalbuilder.PortalBuilderServices
INFO: Service com.bea.portal.ide.portalbuilder.laf.LookAndFeelSvc registered
INFO: Service com.bea.portal.ide.portalbuilder.laf.css.CssSvc registered
INFO: Service com.bea.portal.codegen.CodeGenSvc registered
INFO: Registering extension com.bea.portal.ide.PortalServices
INFO: Service com.bea.portal.ide.cache.CacheInfoSvc registered
INFO: Registering extension workshop.process.ProcessExtension
INFO: Service workshop.process.ProcessSvc registered
INFO: Service workshop.process.broker.channel.ChannelManagerSvc registered
INFO: Handler for urn:com-bea-ide-process:process registered
INFO: Registering extension workshop.shell.ShellServices
INFO: Exit Handler found
INFO: Service com.bea.ide.ui.frame.FrameSvc registered
INFO: Service com.bea.ide.core.datatransfer.DataTransferSvc registered
INFO: Service com.bea.ide.actions.ActionSvc registered
INFO: Service com.bea.ide.document.DocumentSvc registered
INFO: Service com.bea.ide.core.HttpSvc registered
INFO: Service com.bea.ide.ui.help.HelpSvc registered
INFO: Service com.bea.ide.ui.output.OutputSvc registered
INFO: Service com.bea.ide.core.navigation.NavigationSvc registered
INFO: Service com.bea.ide.filesystem.FileSvc registered
INFO: Service com.bea.ide.filesystem.FileSystemSvc registered
INFO: Service com.bea.ide.refactor.RefactorSvc registered
INFO: Service com.bea.ide.security.SecuritySvc registered
INFO: Handler for urn:com-bea-ide:actions registered
INFO: Handler for urn:com-bea-ide:document registered
INFO: Handler for urn:com-bea-ide:frame registered
INFO: Handler for urn:com-bea-ide:encoding registered
INFO: Handler for urn:com-bea-ide:help registered
INFO: Registering extension workshop.sourcecontrol.SCMServices
INFO: Service com.bea.ide.sourcecontrol.SourceControlSvc registered
INFO: Handler for urn:com-bea-ide:sourcecontrol registered
INFO: Registering extension workshop.sourceeditor.EditorServices
INFO: Service com.bea.ide.sourceeditor.EditorSvc registered
INFO: Service com.bea.ide.sourceeditor.compiler.CompilerSvc registered
INFO: Handler for urn:com-bea-ide:sourceeditor:sourceinfo registered
INFO: Registering extension com.bea.wls.J2EEServices
INFO: Service com.bea.wls.ejb.EJBSvc registered
INFO: Service com.bea.wls.DBSvc registered
INFO: Registering extension workshop.workspace.WorkspaceServices
INFO: Exit Handler found
INFO: Service com.bea.ide.workspace.WorkspaceSvc registered
INFO: Service com.bea.ide.workspace.ServerSvc registered
INFO: Service com.bea.ide.workspace.SettingsSvc registered
INFO: Service com.bea.ide.build.AntSvc registered
INFO: Service com.bea.ide.workspace.RunSvc registered
INFO: Handler for urn:com-bea-ide:settings registered
INFO: Handler for urn:com-bea-ide:project registered
INFO: Registering extension workshop.xml.XMLServices
INFO: Service com.bea.ide.xml.types.TypeManagerSvc registered
INFO: Service com.bea.ide.xml.types.TypeResolverSvc registered
INFO: Service com.bea.ide.xmlmap.XMLMapSvc registered
DEBUG: Workshop temp dir: C:\DOCUME~1\TR003137\LOCALS~1\Temp\wlw-temp-18920
DEBUG: ExtensionsLoaded: 8329ms
DEBUG: UI Displayed: 11563ms
DEBUG: Time to load XQuery Functions (in seconds) - 0
DEBUG: Time to load repository (in seconds) - 0
DEBUG: LdBuildDriver loaded
DEBUG: project ProvisioningDataServices activated
DEBUG: Setting active project to: ProvisioningDataServices
DEBUG: Workspace Activated: 17126ms
DEBUG: Document Panel initialized: 17501ms
DEBUG: *** CompilerProject constructor 1
DEBUG: WorkspaceLoaded: 17594ms
DEBUG: getClasspathMapping initiated with 29 item list.
DEBUG: getClasspathMapping returning 29 item map.
INFO: Startup Complete
DEBUG: Time to load repository (in seconds) - 1
DEBUG: Loading template file wsrp-producer-project.zip
DEBUG: Loading template file wli-tutorial.zip
DEBUG: Loading template file wli-schemas.zip
DEBUG: Loading template file wli-newprocess.zip
DEBUG: Loading template file wli-helloworld.zip
DEBUG: Loading template file webflow-project.zip
DEBUG: Loading template file tutorial-webservice.zip
DEBUG: Loading template file tutorial-pageflow.zip
DEBUG: Loading template file tutorial-jbc.zip
DEBUG: Loading template file tutorial-ejb.zip
DEBUG: Loading template file portal-project.zip
DEBUG: Loading template file portal-application.zip
DEBUG: Loading template file pipeline-application.zip
DEBUG: Loading template file oag-schemas.zip
DEBUG: Loading template file netui-webapp.zip
DEBUG: Loading template file liquiddata-project.zip
DEBUG: Loading template file liquiddata-application.zip
DEBUG: Loading template file ejb-template.zip
DEBUG: Loading template file default-workshop.zip
DEBUG: Loading template file datasync-template.zip
DEBUG: Loading template file crystalreports.zip
DEBUG: Loading template file commerce-project.zip
DEBUG: Loading template file commerce-application.zip
DEBUG: URI is null. Delete Version will not show up in the menu
DEBUG: URI is null. Delete Version will not show up in the menu
DEBUG: GCThread: performing gc while idle -
To improve performance for report
Hi Experts,
I have written an open sales order report that fetches data from VBAK, and it takes a long time when executed in the foreground; in fact it ends in a dump there.
I have also executed it in the background, but it goes into a dump there as well.
SELECT vbeln
auart
submi
vkorg
vtweg
spart
knumv
vdatu
vprgr
ihrez
bname
kunnr
FROM vbak
APPENDING TABLE itab_vbak_vbap
FOR ALL ENTRIES IN l_itab_temp
*BEGIN OF change 17/Oct/2008.
WHERE erdat IN s_erdat AND
submi = l_itab_temp-submi AND
*End of Changes 17/Oct/2008.
auart = l_itab_temp-auart AND
*BEGIN OF change 17/Oct/2008.
submi = l_itab_temp-submi AND
*End of Changes 17/Oct/2008.
vkorg = l_itab_temp-vkorg AND
vtweg = l_itab_temp-vtweg AND
spart = l_itab_temp-spart AND
vdatu = l_itab_temp-vdatu AND
vprgr = l_itab_temp-vprgr AND
ihrez = l_itab_temp-ihrez AND
bname = l_itab_temp-bname AND
kunnr = l_itab_temp-sap_kunnr.
DELETE itab_temp FROM l_v_from_rec TO l_v_to_rec.
ENDDO.
Please give me suggestions for improving the performance of the program.
Hi,
You can try it like this:
DATA:BEGIN OF itab1 OCCURS 0,
vbeln LIKE vbak-vbeln,
END OF itab1.
DATA: BEGIN OF itab2 OCCURS 0,
vbeln LIKE vbap-vbeln,
posnr LIKE vbap-posnr,
matnr LIKE vbap-matnr,
END OF itab2.
DATA: BEGIN OF itab3 OCCURS 0,
vbeln TYPE vbeln_va,
posnr TYPE posnr_va,
matnr TYPE matnr,
END OF itab3.
SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.
START-OF-SELECTION.
SELECT vbeln FROM vbak INTO TABLE itab1
WHERE vbeln IN s_vbeln.
IF itab1[] IS NOT INITIAL.
SELECT vbeln posnr matnr FROM vbap INTO TABLE itab2
FOR ALL ENTRIES IN itab1
WHERE vbeln = itab1-vbeln.
ENDIF. -
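One further note on the FOR ALL ENTRIES selection from VBAK in the thread above (a rough sketch, reusing l_itab_temp, s_erdat and itab_vbak_vbap from the original post): FOR ALL ENTRIES reads the whole table when the driver table is empty, and duplicate driver rows cause redundant database work, so it is usually worth guarding and deduplicating first.
* Sketch: guard and deduplicate the driver table before FOR ALL ENTRIES.
SORT l_itab_temp BY submi auart vkorg vtweg spart
                    vdatu vprgr ihrez bname sap_kunnr.
DELETE ADJACENT DUPLICATES FROM l_itab_temp
       COMPARING submi auart vkorg vtweg spart
                 vdatu vprgr ihrez bname sap_kunnr.

IF l_itab_temp[] IS NOT INITIAL.
  SELECT vbeln auart submi vkorg vtweg spart knumv
         vdatu vprgr ihrez bname kunnr
    FROM vbak
    APPENDING TABLE itab_vbak_vbap
    FOR ALL ENTRIES IN l_itab_temp
    WHERE erdat IN s_erdat
      AND submi = l_itab_temp-submi
      AND auart = l_itab_temp-auart
      AND vkorg = l_itab_temp-vkorg
      AND vtweg = l_itab_temp-vtweg
      AND spart = l_itab_temp-spart
      AND vdatu = l_itab_temp-vdatu
      AND vprgr = l_itab_temp-vprgr
      AND ihrez = l_itab_temp-ihrez
      AND bname = l_itab_temp-bname
      AND kunnr = l_itab_temp-sap_kunnr.
ENDIF.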
Estimating range of performance for a query
I wonder if this is an intractable problem:
Using information available from the data dictionary and from the tables referenced in a query, estimate actual minimum and maximum run times for a given query, predict performance for various combinations of realistic values for the variables that influence performance.
Variables would include:
what kinds of IO are used
how much of each type of IO is used
atpr - average time per read for various types of IO
data relationships - min/max/avg #child records per parent record
server caching - how many gets require IO
clustering factor of indexes
I think a related item would be client caching
continued rows - how many per N primary rows
Type of plan - initally I think perhaps all NL and all hash joins are simple starting points
Some of the variables are observable from target systems ( atpr, data relationships, clustering factor, .. ).
I know, the optimizer already does this.
I know, it's faster to just run test queries.
Repeated work with such ideas would cause refinement of the method and
eventually might result in reasonably accurate estimates and improved understanding
of the variables involved in query performance, the latter being more important
than the former.
Please tell me why this is a bad or good idea. I already mentioned a couple of counter-arguments above,
please don't bother elaborating on them unless you're shedding light and not flaming. I think this would be
somewhat like the index evaluation methods in Lahdenmaki/Leach's book, but more Oracle-centric.
Or maybe I'm kidding myself..
Justin Cave wrote:
Could you possibly invert the problem a bit and use some statistical process control techniques to set some baselines and to identify statements that are moving outside of their control limits?
For example, a control chart could be built for each SQL statement based on actual execution performance data in the AWR-- you just need to know the average and the standard deviation execution time for that which should be relatively easy to derive. You can then gather the performance data for every snapshot interval, add a new data point to the chart. There are a number of different sets of rules for determining a "signal" from this data as opposed to normal random variation. That would generally be a reasonable way for everyone to agree on what performance should really be expected for a SQL statement and it would be a good early warning system that "something has changed" when you see a particular query start to run consistently slower (or faster) than it had been. That, in turn, might lead to better discussions, i.e. "sql_id Y is starting to run more slowly than we were expecting based on prior testing because we just introduced Process X that is generating a pile of I/O in the same window your query runs in. We can adjust Y's baseline to incorporate this new reality. Or we can move when Y runs so that it isn't competing with X. Or we can try to optimize Y further. Or we can get the folks that own X and Y into a room and determine how to meet everyone's requirements". Particularly if your performance testing can identify issues in test before the new Process X code goes into prod.
Justin
Those are interesting ideas. Better discussions would be a good thing.
Re inverting the problem from prediction to reaction:
I have done some work with the script at http://kerryosborne.oracle-guy.com/2008/10/unstable-plans/ which of course work only for as much AWR data as you keep. I've had mixed results. I haven't tried to set it up to alert me about problems or to monitor a specific set of sql_ids. I've found it to be useful when users/developers are complaining about general slowness but won't give you any useful details about what is slow.
Here are a few complicating factors re identifying significant divergences in query performance or resource use - There are queries that accept different inputs that rightly generate completely different workloads for the same query, e.g., a product/location/time query whose inputs allow wide variations in the selectivity for each of the dimensions. There are applications that never heard of a bind variable, and there are queries that rightly do not use bind variables ( yes, even in the age of sql injection ).
In general , aside from the usual Grid Control and Nagios alerts re CPU/Memory/IO thresholds, and some blocking-locks alert programs, it's up to our users/developers to report performance problems.
Re my original question - I'll admit I was pretty sleep deprived when I wrote it. Sleep deprivation isn't usually conducive to clear thinking, so it will be interesting to see how this all looks in a month. Still, given that so much testing is affected by previous test runs ( caching ), I thought it made sense to try to understand the worst-performing case for a given execution plan. It's not a good thing to find that the big query that was tested to death and gave users a certain expectation of run time runs in production on a system where the caches have been dominated for the last N hours by completely unrelated workloads, and that when the query runs every single block must be read from a spindle, giving the users a much slower query than was seen in test.
Maybe the best way to predict worst-case performance is to work from the v$ metrics from test, manipulating the metrics to simulate different amounts of caching and different IO rates, thus generating different estimates for run-time. -
Regarding Database performance for report creation
Hi,
Currently i have one database in that two schema one for table data and second for reports creation.But it is taking more time to display data because there is more load on first schema.
I want to create second database for report schema .But i have to access table data from first database .
1) There is one option to fetch data from first database is through DB Link . But i think it also takes more time to fetch data.
Is there any way i can access data from first database and i get more performance with creating new database.
Kindly give me a suggestion. What should I do to improve report performance?
Kindly give me suggestion . What should i do to improve reports performance?You have more two options:
1. Use Oracle Streams and replicate tables between databases. WHile using reporting, you'll refer to the second database
2. Create Standby database, it's the clone of your database where you can update it by adding archived redo log files from primary database -
Standard Report check whether Goods Issue has been performed for DN
Hi All,
My users have a list of delivery notes (DNs) for which they would like to check whether Post Goods Issue has been performed. Is there any standard report to check whether a Delivery Note has had PGI performed?
I have tried report VL06G, but in that report we cannot use the DN number as the key.
Thanks.
There is no standard report to check GI by DN number.
You have to find the outbound delivery number first and then check by outbound delivery number.
In SE16N, enter table VBFA.
Enter your DN number under "Follow-on doc."
Under "Prec.doc.categ." select "J".
With the outbound number, you can then proceed to check using the report suggested above.
Alternatively, using the same VBFA table, enter the list of outbound numbers under "Preceding Doc." and select "Subs.doc.categ." as "R".
If you can set up query reports, you can also get all the information directly with the DN numbers. -
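If this has to be checked in a small program rather than in SE16N, the document-flow check described above can also be coded against VBFA, roughly as in the sketch below (p_vbeln is a hypothetical parameter for the delivery number; 'R' is the subsequent document category for goods movements, as noted in the reply):
* Sketch: does delivery p_vbeln have a goods movement follow-on document?
PARAMETERS: p_vbeln TYPE vbfa-vbelv.

DATA: lv_mblnr TYPE vbfa-vbeln.

SELECT SINGLE vbeln FROM vbfa
  INTO lv_mblnr
  WHERE vbelv   = p_vbeln        "preceding document = delivery note
    AND vbtyp_n = 'R'.           "subsequent document category = goods movement
IF sy-subrc = 0.
  WRITE: / 'PGI posted, material document:', lv_mblnr.
ELSE.
  WRITE: / 'No goods issue posted for delivery', p_vbeln.
ENDIF.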
Making data in tables online and offline - Backup/Recovery for tables.
Hi All,
I'm working on a project where the functionality is similar to 'Backup and Recovery' for database tables.
Lets say we have a set of 6 tables T1, T2.... T6. They do have relationship between them. The tables in questions are simple standard tables, which are not table partitioned.
- I want to remove the records from the live tables based on user-entered date ranges and store them in some offline medium.
- I might want to make the data online again from the offline medium; the application should be able to use that data without any modifications.
- Different offline mediums can be
a) Flat file
b) Different table space
c) Any other secondary medium (like XML, tape..Etc,)
The total number of records will run in millions.
The proposed solution should consider,
1. Performance - Java solutions not feasible. Anything in SQL, PL/SQL or runs in DB itself(tools) are OK.
2. Reliability - Should be highly reliable, Data corruption simply unacceptable.
3. Security - Users should not be able to make out of the file.
Few options include:
1. Use partition
2. Use SQL*Loader
3. Export and import of tables.
My main targets:
1. Reduce space.
2. Increase performance for queries.
Please pass on your suggestions, any help is highly appreciated!
Thanks in advance!
If you truly need to get the data out of the Oracle database into flat files, partitioning is pretty useless. Partitioning the table, though, strikes me as by far the most efficient, reliable, and secure solution.
What sort of security do you need with the flat file? Do you need the data in the file to be encrypted, or is the binary file format of an Oracle export file sufficiently obfuscated?
I'd stay away from UTL_FILE here, just because performance is rather poor.
I always get pretty nervous about recoverability when people start moving data out of the database to flat files for long-term archival.
- If you ever change the data model, even slightly, you may not be able to move the data back into the database without modifying your loader. A few years down the line, after a few data model changes, it can be almost impossible to find the documentation to make that change.
- If you do things like change lookup tables over time, reloading data can cause major problems. When you load the data back in, your applications and reports may not treat the old codes properly.
- Making sure that the flat files get stored and tagged properly can be a challenge. When the database gets moved to new system, you have to make sure that all the flat files also get moved around and the the loader scripts are modified appropriately.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
No database table exists for table ZTAB1 after ORA-01438
Dear friends,
We have a problem in our test system.
One developer performed a transport that tried to change a column from length 11 to 10, and after the transport the table is missing from the database. We observed ABAP dumps DBIF_RSQL_SQL_ERROR with ORA-01438, which suggested that there was a problem with the field length change. After discussion, we have found the culprit field.
But my problem now is a partially active table which does not exist in the database.
So what should I do to fix this problem? Should I increase the field length manually in the test system again and then try to perform SE14 > Continue adjustment, or something else?
In SE14, Analyze Adjustment provides this information:
Seq of steps
1 Initialization
2 Dropping of indexes, renaming of original table
3 No action
4 Activation of transparent table
5 Conversion of data to new table, deletion of renamed table
6 Creation of sec. indexes
Adjustment terminated in step 5
Initial table Table type Status
ZTAB1 Transparent Renamed
Renamed original table
QCMZTAB1 transparent Existent
Tgt.table
ZTAB1 Transparent Renamed
thanks
ashish
I had tried to reactivate the table, but it does not work because the conversion was terminated (adjustment terminated in step 5).
The database object itself does not exist, so a consistency check is not helpful.
We recognized the problem: at present we have a DDIC object for the table with field length 10 (new length), a QCM table which contains the original data with field length 11 (old value), and a conversion table QCM8<table> which has field length 10 (new value).
Now, in my case, the original table had some rows where that column used length 11, and the conversion was unable to transfer them to the QCM8 table.
I have raised an OSS message to SAP.
thanks
ashish