Will it improve performance if PACKAGE SIZE is added to a select query?
Hi Gurus
While doing performance optimization I noticed that the code splits the selected BKPF entries into packages of 500 and then selects from the BSEG cluster table. As a result, the remaining tables are also hit once per package of the selected BKPF internal table.
Can't we use PACKAGE SIZE 500 in the select statement instead of this splitting and hitting the tables in a DO loop?
eg:-
SELECT x v z
  FROM bseg
  APPENDING TABLE lt_bseg
  PACKAGE SIZE 500
  FOR ALL ENTRIES IN lt_coep
  WHERE bukrs = lt_coep-bukrs
    AND belnr = lt_coep-belnr_fi
    AND gjahr = lt_coep-gjahr.
ENDSELECT.
When I read the keyword documentation I saw this note: "If the addition PACKAGE SIZE is specified together with FOR ALL ENTRIES, it is not passed to the database system, but is applied to the result set on the application server, after all selected rows have been read." I did not understand what this means.
So I am confused whether I can go ahead with this approach or should stay with the existing logic that splits BKPF and selects from BSEG and the other tables in a DO loop.
Can you please help me << removed >>.
Thanks & Regards,
Shinimol.
Edited by: Rob Burbank on Sep 23, 2010 9:09 AM
Hi,
regarding your second doubt:
> Second doubt is on the note I saw in the keyword documentation: "If the addition PACKAGE SIZE is specified together with FOR ALL ENTRIES, it is not passed to the database system, but is applied to the result set on the application server, after all selected rows have been read." I couldn't understand this. When I debugged the select with F5 it was selecting 500 rows at a time until it reached the end.
> So what does it mean: can we use this option or should we not use it?
the FAE (FOR ALL ENTRIES) runs your select in pieces, collects the results in an intermediate buffer in the database interface, and eliminates the duplicates there. Only AFTER that does it hand back to your ABAP program the number of rows specified with PACKAGE SIZE. It simply means that the PACKAGE SIZE addition has no effect on the memory consumption of the intermediate buffer in the database interface.
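Hermann's point can be sketched in plain Python (the thread is ABAP, but the order of operations is easier to see language-neutrally). Everything here is invented for illustration — the fake fetch function, the blocking factor of 5 — only the sequence (fetch all blocks, de-duplicate, then slice into packages) reflects the documented behavior.

```python
def fae_select(driver_keys, fetch_chunk, package_size, blocking_factor=5):
    """Simulate FOR ALL ENTRIES ... PACKAGE SIZE n."""
    buffer = []
    seen = set()
    # The driver table is split into blocks; one SQL statement per block.
    for i in range(0, len(driver_keys), blocking_factor):
        block = driver_keys[i:i + blocking_factor]
        for row in fetch_chunk(block):          # one DB round trip per block
            if row not in seen:                 # duplicates are eliminated
                seen.add(row)
                buffer.append(row)
    # Only AFTER the whole result sits in the buffer is it packaged,
    # so PACKAGE SIZE does not shrink this buffer.
    for i in range(0, len(buffer), package_size):
        yield buffer[i:i + package_size]

# Fake "database": every key yields two rows.
def fetch_chunk(keys):
    return [(k, v) for k in keys for v in ("a", "b")]

packages = list(fae_select(list(range(7)), fetch_chunk, package_size=500))
```

So the ABAP program still sees 500-row packages in the debugger, but the full de-duplicated result already exists in the database interface before the first package arrives.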
Kind regards,
Hermann
Similar Messages
-
The following packages were added to your selection to satisfy dependencies
I'm going to install Oracle R12 and a few other things. I've applied all the required Linux packages by hand previously and wanted to try ULN so I could use the oracle-validated package. I bought a network subscription and registered my server after installing OEL 5.5. It's a Dell R410.
Using the oracle-validated package does not pull in all the dependent packages. What am I missing or doing wrong? I get the message that I am missing dependencies no matter what I do. I thought the entire point of the package was to avoid applying all these packages by hand.
[root@vicr410 sysconfig]# up2date --nox show-channels
el5_x86_64_addons
el5_x86_64_latest
el5_u5_x86_64_patch
ol5_x86_64_latest
el5_x86_64_oracle
I tried: up2date --install oracle-validated to install the required database packages. The resulting message is "The following packages were added to your selection to satisfy dependencies".
Testing package set / solving RPM inter-dependencies...
There was a package dependency problem. The message was:
Unresolvable chain of dependencies:
glibc-headers 2.5-49.el5_5.7 requires kernel-headers
glibc-headers-2.5-49.el5_5.7 requires kernel-headers >= 2.2.1
oracle-validated 1.1.0-6.el5 requires kernel-headers
The following packages were added to your selection to satisfy dependencies:
Package Required by
elfutils-libelf-devel-0.137-3.el5.x86_64 oracle-validated-1.1.0-6.el5 elfutils-libelf-devel
gcc-4.1.2-48.el5.x86_64 oracle-validated-1.1.0-6.el5 gcc
gcc-c++-4.1.2-48.el5.x86_64 oracle-validated-1.1.0-6.el5 gcc-c++
gdb-7.0.1-23.el5_5.2.x86_64 oracle-validated-1.1.0-6.el5 gdb
glibc-devel-2.5-49.el5_5.7.i386 oracle-validated-1.1.0-6.el5 /usr/lib/libc.so
glibc-devel-2.5-49.el5_5.7.x86_64 oracle-validated-1.1.0-6.el5 /usr/lib/libc.so
glibc-devel-2.5-49.el5_5.7.x86_64 oracle-validated-1.1.0-6.el5 /usr/lib64/libc.so
glibc-headers-2.5-49.el5_5.7.x86_64 oracle-validated-1.1.0-6.el5 glibc-headers
libaio-devel-0.3.106-5.x86_64 oracle-validated-1.1.0-6.el5 libaio-devel
libgomp-4.4.0-6.el5.x86_64 gcc-4.1.2-48.el5 libgomp
sysstat-7.0.2-3.el5_5.1.x86_64 oracle-validated-1.1.0-6.el5 sysstat
unixODBC-devel-2.2.11-7.1.x86_64 oracle-validated-1.1.0-6.el5 unixODBC-devel
I tried to add the rpm oracle-validated-1.0.0-22.el5.x86_64.rpm using
rpm -ivh oracle-validated-1.0.0-22.el5.x86_64.rpm.
[root@vicr410 Server]# rpm -ivh oracle-validated-1.0.0-22.el5.x86_64.rpm
error: Failed dependencies:
/usr/lib/gcc/x86_64-redhat-linux/4.1.1/libstdc++.a is needed by oracle-validated-1.0.0-22.el5.x86_64
/usr/lib/libaio.so is needed by oracle-validated-1.0.0-22.el5.x86_64
/usr/lib/libc.so is needed by oracle-validated-1.0.0-22.el5.x86_64
/usr/lib/libodbc.so.1 is needed by oracle-validated-1.0.0-22.el5.x86_64
/usr/lib/libodbccr.so is needed by oracle-validated-1.0.0-22.el5.x86_64
/usr/lib64/libc.so is needed by oracle-validated-1.0.0-22.el5.x86_64
compat-gcc-34 is needed by oracle-validated-1.0.0-22.el5.x86_64
compat-gcc-34-c++ is needed by oracle-validated-1.0.0-22.el5.x86_64
elfutils-libelf-devel is needed by oracle-validated-1.0.0-22.el5.x86_64
gcc is needed by oracle-validated-1.0.0-22.el5.x86_64
gcc-c++ is needed by oracle-validated-1.0.0-22.el5.x86_64
gdb is needed by oracle-validated-1.0.0-22.el5.x86_64
glibc-headers is needed by oracle-validated-1.0.0-22.el5.x86_64
kernel-headers is needed by oracle-validated-1.0.0-22.el5.x86_64
libXp.so.6 is needed by oracle-validated-1.0.0-22.el5.x86_64
libaio-devel is needed by oracle-validated-1.0.0-22.el5.x86_64
libdb-4.2.so()(64bit) is needed by oracle-validated-1.0.0-22.el5.x86_64
libodbc.so.1()(64bit) is needed by oracle-validated-1.0.0-22.el5.x86_64
sysstat is needed by oracle-validated-1.0.0-22.el5.x86_64
unixODBC-devel is needed by oracle-validated-1.0.0-22.el5.x86_64
Cool. That worked. All the missing packages were installed.
[root@vicr410 ~]# up2date --nox show-channels
el5_ia64_latest
el5_i386_latest
[root@vicr410 ~]# up2date --install oracle-validated
Fetching Obsoletes list for channel: el5_ia64_latest...
Fetching Obsoletes list for channel: el5_i386_latest...
Fetching rpm headers...
Name Version Rel
oracle-validated 1.0.0 24.el5 i386
Testing package set / solving RPM inter-dependencies...
compat-db-4.2.52-5.1.i386.r ########################## Done.
compat-gcc-34-3.4.6-4.i386. ########################## Done.
compat-gcc-34-c++-3.4.6-4.i ########################## Done.
elfutils-libelf-0.137-3.el5 ########################## Done.
elfutils-libelf-devel-0.137 ########################## Done.
elfutils-libelf-devel-stati ########################## Done.
gcc-4.1.2-48.el5.i386.rpm: ########################## Done.
gcc-4.1.2-48.el5.i386.rpm: ########################## Done.
gcc-4.1.2-48.el5.i386.rpm: ########################## Done.
gcc-c++-4.1.2-48.el5.i386.r ########################## Done.
gdb-7.0.1-23.el5_5.2.i386.r ########################## Done.
glibc-devel-2.5-49.el5_5.7. ########################## Done.
glibc-headers-2.5-49.el5_5. ########################## Done.
libXp-1.0.0-8.1.el5.i386.rp ########################## Done.
libaio-devel-0.3.106-5.i386 ########################## Done.
libstdc++-devel-4.1.2-48.el ########################## Done.
oracle-validated-1.0.0-24.e ########################## Done.
sysstat-7.0.2-3.el5_5.1.i38 ########################## Done.
unixODBC-2.2.11-7.1.i386.rp ########################## Done.
unixODBC-devel-2.2.11-7.1.i ########################## Done.
libgomp-4.4.0-6.el5.i386.rp ########################## Done.
Preparing ########################################### [100%]
Installing...
1:unixODBC ########################################### [100%]
2:libgomp ########################################### [100%]
3:sysstat ########################################### [100%]
4:libXp ########################################### [100%]
5:gdb ########################################### [100%]
6:elfutils-libelf ########################################### [100%]
7:compat-db ########################################### [100%]
8:libstdc++-devel ########################################### [100%]
9:glibc-headers ########################################### [100%]
10:glibc-devel ########################################### [100%]
11:unixODBC-devel ########################################### [100%]
12:libaio-devel ########################################### [100%]
13:compat-gcc-34 ########################################### [100%]
14:gcc ########################################### [100%]
15:gcc-c++ ########################################### [100%]
16:compat-gcc-34-c++ ########################################### [100%]
17:elfutils-libelf-devel ########################################### [100%]
18:elfutils-libelf-devel-s########################################### [100%]
19:oracle-validated ########################################### [100%]
The following packages were added to your selection to satisfy dependencies:
Name Version Release
compat-db 4.2.52 5.1
compat-gcc-34 3.4.6 4
compat-gcc-34-c++ 3.4.6 4
elfutils-libelf 0.137 3.el5
elfutils-libelf-devel 0.137 3.el5
elfutils-libelf-devel-static 0.137 3.el5
gcc 4.1.2 48.el5
gcc 4.1.2 48.el5
gcc 4.1.2 48.el5
gcc-c++ 4.1.2 48.el5
gdb 7.0.1 23.el5_5.2
glibc-devel 2.5 49.el5_5.7
glibc-headers 2.5 49.el5_5.7
libXp 1.0.0 8.1.el5
libaio-devel 0.3.106 5
libstdc++-devel 4.1.2 48.el5
sysstat 7.0.2 3.el5_5.1
unixODBC 2.2.11 7.1
unixODBC-devel 2.2.11 7.1
libgomp 4.4.0 6.el5 -
Will Partitioning improve performance on Global Temporary Table
Dear Guru,
In one complicated module I am using a Global Temporary Table (GTT) for intermediate processing, i.e. it stores the required data, but the row count grows to 1,000,000 to 2,000,000 rows.
Can Partitioning or Indexing on Global Temporary Table improve the performance?
Thanking in Advance
Sanjeev
Sounds like an odd use of a GTT to me, but I'm sure there are valid reasons...
Presumably you are going to be processing all of these rows in some way? In which case I can't see how partitioning, even if it's possible (And I don't think it is) would help you.
Indexes - sure, that might help, but again, if you are reading all/most of these rows anyway they might not help or even get used.
Can you give a bit more detail about exactly what you are doing?
Edited by: Carlovski on Nov 24, 2010 12:51 PM -
Improving performance in a merge between local and remote query
If you try to merge two queries, one local (e.g. a table in Excel) and one remote (a table in SQL Server), the entire remote table is loaded into memory in order to apply the NestedJoin condition. This can be very slow. In my case, the goal is to import only those rows that have a product name listed in a local Excel table.
I used the SelectRows by using the list of values of the local query (having only one column) in order to apply an "IN ('value1', 'value2', ...)" condition in the SQL statement generated by Power Query (see examples below).
Questions:
Is there another way to do that in "M"?
Is there a way to build such a query (filter a table by using values obtained in another query) by using the user interface?
Is this a scenario that could be better optimized in the future by improving query folding made by Power Query?
Thanks for the feedback!
Local Query
let
LocalQuery = Excel.CurrentWorkbook(){[Name="LocalTable"]}[Content]
in
LocalQuery
Remote Query
let
Source = Sql.Databases("servername"),
Database = Source{[Name="databasename"]}[Data],
RemoteQuery = Database{[Schema="schemaname",Item="tablename"]}[Data]
in
RemoteQuery
Merge Query (from Power Query user interface)
let
Merge = Table.NestedJoin(LocalQuery,{"ProductName"},RemoteQuery,{"ProductName"},"NewColumn",JoinKind.Inner),
#"Expand NewColumn" = Table.ExpandTableColumn(Merge, "NewColumn", {"Description", "Price"}, {"NewColumn.Description", "NewColumn.Price"})
in
#"Expand NewColumn"
Alternative merge approach (editing M - is it possible in user interface?)
let
#"Filtered Rows" = Table.SelectRows(RemoteQuery, each List.Contains ( Table.ToList(LocalQuery), [ProductName] ))
in
#"Filtered Rows"
Marco Russo (Blog, Twitter, LinkedIn) - sqlbi.com: Articles, Videos, Tools, Consultancy, Training. Format with DAX Formatter and design with DAX Patterns. Learn Power Pivot and SSAS Tabular.
Bingo! You've found a serious performance issue!
The very same result can be produced in a fast or slow way.
Slow technique: do the RemoveColumns before the SelectRows (maybe you don't have to apply any transformations to the table you want to filter before the SelectRows - I haven't tested this):
let
Source = Sql.Databases(".\k12"),
AdventureWorksDW2012 = Source{[Name="AdventureWorksDW2012"]}[Data],
dbo_FactInternetSales = AdventureWorksDW2012{[Schema="dbo",Item="FactInternetSales"]}[Data],
#"Removed Columns" = Table.RemoveColumns(dbo_FactInternetSales,{"SalesOrderLineNumber", "RevisionNumber", "OrderQuantity", "UnitPrice", "ExtendedAmount", "UnitPriceDiscountPct", "DiscountAmount", "ProductStandardCost", "TotalProductCost",
"SalesAmount", "TaxAmt", "Freight", "CarrierTrackingNumber", "CustomerPONumber", "OrderDate", "DueDate", "ShipDate", "DimCurrency", "DimCustomer", "DimDate(DueDateKey)", "DimDate(OrderDateKey)", "DimDate(ShipDateKey)", "DimProduct", "DimPromotion", "DimSalesTerritory",
"FactInternetSalesReason"}),
#"Filtered Rows" = Table.SelectRows(#"Removed Columns", each List.Contains(Selection[ProductKey],[ProductKey]))
in
#"Filtered Rows"
Fast technique: do the RemoveColumns after the SelectRows
let
Source = Sql.Databases(".\k12"),
AdventureWorksDW2012 = Source{[Name="AdventureWorksDW2012"]}[Data],
dbo_FactInternetSales = AdventureWorksDW2012{[Schema="dbo",Item="FactInternetSales"]}[Data],
#"Filtered Rows" = Table.SelectRows(dbo_FactInternetSales, each List.Contains(Selection[ProductKey],[ProductKey])),
#"Removed Columns" = Table.RemoveColumns(#"Filtered Rows",{"SalesOrderLineNumber", "RevisionNumber", "OrderQuantity", "UnitPrice", "ExtendedAmount", "UnitPriceDiscountPct", "DiscountAmount", "ProductStandardCost", "TotalProductCost",
"SalesAmount", "TaxAmt", "Freight", "CarrierTrackingNumber", "CustomerPONumber", "OrderDate", "DueDate", "ShipDate", "DimCurrency", "DimCustomer", "DimDate(DueDateKey)", "DimDate(OrderDateKey)", "DimDate(ShipDateKey)", "DimProduct", "DimPromotion", "DimSalesTerritory",
"FactInternetSalesReason"})
in
#"Removed Columns"
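The difference between the two M variants can be sketched outside M, in plain Python: what matters is where the predicate runs. The mock remote source and row counts below are invented; the sketch just counts how many rows cross the "wire" in each case, which is what query folding changes.

```python
def remote_rows(n, predicate=None):
    """Mock remote table: yields rows, optionally filtered at the source."""
    sent = 0
    for key in range(n):
        row = {"ProductKey": key, "Amount": key * 10}
        if predicate is None or predicate(row):
            sent += 1
            yield row
    remote_rows.rows_sent = sent  # how many rows crossed the "wire"

wanted = {3, 5}

# Slow: pull everything, then filter locally (transform-before-filter).
all_rows = list(remote_rows(1000))
slow = [r for r in all_rows if r["ProductKey"] in wanted]
slow_sent = remote_rows.rows_sent

# Fast: push the filter to the source (what folding SelectRows achieves).
fast = list(remote_rows(1000, lambda r: r["ProductKey"] in wanted))
fast_sent = remote_rows.rows_sent
```

Same result either way; the fast path just moves three orders of magnitude fewer rows in this made-up example.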
I think that Power Query team should take a look at this.
Thanks!
-
Hi All,
I am using a select statement on MARA and ZTSDNETPR.
In the ZTSDNETPR table, PLTYP, KONDA, MATNR, DATAB, DATBI are index fields in the same order as I am using in my select statement.
But it takes 1 hour 45 minutes to execute. Please give me any suggestions.
SELECT MATNR MTART MATKL ZPLINE ZREL ZLANG ZUSG_TYPE
ZDEPLOY ZLIC_TYPE ZSER_TYPE ZSVC_SALE_TYPE
ZSUB_LEVEL ZSUB_REL ZASM ZREF_TYPE ZUPD_FRM
FROM MARA
INTO CORRESPONDING FIELDS OF TABLE IT_MATERIAL_INFO
WHERE MATNR IN S_MATNR
AND MATKL IN S_MATKL
AND ERSDA IN S_ERSDA.
IF SY-SUBRC = 0.
sort it_material_info by matnr.
***Fetching the price details from ZTSDNETPR table
select pltyp konda MATNR datab datbi
from ZTSDNETPR
into TABLE IT_ZTSDNETPR
for all entries in it_material_info
where
pltyp <> space
and konda <> space
AND MATNR = it_material_info-matnr
AND DATAB <= sy-datum
AND DATBI >= sy-datum.
Regards,
Shiv.
Shiv,
SELECT MATNR MTART MATKL ZPLINE ZREL ZLANG ZUSG_TYPE
ZDEPLOY ZLIC_TYPE ZSER_TYPE ZSVC_SALE_TYPE
ZSUB_LEVEL ZSUB_REL ZASM ZREF_TYPE ZUPD_FRM
FROM MARA
INTO CORRESPONDING FIELDS OF TABLE IT_MATERIAL_INFO
<b>WHERE MATNR IN S_MATNR
AND MATKL IN S_MATKL
AND ERSDA IN S_ERSDA.</b>
changed to
SELECT MATNR MTART MATKL ZPLINE ZREL ZLANG ZUSG_TYPE
ZDEPLOY ZLIC_TYPE ZSER_TYPE ZSVC_SALE_TYPE
ZSUB_LEVEL ZSUB_REL ZASM ZREF_TYPE ZUPD_FRM
FROM MARA
INTO CORRESPONDING FIELDS OF TABLE IT_MATERIAL_INFO
WHERE MATNR IN S_MATNR.
LOOP AT it_material_info.
  IF it_material_info-matkl NOT IN s_matkl
     OR it_material_info-ersda NOT IN s_ersda.
    DELETE it_material_info.
  ENDIF.
ENDLOOP.
In your previous query the fields given in the WHERE clause are not primary keys. Always make it a point to use only primary key or index fields in the WHERE clause; this will enhance the performance.
So I changed the select query accordingly and then filtered it with respect to the selection-screen MATKL and ERSDA.
Apply the same to the Z-table as well; this will surely enhance the performance.
K.Kiran.
Message was edited by:
Kiran K -
PACKAGE SIZE n in SELECT query
Hi,
When using PACKAGE SIZE n option with SELECT queries, how to determine the best/optimum value of n ? Especially when we use this for querying tables like EKPO, EKKO etc.
Regards,
Anand.
> When using the PACKAGE SIZE n option with SELECT queries, how do we determine the best/optimum value of n?
The 'package size' option to the select specifies how many
rows are returned in one chunk.
According to ABAP-Doku, it is best to use it with an internal table:
DATA: itab TYPE STANDARD TABLE OF SCARR WITH NON-UNIQUE
DEFAULT KEY INITIAL SIZE 10.
FIELD-SYMBOLS: <FS> TYPE scarr.
SELECT * INTO TABLE itab PACKAGE SIZE 20 FROM scarr.
LOOP AT itab ASSIGNING <FS>.
WRITE: / <FS>-carrid, <FS>-carrname.
ENDLOOP.
ENDSELECT.
But, basically, your application's requirements determine
what's the best value for n.
If you don't want a lot of DB-access, you choose a high
value for n. If you don't want a lot of data in memory, you adjust it to a lower value.
You can also use the 'up to n rows' construct in the select to limit the number of rows fetched from the db.
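Thomas's trade-off can be put into a tiny back-of-the-envelope model (plain Python, not ABAP; the 200-byte row size is an arbitrary assumption, not a measurement): a larger n means fewer fetch loops but more rows held in memory at once.

```python
import math

def cost_model(total_rows, package_size, row_bytes=200):
    """Rough trade-off: loops vs. peak rows held in application memory."""
    round_trips = math.ceil(total_rows / package_size)  # SELECT loop passes
    peak_memory = package_size * row_bytes              # bytes held at once
    return round_trips, peak_memory

small = cost_model(100_000, 100)     # many loop passes, tiny footprint
large = cost_model(100_000, 10_000)  # few loop passes, bigger footprint
```

Under these assumptions, n=100 gives 1000 passes holding ~20 KB, while n=10,000 gives 10 passes holding ~2 MB; the "best" n is wherever your application's memory budget and loop overhead balance out.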
thomas -
Improving performance while adding groups
Hello,
I've been monitoring my Crystal Reports for a week or so and report performance has gone for a toss. Let me narrate this in a little detail. I have created 3 groups to select dynamic parameters, and each group has a formula of its own. In my parameters I have added one parameter with 7 entities (hard coded); a user can select any 3 of those seven when initially refreshing the document. Each parameter entity is bundled in a conditional formula (mentioned under formula fields). The user may select any entity and gets the respective data for that entity.
For all this I have created 3 groups and pasted the same formula under all 3 groups. I then made the formula group selectable under Group Expert. The report works fine and yields correct data. However, during the grouping of the formulas Crystal selects all the database tables from the database fields, as these tables are mentioned under the group formula. Agreed, all fine.
But when I run the report, "Show SQL Query" selects all the database tables in the SELECT clause, which should not be the case. Because of this, even if I select an entity with only 48 to 50 records, Crystal fetches all 1,656,053 records from the database, which hampers performance big time. The same query in SQL retrieves the data in just 8 seconds, but Crystal, selecting all the records, returns data after 90 seconds, which is frustrating for the user.
Please suggest me a workaround for this. Please help.
Thank you.
Hi,
I suspect the problem isn't just your grouping but your Record Selection Formula as well. If you do not see a complete WHERE clause, it is because your Record Selection Formula is too complicated for Crystal to translate to SQL.
The same would be said for your grouping. There are two suggestions I can offer:
1) Instead of linking the tables in Crystal, use a SQL Command and generate your query in SQL directly. You can use parameters and at the very least, get a working WHERE clause.
2) Create a Stored Procedure or view that can use the logic you need to retrieve the records.
At the very least you want to be able to streamline the query to improve performance. Grouping may not be possible but my guess it's more with the Selection formula than the grouping.
Good luck,
Brian -
Will native compiling improve performance?
Hi,
I've got familiar with Excelsior Native Compiler since a week ago. They claim on their website that compiling Java classes to native code (machine code) will improve the performance of the application. However, JAlbum (http://jalbum.net) says that its JAR files of the application run "basically at the same speed" compared to the native compiled one for windows.
Will really compiling Java classes to native code improve performance? I'm not talking about the startup speed, I mean the whole application performance.
Thanks...
Depends on what the app is doing.
Many things in Java run as fast as native code, especially if you're using a later version of Java.
i guess there's one way to find out :-) -
What will I lose if I reset network settings? Was advised to try this to improve performance
What will I lose if I reset my network settings? I was advised to try this to improve performance. Will it make significant changes that I can't recover?
Resetting network settings clears out all your Wi-Fi passwords as well as any other associations with Wi-Fi networks you've made. For example, if you connect to your coffee shop's open Wi-Fi network automatically, you will need to manually rejoin it like you did the first time to re-establish that association. It will also erase any Bluetooth pairings you've set up.
-
I have a Mac Pro (2009, 3.3 GHz Nehalem quad, 16GB RAM) and wanted to improve performance in Aperture (I see the spinning clock wheel constantly) while editing; I also use Lightroom and sometimes CS5.
Can anyone with experience say whether upgrading from the GT120 would make a difference, and roughly how much?
Next, do I need to buy the 5870, or can I get the 5770 to work?
I am assuming I have to remove the GT120 for the new card to fit?
Thanks
Terrible marketing. ALL ATI 5xxx cards work in ALL Mac Pro models, with 10.6.5 and later.
It really should be up to you to check AMD and search out reviews that compare these cards to others. Didn't you look at the specs of each, or at Barefeats? He has half a dozen benchmark tests, but the GT120 doesn't even show up, or isn't in the running, in most.
From AMD, the 5870 shows 2x the units:
TeraScale 2 Unified Processing Architecture
1600 Stream Processing Units
80 Texture Units
128 Z/Stencil ROP Units
32 Color ROP Units
ATI Radeon™ HD 5870 graphics
That should hold up well.
Some are on the fence or don't want to pay $$.
All they, or you (and you've been around for more than a day!), need to do is go to the Apple Store:
ATI Radeon HD 5870 Graphics Upgrade Kit for Mac Pro (Mid 2010 or Early 2009)
ATI Radeon HD 5870 Upgrade -
Performance : how to detrmine the package size during a select
Hi,
When you do a select using package size, how to determine the value of the package size.
Thanks for your help
Marie
Hi Marie,
1. A select using PACKAGE SIZE is done when the number of records is very high and we don't want to fetch all the records in one shot.
2. In that case, we fetch records in bunches / packages.
3. Just copy-paste the following to get a taste of it.
REPORT abc.

DATA : t001 LIKE TABLE OF t001 WITH HEADER LINE.
DATA : ctr TYPE i.

* selection screen
PARAMETERS : a TYPE c.

START-OF-SELECTION.

SELECT * FROM t001
  INTO TABLE t001
  PACKAGE SIZE 5.
  ctr = ctr + 1.
  WRITE :/ '---- Loop Pass # ' , ctr.
  LOOP AT t001.
    WRITE :/ t001-bukrs , t001-butxt.
  ENDLOOP.
ENDSELECT.
regards,
amit m. -
Data package size will be determined dynamically.
Dear SDNers,
I have seen that for some DTPs in my projects the data package size in the DTP is determined dynamically. How do we get this?
I am getting this message in the DTP -> Extraction tab:
The package size corresponds to package size in source.
It is determined dynamically at runtime.
Thanks,
Swathi
Hello,
You would get this when semantic keys are not defined in the DTP.
Regards..
Balaji -
FI-CA events to improve performance
Hello experts,
Does anybody use the FI-CA events to improve the extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
It seems that these specific exits associated with BW events have been developed especially to improve performance.
Any documentation or guide would be appreciated.
Thanks.
Thibaud.
Thanks to all for the replies.
@Sybrand
Please answer first whether the column is stored in a separate lobsegment.
No. The table, index, LOB, and LOB index all use the same tablespace. I missed adding this point (moving the LOB to a separate tablespace) as part of the table modifications.
@Hemant
There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
Is this the one you are referring to?
http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
By moving the CLOB column to a tablespace with a different block size, I will test the performance improvement it gives and will share the results.
We don't need to keep any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
So there is no need for backup/recovery operations on this table. The client will be able to replay the transactions if any problem occurs.
@Billy
We are not performing XML parsing on DB side. Gets the XML data from client -> insert into table -> client selects from table -> Upon successful completion of the Job from client ,XML data gets deleted.
Regarding binding of LOB from client side, will check on that side also to reduce round trips.
By changing the block size, I can set db_32k_cache_size=2G and keep this table in the CACHE. If I directly put my table in the CACHE, it will age out all other operations from the buffer, which makes things worse for us.
This insert is part of transaction( Registration of a finger print) and this is the only statement taking time as of now compared to other statements in the transaction.
Thanks,
Arun -
Hi all,
My select statement is shown below. The trace in SM30 shows that it takes the maximum time. How can I reduce the time spent hitting the DB table, i.e. improve the performance?
IF LT_JCDS[] IS NOT INITIAL.
SELECT OBJNR
WKGBTR
BELNR
WRTTP
BEKNZ
PERIO
GJAHR
VERSN
KOKRS
VRGNG
GKOAR
BUKRS
REFBZ_FI
MBGBTR
FROM COEP
INTO CORRESPONDING FIELDS OF TABLE LT_COEP
FOR ALL ENTRIES IN LT_JCDS
WHERE KOKRS EQ 'DXES'
AND OBJNR EQ LT_JCDS-OBJNR
AND GJAHR <= SO_GJAHR-LOW
AND VERSN eq '000'
AND ( VRGNG EQ 'COIN' OR VRGNG EQ 'RKU1' OR VRGNG EQ 'RKL').
IF SY-SUBRC <> 0.
MESSAGE e000(8i) WITH 'DATA NOT FOUND IN "CO Object: Line Items (by Period)"'.
ENDIF.
ENDIF.Hi
see these points
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
   - Select Queries
   - SQL Interface
   - Aggregate Functions
   - For All Entries
   - Select Over more than one Internal table
Selection Criteria
1. Restrict the data to the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
2. Select with selection list.
Points # 1/2
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
      CONNID = '0400'.
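The same point can be sketched outside ABAP with Python's sqlite3 module: let the database do the filtering and return only the needed columns, instead of fetching every row and discarding most of them client-side. The sbook table and its contents below are invented stand-ins for SBOOK.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sbook "
             "(carrid TEXT, connid TEXT, fldate TEXT, bookid INTEGER, notes TEXT)")
conn.executemany("INSERT INTO sbook VALUES (?,?,?,?,?)",
                 [("LH", "0400", "20240101", 1, "x"),
                  ("LH", "0500", "20240102", 2, "x"),
                  ("AA", "0400", "20240103", 3, "x")])

# Anti-pattern: SELECT * plus a client-side CHECK on every row.
slow = [(r[0], r[1], r[2], r[3])
        for r in conn.execute("SELECT * FROM sbook")
        if r[0] == "LH" and r[1] == "0400"]

# Better: selection list plus WHERE clause; only matching rows travel.
fast = conn.execute(
    "SELECT carrid, connid, fldate, bookid FROM sbook "
    "WHERE carrid = ? AND connid = ?", ("LH", "0400")).fetchall()
```

Both produce the same rows; the difference is how much data crosses the database boundary before the filter is applied.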
Select Statements Select Queries
1. Avoid nested selects
2. Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
4. For testing existence , use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit.
5. Use Select Single if all primary key fields are supplied in the Where condition .
Point # 1
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
Point # 2
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
      CONNID = '0400'.
Point # 3
To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
Point # 4
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
Point # 5
If all primary key fields are supplied in the Where condition you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
Select Statements contd.. SQL Interface
1. Use column updates instead of single-row updates
to update your database tables.
2. For all frequently used Select statements, try to use an index.
3. Using buffered tables improves the performance considerably.
Point # 1
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
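Point #1 can be sketched with Python's sqlite3 module; sflight and seatsocc are illustrative stand-ins for the ABAP example above. The idea is one set-based UPDATE instead of a fetch-modify-write round trip per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sflight (carrid TEXT, seatsocc INTEGER)")
conn.executemany("INSERT INTO sflight VALUES (?,?)",
                 [("LH", 100), ("AA", 80), ("UA", 60)])

# Anti-pattern (row by row): read each row, decrement, write it back.
# for carrid, occ in conn.execute("SELECT carrid, seatsocc FROM sflight"):
#     conn.execute("UPDATE sflight SET seatsocc=? WHERE carrid=?",
#                  (occ - 1, carrid))

# Set-based: a single statement updates every row in one DB operation.
conn.execute("UPDATE sflight SET seatsocc = seatsocc - 1")

result = dict(conn.execute("SELECT carrid, seatsocc FROM sflight"))
```

With N rows, the loop version costs N+1 statements; the set-based version always costs one.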
Point # 2
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
Point # 3
Bypassing the buffer increases network load considerably:
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements contd Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
  Check zflight-fligh > maxno.
  Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
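The same comparison, sketched with Python's sqlite3 module (zflight and its columns are invented stand-ins): asking the database for MAX() means only one value crosses the wire, instead of every matching row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE zflight (airln TEXT, cntry TEXT, fligh INTEGER)")
conn.executemany("INSERT INTO zflight VALUES (?,?,?)",
                 [("LF", "IN", 7), ("LF", "IN", 42), ("LF", "US", 99)])

# Anti-pattern: fetch all matching rows and track the maximum in a loop.
maxno = 0
for (fligh,) in conn.execute(
        "SELECT fligh FROM zflight WHERE airln='LF' AND cntry='IN'"):
    if fligh > maxno:
        maxno = fligh

# Better: let the database compute the aggregate.
(db_max,) = conn.execute(
    "SELECT MAX(fligh) FROM zflight WHERE airln='LF' AND cntry='IN'"
).fetchone()
```

The same applies to MIN, AVG, SUM, and COUNT: if only the aggregate is needed, compute it in the select list.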
Select Statements contd For All Entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus:
- Handles a large amount of data
- Mixing processing and reading of data
- Fast internal reprocessing of data
- Fast
The minus:
- Difficult to program/understand
- Memory could be critical (use FREE or PACKAGE SIZE)
Points that must be considered with FOR ALL ENTRIES
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
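Where memory is critical (see "use FREE or PACKAGE size" above), the result can also be fetched in portions with the PACKAGE SIZE addition. A sketch, assuming the same zfligh/int_cntry structures as above; note that, per the keyword documentation, PACKAGE SIZE combined with FOR ALL ENTRIES is applied on the application server only AFTER the whole result set has been read, so it limits the size of the target table per loop pass but not the intermediate buffer in the database interface.

```
DATA lt_part TYPE STANDARD TABLE OF zfligh.

" Each pass of the SELECT loop delivers at most 500 rows in lt_part.
SELECT * FROM zfligh
         INTO TABLE lt_part
         PACKAGE SIZE 500
         FOR ALL ENTRIES IN int_cntry
         WHERE cntry = int_cntry-cntry.
  " Process lt_part here; the next package overwrites it.
ENDSELECT.
```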
Select Statements (contd.): Select Over More Than One Table
1. It's better to use a view instead of nested Select statements.
2. To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables that are joined. If the primary keys are not provided, the join itself takes time.
3. Instead of using nested Select loops it is often better to use subqueries.
Point # 1
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be more optimized by extracting all the data from the view DD01V
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
Point # 2
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Point # 3
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4. A binary search using a secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
Point # 2
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than using
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
Point # 3
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than using
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
Point # 5
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
Point # 6
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably.
8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
Point # 7
Updating lines directly via a field symbol (ASSIGNING) is faster than copying each line into a work area, modifying it, and writing it back with MODIFY.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
Point # 8
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ ... BINARY SEARCH runs in O( log2( n ) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
Point # 9
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
Point # 10
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
Point # 11
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables using ITAB2[] = ITAB1[] is considerably faster than LOOP-APPEND-ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
Point # 12
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
Point # 13
SORT ITAB BY K. makes the program run faster as compared to SORT ITAB.
Internal Tables (contd.)
Hashed and Sorted tables
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access sorted tables are more optimized as compared to hashed tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
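The table definitions are not shown in the original; a minimal sketch, assuming a single integer key field K:

```
TYPES: BEGIN OF ty_line,
         k TYPE i,
       END OF ty_line.

" Hashed table: constant-time access via the full unique key.
DATA: htab TYPE HASHED TABLE OF ty_line WITH UNIQUE KEY k,
" Sorted table: binary-search access, also good for partial key loops.
      stab TYPE SORTED TABLE OF ty_line WITH NON-UNIQUE KEY k.
```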
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access than the same code on the sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly, for partial sequential access STAB runs faster than HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
Split CLOB column to improve performance
Hi All,
We have a transactional table which has 3 columns, one of which is a CLOB that holds XML data. Inserts arrive at 35K/hr on this table and the data is deleted as soon as the job is completed, so at any time the table holds fewer than 1000 records.
The XML data contains binary info of images, the size of each XML file ranges anywhere between 200KB and 600KB, and the elapsed time for each insert varies from 1 to 2 secs depending on the concurrency. As we need to achieve 125K/hour soon, we were planning a few modifications at table level.
1. Increase the CHUNK size from 8KB to 32KB.
2. Disable logging for the table, CLOB and index.
3. Disable flashback for database.
4. Move the table to a non-default blocksize of 32KB (the default is 8KB).
5. Increase the SDU value.
6. Split the XML data and store it on multiple CLOB columns.
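For reference, a hedged DDL sketch of points 2 and 4 combined with the SECUREFILE idea raised later in the thread. All object names (xml_jobs, xml_payload, ts_lob_32k) are placeholders, OMF/ASM is assumed for the datafile clause, and sizes are illustrative only:

```sql
-- A 32K tablespace needs a matching buffer cache first (the 2G cache
-- discussed below); SCOPE=SPFILE because the parameter is not dynamic
-- in all configurations.
ALTER SYSTEM SET db_32k_cache_size = 2G SCOPE=SPFILE;

CREATE TABLESPACE ts_lob_32k DATAFILE SIZE 10G BLOCKSIZE 32K;

-- Move the LOB to a SECUREFILE in that tablespace with NOLOGGING.
-- SECUREFILE manages its own space allocation, so no CHUNK clause
-- is needed (CHUNK is a BASICFILE parameter).
ALTER TABLE xml_jobs MOVE
  LOB (xml_payload) STORE AS SECUREFILE (
    TABLESPACE ts_lob_32k
    NOCACHE
    NOLOGGING
  );
```

Whether NOLOGGING and NOCACHE are acceptable depends on the recoverability requirement; in this thread the poster states the client can replay the transactions, which is what makes disabling logging an option at all.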
We don't do any update to this table. Its only INSERT,SELECT and DELETE operations.
The major wait events I'm seeing during the insert is
1. direct path read
2. direct path write
3. flashback logfile sync
4. SQL*Net more data from client
5. Buffer busy wait
My questions are:
1. If I allocate 2G of memory for the non-default block size and change the CLOB to CACHE, will my other objects in the buffer cache get affected or age out faster?
2. Will moving this table from BASICFILE to SECUREFILE help?
3. Will splitting the XML data across different columns in the same table give a performance boost?
Oracle EE 11.2.0.1,ASM
Thanks,
Arun

Thanks to all for the replies
@Sybrand
Please answer first whether the column is stored in a separate lobsegment.
No. The table, index, LOB and LOB index all use the same tablespace. I missed adding this point (moving to a separate tablespace) as part of the table modifications.
@Hemant
There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
Is this the one you are referring to?
http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
I will test the performance improvement of moving the CLOB column to a different block size and will share the results.
We don't need any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
So there is no need for backup/recovery operations on this table. The client will be able to replay the transactions if any problem occurs.
@Billy
We are not performing XML parsing on the DB side. The flow is: get the XML data from the client -> insert into the table -> client selects from the table -> upon successful completion of the job on the client, the XML data is deleted.
Regarding binding of the LOB from the client side, I will check on that as well to reduce round trips.
By changing the blocksize, I can set db_32K_cache_size=2G and keep this table in CACHE. If I directly put my table into CACHE, it will age out all other objects from the buffer cache, which makes things worse for us.
This insert is part of transaction( Registration of a finger print) and this is the only statement taking time as of now compared to other statements in the transaction.
Thanks,
Arun