Numbers Import and Load Performance Problems
Some initial results of converting a single 1.9MB Excel spreadsheet to Numbers:
_Results using Numbers v1.0_
Import 1.9MB Excel spreadsheet into Numbers: 7 minutes 3.5 seconds
Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 11.7 seconds
_Results using Numbers v1.0.1_
Import 1.9MB Excel spreadsheet into Numbers: 6 minutes 36.1 seconds
Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 5.8 seconds
_Comparison to Excel_
Excel loads the original 1.9MB spreadsheet in 4.2 seconds.
Summary
Numbers v1.0 and v1.0.1 exhibit severe performance problems both when loading their own files and when importing Excel files.
Hello
It seems that you missed a detail.
When a Numbers document is 1.9MB on disk, it may be a 7 or 8 MB file to load.
A Numbers document is not a single file but a package, which is a disguised folder.
The document itself is described in an extremely verbose XML file stored in a gzip archive.
Opening such a document starts with an unpack sequence, which is fast (except perhaps when free space on the storage medium is short).
The unpacked file may easily be 10 times larger than the packed one.
Just an example: the xml.gz file containing the report of my bank operations for 2007 is a 300 KB one, but the expanded one, the one which Numbers must read, is a 4 MB one, yes, 13.3 times the original.
And loading it is not sufficient; this huge file must be "interpreted" to build the display.
Because it is so long, Apple treats it as the TRUE description of the document, and so, each time it must display something, it works like the interpreters that old users like me knew when they used the BASIC available on Apple // machines.
Adding a supplementary stage would have added time to the opening sequence but would have sped up the use of the document.
Of course, it would also have added a supplementary stage during the save process.
I hope that they will adopt this scheme, but of course I don't know if they will do that.
Of course, the problem is much the same when we import a document from Excel or from AppleWorks.
The app reads the original, which is stored in a compact form, then deciphers it to create the XML code. Optimisation would perhaps reduce these tasks a bit, but they will remain time consuming.
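Whether your own documents show the same expansion is easy to check from Terminal. A minimal sketch, assuming a Numbers '08 document named Budget.numbers whose package contains index.xml.gz (the file name and package layout may differ between versions):
# A Numbers document is a package (a folder); the spreadsheet itself is a
# gzip-compressed XML file inside it. gzip -l prints the compressed and
# uncompressed sizes, i.e. how much XML Numbers really has to parse.
gzip -l Budget.numbers/index.xml.gz
# To look at the verbose XML directly, expand a copy:
gunzip -c Budget.numbers/index.xml.gz > /tmp/index.xml
ls -lh /tmp/index.xml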
Yvan KOENIG (from FRANCE, Sunday 27 January 2008 16:46:12)
Similar Messages
-
Import and directory synchronize problem
I'm having a problem importing new pictures into an existing file directory which appears in my Lightroom catalog. The photos import normally but show up as a new directory entry off the root of the Lightroom catalog. When I attempt to move/drag them to the existing Lightroom directory I want, I'm told they cannot be moved because they are already there, which I can confirm in the Windows file system. Strangely, synchronizing the directory in Lightroom does not find the files even though they are physically there but not showing up in the catalog. This is a new problem, but I can't associate it with anything I've changed in Lightroom lately. Can anyone suggest what might be going on and, more importantly, how to fix it? It is very irritating.
I have another problem and I don't know if it is related or not. Lightroom will not remember changes I make to the catalog location through the Edit > Catalog Settings menu, or to my watch directory for auto import through File > Auto Import > Auto Import Settings. It just ignores them and uses what is already there. Again, any suggestions would be greatly appreciated.
The idle-timeout on DSEE was set to none, which I believe is the default. I tried setting it to 1200 and 2400 seconds without success.
h3. get-ldap-data-source-pool-prop
<pre>
client-affinity-bind-dn-filters : any
client-affinity-criteria : connection
client-affinity-ip-address-filters : any
client-affinity-policy : write-affinity-after-write
client-affinity-timeout : 20s
description : -
enable-client-affinity : false
load-balancing-algorithm : proportional
minimum-total-weight : 100
proportion : 100
sample-size : 100
</pre>
h3. get-ldap-data-source-prop
<pre>
bind-dn : none
bind-pwd : none
client-cred-mode : use-client-identity
connect-timeout : 10s
description : -
down-monitoring-interval : inherited
is-enabled : true
is-read-only : false
ldap-address : localhost
ldap-port : ldap
ldaps-port : ldaps
monitoring-bind-dn : none
monitoring-bind-pwd : none
monitoring-bind-timeout : 5s
monitoring-entry-dn : ""
monitoring-entry-timeout : 5s
monitoring-inactivity-timeout : 2m
monitoring-interval : 30s
monitoring-mode : proactive
monitoring-retry-count : 3
monitoring-search-filter : (objectClass=*)
monitoring-search-scope : base
num-bind-incr : 10
num-bind-init : 2
num-bind-limit : 1024
num-read-incr : 10
num-read-init : 2
num-read-limit : 1024
num-write-incr : 10
num-write-init : 2
num-write-limit : 1024
proxied-auth-use-v1 : false
ssl-policy : never
use-read-connections-for-writes : false
use-tcp-keep-alive : true
use-tcp-no-delay : true
</pre> -
Macbook cpu problem extreme frequent slowdown and subs performance problems
Hello
Since summer 2007 we have been experiencing speed problems with one of our Macs, a MacBook bought in 2006. Initially we thought it was just because we were asking too much of the computer, but unfortunately this is not the case. What happens is that while working normally (normal performance) with one or more applications, the computer starts to slow down to a near halt. You notice this because, for example, instead of opening a new window right away it slowly unrolls the window until it is open. At the same time the fan starts to accelerate and often the little color wheel appears instead of the mouse arrow. The computer is warm but not hot. Usually after a while the speed of the computer comes back to normal and the fan stops turning at high speed. When checking CPU usage during such an "episode", Activity Monitor most of the time indicates CPU usage by Safari or Firefox of 100-110%, which strikes me as not being right. Quitting these programs seems (seems!) to return the computer to normal faster. The Apple Store has replaced the memory, no effect; I did a total clean install of the OS and updated it to the latest version of Tiger, no effect. What can be the problem? Is it the logic board as a whole, or the graphics portion? Or is something blocking the vent so that it does not cool sufficiently? We are at a loss what to do.
Any advice would be appreciated. Right now all I can think of is to bring the computer in to an Apple Care Center, probably resulting in a high bill, even though in retrospect this problem started before the expiry of the Apple warranty.
Many thanks for any help.
H van Es wrote:
Hello thanks,
We use websites like Wall Street Journal online, viamichelin.fr, Yahoo, and NCBI. I may have given the wrong impression that it only happens with internet use; it also happens with iMovie or Nikon Capture NX, but I guess we just notice it more with internet use because this computer is mostly used for that, plus e-mail.
H van Es
I took a look at those sites. I noticed several ad/content displays on online.wsj.com which probably use Flash or Java animation/video. Yahoo! typically has tons of animation features (especially stuff like falling snow on holidays), and viamichelin.fr has a bunch of animated ads. NCBI looks to be the nastiest of the bunch.
If your fan is turning on from having multiple websites like those on, I'm not surprised. -
CS3 - Import and Export Quality Problems with Canon S90 .mov files
Hi Folks,
I have spent several hours searching with no results in the forums, and I am a beginner with Premiere, so I apologize for any annoying questions I may post.
Here is the problem I am having. The very first step! Go figure.
I have videos taken with a Canon S90 point-and-shoot camera, in .mov format, 640x480, 30 frames per second.
When I play these videos in QuickTime or Windows Media Player they look great: very sharp and smooth video.
When I import these videos into Premiere Pro CS3 - and don't do anything to them besides play them in either the Source window or the Program window - the quality is much lower. They are not sharp, and have an unusual texture, almost like there is a tiny bit of water on the lens distorting the view as it moves.
I imported the video under the DV-NTSC Standard 48 kHz preset. I have the most up-to-date QuickTime.
I also attempted to improve the view settings to their maximum with no results. I also exported the video to see if it was just the viewing, but the exported video is even worse. Very low quality and not sharp.
Thanks for any suggestions on the correct settings to import these videos without losing quality.
Well, I have read and understand the codec aspect a bit. I have discovered that my files use the avc1 (H.264) codec. Doing more research, it appears that Premiere Pro CS3 has some trouble editing H.264 files.
So from what I gather so far, it looks like my best bet would be to convert the file into another format that is easier for Premiere to edit.
Is that correct?
If so, I am still unsure what to convert it to.
What program is best to use for the conversion?
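For what it's worth, one way to do such a conversion outside of Premiere is with ffmpeg; this is an assumption on my part, since the thread never names a tool, and the file names are placeholders. The NTSC DV target matches the DV-NTSC project preset mentioned above:
# transcode the H.264 .mov into a DV AVI, which CS3 edits natively
ffmpeg -i clip.mov -target ntsc-dv clip.avi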
Thanks again for your help. -
Import and export performance issue
Hi,
There is a table, CLIENT_TEST, having 2,170,178 records.
I am using the following commands to exp and imp, which are taking a long time, around 4 to 5 hours.
Please help to improve the exp/imp performance. I'm using Oracle 10.2.
_Export:_
exp user/paswd@orcl file=<path> tables=(CLIENT_TEST) log=<path> buffer=409600 grants=Y rows=Y compress=N direct = Y
_Import:_
imp user/paswd@orcl file=<path> tables=(CLIENT_TEST) log=<path> commit=Y IGNORE=YES indexes=N
Thanks in advance.
Hi,
you can have a look at the newer kind of export/import: EXPDP and IMPDP. See documentation: http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_overview.htm#SUTIL100
It is faster than the old EXP/IMP.
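A minimal Data Pump sketch for this single table (the directory object dp_dir, its path and the dump/log file names are assumptions; adjust them to your environment). Note that Data Pump writes the dump file on the database server, not on the client:
# one-time setup in SQL*Plus as a DBA: create a directory object and grant access, e.g.
#   CREATE DIRECTORY dp_dir AS '/u01/exports';
#   GRANT READ, WRITE ON DIRECTORY dp_dir TO <user>;
# Data Pump export of the single table
expdp user/paswd@orcl tables=CLIENT_TEST directory=dp_dir dumpfile=client_test.dmp logfile=client_test_exp.log
# Data Pump import; append into the table if it already exists
impdp user/paswd@orcl tables=CLIENT_TEST directory=dp_dir dumpfile=client_test.dmp logfile=client_test_imp.log table_exists_action=append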
Herald ten Dam
http://htendam.wordpress.com -
SQL*Loader performance problem with XML
Hi,
I have to load a 400 MB XML file into my local machine's free Oracle DB.
I have tested a one-record XML file and was able to load it successfully, but the 400 MB load has been frozen for half an hour and has not even started.
Is that normal? Is there any chance I will be able to load it if I just wait?
Is there any faster solution?
I have created the table below:
CREATE TABLE test_xml
(
  COL_ID VARCHAR2(1000),
  IN_FILE XMLTYPE
)
XMLTYPE IN_FILE STORE AS CLOB;
and the control file below:
LOAD DATA
CHARACTERSET UTF8
INFILE 'test.xml'
APPEND
INTO TABLE product_xml
(
  col_id FILLER CHAR(1000),
  in_file LOBFILE(CONSTANT "test.xml") TERMINATED BY EOF
)
Am I doing anything wrong? Thanks for any advice.
SQL*Loader: Release 11.2.0.2.0 - Production on H. Febr. 11 18:57:09 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Control File: prodxml.ctl
Character Set UTF8 specified for all input.
Data File: test.xml
Bad File: test.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 5000
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Table PRODUCT_XML, loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
COL_ID FIRST 1000 CHARACTER
(FILLER FIELD)
IN_FILE DERIVED * EOF CHARACTER
Static LOBFILE. Filename is bv_test.xml
Character Set UTF8 specified for all input.
SQL*Loader-605: Non-data dependent ORACLE error occurred -- load discontinued.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
Table PRODUCT_XML:
0 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 256 bytes(64 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records rejected: 0
Total logical records discarded: 0
Run began on H. Febr. 11 18:57:09 2013
Run ended on H. Febr. 11 19:20:54 2013
Elapsed time was: 00:23:45.76
CPU time was: 00:05:05.50
this is the log
I have truncated everything; I cannot understand why I am not able to load 400 MB into 4 GB.
Windows is 32-bit and not licensed. -
RH Linux and AQ performance problem
I am running RH Linux 7.1 with Oracle 8.1.7. My web application
works fine until a specific point when I begin using AQ. Part of
the AQ initialization in my application is to create 8 DB
connections which poll the DB every 500 ms. These connections
cause the web application to begin processing extremely slowly
(about 30 seconds between pages). There has been a proposal that
this might be due to a Linux threading issue. Any ideas?
As a side note, I first noticed the problem after recreating a DB
with the JServer option included (as I am writing Java stored
procedures too). To eliminate this from concern I have executed
jvmrm.sql to remove the java classes in the database. My
original problem continues to exist.
Thanks,
David
I am assuming you are polling the database for messages in queues to be dequeued every few seconds on these eight connections.
Suggestions -
a) Why don't you use the notification mechanism and let the database tell you when there is a message for you to dequeue? (See the sketch below.)
b) Why do you need eight connections?
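A minimal sketch of suggestion (a), assuming the AQ JMS interface (oracle.jms in aqapi.jar) and a queue named MY_QUEUE owned by schema APPUSER; the host, SID, credentials and queue name are all placeholders:
import javax.jms.*;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;

public class AqNotificationSketch {
    public static void main(String[] args) throws Exception {
        // One listener connection replaces the eight polling connections.
        QueueConnectionFactory factory =
            AQjmsFactory.getQueueConnectionFactory("dbhost", "ORCL", 1521, "thin");
        QueueConnection connection = factory.createQueueConnection("appuser", "secret");
        QueueSession session = connection.createQueueSession(true, Session.CLIENT_ACKNOWLEDGE);

        Queue queue = ((AQjmsSession) session).getQueue("APPUSER", "MY_QUEUE");
        QueueReceiver receiver = session.createReceiver(queue);

        // The database pushes messages to onMessage(); no 500 ms polling loop is needed.
        receiver.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // process the message, then commit the transacted session
            }
        });

        connection.start();
        // keep the process alive in a real application (e.g. inside the web app's lifecycle)
    }
}
-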
Golden Gate Initial Load - Performance Problem
Hello,
I'm using the fastest method of initial load. Direct Bulk Load with additional parameters:
BULKLOAD NOLOGGING PARALLEL SKIPALLINDEXES
Unfortunately, the load of a big table, 734 billion rows (around 30 GB), takes about 7 hours. The same table loaded with a normal INSERT statement in parallel via DB link takes 1 hour 20 minutes.
Why does it take so long using Golden Gate? Am I missing something?
I've also noticed that the load time with and without PARALLEL parameter for BULKLOAD is almost the same.
Regards
Pawel
Hi Bobby,
It's Extract / Replicat using SQL Loader.
Created with following commands
ADD EXTRACT initial-load_Extract, SOURCEISTABLE
ADD REPLICAT initial-load_Replicat, SPECIALRUN
The Extract parameter file:
USERIDALIAS {:GGEXTADM}
RMTHOST {:EXT_RMTHOST}, MGRPORT {:REP_MGR_PORT}
RMTTASK replicat, GROUP {:REP_INIT_NAME}_0
TABLE Schema.Table_name;
The Replicat parameter file:
REPLICAT {:REP_INIT_NAME}_0
SETENV (ORACLE_SID='{:REPLICAT_SID}')
USERIDALIAS {:GGREPADM}
BULKLOAD NOLOGGING NOPARALLEL SKIPALLINDEXES
ASSUMETARGETDEFS
MAP Schema.Table_name, TARGET Schema.Table_tgt_name,
COLMAP(USEDEFAULTS),
KEYCOLS(PKEY),
INSERTAPPEND;
Regards,
Pawel -
990 FXA GD 65 VRM Temp and load voltage problem
Hi, sorry for my bad English.
I have a 990FXA-GD65 and an FX-8320 CPU. The motherboard's VRMs heat up and the voltage drops under load. Using the OCCT test program at 1.4 V and 4.2 GHz, it gives an error within a minute, and I can see the voltage drop and the heat coming from the VRMs. At default settings the VRMs exceed 90 degrees during stress tests.
How can I solve it?
BIOS version photo: http://i.hizliresim.com/n7ozEB.jpg
What are the ambient temperatures in your room? How many case fans does your case have? What kind of CPU cooler are you using?
More air flow in the case = cooler running components. -
Query performance and data loading performance issues
What are the query performance issues we need to take care of? Please explain and let me know the transaction codes. This is urgent.
What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes. This is urgent.
I will reward full points.
Regards,
Guru
BW back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPacket uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
Hope it Helps
Chetan
@CP.. -
Performance Problem - MS SQL 2K and PreparedStatement
Hi all
I am using MS SQL 2k and a PreparedStatement to retrieve data. There is a strange and serious performance problem when the PreparedStatement contains "?" and uses the PreparedStatement.setX() functions to set its values. I have performed the test with the following code.
for (int i = 0; i < 10; i++) {
    try {
        con = DBConnection.getInstance();
        statement = con.prepareStatement(
            "SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
        // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
        // statement.setString(1, cardNo);
        rs = statement.executeQuery();
        if (rs.next()) {
            // ...
        }
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        try {
            rs.close();
            statement.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
Iterations   Time (ms)
1            961
10           1061
200          1803
for (int i = 0; i < 10; i++) {
    try {
        con = DBConnection.getInstance();
        // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
        statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
        statement.setString(1, cardNo);
        rs = statement.executeQuery();
        if (rs.next()) {
            // ...
        }
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        try {
            rs.close();
            statement.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
Iterations   Time (ms)
1            1171
10           2754
100          18817
200          36443
The above tests were performed with the DataDirect JDBC 3.0 driver. The one that uses ? and setString takes much longer to execute, even though it is supposed to be faster because of precompilation of the statement.
I have tried different drivers - the one provided by MS, DataDirect, and the Sprinta JDBC driver - but all suffer from the same problem to a different extent. So I am wondering whether MS SQL doesn't support precompiled statements at all, and whether I would have this performance problem no matter which JDBC driver I use. If so, many O/R mappers cannot be used, because I believe most of them, if not all, use precompiled statements.
Best regards
Edmond
Edmond,
Most JDBC drivers for MS SQL (and I think this includes all the drivers you tested) use sp_executesql to execute PreparedStatements. This is a pretty good solution, as the driver doesn't have to keep any information about the PreparedStatement locally; the server takes care of all the precompiling and caching. And if the statement isn't already precompiled, this is also taken care of transparently by SQL Server.
The problem with this approach is that all names in the query must be fully qualified. This means that the driver has to parse the query you are submitting and make all names fully qualified (by prepending a DB name and schema). This is why creating a PreparedStatement takes so long with these drivers (and why it does so every time you create it, even though it's the same PreparedStatement).
However, the speed advantage of PreparedStatements only becomes visible if you reuse the statement a lot of times.
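To illustrate the reuse point, a minimal sketch in the same fragment style as the code above (cardNo, the cardno table and the DBConnection helper are taken from the question; the loop count is arbitrary):
// Prepare once, bind and execute many times: the preparation cost is paid only once.
Connection con = DBConnection.getInstance();
PreparedStatement statement =
    con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
try {
    for (int i = 0; i < 200; i++) {
        statement.setString(1, cardNo);
        ResultSet rs = statement.executeQuery();
        try {
            if (rs.next()) {
                // process the row
            }
        } finally {
            rs.close();
        }
    }
} finally {
    statement.close();
}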
As for why the PreparedStatement with no placeholder is much faster, I think it is because of internal optimisations (maybe the statement is run as a plain statement?).
As a conclusion, if you can reuse the same PreparedStatement, then the performance hit is not so high. Just ignore it. However, if the PreparedStatement is created each time and only used a few times, then you might have a performance issue. In this case I would recommend you try out the jTDS driver ( http://jtds.sourceforge.net ), which uses a completely different approach: temporary stored procedures are created for PreparedStatements. This means that no parsing is done by the driver and PreparedStatement caching is possible (i.e. the next time you are preparing the same statement it will take much less as the previously submitted procedure will be reused).
Alin. -
How do we improve master data load performance
Hi Experts,
Could you please tell me how we identify master data load performance problems and what can be done to improve master data load performance.
Thanks in Advance.
Nitya
Hi,
-Alpha conversion is defined at infoobject level for objects with data type CHAR.
A characteristic in SAP NetWeaver BI can use a conversion routine like the conversion routine called ALPHA. A conversion routine converts data that a user enters (in so called external format) to an internal format before it is stored on the data base.
The most important conversion routine - due to its common use - is the ALPHA routine that converts purely numeric user input like '4711' into '004711' (assuming that the characteristic value is 6 characters long). If a value is not purely numeric like '4711A' it is left unchanged.
We have found out that in customers systems there are quite often characteristics using a conversion routine like ALPHA that have values on the data base which are not in internal format, e.g. one might find '4711' instead of '004711' on the data base. It could even happen that there is also a value '04711', or ' 4711' (leading space).
This possibly results in data inconsistencies, also for query selection; i.e. if you select '4711', this is converted into '004711', so '04711' won't be selected.
-The check for referential integrity occurs for transaction data and master data if they are flexibly updated. You determine the valid InfoObject values.
- SID generation is a must when loading transaction data with respect to master data, to call master data at the BEx level.
Regards,
rvc -
How to improve query & loading performance.
Hi All,
How to improve query & loading performance.
Thanks in advance.
Rgrds
Shoba
Hi Shoba,
There are a lot of things you can do to improve query and loading performance.
Please refer to OSS note 557870: Frequently asked questions on query performance.
also refer to
weblogs:
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
performance docs on query
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
This is the oss notes of FAQ on query performance
1. What kind of tools are available to monitor the overall Query Performance?
1. BW Statistics
2. BW Workload Analysis in ST03N (Use Export Mode!)
3. Content of Table RSDDSTAT
2. Do I have to do something to enable such tools?
Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools is available to analyze a specific query in detail?
1. Transaction RSRT
2. Transaction RSRTRACE
4. Do I have an overall query performance problem?
i. Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all Info Cubes.
ii. You need to run ST03N in expert mode to get these values
5. What can I do if the database proportion is high for all queries?
Check:
1. If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
3. If Buffers, I/O, CPU, memory on the database server are exhausted?
4. If Cube compression is used regularly
5. If Database partitioning is used (not available on all DB platforms)
6. What can I do if the OLAP proportion is high for all queries?
Check:
1. If the CPUs on the application server are exhausted
2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
7. What can I do if the client proportion is high for all queries?
Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
8. Where can I get specific runtime information for one query?
1. Again you can use ST03N -> BW System Load
2. Depending on the time frame you select, you get historical data or current data.
3. To get to a specific query you need to drill down using the InfoCube name
4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments)
1. High Database Runtime
2. High OLAP Runtime
3. High Frontend Runtime
10. What can I do if a query has a high database runtime?
1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
2. Check if database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
3. Check if the read mode of the query is unfavourable - Recommended (H)
11. What can I do if a query has a high OLAP runtime?
1. Check if a high number of Cells transferred to the OLAP (use "All data" to get value "No. of Cells")
2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
3. Check if a user exit Usage is involved in the OLAP runtime?
4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
5. Check if a proper index on the inclusion table exist
12. What can I do if a query has a high frontend runtime?
1. Check if a very high number of cells and formatting are transferred to the Frontend (use "All data" to get value "No. of Cells") which cause high network and frontend (processing) runtime.
2. Check if frontend PC are within the recommendation (RAM, CPU MHz)
3. Check if the bandwidth for WAN connection is sufficient
and some threads:
how can i increse query performance other than creating aggregates
How to improve query performance ?
Query performance - bench marking
may be helpful
Regards
C.S.Ramesh
[email protected] -
Database migration to MAXDB and Performance problem during R3load import
Hi All Experts,
We want to migrate our SAP landscape from Oracle to MaxDB (SAP DB). We have exported a database of 1.2 TB in 16 hours using the package- and table-level splitting method.
Now I am importing into MaxDB, but the import is running very slowly (more than 72 hours).
Details of the import process are below.
We have been using the distribution monitor to import into the target system with MaxDB database release 7.7. We are using three parallel application servers to import, with distributed R3load processes on each application server with 8 CPUs.
The database system is configured with 8 CPUs (single core) and 32 GB physical RAM. The MaxDB cache size for the DB instance is 24 GB. As per SAP recommendation, we are running 16 parallel R3load processes. Still, the import is too slow at more than 72 hours (not acceptable).
We have split 12 big tables into small units using table splitting, and we have also split packages into smaller ones to run in parallel. We maintained the load order in descending order of table and package size. Still we are not able to improve import performance.
MAXDB parameters are set as per below.
CACHE_SIZE 3407872
MAXUSERTASKS 60
MAXCPU 8
MAXLOCKS 300000
CAT_CACHE_SUPPLY 262144
MaxTempFilesPerIndexCreation 131072
We are using the most recent releases of all required SAP kernel utilities during this process, i.e. R3load, etc.
So now I ask all SAP and MaxDB experts to suggest any possible input to improve the R3load import performance on the MaxDB database.
Every input will be highly appreciated.
Please let me know if I need to provide more details about import.
Regards
Santosh
Hello,
description of parameter:
MaxTempFilesPerIndexCreation (from version 7.7.0.3)
Number of temporary result files in the case of parallel indexing
The database system indexes large tables using multiple server tasks. These server tasks write their results to temporary files. When the number of these files reaches the value of this parameter, the database system has to merge the files before it can generate the actual index. This results in a decline in performance.
As for the maximum value, I wouldn't exceed it; for a 26 GB cache the value 131072 should be sufficient. I used the same value with a 36 GB CACHE_SIZE.
On the other side, do you know which task is time consuming? Is it the table import? The index creation?
Maybe you can run migtime on the import directory to find out.
Stanislav -
Performance problems when running PostgreSQL on ZFS and tomcat
Hi all,
I need help with some analysis and problem solution related to the below case.
The long story:
I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
The configuration of the two is pretty much the same, and the problem therefore seems generic to the setup.
Within a non-global zone I'm running a Tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
NPROC USERNAME SWAP RSS MEMORY TIME CPU
49 postgres 749M 669M 4,7% 7:14:38 13%
1 jboss 2519M 2536M 18% 50:36:40 5,9%
We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (see below). When it settles, everything seems to return to normal. When the problem is acute the application is totally unresponsive.
NPROC USERNAME SWAP RSS MEMORY TIME CPU
1 jboss 3104M 913M 6,4% 0:22:48 0,1%
#sar -g 5 5
SunOS vbn-back 5.10 Generic_142901-03 i86pc 05/28/2010
07:49:08 pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
07:49:13 27.67 316.01 318.58 14854.15 0.00
07:49:18 61.58 664.75 668.51 43377.43 0.00
07:49:23 122.02 1214.09 1222.22 32618.65 0.00
07:49:28 121.19 1052.28 1065.94 5000.59 0.00
07:49:33 54.37 572.82 583.33 2553.77 0.00
Average 77.34 763.71 771.43 19680.67 0.00
Making more memory available to Tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
An unofficial performance evaluation on the database with vacuum analyze took 19 minutes on the server and only 1 minute on a desktop pc. This is horrific when taking the hardware into consideration.
The short story:
I'm trying different steps but running out of ideas. We've read that the database block size and file system block size should match: PostgreSQL uses 8 KB and ZFS 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change.
Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
Any help appreciated and I will try to provide additional information on request if needed
Thanks in advance,
Kasper
raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
You can change the record size by "zfs set recordsize=8k <dataset>"
It will only take effect for newly written data. Not existing data.
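A minimal sketch of that change, assuming the PostgreSQL data lives on a dataset named tank/pgdata mounted at /tank/pgdata (pool, dataset and paths are assumptions). Because the new recordsize only applies to newly written blocks, the data has to be rewritten, for example by copying it into a fresh dataset while PostgreSQL is stopped:
# check the current record size
zfs get recordsize tank/pgdata
# set 8 KB records to match PostgreSQL's block size (affects new writes only)
zfs set recordsize=8k tank/pgdata
# one way to rewrite existing files: create a fresh dataset and copy the data over
zfs create -o recordsize=8k tank/pgdata_new
cp -rp /tank/pgdata/. /tank/pgdata_new/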