Poor performance reading MBEWH table
Hi,
I'm getting serious performance problems when reading the MBEWH table directly.
I did the following tests:
GET RUN TIME FIELD t1.
SELECT mara~matnr
FROM mara
INNER JOIN mbewh ON mbewh~matnr = mara~matnr
INTO TABLE gt_mbewh
WHERE mbewh~lfgja = '2009'.
GET RUN TIME FIELD t2.
GET RUN TIME FIELD t3.
SELECT mbewh~matnr
FROM mbewh
INTO TABLE gt_mbewh
WHERE mbewh~lfgja = '2009'.
GET RUN TIME FIELD t4.
t2 = t2 - t1.
t4 = t4 - t3.
write: 'With join: ', t2.
write /.
write: 'Without join: ', t4.
And as result I got:
With join: 27.166
Without join: 103970.297
All MATNR in MBEWH are in MARA.
MBEWH has 71.745 records and MARA has 705 records.
I created an index for lfgja field in MBEWH.
Why do I get better performance using the inner join?
In production client, MBEW has 68 million records, so any selection takes too much time.
Thanks in advance,
Frisoni
Guilherme, Hermann, Siegfried,
I have just seen this thread and read it from top to bottom, and I would say now is a good time to make a summary.
This is what I got from Guilherme's comments:
1) MBEWH has 71.745 records
2) There are two hidden clients in the same server with 50 million records.
3) Count Distinct mandt = 6
4) In production client, MBEW has 68 million records
First measurement
With join : 27.166
Without join : 103970.297
Second measurement
With join : 96.217
Without join : 93.781 << now with hint
The original question was to understand why using the JOIN made the query much faster.
So the conclusions are:
1) Execution times are now really much better (comparing only the no-join case, which is the one we are working on), and the original "mystery" is gone
2) In this client, MANDT is actually much more selective than the optimizer thinks it is (and it's because of this uneven distribution, as Hermann mentioned, that forcing the index made such a difference)
3) The bad news is that this solution worked because of the special case of your development system, but will probably not help in the production system
4) I suppose the index that Hermann suggested is the best possible thing to do (the table won't be read, assuming you really want only MATNR from MBEWH, and that it wasn't a simplification for illustration purposes); anyway, no one can really expect that getting all entries from MBEWH for a given year will be a fast thing...
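On the point about the index Hermann suggested: the reason such an index avoids reading the table at all can be shown with any SQL engine. Below is a rough sketch using Python's built-in sqlite3 as a stand-in for the SAP database; the table columns and index name are invented for illustration (a real SAP index would presumably also lead with MANDT).

```python
import sqlite3

# sqlite3 stands in for the real database; names are illustrative only.
# An index containing every column the query touches (lfgja in the WHERE
# clause, matnr in the SELECT list) can answer the query from the index
# alone - a "covering index" - with no table access.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mbewh (mandt TEXT, matnr TEXT, lfgja TEXT, lfmon TEXT)")
con.execute("CREATE INDEX mbewh_lfgja_matnr ON mbewh (lfgja, matnr)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT matnr FROM mbewh WHERE lfgja = '2009'"
).fetchall()
print(plan[0][3])  # the plan detail mentions a COVERING INDEX
```

The same principle applies regardless of engine: if the query had selected a column not in the index (say SALK3), the database would have to visit the table for every matching index entry, which is exactly the expensive part for 68 million rows.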
Rui Dantas
Similar Messages
-
Siebel Prod App poor performance during EIM tables data load
Hi Experts,
I have a situation: Siebel Production application performance becomes poor when I start loading data into the Siebel EIM tables during business hours. I'm not executing any EIM jobs during business hours, so how come the database becomes slow and the application gets affected?
I understand that the Siebel application fetches data from the Siebel base tables. In that case, why does the application get very slow when only the EIM tables are being loaded?
FYI - Siebel production Application server has good hardware configuration.
Thanks,
Shaik
You have to talk with your DBA.
Let's say your DB is running from one hard disk (HD).
I guess you can imagine things will slow down when multiple processes start accessing the DB which is running from one HD.
When you start loading the EIM tables, your DB will use a lot of time for writing and has less time to serve the data to the Siebel application server.
The hardware for the Siebel application servers is not really relevant here.
See if you can put the EIM tables on their own partition/hard disks. -
Poor performance after altering tables to InnoDB
I have an application using CF MX, IIS, and MySQL 5.0.37
running on Microsoft Windows Server 2003.
When I originally built the application, access from login to
start page and page to page was very good. But, I started getting
errors because tables were sometimes getting records added or
deleted and sometimes not. I thought the "cftransaction" statements
were protecting my transactions. Then I found out about MyISAM (the
default) vs InnoDB.
So, using MySQLAdmin, I altered the tables to InnoDB. Now,
the transactions work correctly on commits and rollbacks, but the
performance of the application stinks. It now takes 20 seconds to
log in.
The first page involves a fairly involved select statement,
but it hasn't changed at all. It just runs very slowly. Updates
also run slowly.
Is there something else I was supposed to do in addition to
the "alter table" in this environment? The data tables used to be
in /data/saf_data. Now the ibdata file and log files are in /data
and only the ".frm" files are still in saf_data.
I realize I'm asking this question in a CF forum. But, people
here are usually very knowledgable and helpful and I'm desperate.
This is a CF application. Is there anything I need to do for a CF
app to work well with MySQL InnoDB tables? Any configuration or
location stuff to know about?
Help, and Thanks!
The program was also ported in earlier versions; 1.5 years ago we used Forte 6.2 and the performance was OK.
Possibly the program design was based on Windows
features that are inappropriate for Unix. So the principal design didn't change, the only thing is, that we switch to the boost libraries, where we use the thread, regex, filesystem and date-time libraries
Have you tried any other Unix-like system? Linux, AIX, HPUX,
etc? If so, how does the performance compare to
Solaris?
Not at the moment, because the order is customer driven, but HP and Linux are also options.
Also consider machine differences. For example, your
old Ultra-80 system at 450Mhz will not keep up with a
modern x86 or x64 system at 3+ GHz. The clock speed
could account for a factor of 6.
That was my first thought, but as I wrote in an earlier post, the performance test case needs the same time on a 6x1GHz machine (some Sun-Fire-T1000).
Also, how much real memory does the SPARC system have?
4 GB! And during the test run the machine uses less than 30% of this memory.
If the program is not multithreaded, the additional
processors on the Ultra-80 won't help.
But it is!
If it is multithreaded, the default libthread or libpthread on
Solaris 8 does not give the best performance. You can
link with the alternative lwp-based thread library on
Solaris 8 by adding the link-time option
-R /usr/lib/lwp (for 32-bit applications)
-R /usr/lib/lwp/64 (for 64-bit applications)
The running application uses both the thread and the pthread library; can that be a problem? Is it right that the lwp path includes only the normal thread library?
Is there a particular reason why you are using the
obsolete Solaris 8 and the old Sun Studio 10?
Because we have customers who do not upgrade. Can we develop on Solaris 10 with Sun Studio 11 and deploy on 5.8 without risk?
regards
Arno -
Poor performances on tables FAGL_011ZC, FAGL_011PC and FAGL_011VC
For some time we have noticed very poor performance of some programs accessing these tables.
The tables are very small, still standard.
We see these tables do not have any particular indexes, and their buffering is "allowed but disabled".
In your opinion, can we activate buffering for these tables? Are there any contraindications?
Our system is ECC5, fairly up to date at SP level.
The client is looking at TDMS (Test Data Migration Server) to replicate the PRD data into lower systems.
-
Performance problems due to sequential read on tables WBCROSSGT and CROSS
Hello all,
got the SAPNW2004s Sneak Preview ABAP installed. Performance is quite OK. But with certain dictionary operations, like creating new attributes for a class, I experience exceptionally long runtimes and timeout dumps. In SM50 I see a sequential read on table WBCROSSGT. In OSS I can't find anything applicable yet for this release (SAP_BASIS 700, support level 5).
Any suggestions appreciated.
Simon
Hello,
i had exactly the same problem after upgrading from MS SQL 2005 to MS SQl 2008 R2.
Our DEV system was almost completely exhausted and normal operation wasn't possible anymore.
SAP Note 1479008 solved the issue, even though it is only "released" for MaxDB.
Cheers, Christoph -
Adobe Reader XI Shows Poor Performance.
Hi there!
I've been using Acrobat for a few years now, but Acrobat Reader XI is showing poor performance rendering a big PDF file of about 2.8MB (it's an MS Project exported document), and I have tested it with other PDF viewers and it works fine. An AR bug, maybe? FYI, it completely freezes the computer. (I'm using a Core i3, W7 Pro x64, 4GB, so the PC is not the problem.) I have tested on 2 PCs and got the same results.
I just posted the below on another discussion. But read below and give it a try.
I am running Windows 7 64-bit with Adobe Reader 11.0.3 and am experiencing the same issue. Opening PDFs takes a long time: there is a delay, it freezes for a period of time, and then it unfreezes and becomes available.
What I have found to be the issue in my case is that Protected Mode is enabled. Open Adobe Reader, click Edit > Preferences, and go to the Security (Enhanced) section. You can turn "Enable Protected Mode at Startup" off by unchecking the check box.
After that give it a try and see if that doesn't clear up the issue.
I hope Adobe provides an update or new release to address this issue, as it seems to be a problem for quite a lot of folks. Not sure what is causing it, but it shouldn't be that way. Our users are chomping at the bit to turn it off but I am telling them not to for now, as I hope there is an update soon to fix and address the problem.
And don't tell me to turn it off and keep it off either as that is NOT a solution.
Anyway - hope it helps. -
DOI - I_OI_SPREADSHEET, poor performance when reading more than 9999 record
Hi,
Please read this message in the ABAP Performance and Tuning section (DOI - I_OI_SPREADSHEET, poor performance when reading more than 9999 records) and see if you have any advice.
Best Regards,
Marjo
Hi,
I met this issue when I tried to write values to a massive number of cells in an Excel range.
And I solve this issue by using CL_GUI_FRONTEND_SERVICES=>CLIPBOARD_EXPORT.
So, I think you may fix it in the same way.
1. Select range in by the I_OI_SPREADSHEET
2. Call method I_OI_DOCUMENT_PROXY->COPY_SELECTION
3. Call method CL_GUI_FRONTEND_SERVICES=>CLIPBOARD_IMPORT
Cheers, -
Poor performance of the BDB cache
I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
Overview
Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain analyzed data on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
The Database
Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
sequences (maintains record IDs for all other tables)
urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups
urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
downloads (p), downloads.values (s), downloads.xfer (s)
agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
search (p), search.values (s), search.hits (s)
users (p), users.values (s), users.hits (s), users.groups.hits (sf)
errors (p), errors.values (s), errors.hits (s)
dhosts (p), dhosts.values (s)
statuscodes (HTTP status codes)
totals.daily (31 days)
totals.hourly (24 hours)
totals (one record)
countries (a couple of hundred countries)
system (one record)
visits.active (active visits - variable length)
downloads.active (active downloads - variable length)
All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
Database Size
One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
urls (p):
8192 Underlying database page size
2031 Overflow key/data size
1471636 Number of unique keys in the tree
1471636 Number of data items in the tree
193 Number of tree internal pages
577738 Number of bytes free in tree internal pages (63% ff)
55312 Number of tree leaf pages
145M Number of bytes free in tree leaf pages (67% ff)
2620 Number of tree overflow pages
16M Number of bytes free in tree overflow pages (25% ff)
urls.hits (s)
8192 Underlying database page size
2031 Overflow key/data size
2 Number of levels in the tree
823 Number of unique keys in the tree
1471636 Number of data items in the tree
31 Number of tree internal pages
201970 Number of bytes free in tree internal pages (20% ff)
45 Number of tree leaf pages
243550 Number of bytes free in tree leaf pages (33% ff)
2814 Number of tree duplicate pages
8360024 Number of bytes free in tree duplicate pages (63% ff)
0 Number of tree overflow pages
The Testbed
I'm running all these tests using the latest BDB (v4.6) built from the source on Win2K3 server (release version). The test machine is 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like described before at a speed of 20K records/sec.
BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
I also used a code profiler to analyze SSW and BDB performance.
The Problem
Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
The Tests
SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
In memory mode, the entire BDB is dumped to disk at the end of the run. First, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed was as good as stopped.
Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well; eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
Some of the other things I tried/observed:
* I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
* I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
* I experimented with page size, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
* The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
* I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.
I have been able to improve processing speed up to
6-8 times with these two techniques:
1. A separate trickle thread was created that would
periodically call DbEnv::memp_trickle. This works
especially good on multicore machines, but also
speeds things up a bit on single CPU boxes. This
alone improved speed from 2K rec/sec to about 4K
rec/sec.
Hello Stone,
I am facing a similar problem, and I too hope to resolve the same with memp_trickle. I had these queries.
1. what was the % of clean pages that you specified?
2. At what interval was this thread calling memp_trickle?
This would give me a rough idea about how to tune my app. I would really appreciate it if you could answer these queries.
Regards,
Nishith.
>
2. Maintaining multiple secondary databases in real
time proved to be the bottleneck. The code was
changed to create secondary databases at the end of
the run (calling Db::associate with the DB_CREATE
flag), right before the reports are generated, which
use these secondary databases. This improved speed
from 4K rec/sec to 14K rec/sec. -
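The trickle-thread idea from technique 1 above can be sketched generically. This is a plain-Python stand-in, not BDB code: the dict "cache", the flush callback, and the interval/percentage values are all invented for illustration, whereas in BDB the real call is DbEnv::memp_trickle with a clean-page percentage.

```python
import threading
import time

class TrickleThread(threading.Thread):
    """Periodically flush a slice of the dirty cache in the background,
    so foreground writes rarely stall on a full cache."""

    def __init__(self, cache, flush, interval=0.02, clean_pct=20):
        super().__init__(daemon=True)
        self.cache = cache          # dirty "pages" waiting to be written
        self.flush = flush          # callback that persists one entry
        self.interval = interval    # seconds between trickle rounds
        self.clean_pct = clean_pct  # % of the cache to clean per round
        self.stop_event = threading.Event()
        self.flushed = 0

    def run(self):
        while not self.stop_event.wait(self.interval):
            # Snapshot the keys, then write out a fraction of them.
            target = max(1, len(self.cache) * self.clean_pct // 100)
            for key in list(self.cache)[:target]:
                self.flush(key, self.cache.pop(key))
                self.flushed += 1

disk = {}    # stand-in for the database file
cache = {}   # stand-in for the BDB memory pool

trickler = TrickleThread(cache, disk.__setitem__)
trickler.start()
for i in range(200):             # the foreground writer dirties the cache
    cache[i] = i * i
    time.sleep(0.001)
trickler.stop_event.set()
trickler.join()
for key in list(cache):          # final checkpoint-style flush of leftovers
    disk[key] = cache.pop(key)
print(f"trickled {trickler.flushed} entries in the background, {len(disk)} total on disk")
```

As the posters note, this helps most on multicore machines, where the flusher runs truly in parallel with the foreground analysis instead of merely interleaving with it.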
Poor performance and high number of gets on seemingly simple insert/select
Versions & config:
Database : 10.2.0.4.0
Application : Oracle E-Business Suite 11.5.10.2
2 node RAC, IBM AIX 5.3
Here's the insert/select for which I'm struggling to explain why it takes 6 seconds, and why it needs to get > 24,000 blocks:
INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
WIA.ITEM_TYPE = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 4 0
Execute 2 3.44 6.36 2 24297 198 36
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.44 6.36 2 24297 202 36
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Also from the tkprof output, the explain plan and waits - virtually zero waits:
Rows Execution Plan
0 INSERT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 12 0.00 0.00
gc current block 2-way 14 0.00 0.00
db file sequential read 2 0.01 0.01
row cache lock 24 0.00 0.01
library cache pin 2 0.00 0.00
rdbms ipc reply 1 0.00 0.00
gc cr block 2-way 4 0.00 0.00
gc current grant busy 1 0.00 0.00
********************************************************************************
The statement was executed 2 times. I know from slicing up the trc file that:
exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
If I run just the select portion of the statement, using bind values from exe #2, I get a small number of gets (< 10), and < 0.1 secs elapsed.
If I make the insert into an empty, non-partitioned table, I get :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.08 0 137 53 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.08 0 137 53 25
and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.10 10 27 136 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.10 10 27 136 25
So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, and ideas welcomed.
I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
further info on the objects concerned:
query source table :
WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
WF_Item_Attributes tbl : non-partitioned, 160 blocks
insert destination table:
WF_Item_Attribute_Values:
range partitioned on Item_Type, and hash sub-partitioned on Item_Key
both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
Bind values:
exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
thanks and regards
Ivan
Hi Sven,
Thanks for your input.
1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
============= From DBA_Part_Tables : Partition Type / Count =============
PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
RANGE HASH 77 APPS_TS_TX_DATA
1 row selected.
============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
Partition Name TS Name High Value High Val Len
WF_ITEM1 APPS_TS_TX_DATA 'A1' 4
WF_ITEM2 APPS_TS_TX_DATA 'AM' 4
WF_ITEM3 APPS_TS_TX_DATA 'AP' 4
WF_ITEM47 APPS_TS_TX_DATA 'OB' 4
WF_ITEM48 APPS_TS_TX_DATA 'OE' 4
WF_ITEM49 APPS_TS_TX_DATA 'OF' 4
WF_ITEM50 APPS_TS_TX_DATA 'OK' 4
WF_ITEM75 APPS_TS_TX_DATA 'WI' 4
WF_ITEM76 APPS_TS_TX_DATA 'WS' 4
WF_ITEM77 APPS_TS_TX_DATA MAXVALUE 8
77 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_TYPE 1
1 row selected.
PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
Partition Name SUBPARTITION_NAME TS Name High Value High Val Len
WF_ITEM49 SYS_SUBP3326 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3328 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3332 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3331 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3330 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3329 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3327 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3325 APPS_TS_TX_DATA 0
8 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_KEY 1
1 row selected.
from DBA_Segments - just for partition WF_ITEM49 :
Segment Name TSname Partition Name Segment Type BLOCKS Mbytes EXTENTS Next Ext(Mb)
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3332 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3331 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3330 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3329 TblSubPart 16112 125.875 1007 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3328 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3327 TblSubPart 16224 126.75 1014 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3326 TblSubPart 16208 126.625 1013 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3325 TblSubPart 16128 126 1008 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3332 IdxSubPart 59424 464.25 3714 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3331 IdxSubPart 59296 463.25 3706 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3330 IdxSubPart 59520 465 3720 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3329 IdxSubPart 59104 461.75 3694 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3328 IdxSubPart 59456 464.5 3716 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3327 IdxSubPart 60016 468.875 3751 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3326 IdxSubPart 59616 465.75 3726 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3325 IdxSubPart 59376 463.875 3711 .125
sum 4726.5
[the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
The tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management, LOCAL extent mgmt.
regards
Ivan -
Story
I've been using arch for the past four months, dual booting with my old Windows XP. As I'm very fond of Flash games and make my own programs with a cross-platform language, I've found few problems with the migration. One of them was the Adobe Flash Player performance, which was stunningly bad. But everyone was saying that was normal, so I left it as is.
However, one special error always worried me: a seemingly randomly started siren sound coming from the motherboard speaker. Thinking it was an alarm about some fatal kernel error, I had been solving it mostly with reboots.
But then it happened. While playing a graphics-intensive game on Windows shortly after rebooting from Arch, the same siren sound started. It felt like a slap across the face: it was not a kernel error, it was a motherboard overheat alarm.
The Problem
Since the computer was giving overheat signs, I started looking at things from another angle. I noticed that some tasks take unusually long in Arch (e.g. building things from source; Firefox/OpenOffice startup; any graphics-intensive program), especially the Flash Player.
A great example is the game Penguinz, which runs flawlessly in Windows but is unbearably slow in Arch. So slow that it alone caused said overheat twice. And while trying to record a run of another flash game using XVidCap, things went so badly that the game halved its FPS and started ignoring key presses.
Tech Info
Dual Core 3.2 processor
1 gb RAM
256 mb Geforce FX 5500 video card
Running Openbox
Using proprietary NVIDIA driver
TL;DR: poor performance on some tasks. Flash Player is so slow that it overheats the CPU and makes me cry. It's fine on Windows.
Off the top of my head I can think of some reasons: a bad video driver, an unwanted background application messing things up, known Flash Player performance problems, or a Linux/Arch-only ActionScript bug.
Where do you think the problem is?
jwcxz wrote:
Have you looked at your process table for any program with abnormal CPU usage? That seems like the logical place to start. You shouldn't be getting poor performance in anything with that system. I have a 2.0GHz Core 2 Duo and an Intel GMA 965 and I've never had any problems with Flash. It's much better than it used to be.
Pidgin scared me for a while because it froze for no apparent reason. After fixing this, the table contains these two guys here:
%CPU
Firefox: 80%~100%
X: 0~20%
Graphic intensive test, so I think the X usage is normal. It might be some oddity at the Firefox+Linux+Flash sum, maybe a conflict. I'll try another browser.
EDIT:
Did a Javascript benchmark to test both systems and browsers.
Windows XP + Firefox = 4361.4ms
Arch + Firefox = 5146.0ms
So, it's actually a lot slower without even taking Flash into account. If someone knows a platform-independent benchmark to test both systems completely, and not only the browser, feel free to point out.
I think that something is already wrong here and the lack of power saving systems only aggravated the problem, causing overheat.
EDIT2:
Browser performance fixed: migrated to Midori. Flash is still slower than on Windows, but now it's bearable. Pretty neat browser too, and it goes better with the Arch Way. It shouldn't fix the temperature, however.
Applied B's idea, but didn't test yet. I'm not into the mood of playing flash games for two straight hours today.
Last edited by BoppreH (2009-05-03 04:25:20) -
Poor performance and voltage fluctuation.
I'm running two 280x in crossfire which I upgraded from two 6970's. When I'm playing DayZ my FPS never go above 30 FPS. With my 6970's I was well into the 70's. I never changed my video settings when I went from the 6970's to the 280X's. For the most part I have a lot of the graphics low or disabled. Borderlands 2 is a nightmare on the 280X. I drop down into 10 fps area and this has never happened with on my 6970's.
During games my core clock fluctuates between 500MHz and 1020MHz. I have ULPS disabled as well.
My power supply is an Antec HCG-900.
Happened on all these drivers 14.4, 14.6 Beta, and 14.6 RC (currently installed).jwcxz wrote:Have you looked at your process table for any program with abnormal CPU usage? That seems like the logical place to start. You shouldn't be getting poor performance in anything with that system. I have a 2.0GHz Core 2 Duo and an Intel GMA 965 and I've never had any problems with Flash. It's much better than it used to be.
Pidgin scared me for a while because it froze for no apparent reason. After fixing this, the process table shows these two entries:
%CPU
Firefox: 80%~100%
X: 0~20%
Graphic intensive test, so I think the X usage is normal. It might be some oddity at the Firefox+Linux+Flash sum, maybe a conflict. I'll try another browser.
EDIT:
Did a Javascript benchmark to test both systems and browsers.
Windows XP + Firefox = 4361.4ms
Arch + Firefox = 5146.0ms
So, it's actually a lot slower without even taking Flash into account. If someone knows a platform-independent benchmark to test both systems completely, and not only the browser, feel free to point out.
I think that something is already wrong here and the lack of power saving systems only aggravated the problem, causing overheat.
EDIT2:
Browser performance fixed: migrated to Midori. Flash is still slower than on Windows, but now it's bearable. Pretty neat browser too, goes better with the Arch Way. It shouldn't fix the temperature, however.
Applied B's idea, but didn't test yet. I'm not in the mood to play flash games for two straight hours today.
Last edited by BoppreH (2009-05-03 04:25:20) -
Poor performance of BLOB queries using ODBC
I'm getting very poor performance when querying a BLOB column using ODBC. I'm using an Oracle 10g database and the Oracle 10g ODBC driver on Windows XP.
I create two tables:
create table t1 ( x int primary key, y raw(2000) );
create table t2 ( x int primary key, y blob );
Then I load both tables with the same data. Then I run the following queries using ODBC:
SELECT x, y FROM t1;
SELECT x, y FROM t2;
I find that the BLOB query takes about 10 times longer than the RAW query to execute.
However, if I execute the same queries in SQL*Plus, the BLOB query is roughly as fast as the RAW query. So the problem seems to be ODBC-related.
Has anyone else come across this problem?
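A likely factor: many ODBC drivers fetch LOB columns row by row through a separate locator roundtrip, while SQL*Plus prefetches rows in batches, so the same query costs far more network trips over ODBC. The sketch below only illustrates the row-at-a-time vs batched fetch pattern, using sqlite3 as a stand-in (it does not reproduce Oracle's LOB locator behavior; table names mirror the ones above but the setup is hypothetical):

```python
import sqlite3

# Stand-in for the t2 (x int, y blob) table from the post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2 (x INTEGER PRIMARY KEY, y BLOB)")
conn.executemany(
    "INSERT INTO t2 (x, y) VALUES (?, ?)",
    [(i, bytes(2000)) for i in range(500)],
)

# Row-at-a-time fetching: in a real ODBC setup, one roundtrip per row --
# the pattern that makes the BLOB query slow.
cur = conn.execute("SELECT x, y FROM t2")
row_count = sum(1 for _ in iter(cur.fetchone, None))

# Batched fetching: fetchmany() pulls many rows per call, the analogue of
# raising the driver's prefetch/array size so data travels in fewer trips.
cur = conn.execute("SELECT x, y FROM t2")
batches = 0
total = 0
while True:
    rows = cur.fetchmany(100)
    if not rows:
        break
    batches += 1
    total += len(rows)

print(row_count, total, batches)  # both paths read all 500 rows; batched needs 5 calls
```

If the driver exposes a prefetch or array-size setting for LOBs, raising it (or fetching the RAW representation where sizes allow) is usually the first thing to try.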
Thanks.
Hi Biren,
By GUID, are you referring to the Oracle Portal product? -
Performance issues involving tables S031 and S032
Hello gurus,
I am having some performance issues. The program involves accessing data from S031 and S032. I have pasted the SELECT statements below. I have read through the forums for past postings regarding performance, but I wanted to know if there is anything that stands out as being the culprit of very poor performance, and how it can be corrected. I am fairly new to SAP, so I apologize if I've missed an obvious error. From debugging the program, it seems the 2nd select statement is taking a very long time to process.
GT_S032: approx. 40,000 entries
S031: approx. 90,000 entries
MSEG: approx. 115,000 entries
MKPF: approx. 100,000 entries
MARA: approx. 90,000 entries
SELECT
vrsio "Version
werks "Plant
lgort "Storage Location
matnr "Material
ssour "Statistic(s) origin
FROM s032
INTO TABLE gt_s032
WHERE ssour = space AND vrsio = c_000 AND werks = gw_werks.
IF sy-subrc = 0.
SELECT
vrsio "Version
werks "Plant
spmon "Period to analyze - month
matnr "Material
lgort "Storage Location
wzubb "Valuated stock receipts value
wagbb "Value of valuated stock being issued
FROM s031
INTO TABLE gt_s031
FOR ALL ENTRIES IN gt_s032
WHERE ssour = gt_s032-ssour
AND vrsio = gt_s032-vrsio
AND spmon IN r_spmon
AND sptag = '00000000'
AND spwoc = '000000'
AND spbup = '000000'
AND werks = gt_s032-werks
AND matnr = gt_s032-matnr
AND lgort = gt_s032-lgort
AND ( wzubb <> 0 OR wagbb <> 0 ).
ELSE.
WRITE: 'No data selected'(m01).
EXIT.
ENDIF.
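One thing worth knowing about FOR ALL ENTRIES: the database interface deduplicates the driver table and splits it into chunks of IN-lists, and with an *empty* driver table it drops the comparison entirely and reads the whole table, which is exactly why the sy-subrc check above matters. A rough sketch of that mechanism, with hypothetical table and column names, using sqlite3:

```python
import sqlite3

# Miniature stand-in for S031; columns are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s031 (matnr TEXT, werks TEXT, wzubb INTEGER)")
conn.executemany("INSERT INTO s031 VALUES (?, ?, ?)",
                 [(f"M{i:03d}", "1000", i) for i in range(50)])

# Driver table with duplicates, as gt_s032 might contain.
driver = [f"M{i:03d}" for i in range(20)] * 3

def for_all_entries(conn, keys, chunk_size=5):
    """Deduplicate the driver keys and query them in IN-list chunks,
    roughly what the ABAP DB interface does for FOR ALL ENTRIES."""
    unique_keys = sorted(set(keys))
    rows = []
    for i in range(0, len(unique_keys), chunk_size):
        chunk = unique_keys[i:i + chunk_size]
        placeholders = ",".join("?" * len(chunk))
        rows.extend(conn.execute(
            f"SELECT matnr, werks, wzubb FROM s031 "
            f"WHERE matnr IN ({placeholders})", chunk))
    return rows

result = for_all_entries(conn, driver)
print(len(result))  # 20 distinct keys matched, despite 60 driver entries
```

Because duplicates in the driver table only add work before deduplication, sorting gt_s032 and DELETE ADJACENT DUPLICATES on the FAE key fields before the SELECT is a common cheap win.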
SORT gt_s032 BY vrsio werks lgort matnr.
SORT gt_s031 BY vrsio werks spmon matnr lgort.
SELECT
p~werks "Plant
p~matnr "Material
p~mblnr "Document Number
p~mjahr "Document Year
p~bwart "Movement type
p~dmbtr "Amount in local currency
t~shkzg "Debit/Credit indicator
INTO TABLE gt_scrap
FROM mkpf AS h
INNER JOIN mseg AS p
ON h~mblnr = p~mblnr
AND h~mjahr = p~mjahr
INNER JOIN mara AS m
ON p~matnr = m~matnr
INNER JOIN t156 AS t
ON p~bwart = t~bwart
WHERE h~budat >= gw_duepr-begda
AND h~budat <= gw_duepr-endda
AND p~werks = gw_werks.
Thanks so much for your help,
Jayesh
Issue with table s031 and with for all entries.
Hi,
I have the following code, in which the SELECT on S031 takes a very long time and then terminates with a dump. What can I do to avoid exceeding the execution time limit of an ABAP program?
TYPES:
BEGIN OF TY_MTL, " Material Master
MATNR TYPE MATNR, " Material Code
MTART TYPE MTART, " Material Type
MATKL TYPE MATKL, " Material Group
MEINS TYPE MEINS, " Base unit of Measure
WERKS TYPE WERKS_D, " Plant
MAKTX TYPE MAKTX, " Material description (Short Text)
LIFNR TYPE LIFNR, " vendor code
NAME1 TYPE NAME1_GP, " vendor name
CITY TYPE ORT01_GP, " City of Vendor
Y_RPT TYPE P DECIMALS 3, "Yearly receipt
Y_ISS TYPE P DECIMALS 3, "Yearly Consumption
M_OPG TYPE P DECIMALS 3, "Month opg
M_OPG1 TYPE P DECIMALS 3,
M_RPT TYPE P DECIMALS 3, "Month receipt
M_ISS TYPE P DECIMALS 3, "Month issue
M_CLG TYPE P DECIMALS 3, "Month Closing
D_BLK TYPE P DECIMALS 3, "Block Stock,
D_RPT TYPE P DECIMALS 3, "Today receipt
D_ISS TYPE P DECIMALS 3, "Day issues
TL_FL(2) TYPE C,
STATUS(4) TYPE C,
END OF TY_MTL,
BEGIN OF TY_OPG , " Opening File
SPMON TYPE SPMON, " Period to analyze - month
WERKS TYPE WERKS_D, " Plant
MATNR TYPE MATNR, " Material No
BASME TYPE MEINS,
MZUBB TYPE MZUBB, " Receipt Quantity
WZUBB TYPE WZUBB,
MAGBB TYPE MAGBB, " Issues Quantity
WAGBB TYPE WAGBB,
END OF TY_OPG.
DATA:
T_M TYPE STANDARD TABLE OF TY_MTL INITIAL SIZE 0,
WA_M TYPE TY_MTL,
T_O TYPE STANDARD TABLE OF TY_OPG INITIAL SIZE 0,
WA_O TYPE TY_OPG.
DATA: smonth1 TYPE spmon.
SELECT
a~matnr
a~mtart
a~matkl
a~meins
b~werks
INTO TABLE t_m FROM mara AS a
INNER JOIN marc AS b
ON a~matnr = b~matnr
* WHERE a~mtart EQ s_mtart
WHERE a~matkl IN s_matkl
AND b~werks IN s_werks
AND b~matnr IN s_matnr .
SELECT spmon
werks
matnr
basme
mzubb
WZUBB
magbb
wagbb
FROM s031 INTO TABLE t_o
FOR ALL ENTRIES IN t_m
WHERE matnr = t_m-matnr
AND werks IN s_werks
AND spmon le smonth1
AND basme = t_m-meins. -
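For the timeout itself, one standard remedy is to process the result package-wise instead of pulling everything into one internal table, analogous to ABAP's SELECT ... PACKAGE SIZE n ... ENDSELECT, so each unit of work stays bounded. A sketch of the pattern (names illustrative, sqlite3 standing in for the database):

```python
import sqlite3

# Miniature S031 stand-in with per-material receipt quantities.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s031 (matnr TEXT, mzubb INTEGER, magbb INTEGER)")
conn.executemany("INSERT INTO s031 VALUES (?, ?, ?)",
                 [(f"M{i % 10}", i, i * 2) for i in range(1000)])

def packages(cursor, size):
    """Yield result rows in packages of at most `size` rows, the analogue
    of SELECT ... PACKAGE SIZE: process each package, then discard it."""
    while True:
        rows = cursor.fetchmany(size)
        if not rows:
            return
        yield rows

receipts = {}
cur = conn.execute("SELECT matnr, mzubb FROM s031")
for package in packages(cur, 100):
    for matnr, mzubb in package:   # aggregate per package instead of
        receipts[matnr] = receipts.get(matnr, 0) + mzubb  # holding all rows

print(len(receipts), sum(receipts.values()))
```

In ABAP the same idea also keeps memory flat; for a dialog process that still exceeds the time limit, scheduling the report as a background job is the other usual answer.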
XMLTable performance vs. TABLE(XMLSequence())
Hi everybody,
I've faced an issue that I hope I can find help with.
There's a task to parse a large (~15 MB) XML text and update a table based on it. So I've created a procedure that takes one XMLType parameter and does the job. Pretty straightforward. However, the thing is that, if I use XMLTable, which is the preferred method, to parse the XML, it runs for hours. Once I replace XMLTable with TABLE(XMLSequence()) method, the procedure completes in under two minutes on the same machine. Looks very strange to me.
Any ideas what could be causing such a poor performance of XMLTable?
Oracle version is 11.2.0.2.0
h1. Table structure
create table Points (
member_id int not null,
point_id int,
country_numeric character(3),
place national character varying(50),
name national character varying(255),
currencies national character varying(255),
address national character varying(255),
contact national character varying(40),
contact_phone national character varying(40),
details national character varying(255),
enabled_date date,
works national character varying(255),
active character default 1 check (active in (0, 1)) not null,
unique (member_id, point_id)
);
h1. XMLTable method
runs for many hours if input parameter is ~15 MB long
create procedure update_Points (
p_Points in xmltype
) as
begin
insert into Points (member_id, point_id, country_numeric, place, name, currencies, address, contact, contact_phone, details, enabled_date, works)
select
ap.member_id,
ap.point_id,
ap.country_numeric,
ap.place,
ap.name,
ap.currencies,
ap.address,
ap.contact,
ap.contact_phone,
ap.details,
to_date(ap.enabled_date, 'DD.MM.YYYY'),
ap.works
from
xmltable('for $Point in /document/directory/reply[@type=''Points'']/data/row return element Point { attribute member_id { $Point/column[@col=''1'']/@value }, attribute point_id { $Point/column[@col=''2'']/@value }, attribute country_numeric { $Point/column[@col=''3'']/@value }, attribute place { $Point/column[@col=''4'']/@value }, attribute name { $Point/column[@col=''5'']/@value }, attribute currencies { $Point/column[@col=''6'']/@value }, attribute address { $Point/column[@col=''7'']/@value }, attribute contact { $Point/column[@col=''8'']/@value }, attribute contact_phone { $Point/column[@col=''9'']/@value }, attribute details { $Point/column[@col=''10'']/@value }, attribute enabled_date { $Point/column[@col=''11'']/@value }, attribute works { $Point/column[@col=''12'']/@value } }'
passing p_Points
columns
member_id int path '@member_id',
point_id int path '@point_id',
country_numeric character(3) path '@country_numeric',
place national character varying(50) path '@place',
name national character varying(255) path '@name',
currencies national character varying(255) path '@currencies',
address national character varying(255) path '@address',
contact national character varying(40) path '@contact',
contact_phone national character varying(40) path '@contact_phone',
details national character varying(255) path '@details',
enabled_date character(10) path '@enabled_date',
works national character varying(255) path '@works') ap;
end;
h1. TABLE(XMLSequence()) method
runs for 2 minutes with the same input parameter
create procedure update_Points (
p_Points in xmltype
) as
begin
insert into Points (member_id, point_id, country_numeric, place, name, currencies, address, contact, contact_phone, details, enabled_date, works)
select
value(x).extract('row/column[@col=''1'']/@value').getStringVal() member_id,
value(x).extract('row/column[@col=''2'']/@value').getStringVal() point_id,
value(x).extract('row/column[@col=''3'']/@value').getStringVal() country_numeric,
value(x).extract('row/column[@col=''4'']/@value').getStringVal() place,
extractValue(value(x), '/row/column[@col=''5'']/@value') name,
value(x).extract('row/column[@col=''6'']/@value').getStringVal() currencies,
value(x).extract('row/column[@col=''7'']/@value').getStringVal() address,
value(x).extract('row/column[@col=''8'']/@value').getStringVal() contact,
value(x).extract('row/column[@col=''9'']/@value').getStringVal() contact_phone,
value(x).extract('row/column[@col=''10'']/@value').getStringVal() details,
to_date(value(x).extract('row/column[@col=''11'']/@value').getStringVal(), 'DD.MM.YYYY') enabled_date,
value(x).extract('row/column[@col=''12'']/@value').getStringVal() works
from
table(xmlsequence(extract(p_Points, '/document/directory/reply[@type=''Points'']/data/row'))) x;
end;
h1. Small XML sample
<?xml version="1.0"?>
<document>
<directory>
<reply type="Points">
<data>
<row>
<column col="1" value="0"></column>
<column col="2" value=""></column>
<column col="3" value="643"></column>
<column col="4" value="Something"></column>
<column col="5" value="&quot;Sample&quot;"></column>
<column col="6" value=""></column>
<column col="7" value="Blah"></column>
<column col="8" value="Bar"></column>
<column col="9" value="0123456789"></column>
<column col="10" value=""></column>
<column col="11" value="01.01.2010"></column>
<column col="12" value=""></column>
</row>
</data>
</reply>
</directory>
</document>
Edited by: 999663 on Apr 15, 2013 1:21 PM
odie_63 wrote:
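For reference, the row extraction both procedures perform can be checked outside the database. The sketch below parses the small sample with Python's ElementTree (the embedded "Sample" value is re-quoted as &amp;quot; so the document is well-formed XML; this is only a verification aid, not a replacement for either PL/SQL method):

```python
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<document>
  <directory>
    <reply type="Points">
      <data>
        <row>
          <column col="1" value="0"/>
          <column col="2" value=""/>
          <column col="3" value="643"/>
          <column col="4" value="Something"/>
          <column col="5" value="&quot;Sample&quot;"/>
          <column col="6" value=""/>
          <column col="7" value="Blah"/>
          <column col="8" value="Bar"/>
          <column col="9" value="0123456789"/>
          <column col="10" value=""/>
          <column col="11" value="01.01.2010"/>
          <column col="12" value=""/>
        </row>
      </data>
    </reply>
  </directory>
</document>"""

root = ET.fromstring(doc)
rows = []
# Same row pattern as /document/directory/reply[@type='Points']/data/row.
for row in root.findall("./directory/reply[@type='Points']/data/row"):
    # Map col attribute -> value attribute, like the @col=''n'' predicates.
    cols = {c.get("col"): c.get("value") for c in row.findall("column")}
    rows.append((cols["1"], cols["3"], cols["11"]))

print(rows)  # [('0', '643', '01.01.2010')]
```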
>LOL - Should have known :-)
After a fresh start of my database, it shows the following...
SQL> select count(*) from user_objects;
COUNT(*)
530
SQL> var doc clob
exec :doc := dbms_xmlgen.getxml('select * from user_objects')
PL/SQL procedure successfully completed.
SQL> set autotrace traceonly
set timing on
set pages 100
set lines 200
SQL> select *
from xmltable(
'for $i in /ROWSET/ROW
return element r {
element object_name {$i/OBJECT_NAME/text()}
, element object_type {$i/OBJECT_TYPE/text()}
, element status {$i/STATUS/text()}
}'
passing xmlparse(document :doc)
columns object_name varchar2(30) path 'object_name'
, object_type varchar2(19) path 'object_type'
, status varchar2(7) path 'status'
) ;
530 rows selected.
Elapsed: 00:00:01.02
Execution Plan
Plan hash value: 3781821901
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8168 | 49008 | 29 (0)| 00:00:01 |
| 1 | XMLTABLE EVALUATION | | | | | |
Statistics
636 recursive calls
1041 db block gets
1991 consistent gets
35 physical reads
0 redo size
19539 bytes sent via SQL*Net to client
929 bytes received via SQL*Net from client
37 SQL*Net roundtrips to/from client
30 sorts (memory)
0 sorts (disk)
530 rows processed
SQL> select *
from xmltable(
'/ROWSET/ROW'
passing xmlparse(document :doc)
columns object_name varchar2(30) path 'OBJECT_NAME'
, object_type varchar2(19) path 'OBJECT_TYPE'
, status varchar2(7) path 'STATUS'
) ;
530 rows selected.
Elapsed: 00:00:00.06
Execution Plan
Plan hash value: 3781821901
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8168 | 49008 | 29 (0)| 00:00:01 |
| 1 | XMLTABLE EVALUATION | | | | | |
Statistics
2 recursive calls
1041 db block gets
741 consistent gets
0 physical reads
0 redo size
19539 bytes sent via SQL*Net to client
929 bytes received via SQL*Net from client
37 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
530 rows processed
SQL> select extractvalue(value(t), '/ROW/OBJECT_NAME')
, extractvalue(value(t), '/ROW/OBJECT_TYPE')
, extractvalue(value(t), '/ROW/STATUS')
from table(
xmlsequence(
extract(xmltype(:doc), '/ROWSET/ROW')
)
) t ;
530 rows selected.
Elapsed: 00:00:00.27
Execution Plan
Plan hash value: 1186311642
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8168 | 16336 | 29 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE | 8168 | 16336 | 29 (0)| 00:00:01 |
Note
- Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)
Statistics
12 recursive calls
2162 db block gets
847 consistent gets
0 physical reads
0 redo size
19629 bytes sent via SQL*Net to client
929 bytes received via SQL*Net from client
37 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
530 rows processed
SQL> -- STUFF BEING NOW CACHED ---
SQL> select *
from xmltable(
'for $i in /ROWSET/ROW
return element r {
element object_name {$i/OBJECT_NAME/text()}
, element object_type {$i/OBJECT_TYPE/text()}
, element status {$i/STATUS/text()}
}'
passing xmlparse(document :doc)
columns object_name varchar2(30) path 'object_name'
, object_type varchar2(19) path 'object_type'
, status varchar2(7) path 'status'
) ;
SQL> select *
from xmltable(
'/ROWSET/ROW'
passing xmlparse(document :doc)
columns object_name varchar2(30) path 'OBJECT_NAME'
, object_type varchar2(19) path 'OBJECT_TYPE'
, status varchar2(7) path 'STATUS'
) ;
SQL> select extractvalue(value(t), '/ROW/OBJECT_NAME')
, extractvalue(value(t), '/ROW/OBJECT_TYPE')
, extractvalue(value(t), '/ROW/STATUS')
from table(
xmlsequence(
extract(xmltype(:doc), '/ROWSET/ROW')
)
) t ;
530 rows selected.
Elapsed: 00:00:00.06
Execution Plan
Plan hash value: 3781821901
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8168 | 49008 | 29 (0)| 00:00:01 |
| 1 | XMLTABLE EVALUATION | | | | | |
Statistics
0 recursive calls
1041 db block gets
553 consistent gets
0 physical reads
0 redo size
19539 bytes sent via SQL*Net to client
929 bytes received via SQL*Net from client
37 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
530 rows processed
530 rows selected.
Elapsed: 00:00:00.05
Execution Plan
Plan hash value: 3781821901
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8168 | 49008 | 29 (0)| 00:00:01 |
| 1 | XMLTABLE EVALUATION | | | | | |
Statistics
0 recursive calls
1041 db block gets
553 consistent gets
0 physical reads
0 redo size
19539 bytes sent via SQL*Net to client
929 bytes received via SQL*Net from client
37 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
530 rows processed
530 rows selected.
Elapsed: 00:00:00.25
Execution Plan
Plan hash value: 1186311642
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8168 | 16336 | 29 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE | 8168 | 16336 | 29 (0)| 00:00:01 |
Note
- Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)
Statistics
0 recursive calls
2162 db block gets
637 consistent gets
0 physical reads
0 redo size
19629 bytes sent via SQL*Net to client
929 bytes received via SQL*Net from client
37 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
530 rows processed
XQuery FLWOR : 00:00:00.06
XQuery XPath : 00:00:00.05
XMLSeq XPath : 00:00:00.25
As I said, Oracle's Statement of Direction was that it would become "deprecated"; the moment it is officially mentioned in the docs, you will see less and less effort in maintaining (in this case) the proprietary methods...
M. -
Hi All,
I have users complaining of poor performance for TCP applications over a site to site VPN.
I would like to know what to look for when deciding whether we need to reduce the MTU on each side of the VPN.
I don't want to reconfigure the MTU unless I have to, because one of the two sites is the hub, and I would most likely have to configure it for all of the other sites if I configure it for one.
The VPNs run between ASA 5510 devices running 7.08 and 8.21.
thanks very much for any help
Regards
Amanda Lalli-Cafini
I am having an issue with performance since changing our linked server connection to SQL 2012. The query on 2008 R2 ran in 9 seconds and now it takes around 10 minutes. When I trace the query on the two servers, the statement is completely different.
CREATE TABLE [dbo].[PS_OCC_ADDRESS_S](
[EMPLID] [varchar](11) NULL,
[ADDRESS_TYPE] [varchar](4) NULL,
[OCC_ADDR_TYP_DESCR] [varchar](30) NULL,
[ADDRESS1] [varchar](55) NULL,
[ADDRESS2] [varchar](55) NULL,
[ADDRESS3] [varchar](55) NULL,
[CITY] [varchar](30) NULL,
[STATE] [varchar](6) NULL,
[POSTAL] [varchar](12) NULL,
[COUNTY] [varchar](30) NULL,
[COUNTRY] [varchar](3) NULL,
[DESCR] [varchar](30) NULL,
[FERPA] [varchar](1) NULL,
[LASTUPDDTTM] [varchar](75) NULL,
[LASTUPDOPRID] [varchar](30) NULL
) ON [PRIMARY]
Statement:
SELECT EmplId, Address1, Address2, Address3, City, State = substring(State,1,2), Zip = substring(Postal,1,10), County, Country
--INTO tmp_Addr
FROM [Sql03].[DWHCRPT].[dbo].[PS_OCC_ADDRESS_S]
WHERE (Address_Type = 'PERM') and (EmplID IN (
SELECT UserID As EmplID FROM Collegium.dbo.Users
UNION
SELECT EmplID As EmplID FROM Emap.dbo.Applications
UNION
SELECT ID As EmplID FROM HS_Program_Application.dbo.Applications))
Any suggestions would be appreciated
SELECT UserID As EmplID
INTO #temp1
FROM Collegium.dbo.Users
UNION
SELECT EmplID FROM Emap.dbo.Applications
UNION
SELECT ID FROM HS_Program_Application.dbo.Applications
....WHERE (Address_Type = 'PERM') and EmplID IN (select * from #temp1)
Still it does not resolve 9 secs over 10 minutes. Good luck.
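The shape of the temp-table rewrite above — stage the distinct local IDs once, then join against them instead of shipping an IN-subquery across servers — can be sketched locally. This uses sqlite3 in place of the linked server, with simplified stand-in tables and names, purely to show the staging-then-join pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ps_occ_address_s (emplid TEXT, address_type TEXT, city TEXT);
CREATE TABLE users (userid TEXT);
CREATE TABLE applications (emplid TEXT);
INSERT INTO ps_occ_address_s VALUES
  ('1001', 'PERM', 'Springfield'),
  ('1002', 'PERM', 'Shelbyville'),
  ('1003', 'MAIL', 'Springfield');
INSERT INTO users VALUES ('1001');
INSERT INTO applications VALUES ('1002'), ('1004');
-- Stage the distinct IDs once, as the #temp1 suggestion does.
CREATE TEMP TABLE temp1 AS
  SELECT userid AS emplid FROM users
  UNION
  SELECT emplid FROM applications;
""")

# Join against the staged IDs instead of repeating the UNION subquery.
rows = conn.execute("""
  SELECT a.emplid, a.city
  FROM ps_occ_address_s a
  JOIN temp1 t ON t.emplid = a.emplid
  WHERE a.address_type = 'PERM'
  ORDER BY a.emplid
""").fetchall()

print(rows)  # [('1001', 'Springfield'), ('1002', 'Shelbyville')]
```

On a real linked server, the remaining slowness usually comes from where the join is evaluated; comparing the remote statement captured in the trace on both servers (as the poster did) is the right way to see whether the filter is being pushed to the remote side.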