Select query performance is very slow
Could you please explain BITMAP CONVERSION FROM ROWIDS?
Why does the query below perform BITMAP CONVERSION FROM ROWIDS twice against indexes on the same table?
SQL> SELECT AGG.AGGREGATE_SENTENCE_ID,
       AGG.ENTITY_ID,
       CAR.REQUEST_ID REQUEST_ID
FROM epic.eh_aggregate_sentence agg, om_cpps_active_requests car
WHERE car.aggregate_sentence_id = agg.aggregate_sentence_id
AND car.service_unit = '0ITNMK0020NZD0BE'
AND car.request_type = 'CMNTY WORK'
AND agg.hours_remaining > 0
AND NOT EXISTS (SELECT 'X'
                FROM epic.eh_agg_sent_termination aggSentTerm
                WHERE aggSentTerm.aggregate_sentence_id = agg.aggregate_sentence_id
                AND aggSentTerm.date_terminated <= epic.epicdatenow);
Execution Plan
Plan hash value: 1009556971
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 660 | 99 (2)| 00:00:02 |
|* 1 | HASH JOIN ANTI | | 5 | 660 | 99 (2)| 00:00:02 |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 7 | 658 | 95 (0)| 00:00:02 |
|* 4 | TABLE ACCESS BY INDEX ROWID | OM_CPPS_ACTIVE_REQUESTS | 45 | 2565 | 50 (0)| 00:00:01 |
| 5 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 6 | BITMAP AND | | | | | |
| 7 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 8 | INDEX RANGE SCAN | OM_CA_REQUEST_REQUEST_TYPE | 641 | | 12 (0)| 00:00:01 |
| 9 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 10 | INDEX RANGE SCAN | OM_CA_REQUEST_SERVICE_UNIT | 641 | | 20 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | PK_EH_AGGREGATE_SENTENCE | 1 | | 0 (0)| 00:00:01 |
|* 12 | TABLE ACCESS BY INDEX ROWID | EH_AGGREGATE_SENTENCE | 1 | 37 | 1 (0)| 00:00:01 |
| 13 | TABLE ACCESS BY INDEX ROWID | EH_AGG_SENT_TERMINATION | 25 | 950 | 3 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | DATE_TERMINATED_0520 | 4 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("AGGSENTTERM"."AGGREGATE_SENTENCE_ID"="AGG"."AGGREGATE_SENTENCE_ID")
4 - filter("CAR"."AGGREGATE_SENTENCE_ID" IS NOT NULL)
8 - access("CAR"."REQUEST_TYPE"='CMNTY WORK')
10 - access("CAR"."SERVICE_UNIT"='0ITNMK0020NZD0BE')
11 - access("CAR"."AGGREGATE_SENTENCE_ID"="AGG"."AGGREGATE_SENTENCE_ID")
12 - filter("AGG"."HOURS_REMAINING">0)
14 - access("AGGSENTTERM"."DATE_TERMINATED"<="EPIC"."EPICDATENOW"())
Now this query is giving the correct result, but the performance is slow.
Please help to improve the performance.
SQL> desc epic.eh_aggregate_sentence
Name Null? Type
ENTITY_ID CHAR(16)
AGGREGATE_SENTENCE_ID NOT NULL CHAR(16)
HOURS_REMAINING NUMBER(9,2)
SQL> desc om_cpps_active_requests
Name Null? Type
REQUEST_ID NOT NULL VARCHAR2(16)
AGGREGATE_SENTENCE_ID VARCHAR2(16)
REQUEST_TYPE NOT NULL VARCHAR2(20)
SERVICE_UNIT VARCHAR2(16)
SQL> desc epic.eh_agg_sent_termination
Name Null? Type
TERMINATION_ID NOT NULL CHAR(16)
AGGREGATE_SENTENCE_ID NOT NULL CHAR(16)
DATE_TERMINATED NOT NULL CHAR(20)
user10594152 wrote:
Thanks for your reply.
Still I am getting the same problem.
It is not a problem. Bitmap conversion usually is a very good thing. Using this feature the database can take one or several unselective b*tree indexes, combine them, and do a kind of bitmap selection. This should be slightly faster than a full table scan and much faster than a normal index access.
Your problem is that your filter criteria do not seem to be very useful. Which criterion gives the best reduction of rows?
Also, any kind of NOT EXISTS is potentially not very fast (NOT IN is worse). You can rewrite your query with an OUTER JOIN. Sometimes this will help, but not always.
SELECT AGG.AGGREGATE_SENTENCE_ID ,
AGG.ENTITY_ID,
CAR.REQUEST_ID REQUEST_ID
FROM epic.eh_aggregate_sentence agg
JOIN om_cpps_active_requests car ON car.aggregate_sentence_id = agg.aggregate_sentence_id
LEFT JOIN epic.eh_agg_sent_termination aggSentTerm ON aggSentTerm.aggregate_sentence_id = agg.aggregate_sentence_id and aggSentTerm.date_terminated <= epic.epicdatenow
WHERE car.service_unit = '0ITNMK0020NZD0BE'
AND car.request_type = 'CMNTY WORK'
AND agg.hours_remaining > 0
AND aggSentTerm.aggregate_sentence_id IS NULL
Edited by: Sven W. on Aug 31, 2010 4:01 PM
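A further hedged option, assuming no such index already exists: the plan combines two single-column indexes on OM_CPPS_ACTIVE_REQUESTS via BITMAP AND, so a composite index covering both equality predicates would let the optimizer do a single range scan instead. The index name and column order below are assumptions, not part of the original schema:

```sql
-- Hypothetical composite index: the leading columns match the two
-- equality filters; including aggregate_sentence_id lets the join
-- column be read from the index without visiting the table.
CREATE INDEX om_car_su_type_idx
  ON om_cpps_active_requests (service_unit, request_type, aggregate_sentence_id);
```

Whether this beats the bitmap combination depends on the actual selectivity of the two columns, so compare the execution plans before and after.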
Similar Messages
-
Gmail performance is very slow in Firefox 4.
Gmail performance is very slow in Firefox 4 and much slower when compared to performance in Firefox 3.6.x After clicking a button within the Gmail interface (e.g. refresh, opening an email, sending an email etc.) there is a noticeable wait time in Firefox 4. In Firefox 3.6 and actually in any/all previous versions of Firefox, this was not the case. Using same machine for Firefox 4 as I was for Firefox 3.x. Dell Inspiron Laptop E6400, Dual core Intel 2 GHz, 3.5 GB RAM, 80 GB free HDD space, 32-bit Windows XP SP3. Gmail is set to use HTTPS always.
-
Macbook pro performance is very slow
Mac book Pro system performance is very slow.
Reinstall OS X if you've never done this before:
How to Perform an Archive and Install
An Archive and Install will NOT erase your hard drive, but you must have sufficient free space for a second OS X installation which could be from 3-9 GBs depending upon the version of OS X and selected installation options. The free space requirement is over and above normal free space requirements which should be at least 6-10 GBs. Read all the linked references carefully before proceeding.
1. Be sure to use Disk Utility first to repair the disk before performing the Archive and Install.
Repairing the Hard Drive and Permissions
Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Installer menu (Utilities menu for Tiger, Leopard or Snow Leopard.) After DU loads select your hard drive entry (mfgr.'s ID and drive size) from the left side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or failed. (SMART status is not reported on external Firewire or USB drives.) If the drive is "Verified" then select your OS X volume from the list on the left (sub-entry below the drive entry,) click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported click on the Repair Permissions button. Wait until the operation completes, then quit DU and return to the installer. Now restart normally.
If DU reports errors it cannot fix, then you will need Disk Warrior and/or Tech Tool Pro to repair the drive. If you don't have either of them or if neither of them can fix the drive, then you will need to reformat the drive and reinstall OS X.
2. Do not proceed with an Archive and Install if DU reports errors it cannot fix. In that case use Disk Warrior and/or TechTool Pro to repair the hard drive. If neither can repair the drive, then you will have to erase the drive and reinstall from scratch.
3. Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When you reach the screen to select a destination drive click once on the destination drive then click on the Option button. Select the Archive and Install option. You have an option to preserve users and network preferences. Only select this option if you are sure you have no corrupted files in your user accounts. Otherwise leave this option unchecked. Click on the OK button and continue with the OS X Installation.
4. Upon completion of the Archive and Install you will have a Previous System Folder in the root directory. You should retain the PSF until you are sure you do not need to manually transfer any items from the PSF to your newly installed system.
5. After moving any items you want to keep from the PSF you should delete it. You can back it up if you prefer, but you must delete it from the hard drive.
6. You can now download a Combo Updater directly from Apple's download site to update your new system to the desired version as well as install any security or other updates. You can also do this using Software Update.
An even better choice is to wipe the drive and reinstall Leopard from scratch. Make a backup first, then restore your files from the backup. -
Database performance is very slow
Hi DBAs,
Please help me out!
Application users are complaining that database performance is very slow. It is a 10g DB on an IBM AIX server.
If any changes are needed, please post as soon as possible.
Buffer Cache Hit Ratio 94.69
Chained Row Ratio 0
Database CPU Time Ratio 17.21
Database Wait Time Ratio 82.78
Dictionary Cache Hit Ratio 99.38
Execute Parse Ratio -25.6
Get Hit Ratio 70.62
Latch Hit Ratio 99.65
Library Cache Hit Ratio 99.43
Parse CPU to Elapsed Ratio 8.4
Pin Hit Ratio 81.6
Soft-Parse Ratio 94.29
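Ratios like these rarely identify the bottleneck by themselves; with a Database Wait Time Ratio of 82.78, the usual next step is to see what the sessions are actually waiting on. A hedged starting point, using the standard 10g dynamic performance views:

```sql
-- Top non-idle wait events for currently connected sessions (Oracle 10g).
SELECT event, COUNT(*) AS sessions
FROM   v$session
WHERE  wait_class <> 'Idle'
GROUP  BY event
ORDER  BY sessions DESC;
```

An AWR or Statspack report covering a slow interval gives the same picture with history and ties the waits back to specific SQL.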
=====================================
NAME TYPE VALUE
cursor_sharing string EXACT
cursor_space_for_time boolean FALSE
nls_currency string
nls_dual_currency string
nls_iso_currency string
open_cursors integer 600
optimizer_secure_view_merging boolean TRUE
session_cached_cursors integer 20
sql92_security boolean FALSE
===========================================================
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 4272M
sga_target big integer 4G
pga_aggregate_target big integer 2980M
Total RAM size is 8 GB.
SQL> select username,sid from v$session where username='WPCPRODUSR';
USERNAME SID
WPCPRODUSR 378
WPCPRODUSR 379
WPCPRODUSR 380
WPCPRODUSR 381
WPCPRODUSR 382
WPCPRODUSR 383
WPCPRODUSR 384
WPCPRODUSR 385
WPCPRODUSR 386
WPCPRODUSR 387
WPCPRODUSR 388
WPCPRODUSR 389
WPCPRODUSR 390
WPCPRODUSR 391
WPCPRODUSR 392
WPCPRODUSR 393
WPCPRODUSR 394
WPCPRODUSR 395
WPCPRODUSR 396
WPCPRODUSR 397
WPCPRODUSR 398
WPCPRODUSR 399
WPCPRODUSR 400
WPCPRODUSR 401
WPCPRODUSR 402
WPCPRODUSR 403
WPCPRODUSR 404
WPCPRODUSR 405
WPCPRODUSR 406
WPCPRODUSR 407
WPCPRODUSR 408
WPCPRODUSR 409
WPCPRODUSR 410
WPCPRODUSR 411
WPCPRODUSR 412
WPCPRODUSR 413
WPCPRODUSR 414
WPCPRODUSR 415
WPCPRODUSR 416
WPCPRODUSR 417
WPCPRODUSR 418
WPCPRODUSR 419
WPCPRODUSR 420
WPCPRODUSR 421
WPCPRODUSR 422
WPCPRODUSR 423
WPCPRODUSR 424
WPCPRODUSR 425
WPCPRODUSR 426
WPCPRODUSR 427
WPCPRODUSR 428
WPCPRODUSR 429
WPCPRODUSR 430
WPCPRODUSR 431
WPCPRODUSR 432
WPCPRODUSR 433
WPCPRODUSR 434
WPCPRODUSR 435
WPCPRODUSR 436
WPCPRODUSR 437
WPCPRODUSR 438
WPCPRODUSR 439
WPCPRODUSR 440
WPCPRODUSR 441
WPCPRODUSR 442
WPCPRODUSR 443
WPCPRODUSR 444
WPCPRODUSR 445
WPCPRODUSR 446
WPCPRODUSR 447
WPCPRODUSR 448
WPCPRODUSR 449
WPCPRODUSR 450
WPCPRODUSR 451
WPCPRODUSR 452
WPCPRODUSR 453
WPCPRODUSR 454
WPCPRODUSR 455
WPCPRODUSR 456
WPCPRODUSR 457
WPCPRODUSR 458
WPCPRODUSR 459
WPCPRODUSR 460
WPCPRODUSR 461
WPCPRODUSR 462
WPCPRODUSR 463
WPCPRODUSR 464
WPCPRODUSR 465
WPCPRODUSR 466
WPCPRODUSR 467
WPCPRODUSR 468
WPCPRODUSR 469
WPCPRODUSR 470
WPCPRODUSR 471
WPCPRODUSR 472
WPCPRODUSR 473
WPCPRODUSR 474
WPCPRODUSR 475
WPCPRODUSR 476
WPCPRODUSR 477
WPCPRODUSR 478
WPCPRODUSR 479
WPCPRODUSR 480
WPCPRODUSR 481
WPCPRODUSR 482
WPCPRODUSR 483
WPCPRODUSR 484
WPCPRODUSR 485
WPCPRODUSR 486
WPCPRODUSR 487
WPCPRODUSR 488
WPCPRODUSR 489
WPCPRODUSR 490
WPCPRODUSR 491
WPCPRODUSR 492
WPCPRODUSR 493
WPCPRODUSR 494
WPCPRODUSR 495
WPCPRODUSR 496
WPCPRODUSR 497
WPCPRODUSR 498
WPCPRODUSR 499
WPCPRODUSR 500
WPCPRODUSR 501
WPCPRODUSR 502
WPCPRODUSR 503
WPCPRODUSR 504
WPCPRODUSR 505
WPCPRODUSR 506
WPCPRODUSR 507
WPCPRODUSR 508
WPCPRODUSR 509
WPCPRODUSR 510
WPCPRODUSR 511
WPCPRODUSR 512
WPCPRODUSR 513
WPCPRODUSR 514
WPCPRODUSR 515
WPCPRODUSR 516
WPCPRODUSR 517
WPCPRODUSR 518
WPCPRODUSR 519
WPCPRODUSR 520
WPCPRODUSR 521
WPCPRODUSR 522
WPCPRODUSR 523
WPCPRODUSR 524
WPCPRODUSR 525
148 rows selected. -
Performance is very slow.
I continue to get the spinning color wheel on whatever I try.
It doesn't matter if it's an email, browser or document of any kind; it takes several seconds for anything to happen.
Hello, see how many of these you can answer...
See if the Disk is issuing any S.M.A.R.T errors in Disk Utility...
http://support.apple.com/kb/PH7029
Open Activity Monitor in Applications>Utilities, select All Processes & sort on CPU%, any indications there?
How much RAM & free space do you have also, click on the Memory & Disk Usage Tabs.
Open Console in Utilities & see if there are any clues or repeating messages when this happens.
In the Memory tab, are there a lot of Pageouts? -
SAP SNC Portal DCM screen performance is very slow and times out
Friends,
SAP SNC Portal DCM screen performance is very slow and times out when a user tries to pull data using customer location.
What cleanup activities can we do to improve overall SNC performance?
We did open an OSS message but so far there is no reply from SAP. Has anyone else faced this performance issue?
The user/vendor is complaining about slowness; the query is standard SAP and it is taking more time (table /LIME/NTREE). It looks like the huge number of rows in the /LIME/NTREE table is causing this problem. What are the options to improve the performance?
Thanks in Advance
Hanuman Choudhary
Hi Team,
Please note the advice from SAP below. Does anyone have experience archiving /LIME records?
Please advise how to start and what the steps are for archiving.
Thanks in advance
I had a look at the DCM query performance in PH1 system and figured out
that most of the time is spent at the LIME layer of database. The
following LIME tables are having far too many entries and is causing
the bottleneck during the query execution.
/LIME/NLOG_QUAN - 38,165,467
/LIME/PN_ITEM - 19,116,518
/LIME/PN_ITEM_TB - 19,154,124
These tables are storing the historical information about LIME(stock)
updates. Since these table grow with each change/update of stock
information, it will slow down the performance of the system over a
period of time. And to avoid the slow responses, the tables should
ideally be archived on a periodic basis to keep the data volume as
minimal as possible. You may have to discuss with the Business to
determine the number of days of LIME record you would want to retain
in the system. I would strongly recommend you to consider the LIME
archival retaining the minimum days (<=60 days) of historical
information. You can find more information about the Lime Archival
in the Sap Help link:
http://help.sap.com/saphelp_scm2007/helpdata/en/44/2a83121dde23d1e10000000a1553f7/frameset.htm.
Kindly get in touch with your BASIS consultant for the LIME archival.
The application performance should definitely improve after the LIME
archival. Please do not hesitate to get in touch with me in case you
require any further clarification in this regards.
Best Regards -
Hi Guys,
My Prod db performance is very slow..
I collected the addm report and got some sql queries which are consuming significant memory.
Below out put is from ADDM Report
FINDING 1: 45% impact (9529 seconds)
SQL statements consuming significant database time were found.
RECOMMENDATION 1: SQL Tuning, 21% benefit (4393 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"bxkdtdywnp6xa".
RELEVANT OBJECT: SQL statement with SQL_ID bxkdtdywnp6xa and
PLAN_HASH 2404633280
I just wanted to know what the base-level analysis is from the DBA end.
Regards,
Maddy
user11263705 wrote:
Hi
Patience, Grasshopper.
You posted this follow-up a mere 85 minutes after your previous post.
This forum is not a chat line, and it is not paid support.
No one is responsible for monitoring it and giving a quick response.
Furthermore, it is a global forum. The person with the information you seek may very well live 20 time zones away from you and was going to bed just as you posted. He will not even see your post for several more hours.
Your original post went up in the middle of the night for half the world.
And going into a weekend, at that.
No one with the information you seek is deliberately withholding it until you sound sufficiently desperate. -
Weblogic server performance is very slow and memory consumption is 99%
I am facing one critical issue with the weblogic server..
The server performance is very slow and one of the processes is consuming more than 99% of the memory. Bouncing the server does not resolve the issue.
Can see the memory usage below...
Tasks: 134 total, 2 running, 132 sleeping, 0 stopped, 0 zombie
Cpu(s):100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7990656k total, 7947652k used, 43004k free, 9164k buffers
Swap: 16386260k total, 4691704k used, 11694556k free, 56352k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
10263 oracle 24 0 10.9g 6.5g 14m S 99.2 85.3 34:31.52 java
7409 oracle 16 0 12764 768 508 S 0.3 0.0 0:16.45 top
Can somebody help me with this?
Thanks in advance.
-Prasad
Use the WebLogic forum:
WebLogic Server - Upgrade / Install / Environment / Migration -
I am doing an acquisition and displaying the data on graphs. When I run the program it is slow. I think it is because I have the number of scans to read tied to my scan rate. It takes the number of seconds I want to display on the chart times the scan rate and feeds that into the number of samples to read at a time from the AI read. The problem is that it stalls until the data points are acquired and displayed, so I cannot click or change values on the front panel until the updates occur on the graph. What can I do to help this?
On Fri, 15 Aug 2003 11:55:03 -0500 (CDT), HAL wrote:
>My performance is very slow when I run graphs. How do I increase the
>speed at which I can do other things while the data is being updated
>and displayed on the graphs?
>
>I am doing an an aquisition and displaying the data on graphs. When I
>run the program it is slow. I think because I have the number of
>scans to read associated with my scan rate. It takes the number of
>seconds I want to display on the chart times the scan rate and feeds
>that into the number of samples to read at a time from the AI read.
>The problem is that it stalls until the data points are aquired and
>displayed so I cannot click or change values on the front panel until
>the updates occur on the graph. What can I do to be able to help
>this?
It may also be your graphics card. LabVIEW can max the CPU and your
screen may not be refreshing very fast.
--Ray
"There are very few problems that cannot be solved by
orders ending with 'or die.' " -Alistair J.R Young -
Overall Performance is very slow
Hi,
I have 4GB in the PI-DEV server and 4GB in the BI-DEV server. The BI server is running very well and fast, but PI-DEV performance is very, very slow; it even takes 5 minutes for me to log in...
There are no users logged in to the DEV server... NW 7.0 2004s, Windows 2003 OS, patch level 16, Oracle 10g.
Can anyone tell me step by step how to check a performance issue? I read the performance book but it was not much help.
Thanks in advance...
Sorry for the late reply.
Basically I have PI-DEV and BI-DEV systems, and both systems have only 4GB memory, the same location, same network, same patch level 16, same kernel level, under Windows 2003 Server and an Oracle DB, but different instances... for some reason PI-DEV performance is very slow, but only in the Integration Engine...
When I go to SXMB_MONI it takes more than 20 minutes to process one transaction, as I mentioned before.
I checked that both systems' memory parameters are the same in configtool
I checked the database tablespaces
I checked the heap memory parameters
I deleted the AFG_XI_MSG table entries; no messages are on hold
Something is really wrong in the Integration Engine
Dear Experts,
We are facing an issue: the SM tool's performance is very slow.
It takes a minimum of 4 minutes to open an SM, and the same when opening/raising a CR; previously in OVSC it used to take only 20 seconds.
It is not only the raising of CRs; it affects all the modules in the tool.
We are facing the slowness issue in both the thin client and the web client.
Please suggest any remedy to fix the issue.
Thanks & Regards,
Tools Team
Hello,
This specific forum deals with printing software for the Macintosh environment.
It seems your topic is more related to the Windows OS, and I would assume a server environment.
I suggest posting your question in the commercial forums, which should be more suitable for such a question:
http://h30499.www3.hp.com/hpeb/
Best of luck,
Shlomi
Say thanks by clicking the Kudos thumbs-up in the post.
If my post resolved your problem please mark it as an Accepted Solution -
SELECT query performance : One big table Vs many small tables
Hello,
We are using BDB 11g with SQLITE support. I have a question about SELECT query performance when we have one huge table vs. multiple small tables.
Basically, in our application we need to run a SELECT query multiple times, and today we have one huge table. Do you think breaking it into multiple small tables will help?
For test purposes we tried creating multiple tables, but the performance of the SELECT query was more or less the same. Would that be because all tables map to only one database in the backend with key/value pairs, so a lookup (SELECT query) on a small table or a big table makes no difference?
Thanks.
Hello,
There is some information on this topic in the FAQ at:
http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
If this does not address your question, please just let me know.
Thanks,
Sandra -
Mail has 16k messages, and performance is very slow, with loading times taking up to 5 seconds every time I open Mail.
How can I increase performance?
I'm running a MacBook Air 4GB 1.7GHz 10.7.2.
Graham
One possible solution would be to organise your inbox into folders.
It's never really good on any system to have one folder that holds everything.
Try going to the web GUI for that mail account, organise your folders, and move mails from your inbox into the corresponding folders for better organisation.
Several folders that together hold the contents of one big folder will usually load a little quicker, since a folder's content may not be downloaded unless it is viewed.
So having 10 folders with organised content, and your inbox as an area that holds only new emails, would work much, much quicker with IMAP.
Most IMAP servers will only update the contents of a folder when it is viewed. -
BSIS performance is very Slow in query
Dear All,
How can we increase the performance of the following query on BSIS?
select a~saknr as glcode
b~txt50 as gldesc
into corresponding fields of table it_final
from skb1 as a
inner join skat as b
on a~saknr = b~saknr
where a~saknr in p_glcode
and a~bukrs eq company
and b~spras eq 'EN'
and b~ktopl eq 'ABIX'.
sort it_gl by saknr.
The above query gets 220 G/L accounts.
if not it_final[] is initial.
select
bukrs
hkont
augdt
augbl
zuonr
gjahr
belnr
buzei
budat
werks
kostl
aufnr
shkzg
dmbtr
prctr
into corresponding fields of table it_bseg
from bsis
for all entries in it_final
where bukrs eq company
and prctr eq s_prctr
and gjahr eq s_year
and hkont eq it_final-glcode
and budat in s_budat "BETWEEN fromdt AND todt .
The above query takes more than 30 minutes in production.
Please give me suggestions to tune the query.
Regards,
Moon
Edited by: GoldMoon on Jan 13, 2010 4:35 PM
Hi GoldMoon,
(1)
Referring to your last post, it seems the index you created will not help your query performance.
You need to build the index from the fields supplied in your WHERE condition. So, following your initial post in this thread, your index should contain:
- BUKRS (which will be compared with your company variable)
- PRCTR (which will be compared with your s_prctr variable)
- GJAHR (which will be compared with your s_year variable)
- HKONT (which will be compared with your glcode value in your it_final[])
- BUDAT (which will be compared with your s_budat range/select-option).
And to follow the BSIS field order, the above index fields need to be arranged as follows:
- BUKRS
- HKONT
- GJAHR
- BUDAT
- PRCTR
(2).
Try not to use 'INTO CORRESPONDING FIELDS', as it increases table-memory overhead; use 'INTO' instead and make sure the field list and the internal table fields are in the same order.
So, once the index has been created and 'INTO CORRESPONDING FIELDS' has been changed to 'INTO', your selection should look like this:
IF NOT it_final[] IS INITIAL.
SELECT bukrs
hkont
augdt
augbl
zuonr
gjahr
belnr
buzei
budat
werks
kostl
aufnr
shkzg
dmbtr
prctr
FROM bsis
INTO TABLE it_bseg
FOR ALL ENTRIES IN it_final
WHERE bukrs eq company
AND hkont eq it_final-glcode
AND gjahr eq s_year
AND budat in s_budat
AND prctr eq s_prctr.
ENDIF.
Hope it helps -
TaskQueryService Performance is very Slow
Hi All,
I am using TaskQueryService to query tasks from the BPM worklist app.
The performance of the app is very slow.
Retrieving 200 tasks from the worklist app takes around 5 seconds, which is very slow.
For the above I removed the payload and request only basic info, as shown in the code.
I am using remote EJBs for the invocation.
Getting the IWorkflowContext, which includes authentication, takes negligible time.
In the code snippet below I invoke queryTasks() multiple times so that the time used to create the ITaskQueryService is negated. I am only looking at the performance after the first invocation. The subsequent invocations are better by 0.5 seconds, which is the time taken to create the ITaskQueryService.
List<ITaskQueryService.OptionalInfo> optionalInfo =
new ArrayList<ITaskQueryService.OptionalInfo>();
//optionalInfo.add(ITaskQueryService.OptionalInfo.PAYLOAD);
optionalInfo.add(ITaskQueryService.OptionalInfo.GROUP_ACTIONS);
optionalInfo.add(ITaskQueryService.OptionalInfo.CUSTOM_ACTIONS);
optionalInfo.add(ITaskQueryService.OptionalInfo.ALL_ACTIONS);
optionalInfo.add(ITaskQueryService.OptionalInfo.ATTACHMENTS);
optionalInfo.add(ITaskQueryService.OptionalInfo.COMMENTS);
optionalInfo.add(ITaskQueryService.OptionalInfo.SHORT_HISTORY);
Predicate predictAll = new Predicate(TableConstants.WFTASK_STATE_COLUMN,
    Predicate.OP_EQ, IWorkflowConstants.TASK_STATE_ASSIGNED);
if (null != category) {
    Predicate predictCat =
        new Predicate(TableConstants.WFTASK_CATEGORY_COLUMN,
                      Predicate.OP_EQ, category);
    predictAll = new Predicate(predictAll, Predicate.AND, predictCat);
}
if (null != title) {
    Predicate predictTitle =
        new Predicate(TableConstants.WFTASK_TITLE_COLUMN,
                      Predicate.OP_EQ, title);
    predictAll = new Predicate(predictAll, Predicate.AND, predictTitle);
}
Ordering ordering =
    new Ordering(TableConstants.WFTASK_CREATEDDATE_COLUMN, true, true);
ordering.addClause(TableConstants.WFTASK_PRIORITY_COLUMN, true, true);
ITaskQueryService taskService = this.getTaskQueryService();
List<Task> t = null;
for (int i = 0; i < 5; i++) {
    Long startt1 = System.currentTimeMillis();
    t = taskService.queryTasks(this.getWorkFlowContext(userName),
                               null,
                               optionalInfo, filter,
                               null, predictAll,
                               ordering, 0, 10);
    Long startt2 = System.currentTimeMillis();
}
Regards,
Nagesh