Discoverer 9.0.4.45.02 Performance issues
I am seeing various performance issues with this version of Discoverer, for example:
1. Reports taking a long time, e.g. it takes 25 minutes to display 100 records.
2. CPU spikes while exporting reports to an Excel spreadsheet using Discoverer Viewer / Plus.
I have read many whitepapers on performance issues, but nothing is very specific to Discoverer 10g. Has anybody encountered similar issues?
It depends...
What kind of host do you have?
1) Have you checked memory consumption - perhaps you're swapping a lot?
2) Have you checked CPU consumption - perhaps you need a faster CPU or more CPUs?
3) How about network performance?
Regards,
Martin Malmstrom
Similar Messages
-
Database migrated from Oracle 10g to 11g - Discoverer report performance issue
Hi All,
We are now seeing a Discoverer report performance issue: the report keeps running ever since the database was upgraded from 10g to 11g.
On database 10g the report works fine, but the same report does not work fine on 11g.
In the query I passed an explicit date format (TO_CHAR with 'DD-MON-YYYY') and removed the NVL and TRUNC functions.
The query now runs fine directly against the 11g database, but when I use the same query in Discoverer it does not work and the report just keeps running.
Please advise.
Regards,

Please post exact OS, database and Discoverer versions. After the upgrade, have statistics been gathered? Have you traced the Discoverer query to determine where the performance issue is?
How To Find Oracle Discoverer Diagnostic and Tracing Guides [ID 290658.1]
How To Enable SQL Tracing For Discoverer Sessions [ID 133055.1]
Discoverer 11g: Performance degradation after Upgrade to Database 11g [ID 1514929.1]
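Those notes describe session-level tracing; as a quick sketch (the SID and serial# below are placeholders you would look up in V$SESSION), a Discoverer session can be traced with DBMS_MONITOR:

```sql
-- Find the Discoverer session in V$SESSION, then enable extended
-- SQL trace (with wait events) for just that session.
-- SID 123 / serial# 456 are placeholders.
BEGIN
  DBMS_MONITOR.session_trace_enable(
    session_id => 123,
    serial_num => 456,
    waits      => TRUE,    -- include wait events in the trace
    binds      => FALSE);  -- skip bind variable values
END;
/

-- Turn it off again once the slow report has run.
BEGIN
  DBMS_MONITOR.session_trace_disable(
    session_id => 123,
    serial_num => 456);
END;
/
```

The resulting trace file can then be formatted with tkprof to see where the time goes.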
HTH
Srini -
Performance issues when creating a Report / Query in Discoverer
Hi forum,
Hope you can help; this involves a performance issue when creating a report / query.
I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
Please see attached the SQL Inspector Plan:
Before Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
AND-EQUAL
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_N1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
After Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
TABLE ACCESS FULL GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
Many thanks,
Lance

Hi Rod,
I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
I've been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
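One common workaround (a sketch only - it assumes the Journal Batches folder can expose the raw STATUS column) is to filter on the one-character status code that the DECODE translates, so the optimizer can use an index on STATUS instead of evaluating the DECODE for every row:

```sql
-- Filter on the raw status code instead of the decoded label.
-- 'P' is the code that the view's DECODE maps to 'Posted'.
SELECT b.je_batch_id,
       b.name,
       b.status
FROM   gl.gl_je_batches b
WHERE  b.status = 'P';
```

In Discoverer terms: add the underlying status item to the folder in Administrator and build the condition on that item rather than on the decoded description.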
Lance
DECODE( JOURNAL_BATCH1.STATUS,
'+', 'Unable to validate or create CTA',
'+*', 'Was unable to validate or create CTA',
'-','Invalid or inactive rounding differences account in journal entry',
'-*', 'Modified invalid or inactive rounding differences account in journal entry',
'<', 'Showing sequence assignment failure',
'<*', 'Was showing sequence assignment failure',
'>', 'Showing cutoff rule violation',
'>*', 'Was showing cutoff rule violation',
'A', 'Journal batch failed funds reservation',
'A*', 'Journal batch previously failed funds reservation',
'AU', 'Showing batch with unopened period',
'B', 'Showing batch control total violation',
'B*', 'Was showing batch control total violation',
'BF', 'Showing batch with frozen or inactive budget',
'BU', 'Showing batch with unopened budget year',
'C', 'Showing unopened reporting period',
'C*', 'Was showing unopened reporting period',
'D', 'Selected for posting to an unopened period',
'D*', 'Was selected for posting to an unopened period',
'E', 'Showing no journal entries for this batch',
'E*', 'Was showing no journal entries for this batch',
'EU', 'Showing batch with unopened encumbrance year',
'F', 'Showing unopened reporting encumbrance year',
'F*', 'Was showing unopened reporting encumbrance year',
'G', 'Showing journal entry with invalid or inactive suspense account',
'G*', 'Was showing journal entry with invalid or inactive suspense account',
'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
'I', 'In the process of being posted',
'J', 'Showing journal control total violation',
'J*', 'Was showing journal control total violation',
'K', 'Showing unbalanced intercompany journal entry',
'K*', 'Was showing unbalanced intercompany journal entry',
'L', 'Showing unbalanced journal entry by account category',
'L*', 'Was showing unbalanced journal entry by account category',
'M', 'Showing multiple problems preventing posting of batch',
'M*', 'Was showing multiple problems preventing posting of batch',
'N', 'Journal produced error during intercompany balance processing',
'N*', 'Journal produced error during intercompany balance processing',
'O', 'Unable to convert amounts into reporting currency',
'O*', 'Was unable to convert amounts into reporting currency',
'P', 'Posted',
'Q', 'Showing untaxed journal entry',
'Q*', 'Was showing untaxed journal entry',
'R', 'Showing unbalanced encumbrance entry without reserve account',
'R*', 'Was showing unbalanced encumbrance entry without reserve account',
'S', 'Already selected for posting',
'T', 'Showing invalid period and conversion information for this batch',
'T*', 'Was showing invalid period and conversion information for this batch',
'U', 'Unposted',
'V', 'Journal batch is unapproved',
'V*', 'Journal batch was unapproved',
'W', 'Showing an encumbrance journal entry with no encumbrance type',
'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
'X', 'Showing an unbalanced journal entry but suspense not allowed',
'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
'Z', 'Showing invalid journal entry lines or no journal entry lines',
'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ), -
Hi All,
We are using R12 (12.1)
OS: RHEL 4
We are experiencing performance issues with Discoverer, so could you please provide details of what can be tuned to make it fast?
Edited by: user8936206 on May 7, 2010 5:43 PM

Hi;
please check:
Using Discoverer 10.1.2 with Oracle E-Business Suite Release 12 [ID 373634.1]
http://download-west.oracle.com/docs/html/B13918_03/maint.htm#i1028691
Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite 11i [ID 290807.1]
Information on Earlier JInitiator Versions For Oracle E-Business Suite 11i [ID 232200.1]
Hope it helps
Regards,
Helios -
Performance Issue with Crosstab Reports Using Disco Viewer 10.1.2.48.18
We're experiencing a performance issue (retrieving 40,000 rows) with crosstab reports using Discoverer Viewer 10.1.2.48.18 (more than 1 minute executing "Building Page Axis" or executing a refresh).
Are there parameters to tune (in the pref.txt file) in order to reduce the "Building Page Axis" execution time?
Note: We get the same performance problem using Discoverer Desktop 10.1.2.48.18.
Thanks in advance for your help.

Hi
Well, if the same issue occurs in both Desktop and Viewer then you have your answer. It's not the way that Discoverer is running the workbook, it's the way the workbook has been constructed.
For a start, 40000 rows for a Crosstab is way over the top and WILL cause performance issues. This is because Discoverer has to create a bucket for every data point for every combination of items on the page, side and top axes. The more rows, page items and column headings that you have, the more buckets you have and therefore the longer it will take for Discoverer to work out the contents of every bucket.
Also, whenever you use page items or crosstabs, Discoverer has to retrieve all of the rows for the entire query, not just the first x rows as with a table. This is because it cannot possibly know how many buckets to create until it has all the rows.
You therefore need to:
a) apply sufficient filters to reduce the amount of data being returned to something manageable
b) reduce the number of page items, if used
c) reduce the number of items on the side or top axis of a crosstab
d) reduce the number of complex calculations, especially calculations that would generate a new bucket
If you have a lot of complex calculations, you should consider the use of a materialized view / summary folder to pre-calculate the values.
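A minimal materialized view sketch (table, column and view names here are purely illustrative) would look like this:

```sql
-- Pre-compute the expensive aggregation once, then register the
-- result as a summary folder so Discoverer can query it directly
-- instead of recalculating for every bucket.
CREATE MATERIALIZED VIEW mv_report_summary
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT region,
       period_name,
       COUNT(*)    AS row_count,
       SUM(amount) AS total_amount
FROM   sales_fact
GROUP  BY region, period_name;
```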
Does this help?
Best wishes
Michael Armstrong-Smith
URL: http://learndiscoverer.com
Blog: http://learndiscoverer.blogspot.com -
Report Performance Issue - Activity
Hi gurus,
I'm developing an Activity report using Transactional database (Online real time object).
The purpose of the report is to list all contact-related activities, and all activities NOT related to a contact, by activity owner (user ID).
In order to fulfill that requirement I've created 2 reports:
1) All Activities related to Contact -- Report A
pull in Acitivity ID , Activity Type, Status, Contact ID
2) All Activities not related to Contact UNION All Activities related to Contact (Base report) -- Report B
To get the list of activities not related to a contact, I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query.
<Activity ID not equal to any Activity ID in Report B>
Has anyone encountered a performance issue due to an advanced filter in Analytics before?
Any input is really appreciated.
Thanks in advance,
Fina

Fina,
A UNION is always the last option. If you can get all records in one report, do not use a UNION.
Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
if contact id is null (or = 'Unspecified') then owner name else contact name
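That logic can be sketched as a CASE expression (table and column names are illustrative):

```sql
-- One report, no UNION: fall back to the owner's name whenever the
-- activity has no real contact attached.
SELECT activity_id,
       activity_type,
       status,
       CASE
         WHEN contact_id IS NULL
           OR contact_id = 'Unspecified' THEN owner_name
         ELSE contact_name
       END AS related_name
FROM   activities;
```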
Hopefully, this helps.
Report performance Issue in BI Answers
Hi All,
We have a performance issue with reports: a report runs for more than 10 minutes. We took the query from the session log and ran it in the database directly, where it took no more than 2 minutes. We have verified that proper indexes exist on the WHERE-clause columns.
Could any once suggest to improve the performance in BI answers?
Thanks in advance,

I hope you don't have many CASE statements and complex calculations in Answers.
The next thing to monitor is how many rows of data you are trying to retrieve with the query. If the volume is huge, it takes time to do the formatting in Answers, since you are dumping huge volumes of data. A database (like Teradata) initially returns around 1-2000 records; if you hit 'show all records', then even the database will take a fair amount of time if you are returning many records.
hope it helps
thanks
Prash -
BW BCS cube(0bcs_vc10 ) Report huge performance issue
Hi Masters,
I am working out for a solution for BW report developed in 0bcs_vc10 virtual cube.
Some of the queries are taking more than 15 to 20 minutes to execute the report.
This is a huge performance issue. We are using BW 3.5, and the report is developed in BEx and published through the portal. Has anyone faced a similar problem? Please advise how you tackled this issue, and give the detailed analysis approach you used to resolve it.
Current service pack we are using is
SAP_BW 350 0016 SAPKW35016
FINBASIS 300 0012 SAPK-30012INFINBASIS
BI_CONT 353 0008 SAPKIBIFP8
SEM-BW 400 0012 SAPKGS4012
Best of Luck
Chris
Ravi,
I already did that; it is not helping the performance much. Reports are taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
Regards,
Chris -
This is the question we will try to answer...
What is the bottleneck (hardware) of Adobe Premiere Pro CS6?
I used PPBM5 as a benchmark testing template.
All the data and logs have been collected using performance counters.
First of all, let me describe my computer...
Operating System
Microsoft Windows 8 Pro 64-bit
CPU
Intel Xeon E5 2687W @ 3.10GHz
Sandy Bridge-EP/EX 32nm Technology
RAM
Corsair Dominator Platinum 64.0 GB DDR3
Motherboard
EVGA Corporation Classified SR-X
Graphics
PNY Nvidia Quadro 6000
EVGA Nvidia GTX 680 // Yes, I created bench stats for both card
Hard Drives
16.0GB Romex RAMDISK (RAID)
556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
I have other RAIDs installed, but they are not relevant for the present post...
PSU
Corsair 1000 Watts
After many days of tests, I want to share my results with the community and comment on them.
CPU Introduction
I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
Intro: I tested my Xeon E5 2687W (8 cores with Hyper-Threading - 16 threads) to find out whether programs can use all of it. I used Prime95 to get the result. // I know this seems ordinary, but you will understand soon...
The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
Comment: I put the 3 IO counters (CPU, disk, RAM) for my computer on the graph during the test...
(picture 1)
Disk Introduction
I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
Intro: I tested my 556 GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6 Gb/s, 5 disks, with FastPath chip installed) to see whether I can reach maximum disk usage (0% idle time).
The result: As you can see in picture 2, yes, I can max out my drive at ~1.2 GB/sec read/write, steady!
Comment: I put the 3 IO counters (CPU, disk, RAM) on the graph during the test to see the impact of transferring many GB of data over ~10 sec...
(picture 2)
Now I know my limits! It's time to dig deeper into the subject!
PPBM5 (H.264) Result
I rendered the sequence (H.264) using Adobe Media Encoder.
The result :
My CPU is not used at 100%; it hovers around 50%.
My disk is totally idle!
All process usage is idle except the Adobe Media Encoder process.
The transfer rate looks like a wave (up and down), probably caused by (encode... write... encode... write...). // It's ok, ~5 MB/sec transfer rate!
CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
RAM: more than enough! 39 GB of RAM free after the test! // Excellent
~65 threads opened by Adobe Media Encoder (good - many threads are a sign that the program tries to use many cores!)
GPU load on the card also looks like a wave (up and down): ~40% GPU usage during the encoding process.
GPU RAM usage reaches 1.2 GB (but no problem for the GTX 680, nor for the Quadro 6000 with its 6 GB of RAM!)
Comment/Question: CPU is free (50%), disks are free (99%), GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
Other: The Quadro 6000 & GTX 680 give the same result!
(picture 3)
PPBM5 (Disk Test) Result (RAID LSI)
I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
The result :
My CPU is not used at 100%.
My disk waves and waves again, but stays far from its limit!
All process usage is idle except the Adobe Media Encoder process.
The transfer rate waves up and down, probably caused by (buffering... write... buffering... write...). // It's ok, ~375 MB/sec peak transfer rate! Easy!
CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
RAM: more than enough! 40.5 GB of RAM free after the test! // Excellent
~48 threads opened by Adobe Media Encoder (good - many threads are a sign that the program tries to use many cores!)
GPU load on the card = 0 (this kind of encoding doesn't use the GPU)
GPU RAM usage is 400 MB (not used for encoding)
Comment/Question: CPU is free (65%), disks are free (60%), GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
(picture 4)
PPBM5 (Disk Test) Result (Direct in RAMDrive)
I rendered the same sequence (Disk Test) using Adobe Media Encoder directly to my RAM drive.
Comment/Question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! In the same picture, look at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady), and disk usage never exceeds 30%. CPU is idle (70%), disk is idle (100%), GPU is idle (100%) and RAM is free (63%). // Results like these leave me really confused. It smells like a bug and a big problem with hardware and IO usage in CS6!
(picture 5)
PPBM5 (MPEG-DVD) Result
I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
The result :
My CPU is not used at 100%.
My disk is totally idle!
All process usage is idle except the Adobe Media Encoder process.
The transfer rate waves up and down, probably caused by (encoding... write... encoding... write...). // It's ok, ~2 MB/sec transfer rate! A real joke!
CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
RAM: more than enough! 40 GB of RAM free after the test! // Excellent
~80 threads opened by Adobe Media Encoder (a lot of threads, but that's ok in multi-threaded apps!)
GPU load on the card = 100% (this uses the maximum of my GPU)
GPU RAM usage reaches 1 GB.
Comment/Question: CPU is free (70%), disks are free (98%), GPU is loaded (max), RAM is free (63%); my computer is pushed to its limit during the encoding process only for the GPU. For this kind of encoding, the speed limit is set by the slowest component (the video card's GPU).
Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
(picture 6)
Encoding single clip FULL HD AVCHD to H.264 Result (Premiere Pro CS6)
You can see the result in the picture.
Comment/Question: CPU is free (55%), disks are free (99%), GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during the encoding process. Why? Adobe Premiere seems to have some bug with thread management. My hardware is idle! I understand AVCHD can be very difficult to decode, but where is the waste? My hardware is willing, but the software is not!
(picture 7)
Render composition using 3D Raytracer in After Effects CS6
You can see the result in the picture.
Comment: The GPU seems to be the bottleneck when using After Effects. CPU is free (99%), disks are free (98%), memory is free (60%), and it depends on the settings and type of project.
Other: The Quadro 6000 & GTX 680 give the same rendering time for the composition.
(picture 8)
Conclusion
There is nothing you can do (I think) with CS6 to get better performance right now. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance from it). I have not used a Tesla card with my Quadro, but currently neither Premiere Pro nor After Effects uses multiple GPUs. I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
As for Premiere Pro, I'm speechless! Premiere Pro is not able to get the maximum performance out of my computer - not just 10% or 20% short, but 60% on average. I'm a programmer; multi-threaded apps are difficult to manage and I can understand Adobe's programmers. But if anybody has comments about this post, tricks, or any kind of solution, please comment. It seems to be a bug...
Thank you.

Patrick,
I can't explain everything, but let me give you some background as I understand it.
The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB has increased threefold. An effective write speed of 1,686 MB/s.
There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
Just my $ 0.02 -
Performance Issue for BI system
Hello,
We are facing performance issues with our BI system. It is a pre-production system and its performance is degrading badly every day. While checking the system I found that program buffer swaps are increasing every day, so I was asked to change the parameter abap/buffersize from 300 MB to 500 MB, but still no major improvement is seen in the system.
There is 16 GB of RAM available, and the server is HP-UX with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
The main problem is that running a report or creating a query takes way too long.
Kindly help me.

Hello Siva,
Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and they look normal.
We have 9 dialog processes, 3 background, 2 update and 1 spool.
No one is using the system currently, but in ST02 I can see the swaps are in red.
Buffer HitRatio % Alloc. KB Freesp. KB % Free Sp. Dir. Size FreeDirEnt % Free Dir Swaps DB Accs
Nametab (NTAB) 0
Table definition 99,60 6.798 20.000 29.532 153.221
Field definition 99,82 31.562 784 2,61 20.000 6.222 31,11 17.246 41.248
Short NTAB 99,94 3.625 2.446 81,53 5.000 2.801 56,02 0 2.254
Initial records 73,95 6.625 998 16,63 5.000 690 13,80 40.069 49.528
0
Program 97,66 300.000 1.074 0,38 75.000 67.177 89,57 219.665 725.703
CUA 99,75 3.000 875 36,29 1.500 1.401 93,40 55.277 2.497
Screen 99,80 4.297 1.365 33,35 2.000 1.811 90,55 119 3.214
Calendar 100,00 488 361 75,52 200 42 21,00 0 158
OTR 100,00 4.096 3.313 100,00 2.000 2.000 100,00 0
0
Tables 0
Generic Key 99,17 29.297 1.450 5,23 5.000 350 7,00 2.219 3.085.633
Single record 99,43 10.000 1.907 19,41 500 344 68,80 39 467.978
0
Export/import 82,75 4.096 43 1,30 2.000 662 33,10 137.208
Exp./ Imp. SHM 89,83 4.096 438 13,22 2.000 1.482 74,10 0
SAP Memory Curr.Use % CurUse[KB] MaxUse[KB] In Mem[KB] OnDisk[KB] SAPCurCach HitRatio %
Roll area 2,22 5.832 22.856 131.072 131.072 IDs 96,61
Page area 1,08 2.832 24.144 65.536 196.608 Statement 79,00
Extended memory 22,90 958.464 1.929.216 4.186.112 0 0,00
Heap memory 0 0 1.473.767 0 0,00
Call Stati HitRatio % ABAP/4 Req ABAP Fails DBTotCalls AvTime[ms] DBRowsAff.
Select single 88,59 63.073.369 5.817.659 4.322.263 0 57.255.710
Select 72,68 284.080.387 0 13.718.442 0 32.199.124
Insert 0,00 151.955 5.458 166.159 0 323.725
Update 0,00 378.161 97.884 395.814 0 486.880
Delete 0,00 389.398 332.619 415.562 0 244.495
Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM -
RE: Case 59063: performance issues w/ C TLIB and Forte3M
Hi James,
Could you give me a call, I am at my desk.
I had meetings all day and couldn't respond to your calls earlier.
-----Original Message-----
From: James Min [mailto:jminbrio.forte.com]
Sent: Thursday, March 30, 2000 2:50 PM
To: Sharma, Sandeep; Pyatetskiy, Alexander
Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
Hello,
I just want to reiterate that we are very committed to working on
this issue, and that our goal is to find out the root of the problem. But
first I'd like to narrow down the avenues by process of elimination.
Open Cursor is something that is commonly used in today's RDBMS. I
know that you must test your query in ISQL using some kind of execute
immediate, but Sybase should be able to handle an open cursor. I was
wondering if your Sybase expert commented on the fact that the server is
not responding to commonly used command like 'open cursor'. According to
our developer, we are merely following the API from Sybase, and open cursor
is not something that particularly slows down a query for several minutes
(except maybe the very first time). The logs show that Forte is waiting for
a status from the DB server. Actually, using prepared statements and open
cursor ends up being more efficient in the long run.
Some questions:
1) Have you tried to do a prepared statement with open cursor in your ISQL
session? If so, did it have the same slowness?
2) How big is the table you are querying? How many rows are there? How many
are returned?
3) When there is a hang in Forte, is there disk-spinning or CPU usage in
the database server side? On the Forte side? Absolutely no activity at all?
We actually have a Sybase set-up here, and if you wish, we could test out
your database and Forte PEX here. Since your queries seem to be running
off of only one table, this might be the best option, as we could look at
everything here, in house. To do this:
a) BCP out the data into a flat file. (character format to make it portable)
b) we need a script to create the table and indexes.
c) the Forte PEX file of the app to test this out.
d) the SQL staement that you issue in ISQL for comparison.
If the situation warrants, we can give a concrete example of
possible errors/bugs to a developer. Dial-in is still an option, but to be
able to look at the TOOL code, database setup, etc. without the limitations
of dial-up may be faster and more efficient. Please let me know if you can
provide this, as well as the answers to the above questions, or if you have
any questions.
Regards,
At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
James, Ken:
FYI, see attached response from our Sybase expert, Dani Sasmita. She has
already tried what you suggested and results are enclosed.
++
Sandeep
-----Original Message-----
From: SASMITA, DANIAR
Sent: Wednesday, March 29, 2000 6:43 PM
To: Pyatetskiy, Alexander
Cc: Sharma, Sandeep; Tenerelli, Mike
Subject: Re: FW: Case 59063: Select using LIKE has performance
issues
w/ CTLIB and Forte 3M
We did that trick already.
When it is hanging, I can see what it is doing.
It is doing OPEN CURSOR, but it is not clear exactly which cursor statement
it is trying to open.
When we run the query directly to Sybase, not using Forte, it is clearly
not opening any cursor.
And running it directly to Sybase many times, the response is always
consistently fast.
It is just when the query runs from Forte to Sybase, it opens a cursor.
But again, in the Forte code, Alex is not using any cursor.
In trying to capture the query, we even tried to audit any statement coming
to Sybase. Same thing: just open cursor. No cursor declaration anywhere.
==============================================
James Min
Technical Support Engineer - Forte Tools
Sun Microsystems, Inc.
1800 Harrison St., 17th Fl.
Oakland, CA 94612
james.min@sun.com
510.869.2056
==============================================
Support Hotline: 510-451-5400
CUSTOMERS open a NEW CASE with Technical Support:
http://www.forte.com/support/case_entry.html
CUSTOMERS view your cases and enter follow-up transactions:
http://www.forte.com/support/view_calls.html

Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?

Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?
Thanks Kelly,
The answers would be the following:
1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
The cells will be numeric, free text, drop downs, and some calculated numeric.
Are we reaching the limits for UI performance?
Thanks -
IOS 8.1+ Performance Issue
Hello,
I have encountered a serious performance bug in an Adobe AIR iOS application on devices running iOS 8.1 or later. Within approximately 1-2 minutes the fps drops to 7 or lower without any interaction with the app. This is very noticeable: the app appears frozen for about 0.5 seconds. The bug doesn't appear in every session.
Devices tested: iPad Mini iOS 8.1.1, iPhone 6 iOS 8.2. iPod Touch 4 iOS 6 is working correctly.
Air SDK versions: 15 and 17 tested.
I can track the bug using Adobe Scout. There is a noticeable spike at frame time 1.16, and the framerate drops to 7.0. The app spends much of its time in the function "Runtime overhead". Sometimes the top activity is "Running AS3 attached to frame" or "Waiting For Next Frame" instead of "Runtime overhead".
The bug can be reproduced on an empty application having a one bitmap on stage. Open the app and wait for two minutes and the bug should appear. If not, just close and relaunch the app.
Bugbase link: Bug#3965160 - iOS 8.1+ Performance Issue
Miska Savela

Hi
I'd already activated Messages and entered the 6-digit code I was presented with into my iPhone. I can receive text messages from non-iOS users on my iMac and can reply to those messages.
I just can't send a new message from scratch to a non iOS user :-s
Thanks
Baz -
Returning multiple values from a called tabular form(performance issue)
I hope someone can help with this.
I have a form that calls another form to display a multiple-column tabular list of values (it needs to allow user sorting, so an LOV could not be used).
The user selects one or more records from the list by using check boxes. In order to detect the records selected I loop through the block looking for boxes checked off and return those records to the calling form via a PL/SQL table.
The form displaying the tabular list loads quickly (about 5000 records in the base table). However, when I select one or more values from the list and return to the calling form, it takes a while (about 3-4 minutes) to get back to the calling form with the selected values.
I guess it is going through the block(all 5000 records) looking for boxes checked off and that is what is causing the noticeable pause.
Is this normal given the data volumes I have or are there any other perhaps better techniques or tricks I could use to improve performance. I am using Forms6i.
Sorry for being so long-winded and thanks in advance for any help.

Try writing to your PL/SQL table when the user selects a record (or removing the entry when they deselect) in a when-checkbox-changed trigger. This will eliminate the need for you to loop through a block of 5000 records and should improve your performance.
I am not aware of any performance issues with PL/SQL tables in forms, but if you still have slow performance try using a shared record-group instead. I have used these in the past for exactly the same thing and had no performance problems.
Hope this helps,
Candace Stover
Forms Product Management -
Performance issues with class loader on Windows server
We are observing some performance issues in our application. We are using WebLogic 11g with Java 6 on a Windows 2003 server.
The thread dumps indicate many threads are waiting in queue for the native file methods:
"[ACTIVE] ExecuteThread: '106' for queue: 'weblogic.kernel.Default (self-tuning)'" RUNNABLE
java.io.WinNTFileSystem.getBooleanAttributes(Native Method)
java.io.File.exists(Unknown Source)
weblogic.utils.classloaders.ClasspathClassFinder.getFileSource(ClasspathClassFinder.java:398)
weblogic.utils.classloaders.ClasspathClassFinder.getSourcesInternal(ClasspathClassFinder.java:347)
weblogic.utils.classloaders.ClasspathClassFinder.getSource(ClasspathClassFinder.java:316)
weblogic.application.io.ManifestFinder.getSource(ManifestFinder.java:75)
weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
weblogic.application.utils.CompositeWebAppFinder.getSource(CompositeWebAppFinder.java:71)
weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
weblogic.utils.classloaders.CodeGenClassFinder.getSource(CodeGenClassFinder.java:33)
weblogic.utils.classloaders.GenericClassLoader.findResource(GenericClassLoader.java:210)
weblogic.utils.classloaders.GenericClassLoader.getResourceInternal(GenericClassLoader.java:160)
weblogic.utils.classloaders.GenericClassLoader.getResource(GenericClassLoader.java:182)
java.lang.ClassLoader.getResourceAsStream(Unknown Source)
javax.xml.parsers.SecuritySupport$4.run(Unknown Source)
java.security.AccessController.doPrivileged(Native Method)
javax.xml.parsers.SecuritySupport.getResourceAsStream(Unknown Source)
javax.xml.parsers.FactoryFinder.findJarServiceProvider(Unknown Source)
javax.xml.parsers.FactoryFinder.find(Unknown Source)
javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)
org.ajax4jsf.context.ResponseWriterContentHandler.<init>(ResponseWriterContentHandler.java:48)
org.ajax4jsf.context.ViewResources$HeadResponseWriter.<init>(ViewResources.java:259)
org.ajax4jsf.context.ViewResources.processHeadResources(ViewResources.java:445)
org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:193)
org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:41)
org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:140)
On googling, this seems to be an issue with Java file handling on Windows servers, and I couldn't find a solution yet. Any recommendation or pointer is appreciated.

Hi shubhu,
I just analyzed your partial thread dump data. The problem is that the ajax4jsf framework's ResponseWriterContentHandler internally creates a new instance of DocumentBuilderFactory every time, triggering heavy IO contention because of class loader / JAR file search operations.
Too many of these IO operations under heavy load will create excessive contention and severe performance degradation, regardless of the OS you are running your JVM on.
Please review the link below and see if this is related to your problem. This is a known issue in the JBoss JIRA when using RichFaces / Ajax4jsf.
https://issues.jboss.org/browse/JBPAPP-6166
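If the framework itself cannot be patched right away, the usual mitigation in calling code is to perform the expensive DocumentBuilderFactory.newInstance() lookup once and reuse the factory, keeping one DocumentBuilder per thread (builders are not thread-safe). A minimal Java 6-compatible sketch — the class name CachedDocumentBuilder is illustrative, not part of any framework:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public final class CachedDocumentBuilder {

    // The newInstance() lookup walks the class path (JAR service files)
    // on every call; doing it once avoids the repeated file scans seen
    // in the thread dump above.
    private static final DocumentBuilderFactory FACTORY =
            DocumentBuilderFactory.newInstance();

    // DocumentBuilder instances are not thread-safe, so keep one per thread.
    private static final ThreadLocal<DocumentBuilder> BUILDER =
            new ThreadLocal<DocumentBuilder>() {
                @Override
                protected DocumentBuilder initialValue() {
                    try {
                        return FACTORY.newDocumentBuilder();
                    } catch (ParserConfigurationException e) {
                        throw new IllegalStateException(e);
                    }
                }
            };

    private CachedDocumentBuilder() {
    }

    public static DocumentBuilder get() {
        return BUILDER.get();
    }
}
```

Alternatively, setting the standard JAXP system property (e.g. -Djavax.xml.parsers.DocumentBuilderFactory=&lt;impl class&gt; on the JVM command line) short-circuits the class-path search inside FactoryFinder, which can help even when the framework code is untouched.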
Regards,
P-H
http://javaeesupportpatterns.blogspot.com/
Maybe you are looking for
-
How do I view XMP metadata in Adobe Reader?
How do I view XMP metadata in Adobe Reader? I've created a PDF which (I believe) includes XMP metadata, and I'd obviously like to check that it's been done correctly. However it's not obvious how to view this within Adobe Reader (I'm using 11.0.04 o
-
File Content Conversation in PI7.1
Hi, I am working on a file_Rfc_file scenario. My input file contains plant, say '3000'. I have written file content conversion on the sender side; this is what I have written: item.fieldFixedLengths 1,4 item.endSeparator 'nl' item.fieldNames key,
-
Error in SQL script generated from OWB 10.1.0.4.0 Metadata Export Bridge
Hi... maybe I'm abusing this forum, but when you have questions, you have to look for answers. I use OWB 10.1.0.4 to build some dimensions and one cube. I validated and generated these objects and the result was successful. I made the deployment and
-
Hello, I have a loading table which has a primary key. We insert into the load_ctl table, which has a load_ctl_id, and then SQL-load a CSV file into a table called load_table. Now when we have to process the load_table we work with the primary key from the c
-
Get the Pageid or URL programmatically
Hello, How can I get the ID of the page or the current URL/Page programmatically in PL/SQL ? I'm using a dynamic page within a Portal Page and I would like to know the pageid (of the portal page) or current URL. Which Database Package/view can I use