Pivot - Performance issue with a large dataset
Hello,
Database version: Oracle 10.2.0.4 on Linux.
I'm using a function that returns a pivot query for a given input "RUN_ID" value.
For example, consider two different "RUN_ID" values (e.g. 119 and 120) with exactly the same dataset.
I have a performance issue when I run the resulting query with "RUN_ID" = 120.
Pivot query:
SELECT MAX (a.plate_index), MAX (a.plate_name), MAX (a.int_well_id),
       MAX (a.row_index_alpha), MAX (a.column_index), MAX (a.is_valid),
       MAX (a.well_type_id), MAX (a.read_index), MAX (a.run_id),
       MAX (DECODE (a.value_type || a.value_index, 'CALC190',        a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC304050301',  a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC306050301',  a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC30050301',   a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC3011050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC104050301',  a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC106050301',  a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC10050301',   a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC1011050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC204050301',  a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC206050301',  a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC20050301',   a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC2011050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC80050301',   a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC70050301',   a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'RAW0',           a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'RAW5030',        a.this_value, NULL)),
       MAX (a.dose), MAX (a.unit), MAX (a.int_plate_id), MAX (a.run_name)
  FROM vw_well_data a
 WHERE a.run_id = :app_run_id
 GROUP BY a.int_well_id, a.read_index
Checking the cursor statistics (V$SQLAREA reports CPU_TIME and ELAPSED_TIME in microseconds):
SELECT sql_fulltext, (cpu_time/1000000) "Cpu Time (s)",
(elapsed_time/1000000) "Elapsed time (s)",
fetches, buffer_gets, disk_reads, executions
FROM v$sqlarea
WHERE parsing_schema_name = 'SCHEMA';
With results:
SQL_FULLTEXT          Cpu Time (s)  Elapsed time (s)  FETCHES  BUFFER_GETS  DISK_READS  EXECUTIONS
query1 (RUN_ID=119)       2.215857          3.589822        1         2216         354           1
query2 (RUN_ID=120)     188.516959        321.974332        3      7685410         368           3
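Since the same statement text runs with two different bind values, it can also help to check whether the cursor has several child cursors with different plans (bind peeking can freeze a plan that was optimized for the first peeked RUN_ID). A hedged sketch; the LIKE pattern is an assumption and should be adjusted to match the real statement text:

```sql
-- Sketch: list child cursors and their plans for the pivot statement.
SELECT sql_id, child_number, plan_hash_value, executions, buffer_gets
  FROM v$sql
 WHERE parsing_schema_name = 'SCHEMA'
   AND sql_text LIKE 'SELECT MAX (a.plate_index)%';
```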
Explain plan for RUN_ID = 119:
PLAN_TABLE_OUTPUT
Plan hash value: 3979963427
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 261 | 98397 | 434 (2)| 00:00:06 |
| 1 | HASH GROUP BY | | 261 | 98397 | 434 (2)| 00:00:06 |
| 2 | VIEW | VW_WELL_DATA | 261 | 98397 | 433 (2)| 00:00:06 |
| 3 | UNION-ALL | | | | | |
|* 4 | HASH JOIN | | 252 | 21168 | 312 (2)| 00:00:04 |
| 5 | NESTED LOOPS | | 249 | 15687 | 112 (2)| 00:00:02 |
|* 6 | HASH JOIN | | 249 | 14442 | 112 (2)| 00:00:02 |
| 7 | TABLE ACCESS BY INDEX ROWID | PLATE | 29 | 464 | 2 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | IDX_PLATE_RUN_ID | 29 | | 1 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 13286 | 544K| 109 (1)| 00:00:02 |
| 10 | TABLE ACCESS BY INDEX ROWID| RUN | 1 | 11 | 1 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | PK_RUN | 1 | | 0 (0)| 00:00:01 |
| 12 | TABLE ACCESS BY INDEX ROWID| WELL | 13286 | 402K| 108 (1)| 00:00:02 |
|* 13 | INDEX RANGE SCAN | IDX_WELL_RUN_ID | 13286 | | 46 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | PK_WELL_TYPE | 1 | 5 | 0 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | WELL_RAW_DATA | 26361 | 540K| 199 (2)| 00:00:03 |
|* 16 | INDEX RANGE SCAN | IDX_WELL_RAW_RUN_ID | 26361 | | 92 (2)| 00:00:02 |
| 17 | NESTED LOOPS | | 9 | 891 | 121 (2)| 00:00:02 |
|* 18 | HASH JOIN | | 9 | 846 | 121 (2)| 00:00:02 |
|* 19 | HASH JOIN | | 249 | 14442 | 112 (2)| 00:00:02 |
| 20 | TABLE ACCESS BY INDEX ROWID | PLATE | 29 | 464 | 2 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | IDX_PLATE_RUN_ID | 29 | | 1 (0)| 00:00:01 |
| 22 | NESTED LOOPS | | 13286 | 544K| 109 (1)| 00:00:02 |
| 23 | TABLE ACCESS BY INDEX ROWID| RUN | 1 | 11 | 1 (0)| 00:00:01 |
|* 24 | INDEX UNIQUE SCAN | PK_RUN | 1 | | 0 (0)| 00:00:01 |
| 25 | TABLE ACCESS BY INDEX ROWID| WELL | 13286 | 402K| 108 (1)| 00:00:02 |
|* 26 | INDEX RANGE SCAN | IDX_WELL_RUN_ID | 13286 | | 46 (0)| 00:00:01 |
| 27 | TABLE ACCESS BY INDEX ROWID | WELL_CALC_DATA | 490 | 17640 | 9 (0)| 00:00:01 |
|* 28 | INDEX RANGE SCAN | IDX_WELL_CALC_RUN_ID | 490 | | 4 (0)| 00:00:01 |
|* 29 | INDEX UNIQUE SCAN | PK_WELL_TYPE | 1 | 5 | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("WELL_RAW_DATA"."RUN_ID"="WELL"."RUN_ID" AND
"WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
6 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
8 - access("PLATE"."RUN_ID"=119)
11 - access("RUN"."RUN_ID"=119)
13 - access("WELL"."RUN_ID"=119)
14 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
16 - access("WELL_RAW_DATA"."RUN_ID"=119)
18 - access("WELL"."RUN_ID"="WELL_CALC_DATA"."RUN_ID" AND
"WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
19 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
21 - access("PLATE"."RUN_ID"=119)
24 - access("RUN"."RUN_ID"=119)
26 - access("WELL"."RUN_ID"=119)
28 - access("WELL_CALC_DATA"."RUN_ID"=119)
29 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
Explain plan for RUN_ID = 120:
PLAN_TABLE_OUTPUT
Plan hash value: 599334230
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 754 | 24 (5)| 00:00:01 |
| 1 | HASH GROUP BY | | 2 | 754 | 24 (5)| 00:00:01 |
| 2 | VIEW | VW_WELL_DATA | 2 | 754 | 23 (0)| 00:00:01 |
| 3 | UNION-ALL | | | | | |
|* 4 | TABLE ACCESS BY INDEX ROWID | WELL_RAW_DATA | 1 | 21 | 3 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 1 | 84 | 9 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 63 | 6 (0)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 58 | 6 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 27 | 3 (0)| 00:00:01 |
| 9 | TABLE ACCESS BY INDEX ROWID| RUN | 1 | 11 | 1 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | PK_RUN | 1 | | 0 (0)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID| PLATE | 1 | 16 | 2 (0)| 00:00:01 |
|* 12 | INDEX RANGE SCAN | IDX_PLATE_RUN_ID | 1 | | 1 (0)| 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID | WELL | 1 | 31 | 3 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | IDX_WELL_RUN_ID | 59 | | 2 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | PK_WELL_TYPE | 1 | 5 | 0 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | IDX_WELL_RAW_DATA_WELL_ID | 2 | | 2 (0)| 00:00:01 |
|* 17 | TABLE ACCESS BY INDEX ROWID | WELL_CALC_DATA | 1 | 36 | 8 (0)| 00:00:01 |
| 18 | NESTED LOOPS | | 1 | 99 | 14 (0)| 00:00:01 |
| 19 | NESTED LOOPS | | 1 | 63 | 6 (0)| 00:00:01 |
| 20 | NESTED LOOPS | | 1 | 58 | 6 (0)| 00:00:01 |
| 21 | NESTED LOOPS | | 1 | 27 | 3 (0)| 00:00:01 |
| 22 | TABLE ACCESS BY INDEX ROWID| RUN | 1 | 11 | 1 (0)| 00:00:01 |
|* 23 | INDEX UNIQUE SCAN | PK_RUN | 1 | | 0 (0)| 00:00:01 |
| 24 | TABLE ACCESS BY INDEX ROWID| PLATE | 1 | 16 | 2 (0)| 00:00:01 |
|* 25 | INDEX RANGE SCAN | IDX_PLATE_RUN_ID | 1 | | 1 (0)| 00:00:01 |
|* 26 | TABLE ACCESS BY INDEX ROWID | WELL | 1 | 31 | 3 (0)| 00:00:01 |
|* 27 | INDEX RANGE SCAN | IDX_WELL_RUN_ID | 59 | | 2 (0)| 00:00:01 |
|* 28 | INDEX UNIQUE SCAN | PK_WELL_TYPE | 1 | 5 | 0 (0)| 00:00:01 |
|* 29 | INDEX RANGE SCAN | IDX_WELL_CALC_RUN_ID | 486 | | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter("WELL_RAW_DATA"."RUN_ID"=120)
10 - access("RUN"."RUN_ID"=120)
12 - access("PLATE"."RUN_ID"=120)
13 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
14 - access("WELL"."RUN_ID"=120)
15 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
16 - access("WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
17 - filter("WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
23 - access("RUN"."RUN_ID"=120)
25 - access("PLATE"."RUN_ID"=120)
26 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
27 - access("WELL"."RUN_ID"=120)
28 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
29 - access("WELL_CALC_DATA"."RUN_ID"=120)
I need some advice to understand the issue and to improve the performance.
Thanks,
Grégory
Hello,
Thanks for your response.
Stats were computed recently with the DBMS_STATS package (case 2), and we have a histogram on the RUN_ID columns.
I tried the deprecated ANALYZE method (case 1) and obtained better results!
DECLARE
-- Get tables used in the view vw_well_data --
CURSOR c1
IS
SELECT table_name, last_analyzed
FROM user_tables
WHERE table_name LIKE 'WELL%';
BEGIN
FOR r1 IN c1
LOOP
-- Case 1 : Analyze method : Perf is good --
EXECUTE IMMEDIATE 'analyze table '
|| r1.table_name
|| ' compute statistics ';
-- Case 2 : DBMS_STATS --
DBMS_STATS.gather_table_stats ('SCHEMA', r1.table_name);
END LOOP;
END;
The explain plans are the same as before.
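One difference between the two cases is their defaults: ANALYZE ... COMPUTE STATISTICS reads every row, while GATHER_TABLE_STATS with no extra arguments samples the data and lets Oracle decide which histograms to build. A hedged sketch that brings case 2 closer to case 1; the parameter values are assumptions to adapt to your schema:

```sql
BEGIN
  DBMS_STATS.gather_table_stats (
    ownname          => 'SCHEMA',
    tabname          => 'WELL_CALC_DATA',            -- repeat for each WELL% table
    estimate_percent => 100,                         -- compute, like ANALYZE
    method_opt       => 'FOR ALL COLUMNS SIZE 254',  -- explicit histograms, incl. RUN_ID
    cascade          => TRUE);                       -- gather index statistics too
END;
/
```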
Any explanations or suggestions?
Thanks,
Gregory
Similar Messages
-
Performance Issues with large XML (1-1.5MB) files
Hi,
I'm using an XML Schema based Object relational storage for my XML documents which are typically 1-1.5 MB in size and having serious performance issues with XPath Query.
When I do XPath query against an element of SQLType varchar2, I get a good performance. But when I do a similar XPath query against an element of SQLType Collection (Varray of varchar2), I get a very ordinary performance.
I have also created indexes on extract() and analyzed my XMLType table and indexes, but I have no performance gain. Also, I have tried all sorts of storage options available for collections, i.e. Varrays, Nested Tables, IOTs, LOBs, Inline, etc., and all of these gave me the same bad performance.
I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
I guess I'm running out of options and patience as well.;)
I would appreciate any ideas/suggestions, please help.....
Thanks;
Ramakrishna Chinta
Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0
-
Performance issues with FDK in large XML documents
In my current project with FrameMaker 8 I'm experiencing severe performance issues with some FDK API calls.
The documents are about 3-8 MB in size. Formatted, they cover 150-250 pages.
When importing such an XML document I do some extensive "post-processing" using FDK. This processing happens in Sr_EventHandler() during the SR_EVT_END_READER event. I noticed that some FDK functions calls which modify the document's structure, like F_ApiSetAttribute() or F_ApiNewElementInHierarchy(), take several seconds, for the larger documents even minutes, to complete one single function call. I tried to move some of these calls to earlier events, mostly to SR_EVT_END_ELEM. There the calls work without a delay. Unfortunately I can't rewrite the FDK client to move all the calls that are lagging to earlier events.
Does anybody have a clue why such delays happen, and possibly can make a suggestion, how to solve this issue? Thank you in advance.
PS: I already thought of splitting such a document into smaller pieces using the FrameMaker book function. But I don't think the structure of the documents will permit such an automatic split, and it definitely isn't an option to change the document structure (the project is about migrating documents from Interleaf to XML with the constraint of keeping the document layout identical).
FP_ApplyFormatRules sounds really good--I'll give it a try on Monday. Wonder how I could miss it, as I already tried FP_Reformatting and FP_Displaying to no avail?! By the way, what is actually meant by FP_Reformatting (when I used it I assumed it would do exactly what FP_ApplyFormatRules sounds to do), or is that another of Lynne's well-kept secrets?
Thanks for all the helpful suggestions, guys. On Friday I already had my first improvements in a test version of my client: I did some (not all necessary) structural changes using XSLT pre-processing, and processing went down from 8 hours(!) to 1 hour--Yeappie! I was also playing with the idea of writing a wrapper to F_ApiNewElementInHierarchy() which actually pastes an appropriate element created in a small flow on the reference pages at the intended insertion location. But now, with FP_ApplyFormatRules on the horizon, I'm quite confident to get even the complicated stuff under control, which cannot be handled by the XSLT pre-processing, as it is based on the actual formatting of the document at run-time and cannot be anticipated in pre-processing.
--Franz -
Performance issues with Homesharing?
I have a Time Capsule as the base station for my wireless network, then 2 AirPort Expresses set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing is taking additional bandwidth, so here is the list of issues:
1) With nothing else running, when trying to play a movie via Home Sharing on an iPad 2 (the movie is located on my iMac), it stops and I have to keep pressing the play button over and over again. I typically see that the iPad tries to download part of the movie first and then starts playing so that it deals with the bandwidth, but in many cases it doesn't.
2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV, I can see my computer's library, but when I go into any of the menus, it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
3) When watching a Netflix movie on my iPad, I send the sound via AirPlay to some speakers through an AirPort Express. At times I lose the connection to the speakers.
I've complained about WiFi's instability, but here I tried to keep everything with Apple products to avoid any compatibility issues and stay within N wireless technology, which I understood was much more stable.
Has anyone any suggestions?
-
Performance issues with version-enabled partitioned tables?
Hi all,
Are there any known performance issues with version-enabled partitioned tables?
I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the OCB optimizer is choosing very expensive plans during merge operations.
Thanks in advance,
Vitor
Example:
Object Name Rows Bytes Cost Object Node In/Out PStart PStop
UPDATE STATEMENT Optimizer Mode=CHOOSE 1 249
UPDATE SIG.SIG_QUA_IMG_LT
NESTED LOOPS SEMI 1 266 249
PARTITION RANGE ALL 1 9
TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT 1 259 2 1 9
VIEW SYS.VW_NSO_1 1 7 247
NESTED LOOPS 1 739 247
NESTED LOOPS 1 677 247
NESTED LOOPS 1 412 246
NESTED LOOPS 1 114 244
INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK 1 62 2
INDEX RANGE SCAN SIG.QIM_PK 1 52 243
TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT 1 298 2 ROWID ROW L
INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$ 1 1
INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX 1 265 1
INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK 1 62
/* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1
SET z1.nextver =
SYS.ltutil.subsversion
(z1.nextver,
SYS.ltutil.getcontainedverinrange (z1.nextver,
'SIG.SIG_QUA_IMG',
'NpCyPCX3dkOAHSuBMjGioQ==',
4574,
4575),
4574)
WHERE z1.ROWID IN (
(SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
t2.ROWID
FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j1,
sig.sig_qua_img_lt t1,
sig.sig_qua_img_lt t2,
wmsys.wm$nextver_table j2,
(SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j3
WHERE t1.VERSION = j1.VERSION
AND t1.ima_id = t2.ima_id
AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
AND t2.nextver != '-1'
AND t2.nextver = j2.next_vers
AND j2.VERSION = j3.VERSION))
Hello Vitor,
There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
One suggestion would be to make sure that the table has been recently analyzed so that the optimizer has the most current data about the table.
Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
Thank You,
Ben -
Performance issues with data warehouse loads
We have performance issues with our data warehouse load ETL process. I have run
ANALYZE and DBMS_STATS and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
Scott
Hi,
you should analyze the DB after you have loaded the tables.
Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
If yes:
make sure your sequence caches (ALTER SEQUENCE s CACHE 10000)
Drop all unneeded indexes while loading and disable triggers if possible.
How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
Is it possible to use a direct load? Or do you already direct load?
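For the direct-load point, a minimal hedged sketch of a direct-path insert; the table names here are hypothetical placeholders:

```sql
-- Sketch: a direct-path insert writes above the high-water mark,
-- bypassing the buffer cache; with NOLOGGING it also generates minimal redo.
ALTER TABLE fact_sales NOLOGGING;

INSERT /*+ APPEND */ INTO fact_sales
SELECT * FROM staging_sales;

COMMIT;  -- required before the session can query the table again
```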
Dim -
Performance Issue with BSIS(open accounting items)
Hey All,
I am having a serious performance issue with an accrual report which gets all open GL items, and need some tips for optimization.
The main issue is that I am accesing large tables like BSIS, BSEG, BSAS etc without proper indexes and that I am dealing with huge amounts of data.
The select itself take a long time and after that as I have so much data overall execution is slow too.
The select which concerns me the most is:
SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
INTO TABLE i_bsis
FROM bsis
WHERE bukrs = '1000'
AND hkont in r_hkont
AND budat <= p_lcdate
AND augdt = 0
AND augbl = space
AND gsber = c_ZRL1
AND gjahr BETWEEN l_gjahr2 AND l_gjahr
AND ( blart = c_re "Invoice
OR blart = c_we "Goods receipt
OR blart = c_zc "Invoice Cancels
OR blart = c_kp ). "Accounting offset
I have seen other related threads, but was not that helpful.
We already have a secondary index on bukrs, hkont and budat, and I have checked in ST05 that it does use it. But in spite of that it takes more than 15 hrs to complete (maybe because of the huge data volume).
Any Input is highly appreciated.
Thanks
Thank you Thomas for your inputs:
You said that R_HKONT contains several ranges of account numbers. If these ranges cover a significant
portion of the overall existing account numbers, then there is no really quick access possible via the
BSIS primary key.
Unfortunately R_HKONT contains all account numbers.
As Rob said, your index on HKONT and BUDAT does not help much, since you are selecting "<=" on
BUDAT. No chance of narrowing down that range?
Will look into this.
What about GSBER? Does the value in c_ZRL1 provide a rather small subset of the overall values? Then
an index on BUKRS and GSBER might be helpful.
ZRL1 does provide a decent selection. But I don't know if one more index is a good idea for overall
system performance.
I assume that the four document types are not very selective, so it probably does not pay off to
investigate selecting on BKPF (there is an index involving BLART) and joining BSIS for the additional
information. You still might want to look into it though.
I did try to investigate this option too. Based on other threads related to BSIS and Robs Suggestion in
those threads I tried this:
SELECT bukrs belnr gjahr blart budat
FROM bkpf INTO TABLE bkpf_l
WHERE bukrs = c_pepsico
AND bstat IN (' ', 'A', 'B', 'D', 'M', 'S', 'V', 'W', 'Z')
AND blart IN ('RE', 'WE', 'ZC', 'KP')
AND gjahr BETWEEN l_gjahr2 AND l_gjahr
AND budat <= p_lcdate.
SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
FROM bsis INTO TABLE i_bsis FOR ALL ENTRIES IN bkpf_l
WHERE bukrs = bkpf_l-bukrs
AND hkont IN r_hkont
AND budat = bkpf_l-budat
AND augdt = 0
AND augbl = space
AND gjahr = bkpf_l-gjahr
AND belnr = bkpf_l-belnr
AND blart = bkpf_l-blart
AND gsber = c_zrl1.
This improves the select on BSIS a lot, but the first select on BKPF kills it. Not sure if this would help
improve performance.
Also I was wondering whether it helps to refresh the table statistics through DB20. The last refresh
was done 7 months back. How frequently should we do this? Will it help? -
Is anyone working with large datasets ( 200M) in LabVIEW?
I am working with external Bioinformatics databases and find the datasets to be quite large (two files easily come out at 50M or more). Is anyone working with large datasets like these? What is your experience with performance?
Colby, it all depends on how much memory you have in your system. You could be okay doing all that with 1GB of memory, but you still have to take care to not make copies of your data in your program. That said, I would not be surprised if your code could be written so that it would work on a machine with much less ram by using efficient algorithms. I am not a statistician, but I know that the averages & standard deviations can be calculated using a few bytes (even on arbitrary length data sets). Can't the ANOVA be performed using the standard deviations and means (and other information like the degrees of freedom, etc.)? Potentially, you could calculate all the various bits that are necessary and do the F-test with that information, and not need to ever have the entire data set in memory at one time. The tricky part for your application may be getting the desired data at the necessary times from all those different sources. I am usually working with files on disk where I grab x samples at a time, perform the statistics, dump the samples and get the next set, repeat as necessary. I can calculate the average of an arbitrary length data set easily by only loading one sample at a time from disk (it's still more efficient to work in small batches because the disk I/O overhead builds up).
Let me use the calculation of the mean as an example (hopefully the notation makes sense): see the JPG. What this means in plain English is that the mean can be calculated solely as a function of the current data point, the previous mean, and the sample number. For instance, given the data set [1 2 3 4 5], sum it, and divide by 5, you get 3. Or take it a point at a time: the average of [1]=1, [2+1*1]/2=1.5, [3+1.5*2]/3=2, [4+2*3]/4=2.5, [5+2.5*4]/5=3. This second method requires far more multiplications and divisions, but it only ever requires remembering the previous mean and the sample number, in addition to the new data point. Using this technique, I can find the average of gigs of data without ever needing more than three doubles and an int32 in memory. A similar derivation can be done for the variance, but it's easier to look it up (I can provide it if you have trouble finding it). Also, I think this functionality is built into the LabVIEW pt-by-pt statistics functions.
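In symbols, the point-at-a-time arithmetic above is the standard running-mean recurrence (matching the worked numbers: [2+1*1]/2 = 1.5 and so on):

```latex
\bar{x}_n \;=\; \frac{x_n + (n-1)\,\bar{x}_{n-1}}{n}
        \;=\; \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}
```

so only the previous mean and the sample count need to stay in memory, regardless of the length of the data set.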
I think you can probably get the data you need from those db's through some carefully crafted queries, but it's hard to say more without knowing a lot more about your application.
Hope this helps!
Chris
Attachments:
Mean Derivation.JPG 20 KB -
Performance Issue with RF Scanners after SAP Enhancement Pack 5 Upgrade
We are on component version SAP ECC 6.0, and recently upgraded to Enhancement Pack 5. I believe we are on NetWeaver 7.10, and using RF scanners in one plant that is Warehouse Managed. Evidently, when we moved to EHP5 the Web SAP Console went away and we are left with ITS Mobile. This has created several issues and continues to be a performance barrier for the forklift drivers in the warehouse. We see there is a high use of JavaScript, and the processors can't handle this. When we log in to tcode LM00 on a laptop or desktop computer, there are no performance issues. When we log in to tcode LM00 with the RF scanners, the system is very slow. It might take 30 seconds to confirm one item on a WM Transfer Order.
1.) Can we revert back to Web SAP Console after we have upgraded to EHP5?
2.) What is creating the performance issues with the RF scanners now that we have switched over to SAP ITS Mobile?
Our RF scanners are made by Intermec, but I don't think that is where the solution lies. One person in our IT Operations has spent a good deal of time configuring SAP ITS to get it to work, but still it isn't performing.
Tom,
I am sorry I did not see this earlier.
I'm currently working on a very similar project with ITS mobile and the problem is to accurately determine the root cause of the problem in the least amount of time. The tool that works is found here: http://www.connectrf.com/index.php/mcm/managed-diagnostics/
Isolating the network from the application and the device is a time consuming process unless you have a piece of software that can trace the HTTP transaction between host and device on both wired and wireless side of the network. Once that is achieved (as with Connect's tool) you can then you can begin to solve the problem.
What I found in my project is that the amount of data traffic generated by ITS mobile can be reduced drastically, which speeds the response time of the mobile devices, especially with large number of devices in distribution centers.
Let me know if I can answer more questions related to this topic.
Cheers,
Shari -
Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?
Thanks Kelly,
The answers would be the following:
1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
The cells will be numeric, free text, drop downs, and some calculated numeric.
Are we reaching the limits for UI performance?
Thanks -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the <font face="courier">base_query</font> function. Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something? The <font face="courier">processor</font> function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions .
Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB),100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
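A quick way to time the pipeline path in isolation is to query the pipelined function directly, bypassing the aggregation; a hedged sketch, where the argument values 'A' and 'B' are placeholders:

```sql
-- Sketch: fetch rows straight from the pipelined function to measure
-- the fetch/pipe overhead on its own, without the GROUP BY.
SELECT COUNT (*)
  FROM TABLE (pipeline_example.processor (
                pipeline_example.base_query ('A', 'B'), 100));
```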
ALTER PACKAGE pipeline_example COMPILE;
Earthlink wrote:
Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance issues with the Vouchers index build in SES
Hi All,
We are currently performing an upgrade for: PS FSCM 9.1 to PS FSCM 9.2.
As a part of the upgrade, Client wants Oracle SES to be deployed for some modules including, Purchasing, Payables (Vouchers)
We are facing severe performance issues with the Vouchers index build. (Volume of data = approx. 8.5 million rows of data)
The index creation process runs for over 5 days.
Can you please share any information or issues that you may have faced on your project and how they were addressed?
Check the following logs for errors:
1. The message log from the process scheduler
2. search_server1-diagnostic.log in /search_server1/logs directory
If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance hosting SES.
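The heap increase mentioned above is usually done via the domain environment script. A hedged sketch (the exact file and sizes are assumptions; USER_MEM_ARGS is the conventional override picked up by the domain's setDomainEnv.sh, and the values should be tuned for your host's RAM):

```shell
# In <DOMAIN_HOME>/bin/setDomainEnv.sh (path varies per install),
# override the JVM heap for the SES/WebLogic instance:
USER_MEM_ARGS="-Xms2048m -Xmx4096m"
export USER_MEM_ARGS
```

Restart the managed server after the change so the new heap settings take effect.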
Performance issues with Bapi BAPI_MATERIAL_AVAILABILITY...
Hello,
I have a Z program to check ATP which uses BAPI_MATERIAL_AVAILABILITY.
As we are on a retail system, we have performance issues with this BAPI due to the huge number of articles for which we need to calculate ATP.
Is there any way to improve the performance?
Thanks and best regards
L
The BAPI appears to execute for only one plant/material, etc., at a time, so I would concentrate on making data retrieval and post-BAPI processing as efficient as possible. In your trace output, how much of your overall time is consumed by the BAPI? How much by the other code? You might find improvements there...
-
Performance issues with iMac Intel 2011 27 inch
I have a 2011 iMac, 3.4 GHz Intel Core i7 with 16 GB 1333 MHz DDR3 RAM.
A few months ago I started noticing performance issues with FCPX: where it used to play through video smoothly, it now sometimes shows the spinning color ball. The computer is now also slow to start up and shut down. I ran Disk Utility, but to no avail. I am now considering reinstalling Mavericks.
But I ran EtreCheck.
Here are the results; please, who can help me? I use this computer to edit and for about everything else.
copied:
Problem description:
slow start and slow quit of computer. Performance issues with FCPX and other programs.
EtreCheck version: 2.1.8 (121)
Report generated 6 maart 2015 21:49:31 CET
Download EtreCheck from http://etresoft.com/etrecheck
Click the [Click for support] links for help with non-Apple products.
Click the [Click for details] links for more information about that line.
Hardware Information: ℹ️
iMac (27-inch, Mid 2011) (Technical Specifications)
iMac - model: iMac12,2
1 3.4 GHz Intel Core i7 CPU: 4-core
16 GB RAM Upgradeable
BANK 0/DIMM0
4 GB DDR3 1333 MHz ok
BANK 1/DIMM0
4 GB DDR3 1333 MHz ok
BANK 0/DIMM1
4 GB DDR3 1333 MHz ok
BANK 1/DIMM1
4 GB DDR3 1333 MHz ok
Bluetooth: Old - Handoff/Airdrop2 not supported
Wireless: en1: 802.11 a/b/g/n
Video Information: ℹ️
AMD Radeon HD 6970M - VRAM: 1024 MB
iMac 2560 x 1440
System Software: ℹ️
OS X 10.9.4 (13E28) - Time since boot: 0:7:46
Disk Information: ℹ️
WDC WD1001FALS-403AA0 disk1 : (1 TB)
EFI (disk1s1) <not mounted> : 210 MB
MacintoshHD2 (disk1s2) /Volumes/MacintoshHD2 : 999.86 GB (152.18 GB free)
APPLE SSD TS256C disk0 : (251 GB)
EFI (disk0s1) <not mounted> : 210 MB
MacintoshHD (disk0s2) / : 250.14 GB (62.78 GB free)
Recovery HD (disk0s3) <not mounted> [Recovery]: 650 MB
HL-DT-STDVDRW GA32N
USB Information: ℹ️
Apple Inc. FaceTime HD Camera (Built-in)
Mitsumi Electric Hub in Apple Extended USB Keyboard
Logitech USB-PS/2 Optical Mouse
Mitsumi Electric Apple Extended USB Keyboard
Western Digital Ext HDD 1021 2 TB
EFI (disk2s1) <not mounted> : 210 MB
datavideo (disk2s2) /Volumes/datavideo : 1.37 TB (133.89 GB free)
prive (disk2s3) /Volumes/prive : 627.23 GB (68.13 GB free) - 7 errors
Apple Inc. BRCM2046 Hub
Apple Inc. Bluetooth USB Host Controller
Lexar USB_3_0 Reader
Seagate Expansion Desk 3 TB
EFI (disk3s1) <not mounted> : 315 MB
downloads (disk3s2) /Volumes/downloads : 249.18 GB (138.35 GB free)
BackUpBro (disk3s3) /Volumes/BackUpBro : 2.75 TB (1.03 TB free)
Apple Internal Memory Card Reader
Apple Computer, Inc. IR Receiver
Thunderbolt Information: ℹ️
Apple Inc. thunderbolt_bus
G-Technology G-RAID with Thunderbolt
Configuration files: ℹ️
/etc/sysctl.conf - Exists
Gatekeeper: ℹ️
Mac App Store and identified developers
Kernel Extensions: ℹ️
/Applications/Toast 11 Titanium/Spin Doctor.app
[not loaded] com.hzsystems.terminus.driver (4) [Click for support]
/Library/Extensions
[not loaded] com.Avid.driver.AvidDX (5.9.1 - SDK 10.8) [Click for support]
[not loaded] com.blackmagic-design.desktopvideo.iokit.driver (10.1.4 - SDK 10.8) [Click for support]
[not loaded] com.blackmagic-design.desktopvideo.iokit.framebufferdriver (10.1.4 - SDK 10.8) [Click for support]
[not loaded] com.blackmagic-design.desktopvideo.multibridge.iokit.driver (10.1.4 - SDK 10.8) [Click for support]
[not loaded] com.blackmagic-design.driver.BlackmagicIO (10.1.4 - SDK 10.8) [Click for support]
/Library/Extensions/DeckLink_Driver.kext/Contents/PlugIns
[not loaded] com.blackmagic-design.desktopvideo.firmware (10.1.4 - SDK 10.8) [Click for support]
/System/Library/Extensions
[not loaded] at.obdev.nke.LittleSnitch (3876 - SDK 10.8) [Click for support]
[not loaded] com.SafeNet.driver.Sentinel (7.5.2) [Click for support]
[not loaded] com.paceap.kext.pacesupport.master (5.9.1 - SDK 10.6) [Click for support]
[not loaded] com.roxio.BluRaySupport (1.1.6) [Click for support]
/System/Library/Extensions/PACESupportFamily.kext/Contents/PlugIns
[not loaded] com.paceap.kext.pacesupport.leopard (5.9.1 - SDK 10.4) [Click for support]
[not loaded] com.paceap.kext.pacesupport.panther (5.9.1 - SDK 10.-1) [Click for support]
[loaded] com.paceap.kext.pacesupport.snowleopard (5.9.1 - SDK 10.6) [Click for support]
[not loaded] com.paceap.kext.pacesupport.tiger (5.9.1 - SDK 10.4) [Click for support]
/Users/[redacted]/Library/Services/ToastIt.service/Contents/MacOS
[not loaded] com.roxio.TDIXController (2.0) [Click for support]
Startup Items: ℹ️
Digidesign Mbox 2: Path: /Library/StartupItems/Digidesign Mbox 2
DigidesignLoader: Path: /Library/StartupItems/DigidesignLoader
Startup items are obsolete in OS X Yosemite
Problem System Launch Agents: ℹ️
[running] com.paragon.NTFS.notify.plist [Click for support]
Launch Agents: ℹ️
[not loaded] com.adobe.AAM.Updater-1.0.plist [Click for support]
[loaded] com.adobe.CS4ServiceManager.plist [Click for support]
[loaded] com.avid.ApplicationManager.plist [Click for support]
[loaded] com.avid.backgroundservicesmanager.plist [Click for support]
[loaded] com.avid.dmfsupportsvc.plist [Click for support]
[loaded] com.avid.interplay.dmfservice.plist [Click for support]
[loaded] com.avid.interplay.editortranscode.plist [Click for support]
[loaded] com.avid.transcodeserviceworker.plist [Click for support]
[running] com.blackmagic-design.DesktopVideoFirmwareUpdater.plist [Click for support]
[failed] com.brother.LOGINserver.plist [Click for support] [Click for details]
[loaded] com.paragon.updater.plist [Click for support]
[failed] com.teamviewer.teamviewer.plist [Click for support] [Click for details]
[failed] com.teamviewer.teamviewer_desktop.plist [Click for support]
Launch Daemons: ℹ️
[loaded] com.adobe.fpsaud.plist [Click for support]
[loaded] com.adobe.SwitchBoard.plist [Click for support]
[loaded] com.adobe.versioncueCS4.plist [Click for support]
[loaded] com.avid.AMCUninstaller.plist [Click for support]
[running] com.avid.interplay.editorbroker.plist [Click for support]
[running] com.avid.interplay.editortranscodestatus.plist [Click for support]
[loaded] com.blackmagic-design.desktopvideo.XPCService.plist [Click for support]
[running] com.blackmagic-design.DesktopVideoHelper.plist [Click for support]
[running] com.blackmagic-design.streaming.BMDStreamingServer.plist [Click for support]
[not loaded] com.digidesign.fwfamily.helper.plist [Click for support]
[running] com.edb.launchd.postgresql-8.4.plist [Click for support]
[loaded] com.microsoft.office.licensing.helper.plist [Click for support]
[loaded] com.noiseindustries.FxFactory.FxPlug.plist [Click for support]
[loaded] com.noiseindustries.FxFactory.plist [Click for support]
[running] com.paceap.eden.licensed.plist [Click for support]
[loaded] com.teamviewer.Helper.plist [Click for support]
[failed] com.teamviewer.teamviewer_service.plist [Click for support]
[loaded] jp.co.canon.MasterInstaller.plist [Click for support]
[loaded] PACESupport.plist [Click for support]
User Login Items: ℹ️
iTunesHelper Programma Hidden (/Applications/iTunes.app/Contents/MacOS/iTunesHelper.app)
Internet Plug-ins: ℹ️
Default Browser: Version: 537 - SDK 10.9
OfficeLiveBrowserPlugin: Version: 12.2.6 [Click for support]
AdobePDFViewerNPAPI: Version: 11.0.0 - SDK 10.6 [Click for support]
FlashPlayer-10.6: Version: 16.0.0.305 - SDK 10.6 [Click for support]
Silverlight: Version: 5.1.30514.0 - SDK 10.6 [Click for support]
Flash Player: Version: 16.0.0.305 - SDK 10.6 [Click for support]
QuickTime Plugin: Version: 7.7.3
iPhotoPhotocast: Version: 7.0 - SDK 10.8
SharePointBrowserPlugin: Version: 14.0.0 [Click for support]
AdobePDFViewer: Version: 11.0.0 - SDK 10.6 [Click for support]
EPPEX Plugin: Version: 10.0 [Click for support]
JavaAppletPlugin: Version: 14.9.0 - SDK 10.7 Check version
User internet Plug-ins: ℹ️
Google Earth Web Plug-in: Version: 7.0 [Click for support]
Audio Plug-ins: ℹ️
DVCPROHDAudio: Version: 1.3.2
3rd Party Preference Panes: ℹ️
Blackmagic Desktop Video [Click for support]
DigidesignMbox2 [Click for support]
Digidesign Mbox 2 Pro [Click for support]
Flash Player [Click for support]
Paragon NTFS for Mac ® OS X [Click for support]
Time Machine: ℹ️
Skip System Files: NO
Mobile backups: OFF
Auto backup: NO - Auto backup turned off
Volumes being backed up:
MacintoshHD: Disk size: 250.14 GB Disk used: 187.36 GB
downloads: Disk size: 249.18 GB Disk used: 110.84 GB
Destinations:
G-RAID with Thunderbolt [Local]
Total size: 8.00 TB
Total number of backups: 6
Oldest backup: 2014-04-08 14:24:41 +0000
Last backup: 2014-09-24 21:58:28 +0000
Size of backup disk: Excellent
Backup size 8.00 TB > (Disk size 499.32 GB X 3)
Top Processes by CPU: ℹ️
2% Google Chrome
1% WindowServer
0% opendirectoryd
0% mds
0% dpd
Top Processes by Memory: ℹ️
206 MB mds_stores
172 MB com.apple.IconServicesAgent
155 MB Google Chrome
137 MB Dock
113 MB Google Chrome Helper
Virtual Memory Information: ℹ️
10.02 GB Free RAM
4.36 GB Active RAM
1.35 GB Inactive RAM
1.44 GB Wired RAM
1.24 GB Page-ins
0 B Page-outs
Diagnostics Information: ℹ️
Mar 6, 2015, 09:37:26 PM Self test - passed
Mar 6, 2015, 08:15:56 PM /Library/Logs/DiagnosticReports/Mail_2015-03-06-201556_[redacted].hang
Problem description:
disk problems
EtreCheck version: 2.1.8 (121)
Report generated 7 maart 2015 14:16:18 CET
Download EtreCheck from http://etresoft.com/etrecheck
Click the [Click for support] links for help with non-Apple products.
Click the [Click for details] links for more information about that line.
Hardware Information: ℹ️
iMac (27-inch, Mid 2011) (Technical Specifications)
iMac - model: iMac12,2
1 3.4 GHz Intel Core i7 CPU: 4-core
16 GB RAM Upgradeable
BANK 0/DIMM0
4 GB DDR3 1333 MHz ok
BANK 1/DIMM0
4 GB DDR3 1333 MHz ok
BANK 0/DIMM1
4 GB DDR3 1333 MHz ok
BANK 1/DIMM1
4 GB DDR3 1333 MHz ok
Bluetooth: Old - Handoff/Airdrop2 not supported
Wireless: en1: 802.11 a/b/g/n
Video Information: ℹ️
AMD Radeon HD 6970M - VRAM: 1024 MB
iMac 2560 x 1440
System Software: ℹ️
OS X 10.9.4 (13E28) - Time since boot: 0:19:51
Disk Information: ℹ️
WDC WD1001FALS-403AA0 disk1 : (1 TB)
EFI (disk1s1) <not mounted> : 210 MB
MacintoshHD2 (disk1s2) /Volumes/MacintoshHD2 : 999.86 GB (152.18 GB free)
APPLE SSD TS256C disk0 : (251 GB)
EFI (disk0s1) <not mounted> : 210 MB
MacintoshHD (disk0s2) / : 250.14 GB (62.97 GB free)
Recovery HD (disk0s3) <not mounted> [Recovery]: 650 MB
HL-DT-STDVDRW GA32N
HGST HDS724040ALE640 disk2 : (4 TB)
EFI (disk2s1) <not mounted> : 210 MB
disk2s2 (disk2s2) <not mounted> : 4.00 TB
Boot OS X (disk2s3) <not mounted> : 134 MB - one error
HGST HDS724040ALE640 disk3 : (4 TB)
EFI (disk3s1) <not mounted> : 210 MB
disk3s2 (disk3s2) <not mounted> : 4.00 TB
Boot OS X (disk3s3) <not mounted> : 134 MB
USB Information: ℹ️
Mitsumi Electric Hub in Apple Extended USB Keyboard
Logitech USB-PS/2 Optical Mouse
Mitsumi Electric Apple Extended USB Keyboard
Apple Inc. BRCM2046 Hub
Apple Inc. Bluetooth USB Host Controller
Apple Inc. FaceTime HD Camera (Built-in)
Lexar USB_3_0 Reader
Seagate Expansion Desk 3 TB
EFI (disk5s1) <not mounted> : 315 MB
downloads (disk5s2) /Volumes/downloads : 249.18 GB (138.35 GB free)
BackUpBro (disk5s3) /Volumes/BackUpBro : 2.75 TB (1.03 TB free)
Apple Computer, Inc. IR Receiver
Apple Internal Memory Card Reader
Thunderbolt Information: ℹ️
Apple Inc. thunderbolt_bus
G-Technology G-RAID with Thunderbolt
Configuration files: ℹ️
/etc/sysctl.conf - Exists
Gatekeeper: ℹ️
Mac App Store and identified developers
Kernel Extensions: ℹ️
/Applications/Toast 11 Titanium/Spin Doctor.app
[not loaded] com.hzsystems.terminus.driver (4) [Click for support]
/Library/Extensions
[not loaded] com.Avid.driver.AvidDX (5.9.1 - SDK 10.8) [Click for support]
[not loaded] com.blackmagic-design.desktopvideo.iokit.driver (10.1.4 - SDK 10.8) [Click for support]
[not loaded] com.blackmagic-design.desktopvideo.iokit.framebufferdriver (10.1.4 - SDK 10.8) [Click for support]
[not loaded] com.blackmagic-design.desktopvideo.multibridge.iokit.driver (10.1.4 - SDK 10.8) [Click for support]
[not loaded] com.blackmagic-design.driver.BlackmagicIO (10.1.4 - SDK 10.8) [Click for support]
/Library/Extensions/DeckLink_Driver.kext/Contents/PlugIns
[not loaded] com.blackmagic-design.desktopvideo.firmware (10.1.4 - SDK 10.8) [Click for support]
/System/Library/Extensions
[not loaded] at.obdev.nke.LittleSnitch (3876 - SDK 10.8) [Click for support]
[not loaded] com.SafeNet.driver.Sentinel (7.5.2) [Click for support]
[not loaded] com.paceap.kext.pacesupport.master (5.9.1 - SDK 10.6) [Click for support]
[not loaded] com.roxio.BluRaySupport (1.1.6) [Click for support]
/System/Library/Extensions/PACESupportFamily.kext/Contents/PlugIns
[not loaded] com.paceap.kext.pacesupport.leopard (5.9.1 - SDK 10.4) [Click for support]
[not loaded] com.paceap.kext.pacesupport.panther (5.9.1 - SDK 10.-1) [Click for support]
[loaded] com.paceap.kext.pacesupport.snowleopard (5.9.1 - SDK 10.6) [Click for support]
[not loaded] com.paceap.kext.pacesupport.tiger (5.9.1 - SDK 10.4) [Click for support]
/Users/[redacted]/Library/Services/ToastIt.service/Contents/MacOS
[not loaded] com.roxio.TDIXController (2.0) [Click for support]
Startup Items: ℹ️
Digidesign Mbox 2: Path: /Library/StartupItems/Digidesign Mbox 2
DigidesignLoader: Path: /Library/StartupItems/DigidesignLoader
Startup items are obsolete in OS X Yosemite
Problem System Launch Agents: ℹ️
[running] com.paragon.NTFS.notify.plist [Click for support]
Launch Agents: ℹ️
[not loaded] com.adobe.AAM.Updater-1.0.plist [Click for support]
[loaded] com.adobe.CS4ServiceManager.plist [Click for support]
[running] com.avid.ApplicationManager.plist [Click for support]
[running] com.avid.backgroundservicesmanager.plist [Click for support]
[loaded] com.avid.dmfsupportsvc.plist [Click for support]
[loaded] com.avid.interplay.dmfservice.plist [Click for support]
[loaded] com.avid.interplay.editortranscode.plist [Click for support]
[loaded] com.avid.transcodeserviceworker.plist [Click for support]
[running] com.blackmagic-design.DesktopVideoFirmwareUpdater.plist [Click for support]
[failed] com.brother.LOGINserver.plist [Click for support] [Click for details]
[loaded] com.paragon.updater.plist [Click for support]
[failed] com.teamviewer.teamviewer.plist [Click for support] [Click for details]
[failed] com.teamviewer.teamviewer_desktop.plist [Click for support]
Launch Daemons: ℹ️
[loaded] com.adobe.fpsaud.plist [Click for support]
[loaded] com.adobe.SwitchBoard.plist [Click for support]
[loaded] com.adobe.versioncueCS4.plist [Click for support]
[loaded] com.avid.AMCUninstaller.plist [Click for support]
[running] com.avid.interplay.editorbroker.plist [Click for support]
[running] com.avid.interplay.editortranscodestatus.plist [Click for support]
[loaded] com.blackmagic-design.desktopvideo.XPCService.plist [Click for support]
[running] com.blackmagic-design.DesktopVideoHelper.plist [Click for support]
[running] com.blackmagic-design.streaming.BMDStreamingServer.plist [Click for support]
[not loaded] com.digidesign.fwfamily.helper.plist [Click for support]
[running] com.edb.launchd.postgresql-8.4.plist [Click for support]
[loaded] com.microsoft.office.licensing.helper.plist [Click for support]
[loaded] com.noiseindustries.FxFactory.FxPlug.plist [Click for support]
[loaded] com.noiseindustries.FxFactory.plist [Click for support]
[running] com.paceap.eden.licensed.plist [Click for support]
[loaded] com.teamviewer.Helper.plist [Click for support]
[failed] com.teamviewer.teamviewer_service.plist [Click for support]
[loaded] jp.co.canon.MasterInstaller.plist [Click for support]
[loaded] PACESupport.plist [Click for support]
User Login Items: ℹ️
iTunesHelper Programma Hidden (/Applications/iTunes.app/Contents/MacOS/iTunesHelper.app)
Internet Plug-ins: ℹ️
Default Browser: Version: 537 - SDK 10.9
OfficeLiveBrowserPlugin: Version: 12.2.6 [Click for support]
AdobePDFViewerNPAPI: Version: 11.0.0 - SDK 10.6 [Click for support]
FlashPlayer-10.6: Version: 16.0.0.305 - SDK 10.6 [Click for support]
Silverlight: Version: 5.1.30514.0 - SDK 10.6 [Click for support]
Flash Player: Version: 16.0.0.305 - SDK 10.6 [Click for support]
QuickTime Plugin: Version: 7.7.3
iPhotoPhotocast: Version: 7.0 - SDK 10.8
SharePointBrowserPlugin: Version: 14.0.0 [Click for support]
AdobePDFViewer: Version: 11.0.0 - SDK 10.6 [Click for support]
EPPEX Plugin: Version: 10.0 [Click for support]
JavaAppletPlugin: Version: 14.9.0 - SDK 10.7 Check version
User internet Plug-ins: ℹ️
Google Earth Web Plug-in: Version: 7.0 [Click for support]
Audio Plug-ins: ℹ️
DVCPROHDAudio: Version: 1.3.2
3rd Party Preference Panes: ℹ️
Blackmagic Desktop Video [Click for support]
DigidesignMbox2 [Click for support]
Digidesign Mbox 2 Pro [Click for support]
Flash Player [Click for support]
Paragon NTFS for Mac ® OS X [Click for support]
Time Machine: ℹ️
Skip System Files: NO
Mobile backups: OFF
Auto backup: NO - Auto backup turned off
Volumes being backed up:
MacintoshHD: Disk size: 250.14 GB Disk used: 187.17 GB
downloads: Disk size: 249.18 GB Disk used: 110.83 GB
Destinations:
G-RAID with Thunderbolt [Local]
Total size: 8.00 TB
Total number of backups: 6
Oldest backup: 2014-04-08 14:24:41 +0000
Last backup: 2014-09-24 21:58:28 +0000
Size of backup disk: Excellent
Backup size 8.00 TB > (Disk size 499.32 GB X 3)
Top Processes by CPU: ℹ️
2% WindowServer
1% fontd
0% firefox
0% AvidApplicationManager
0% AppleSpell
Top Processes by Memory: ℹ️
498 MB firefox
241 MB mds_stores
172 MB com.apple.IconServicesAgent
137 MB Dock
100 MB java
Virtual Memory Information: ℹ️
12.33 GB Free RAM
2.39 GB Active RAM
1.11 GB Inactive RAM
1.34 GB Wired RAM
934 MB Page-ins
0 B Page-outs
Diagnostics Information: ℹ️
Mar 7, 2015, 01:56:19 PM Self test - passed
Mar 6, 2015, 08:15:56 PM /Library/Logs/DiagnosticReports/Mail_2015-03-06-201556_[redacted].hang -
Performance Issues with crystal reports 11 - Critical
Post Author: DJ Gaba
CA Forum: Exporting
I have migrated from crystal reports version 8 to version 11.
I am experiencing some performance issues with reports when displayed in version 11
Reports that were taking 2 seconds in version 8 are now taking 4-5 seconds in version 11.
I am using VB6 to export my report file to PDF.
Thanks
Post Author: synapsevampire
CA Forum: Exporting
Please don't post the same question to multiple forums on the site.
I responded to your other post.
-k