Regarding poor workflow performance in CRM
Hi All,
We are using Campaign automation in CRM for executing Campaigns through Market Planner.
The problem we are facing at the moment is that some of the workflow tasks are taking 20 times longer than the expected timelines.
I have checked the workflow logs and all seems to work fine. Is there anywhere in the system where we can capture the execution path, with timings, for an individual task? (I know we can see the total duration in SWIA, but I need a more detailed breakdown.)
Thanks in advance,
Kishore Yerra
Solved
Similar Messages
-
Poor performance of the BDB cache
I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
Overview
Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available to you, however, if you have your own large log files to analyze.
The Database
Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
sequences (maintains record IDs for all other tables)
urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID linking it to the primary database; this database is used to order URLs in the report by the number of hits.
The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
downloads (p), downloads.values (s), downloads.xfer (s)
agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
search (p), search.values (s), search.hits (s)
users (p), users.values (s), users.hits (s), users.groups.hits (sf)
errors (p), errors.values (s), errors.hits (s)
dhosts (p), dhosts.values (s)
statuscodes (HTTP status codes)
totals.daily (31 days)
totals.hourly (24 hours)
totals (one record)
countries (a couple of hundred countries)
system (one record)
visits.active (active visits - variable length)
downloads.active (active downloads - variable length)
All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
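For illustration, here is roughly how two of the 49 logical databases end up in one physical file (a C++ sketch only - the environment directory and file name are made up and error handling is omitted; the second argument to Db::open names a subdatabase inside the shared file):

#include <db_cxx.h>

int main()
{
    // Environment with a private cache, as described under "The Testbed" below.
    DbEnv env(0);
    env.open("./ssw_env", DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0);

    // Two of the logical databases, both stored in the single file
    // "webalizer.db", so the whole month can be renamed or backed up at once.
    Db urls(&env, 0), urlsValues(&env, 0);
    urls.open(NULL, "webalizer.db", "urls", DB_BTREE, DB_CREATE, 0);
    urlsValues.open(NULL, "webalizer.db", "urls.values", DB_BTREE, DB_CREATE, 0);

    urlsValues.close(0);
    urls.close(0);
    env.close(0);
    return 0;
}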
Database Size
One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
Here are the sizes of the URL databases (the other URL secondary databases are similar to urls.hits, described below):
urls (p):
8192 Underlying database page size
2031 Overflow key/data size
1471636 Number of unique keys in the tree
1471636 Number of data items in the tree
193 Number of tree internal pages
577738 Number of bytes free in tree internal pages (63% ff)
55312 Number of tree leaf pages
145M Number of bytes free in tree leaf pages (67% ff)
2620 Number of tree overflow pages
16M Number of bytes free in tree overflow pages (25% ff)
urls.hits (s):
8192 Underlying database page size
2031 Overflow key/data size
2 Number of levels in the tree
823 Number of unique keys in the tree
1471636 Number of data items in the tree
31 Number of tree internal pages
201970 Number of bytes free in tree internal pages (20% ff)
45 Number of tree leaf pages
243550 Number of bytes free in tree leaf pages (33% ff)
2814 Number of tree duplicate pages
8360024 Number of bytes free in tree duplicate pages (63% ff)
0 Number of tree overflow pages
The Testbed
I'm running all these tests using the latest BDB (v4.6), built from source, on a Win2K3 server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file as described above at a speed of 20K records/sec.
BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
I also used a code profiler to analyze SSW and BDB performance.
The Problem
Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit there for hours and hours before completing the analysis.
Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And all of that happens after most of the analysis has been done, while just trying to save the database.
The Tests
SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
In memory mode, the entire BDB is dumped to disk at the end of the run. At first, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. To deal with this, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache being left alone, but once the BDB cache was filled up, processing speed all but stopped.
Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance: even though the OS cache was filling up, it was being flushed as well, and SSW eventually finished processing this log at 2K rec/sec. At least it finished - other combinations of these options led to never-ending tests.
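For reference, this is roughly how those flag combinations are toggled (a sketch; in BDB 4.6 the log variants still go through DbEnv::set_flags - newer releases move them to DbEnv::log_set_config, and the environment path is made up):

#include <db_cxx.h>

int main()
{
    DbEnv env(0);
    // Disable OS caching of database and log files (the DB_DIRECT_* test)...
    env.set_flags(DB_DIRECT_DB | DB_DIRECT_LOG, 1);
    // ...or request O_DSYNC-style synchronous writes instead (the DB_DSYNC_* test):
    // env.set_flags(DB_DSYNC_DB | DB_DSYNC_LOG, 1);
    env.open("./ssw_env", DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0);
    // ... open databases and run the analysis ...
    env.close(0);
    return 0;
}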
In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
Some of the other things I tried/observed:
* I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
* I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
* I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
* The Db.put method, which was called 73,557 times while profiling the database save at the end, took 281 seconds. Interestingly enough, this method called the Win32 ReadFile function 20,000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up the records being updated! These lookups seem to be the true problem here.
* I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.
I have been able to improve processing speed up to 6-8 times with these two techniques:
1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports are generated, which use these secondary databases. This improved speed from 4K rec/sec to 14K rec/sec.
Hello Stone,
I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
1. What % of clean pages did you specify?
2. At what interval were you calling memp_trickle from this thread?
This would give me a rough idea about how to tune my app. I would really appreciate it if you can answer these queries.
Regards,
Nishith.
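A minimal C++ sketch of the two techniques (the 20% clean-page target, the 1-second interval, the file/database names and the key-extractor record layout are assumptions, not the values used in SSW):

#include <windows.h>
#include <db_cxx.h>

static volatile bool g_done = false;

// Technique 1: a background thread that keeps part of the cache clean, so
// Db::put rarely has to evict dirty pages synchronously.
DWORD WINAPI trickle_thread(LPVOID arg)
{
    DbEnv *env = static_cast<DbEnv *>(arg);
    while (!g_done) {
        int wrote = 0;
        env->memp_trickle(20, &wrote);  // keep at least 20% of cache pages clean
        Sleep(1000);                    // polling interval (assumed)
    }
    return 0;
}

// Hypothetical key extractor: assumes the hit count is a u_int32_t at the
// start of the primary record (the real SSW record layout differs).
int get_hits(Db *, const Dbt *, const Dbt *data, Dbt *result)
{
    result->set_data(data->get_data());
    result->set_size(sizeof(u_int32_t));
    return 0;
}

// Technique 2: create and populate the secondary only at the end of the run;
// the DB_CREATE flag tells Db::associate to build it from the existing
// primary records in one pass.
void build_urls_hits(DbEnv &env, Db &urls, Db &urlsHits)
{
    urlsHits.set_flags(DB_DUPSORT);     // many URLs share the same hit count
    urlsHits.open(NULL, "webalizer.db", "urls.hits", DB_BTREE, DB_CREATE, 0);
    urls.associate(NULL, &urlsHits, get_hits, DB_CREATE);
}

The trickle thread would be started with CreateThread(NULL, 0, trickle_thread, &env, 0, NULL) once the environment is open, and g_done set just before closing it. -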
Poor performance and high number of gets on a seemingly simple insert/select
Versions & config:
Database : 10.2.0.4.0
Application : Oracle E-Business Suite 11.5.10.2
2 node RAC, IBM AIX 5.3
Here's the insert/select; I'm struggling to explain why it's taking 6 seconds, and why it needs to get > 24,000 blocks:
INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
WIA.ITEM_TYPE = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 4 0
Execute 2 3.44 6.36 2 24297 198 36
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.44 6.36 2 24297 202 36
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Also from the tkprof output, the explain plan and waits - virtually zero waits:
Rows Execution Plan
0 INSERT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 12 0.00 0.00
gc current block 2-way 14 0.00 0.00
db file sequential read 2 0.01 0.01
row cache lock 24 0.00 0.01
library cache pin 2 0.00 0.00
rdbms ipc reply 1 0.00 0.00
gc cr block 2-way 4 0.00 0.00
gc current grant busy 1 0.00 0.00
********************************************************************************
The statement was executed 2 times. I know from slicing up the trc file that:
exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
If I run just the select portion of the statement, using the bind values from exe #2, I get a small number of gets (< 10) and < 0.1 secs elapsed.
If I make the insert into an empty, non-partitioned table, I get :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.08 0 137 53 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.08 0 137 53 25
and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
This problem is part of testing for a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attribute_Values table - takes:
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.10 10 27 136 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.10 10 27 136 25
So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
I've verified system-level things - CPUs weren't/aren't maxed out, no significant paging/swapping activity, run queue not long. The AWR report for the time period shows nothing unusual.
further info on the objects concerned:
query source table :
WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
WF_Item_Attributes tbl : non-partitioned, 160 blocks
insert destination table:
WF_Item_Attribute_Values:
range partitioned on Item_Type, and hash sub-partitioned on Item_Key
Both executions of the insert hit the partition with the most data: 127,691 blocks total; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
Bind values:
exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
thanks and regards
Ivan
Hi Sven,
Thanks for your input.
1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got the same poor performance running the insert statement manually with my own bind values.
2) In every execution plan I've seen, checked and re-checked, it uses a range scan on the primary key. It is the most efficient plan, I think, and the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system; the 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
============= From DBA_Part_Tables : Partition Type / Count =============
PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
RANGE HASH 77 APPS_TS_TX_DATA
1 row selected.
============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
Partition Name TS Name High Value High Val Len
WF_ITEM1 APPS_TS_TX_DATA 'A1' 4
WF_ITEM2 APPS_TS_TX_DATA 'AM' 4
WF_ITEM3 APPS_TS_TX_DATA 'AP' 4
WF_ITEM47 APPS_TS_TX_DATA 'OB' 4
WF_ITEM48 APPS_TS_TX_DATA 'OE' 4
WF_ITEM49 APPS_TS_TX_DATA 'OF' 4
WF_ITEM50 APPS_TS_TX_DATA 'OK' 4
WF_ITEM75 APPS_TS_TX_DATA 'WI' 4
WF_ITEM76 APPS_TS_TX_DATA 'WS' 4
WF_ITEM77 APPS_TS_TX_DATA MAXVALUE 8
77 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_TYPE 1
1 row selected.
PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
Partition Name SUBPARTITION_NAME TS Name High Value High Val Len
WF_ITEM49 SYS_SUBP3326 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3328 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3332 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3331 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3330 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3329 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3327 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3325 APPS_TS_TX_DATA 0
8 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_KEY 1
1 row selected.
from DBA_Segments - just for partition WF_ITEM49 :
Segment Name TSname Partition Name Segment Type BLOCKS Mbytes EXTENTS Next Ext(Mb)
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3332 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3331 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3330 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3329 TblSubPart 16112 125.875 1007 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3328 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3327 TblSubPart 16224 126.75 1014 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3326 TblSubPart 16208 126.625 1013 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3325 TblSubPart 16128 126 1008 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3332 IdxSubPart 59424 464.25 3714 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3331 IdxSubPart 59296 463.25 3706 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3330 IdxSubPart 59520 465 3720 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3329 IdxSubPart 59104 461.75 3694 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3328 IdxSubPart 59456 464.5 3716 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3327 IdxSubPart 60016 468.875 3751 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3326 IdxSubPart 59616 465.75 3726 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3325 IdxSubPart 59376 463.875 3711 .125
sum 4726.5
[the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
The tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
regards
Ivan -
Poor Performance in ETL SCD Load
Hi gurus,
We are facing some serious performance problems during an UPDATE step, which is part of an SCD Type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
The OOTB mapping for this step is a simple SELECT - which retrieves the historical records to be updated from the dimension - against the target table (W_ASSET_D), with no Update Strategy; the session is configured to always perform UPDATEs. We have also set $$UPDATE_ALL_HISTORY to "N" in DAC: this way we only select the most recent records from the dimension history, and the only columns that are effectively updated are the SCD system columns (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
The problem is that the UPDATE command is executed individually by Informatica PowerCenter for each record in W_ASSET_D. For 2,486,000 UPDATEs we had ~2h of processing - very poor performance for a single ETL step. Our W_ASSET_D has ~150M records today.
Some questions about the above:
- Is this an expected average execution duration for this number of records?
- Record-by-record updates are not optimal; this could easily be overcome with a BULK COLLECT/FORALL approach (see the sketch after this list). Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it from DAC?
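As a rough sketch of what we mean (the selection predicate and batch size are placeholders only - the real logic would have to mirror the OOTB SCD update; the SCD column names are the ones listed above):

DECLARE
  CURSOR c_hist IS
    SELECT ROWID
      FROM w_asset_d
     WHERE current_flg = 'Y';          -- placeholder predicate
  TYPE t_rowids IS TABLE OF ROWID;
  l_rowids t_rowids;
BEGIN
  OPEN c_hist;
  LOOP
    FETCH c_hist BULK COLLECT INTO l_rowids LIMIT 10000;
    EXIT WHEN l_rowids.COUNT = 0;
    FORALL i IN 1 .. l_rowids.COUNT    -- one round trip per 10,000 rows
      UPDATE w_asset_d
         SET effective_to_dt = SYSDATE,
             current_flg     = 'N'
       WHERE ROWID = l_rowids(i);
    COMMIT;
  END LOOP;
  CLOSE c_hist;
END;
/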
Thanks in advance,
Guilherme
Hi,
Thank you for posting in Windows Server Forum.
Initially please check the configuration & requirement part for RemoteFX. You can follow below article for further research.
RemoteFX vGPU Setup and Configuration Guide for Windows Server 2012
http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
Hope it helps!
Thanks.
Dharmesh Solanki
TechNet Community Support -
Poor performance with Oracle Spatial when spatial query invoked remotely
Is anyone aware of any problems with Oracle Spatial (10.2.0.4 with patches 6989483 and 7003151 on Red Hat Linux 4) which might explain why a spatial query (SDO_WITHIN_DISTANCE) would perform 20 times worse when invoked remotely from another computer (using SQL*Plus) vs. invoking the very same query from the database server itself (also using SQL*Plus)?
Does Oracle Spatial have any known problems with servers which use SAN disk storage? That is the primary difference between a server in which I see this poor performance and another server where the performance is fine.
Thank you in advance for any thoughts you might share.
OK, that's clearer.
Are you sure it is the SQL inside the procedure that is causing the problem? To check, try extracting the SQL from inside the procedure and running it in SQL*Plus with
set autotrace on
set timing on
SELECT ....
If the plans and performance are the same then it may be something inside the procedure itself.
Have you profiled the procedure? Here is an example of how to do it:
Prompt Firstly, create PL/SQL profiler table
@$ORACLE_HOME/rdbms/admin/proftab.sql
Prompt Secondly, use the profiler to gather stats on execution characteristics
DECLARE
l_run_num PLS_INTEGER := 1;
l_max_num PLS_INTEGER := 1;
v_geom mdsys.sdo_geometry := mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(0,0,45,45,90,0,135,45,180,0,180,-45,45,-45,0,0));
BEGIN
dbms_output.put_line('Start Profiler Result = ' || DBMS_PROFILER.START_PROFILER(run_comment => 'PARALLEL PROFILE')); -- The comment name can be anything: here it is related to the Parallel procedure I am testing.
v_geom := Parallel(v_geom,10,0.05,1); -- Put your procedure call here
dbms_output.put_line('Stop Profiler Result = ' || DBMS_PROFILER.STOP_PROFILER );
END;
/
SHOW ERRORS
Prompt Finally, report activity
COLUMN runid FORMAT 99999
COLUMN run_comment FORMAT A40
SELECT runid || ',' || run_date || ',' || run_comment || ',' || run_total_time
FROM plsql_profiler_runs
ORDER BY runid;
COLUMN runid FORMAT 99999
COLUMN unit_number FORMAT 99999
COLUMN unit_type FORMAT A20
COLUMN unit_owner FORMAT A20
COLUMN text FORMAT A100
compute sum label 'Total_Time' of total_time on runid
break on runid skip 1
set linesize 200
SELECT u.runid || ',' ||
u.unit_name,
d.line#,
d.total_occur,
d.total_time,
text
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON u.runid = d.runid
AND
u.unit_number = d.unit_number
JOIN all_source als ON ( als.owner = 'CODESYS'
AND als.type = u.unit_type
AND als.name = u.unit_name
AND als.line = d.line# )
WHERE u.runid = (SELECT max(runid) FROM plsql_profiler_runs)
ORDER BY d.total_time desc;
Run the profiler in both environments and see if you can see where the slowdown exists.
regards
Simon -
Poor performance of Premiere Pro on 2013 Mac Pro
A month ago I bought a 2013 Mac Pro mostly for running Premiere Pro CC. The configuration is 6 cores @ 3.5 GHz, 64 GB RAM, 2 x D700 graphics, 1 TB internal SSD. It runs OS X Mavericks 10.9.4.
I tried Premiere Pro CC today for the first time. The input--entirely stored on the internal SSD; no external disk was even connected--was about 100 clips shot as AVCHD (1080p25 PAL) at 24 Mbps.
I edited this in Premiere Pro and added small adjustments (mostly changes of levels) in about 20 clips. The rest had no adjustments. There were no transitions applied. For the most part, it just had to copy the trimmed input clips to the output.
I queued this for Adobe Media Encoder CC at H.264, 1080p25, 24 Mbps VBR 2-pass. Audio was 192 kbps, 48 kHz, stereo. The output was written to the internal SSD.
While encoding, the CPU was running 50-70% idle (!!!), so it was using only 2-3 cores, and half the memory (32 GB) was unused. It certainly wasn't trying very hard. No other programs except Premiere, AME and the Activity Monitor were open during encoding, and I was just sitting there watching it lumber along.
The time to encode the 9:47 timeline was 21:28. That is 2.2x real time. My 4-year-old Windows machine using CS6 could do that. What's wrong? Should Premiere Pro be this slow on hardware this fast? Is there some secret box "Use all the cores and all allowed memory" that I have to check? (I set the preferences to allow Adobe to use 52 GB.) Does Premiere get tired in the evenings? The internal SSD runs at 900 MB/s and there was no transcoding to do. What was going on?
This is my first test of Premiere and it is kind of disappointing. Did I just waste 6,500 euros on the Mac Pro, or am I using the wrong settings? Any advice is welcome.
Thanks.
Yes, it would be great if someone could shed some light on this whole performance question with the new Mac Pro. I just did a test: 3 video tracks with 4K RED footage at 50% opacity each, and when I played the 2 min sequence, Premiere started to drop frames at half-resolution playback; when I changed to full-resolution playback the sequence was unplayable. I mean, they sold this Mac Pro/Premiere workflow as the way to edit 4K without compromises, and now we are not able to deliver jobs because of the poor performance... and I'm not even talking about the constant crashes and freezes; I'm talking about the fact that we can't play just 3 tracks without any effects.
It would be great if someone could shed some light on this. Maybe we are blaming Adobe when it's Apple's fault and we should go to the Apple Store to return the machine... I ran a test on the Mac and it looks OK; I'm more than confused about all this... -
Poor Performance using Generic Connectivity for ODBC
Hi my friends.
I have a problem using Generic Connectivity. I need to update 500,000 records in MS SQL Server 2005, and I'm using Generic Connectivity for ODBC. In my Oracle database I have created a DB_LINK called TEST that points to the MS SQL Server database.
Oracle Database: 10.2.0.4
MS SQL Server version: 2005
Updating 1,000 records in MS SQL Server using DBMS_HS_PASSTHROUGH takes ten minutes. This is poor performance.
This is the PL/SQL:
DECLARE
c INTEGER;
nr INTEGER;
CURSOR c_TEST
IS
SELECT "x", "y"
FROM TEST_cab
WHERE ROWNUM <= 1000
ORDER BY "y";
BEGIN
FOR cur IN c_TEST
LOOP
c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@TEST;
DBMS_HS_PASSTHROUGH.PARSE@TEST(
c,
'UPDATE sf_TEST_sql
SET observation= ?
WHERE department_id = ?
AND employee_id = ? ');
DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST (c, 1, 'S');
DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST (c, 2, 'N');
DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST (c, 3, 'ELABORADO');
DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST (c, 4, 'PENDIENTE');
DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST (c, 5, cur."x");
DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST (c, 6, cur."y");
nr := DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@TEST (c);
DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@TEST (c);
COMMIT;
END LOOP;
END;
Can you help me improve the performance of updating 500,000 records, record by record, in MS SQL Server? I would also like to know the advantages of using Oracle Transparent Gateway for Microsoft SQL Server.
Or can you suggest another solution?
Thanks.
Hi,
There are no real parameters to tune the gateways. You should turn on gateway debug tracing and check the SQL that is being sent to SQL*Server to make sure it is update statements and nothing else.
If this is the case then the time taken will be down to various factors, such as network delays, processing on SQL*Server and so on.
How long does it take to update the same number of records directly on SQL*Server, without Oracle or the gateway involved, or using another non-Oracle client to do the same updates across a network (if a network is involved)?
It may be possible to improve performance by changing the HS_RPC_FETCH_REBLOCKING parameter, so have a look at this note in My Oracle Support -
Tuning Generic Connectivity And Gateways (Doc ID 230543.1)
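If the statements themselves are confirmed, one more thing worth trying is to parse the pass-through cursor once and only bind/execute inside the loop, with a single commit at the end - a sketch only (it keeps just the three placeholders shown in your statement; your real statement presumably binds more values):

DECLARE
  c  INTEGER;
  nr INTEGER;
BEGIN
  c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@TEST;
  -- Parse once, outside the loop, instead of once per row.
  DBMS_HS_PASSTHROUGH.PARSE@TEST (c,
    'UPDATE sf_TEST_sql SET observation = ?
      WHERE department_id = ? AND employee_id = ?');
  FOR cur IN (SELECT "x", "y" FROM TEST_cab WHERE ROWNUM <= 1000 ORDER BY "y")
  LOOP
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST (c, 1, 'S');
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST (c, 2, cur."x");
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST (c, 3, cur."y");
    nr := DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@TEST (c);
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@TEST (c);
  COMMIT;
END;
/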
Regards,
Mike -
Poor performance on Discoverer after upgrade to 11g database
Hello,
We have two customers who have experienced a sharp increase in the time it takes to run their queries since upgrading to an 11g database.
One was a full Disco upgrade from 4 to 10 plus the database upgrade.
The other was purely a database upgrade - Discoverer version was already 10g.
Both were previously on a 9i database.
They are both Oracle Apps sites - and the reports are based on a mixture of everything from standard tables to custom tables - there is no pattern (or none that we have seen) in the poorly performing reports.
I have not seen much on MetaLink regarding this - has anyone else come across why this would be?
It does seem to be only Discoverer - standard reports and the app are performing as expected.
Any advice welcome,
Thanks
Rachael
Hi Rachael
There are additional database privileges needed for running against 10g and 11g databases that weren't needed on 9i. Here are the typical privileges that I use:
accept username prompt'Enter Username: '
accept pword prompt'Enter Password: '
create user &username identified by &pword;
grant connect, resource to &username;
grant analyze any to &username;
grant create procedure, create sequence to &username;
grant create session, create table, create view to &username;
grant execute any procedure to &username;
grant global query rewrite to &username;
grant create any materialized view to &username;
grant drop any materialized view to &username;
grant alter any materialized view to &username;
grant select any table, unlimited tablespace to &username;
grant execute on sys.dbms_job to &username;
grant select on sys.v_$parameter to &username;
I appreciate that all of the above are not needed for all users and some are only needed if the user is going to be scheduling.
However, I do know that sys.v_$parameter was not needed in 9i. Can you check whether you have this granted?
I know this might sound silly too, but you need to refresh your database statistics in order for Discoverer to work well. You might also want to try turning off the query predictor if it is still enabled.
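For example, something like this (the schema name is just a placeholder):

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APPS', cascade => TRUE);
END;
/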
Best wishes
Michael -
Poor performance scorecard Internet Explorer
When using Internet Explorer (8), the performance of the scorecard module (strategy map, KPIs, watch list) is VERY POOR.
Both when running as an end user and while developing scorecards, it is very time-consuming.
In contrast to Firefox, Internet Explorer is a factor of 10 slower.
For example, opening a strategy tree with 20 KPIs takes 1 second in Firefox.
The same action within IE takes 10 seconds.
Are there any options within IE (or OBIEE) to solve the poor performance issue?
Regards,
Marcel
Hi,
As far as I know, on a 64-bit OS the 32-bit IE is your default browser for compatibility reasons, not the 64-bit one. A difference in performance between the 32-bit and 64-bit versions of Internet Explorer is most often due to the add-ons enabled in the 32-bit version. Most add-ons aren't yet compatible with 64-bit Internet Explorer, so its list of enabled add-ons is much shorter.
If you are experiencing noticeably slower performance when using Internet Explorer 32-bit, try using Tools > Manage Add-ons to disable unnecessary add-ons. You can also temporarily disable all add-ons to see the performance difference between running with an add-on enabled and disabled.
For better support, we would like to confirm the detailed issue you found on IE 64.
Hope it helps.
Regards,
Blair Deng
TechNet Community Support -
Poor performance of WD Abap/ Adobe
Dear sirs,
I would like to know if anybody of you has experienced very poor performance of WD ABAP with Adobe interactive forms. Our client has paid for a 2-3 page interactive form in WDA and is complaining about very poor performance. As a result, no users are using the application.
Can anybody point out what the problem could be? A development problem? A Basis issue? Any experience related to WDA/Adobe performance? Thank you all, Otto
Update: an SAP OSS message was opened regarding this problem.
We got a list of patches to install, notes to apply, etc. All was done, applied, patched. The performance didn't get better - if anything it gained an extra percent or two, nothing that would make the customer less angry.
The result: this technology is promising but a) it needs a strong client PC and b) it will get better (I hope it gets better soon).
Our Basis team checked all the timings (of the actions that have to be done to load/use the app) and the memory needed both on the server and the client. On some client PCs just starting Adobe Reader took half a minute (and more, not less). If you add the time for WD, for the WD/Adobe communication and for the data transfer, the time to start working with a WD ABAP Adobe app can be more than a minute. That is not very usable.
Otto -
Hi.
I have a workflow that involves opening several raw files into Camera Raw via Bridge, adjusting the settings, then from Bridge I use the command to open files into Photoshop layers. While that is happening in Photoshop, I go back to Bridge and open the next several files in Camera Raw and make adjustments. When done, the first lot of files will be in layers in Photoshop ready for me to start layer blending.
This worked fine in CS5, but in CS6 it all goes wrong after a few batches of files. Camera Raw is slow to respond to adjustment of sliders and is slow to update to changes in settings. Then I find that Photoshop has aborted the loading of files into layers. I end up with one layer with an empty layer above it. I close this file, but when I quit Photoshop I'm prompted to save a file that I cannot see and is not listed as an open document under Photoshop>Windows.
This is a show-stopper for me, and have reverted to CS5 to get work done. Hoping to find a solution.
Running a MacPro, with 2 x 2.66 GHz Quad Core Intel Xeon processors, 24GB RAM, standard ATI Radeon HD 4870 card that came with the Mac.
A lot of the poor performance of CS6 is linked to the video card and driver. CS6 relies a lot more on the video card than CS5 and also has a new graphics engine. It may take a little longer for ATI and NVidia to get the drivers configured correctly. The latest 12.4 from ATI is worse than the 12.2. Perhaps the next update will get it right.
-
Shared nothing live migration over SMB. Poor performance
Hi,
I'm experiencing really poor performance when migrating VMs between newly installed Server 2012 R2 Hyper-V hosts.
Hardware:
Dell M620 blades
256Gb RAM
2*8C Intel E5-2680 CPUs
Samsung 840 Pro 512Gb SSD running in Raid1
6* Intel X520 10G NICs connected to Force10 MXL enclosure switches
The NICs are using the latest drivers from Intel (18.7) and firmware version 14.5.9
The OS installation is pretty clean. It's Windows Server 2012 R2 + Dell OMSA + Dell HIT 4.6.
I have removed the NIC teams and the vmSwitch/vNICs to simplify the troubleshooting. Now there is one NIC configured with one IP. RSS is enabled, no VMQ.
The graphs are from 4 tests.
Test 1 and 2 are nttcp-tests to establish that the network is working as expected.
Test 3 is a shared nothing live migration of a running VM over SMB.
Test 4 is a storage migration of the same VM when shut down. The VM is transferred using BITS over HTTP.
It's obvious that the network and NICs can push a lot of data. Test 2 had a throughput of 1130MB/s (9Gb/s) using 4 threads. The disks can handle a lot more than 74MB/s, as proven by test 4.
While the graph above doesn't show the CPU load, I have verified that no CPU core was even close to running at 100% during tests 3 and 4.
Any ideas?
Test | Config | Vmswitch | RSS | VMQ | Live Migration Config | Throughput (MB/s)
NTtcp | NTttcp.exe -r -m 1,*,172.17.20.45 -t 30 | No | Yes | No | N/A | 500
NTtcp | NTttcp.exe -r -m 4,*,172.17.20.45 -t 30 | No | Yes | No | N/A | 1130
Shared nothing live migration | Online VM, 8GB disk, 2GB RAM. Migrated from host 1 -> host 2 | No | Yes | No | Kerberos, use SMB, any available net | 74
Storage migration | Offline VM, 8GB disk. Migrated from host 1 -> host 2 | No | Yes | No | Unencrypted BITS transfer | 350
Hi Per Kjellkvist,
Please try to change the "advanced features"settings of "live migrations" in "Hyper-v settings" , select "Compression" in "performance options" area .
Then test 3 and 4 .
Best Regards
Elton Ji
Significantly Poorer Performance in AIR 2.0 vs 1.5.X
The company I work for has a demonstration Adobe AIR application that we distribute to customers. We had not tested it under AIR 2.0 until today, and after updating, the frame rate has degraded substantially from the 1.5.x releases. We tried recompiling with the latest SDKs, but had no luck improving the poor performance. We are now dreading our customers upgrading to AIR 2.0. We will of course try digging deeper and solving the problem, but I'd be happy to share the installer with anyone from the Adobe team if they would like to have a look at it, since AIR 2.0 was meant to be out-of-the-box faster.
I have the same problem with my application - ironically, dropping the frame rate of my application (from 60 fps to 30 fps) has the effect that it runs faster under AIR 2.0!
Weird.
I hope you guys at Adobe can fix this.
Regards, Mark. -
DOI - I_OI_SPREADSHEET, poor performance when reading more than 9999 records
Hi,
Please read this message in the ABAP Performance and Tuning section and see if you have any advice.
Best Regards,
Marjo
Hi,
I met this issue when I tried to write values to a massive number of fields in an Excel range.
I solved it by using CL_GUI_FRONTEND_SERVICES=>CLIPBOARD_EXPORT.
So, I think you may be able to fix it in the same way.
1. Select the range via I_OI_SPREADSHEET.
2. Call method I_OI_DOCUMENT_PROXY->COPY_SELECTION.
3. Call method CL_GUI_FRONTEND_SERVICES=>CLIPBOARD_IMPORT.
Cheers, -
Poor performance of web dynpro application
Hi,
I have developed a Web Dynpro application which fetches data from an R/3 system using a JCo connection. A large amount of data is transferred between R/3 and the application, because of which it takes too long to display the results.
After logging timestamps before and after the RFC execution code, I found that the RFC execution alone takes approx. 5 minutes, resulting in the poor performance. The time taken for the rest of the processing is negligible. Is there any way I can reduce the time for the RFC execution or the data transfer?
Thanks in advance,
Apurva
Hi Apurva,
I think you are displaying all the data at once in the front end, so it takes some time for rendering. Try to reduce the number of display elements (for example, for tables, display only 10 rows at a time).
regards
Fahad Hamsa