Cache / performance question...
Hello,
I've been running tests against an all-write load to try to understand limitations and BDB internals, and I'm wondering what causes BDB to begin 'forcing' pages from cache even though clean pages still exist, checkpoints run regularly, and trickling is in use. As an all-write load (in a testing environment) continues, performance degrades and cannot recover at roughly the time db_stat begins reporting forced writes of both clean and dirty pages. For example, a write load of 5000 txn/sec degrades to 1000/sec and below after ~500K transactions, and I start to see entries like the following in db_stat:
3219 Clean pages forced from the cache
7271 Dirty pages forced from the cache
183004 Dirty pages written by trickle-sync thread
Even when the cache has clean pages, as the database grows over time, I imagine that new keys may be written to "older" pages. This particular example uses DB_HASH. In this case, although the cache has a clean page, a given key "475000" may be written to an old page already on disk. This would, as far as I understand, require that page to be re-read into the cache to complete the write. However, monitoring vmstat does not show me any inbound IO activity whatsoever during the time that the performance begins to drop. I am therefore not certain that old pages are being read from disk, and confused as to why write performance begins to suffer so greatly when the cache fills up.
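(For illustration, the eviction behavior described above can be modeled with a toy LRU page cache. This is plain Java, not BDB's mpool code, and the names are purely illustrative: once the cache fills and every resident page is dirty, every eviction must force a write first, which is what the "dirty pages forced from the cache" counter records.)

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU page cache: evicting a clean page is free, evicting a dirty page
// forces a write first. Illustrative only -- this is not BDB's mpool.
class PageCacheModel {
    static class Page { boolean dirty; }

    final int capacity;
    int cleanForced = 0;   // analogous to "Clean pages forced from the cache"
    int dirtyForced = 0;   // analogous to "Dirty pages forced from the cache"
    final LinkedHashMap<Long, Page> cache;

    PageCacheModel(int cap) {
        capacity = cap;
        cache = new LinkedHashMap<Long, Page>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, Page> eldest) {
                if (size() <= capacity) return false;
                if (eldest.getValue().dirty) dirtyForced++; // must write it out before reuse
                else cleanForced++;
                return true;
            }
        };
    }

    void readPage(long pageNo) {           // fault a page in clean
        Page p = cache.get(pageNo);
        if (p == null) p = new Page();
        cache.put(pageNo, p);
    }

    void writePage(long pageNo) {          // touch a page and dirty it
        Page p = cache.get(pageNo);
        if (p == null) p = new Page();
        p.dirty = true;
        cache.put(pageNo, p);
    }
}
```

With a capacity of 4, faulting in four clean pages and then writing new ones first evicts the clean pages for free; once everything resident is dirty, each new page forces a dirty write, matching the db_stat pattern above.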
I'm trying this without DB_TXN_NOSYNC/etc. - my environment favors steady performance over bursty speed, so I want to avoid deferring work that might later trigger a large burst of writes.
I have sample source and testing results that I can attach, but I first want to check whether I'm missing something basic before I post too much information. Thanks! :)
Edited by: user10542315 on Jan 1, 2009 7:22 PM
Hello,
Although I'm not an engineer on the product, and I'm sure one will chime in to help you tune your tests, I thought I'd ask you if you've seen the following white paper http://www.oracle.com/technology/products/berkeley-db/pdf/berkeley-db-perf.pdf ? It's a bit dated, but it is trying to do some of the same things you are doing. We're focusing a lot of resources on performance given the radical shifts in the ratio of cores, hyper-threads, memory, disk-speed, and flash-storage devices. All of these have changed the underlying assumptions one makes when building a database engine ("Will we be memory, or CPU bound? How big a bottle-neck will I/O be? How can we avoid that? Can we max out all the cores/threads of all resources when running purely in memory?"). There are a lot of interesting areas to research and improve. We're doing some of that work in 4.8, more later. I can't be specific, but we're trying our best to keep DB at the bleeding edge of hardware capabilities.
-greg
Gregory Burd - Product Manager - Oracle Berkeley DB - [email protected]
Similar Messages
-
Simple performance question. In the simplest way possible, assume
I have an int[][][][][] matrix and a boolean add. The array is several dimensions deep.
When add is true, I must add a constant value to each element in the array.
When add is false, I must subtract a constant value from each element in the array.
Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios.
private void process() {
    for (int i = 0; i < dimension1; i++)
        for (int ii = 0; ii < dimension1; ii++)
            for (int iii = 0; iii < dimension1; iii++)
                for (int iiii = 0; iiii < dimension1; iiii++)
                    if (add)
                        matrix[i][ii][iii][...] += constant;
                    else
                        matrix[i][ii][iii][...] -= constant;
}
private void process() {
    if (add)
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        matrix[i][ii][iii][...] += constant;
    else
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        matrix[i][ii][iii][...] -= constant;
}
Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case n^d conditions are checked, whereas in the second, only one. It is, however, less elegant, but I am willing to do it for a significant improvement.
erjoalgo wrote:
I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro optimization?
Almost certainly not; the main reason being that
matrix[i][ii][iii][...] +/-= constant
is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
but I will follow amickr's advice and not worry about it.
Good idea. Saves you getting flamed with all the quotes about premature optimization.
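For what it's worth, this is easy to measure rather than guess at. Below is a rough, self-contained harness for the two variants (using a 4-D array to match the four loops shown; the size is arbitrary). Microbenchmarks like this are easily distorted by JIT warm-up and dead-code elimination, so a framework such as JMH is the safer tool; treat the numbers as indicative only.

```java
// Rough harness for the two variants above: branch inside the loops vs.
// branch hoisted outside them. Crude warm-up; JMH would do this properly.
class BranchHoistBench {
    static final int N = 32;                    // arbitrary size for the demo
    static int[][][][] matrix = new int[N][N][N][N];

    static void processBranchInside(boolean add, int constant) {
        for (int i = 0; i < N; i++)
            for (int ii = 0; ii < N; ii++)
                for (int iii = 0; iii < N; iii++)
                    for (int iiii = 0; iiii < N; iiii++)
                        if (add) matrix[i][ii][iii][iiii] += constant;
                        else     matrix[i][ii][iii][iiii] -= constant;
    }

    static void processBranchHoisted(boolean add, int constant) {
        if (add) {
            for (int i = 0; i < N; i++)
                for (int ii = 0; ii < N; ii++)
                    for (int iii = 0; iii < N; iii++)
                        for (int iiii = 0; iiii < N; iiii++)
                            matrix[i][ii][iii][iiii] += constant;
        } else {
            for (int i = 0; i < N; i++)
                for (int ii = 0; ii < N; ii++)
                    for (int iii = 0; iii < N; iii++)
                        for (int iiii = 0; iiii < N; iiii++)
                            matrix[i][ii][iii][iiii] -= constant;
        }
    }

    public static void main(String[] args) {
        for (int w = 0; w < 5; w++) {           // crude JIT warm-up
            processBranchInside(true, 1);
            processBranchHoisted(false, 1);
        }
        long t0 = System.nanoTime();
        processBranchInside(true, 1);
        long t1 = System.nanoTime();
        processBranchHoisted(false, 1);
        long t2 = System.nanoTime();
        System.out.println("branch inside loops: " + (t1 - t0) / 1000 + " us");
        System.out.println("branch hoisted:      " + (t2 - t1) / 1000 + " us");
    }
}
```

In practice HotSpot can often hoist a loop-invariant branch itself (loop unswitching), which is one more reason the replies suggest not worrying about it.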
Winston -
Guys,
I do understand that ccBPM is very resource hungry, but what I was wondering is this:
Once you use BPM, does an extra step decrease the performance significantly? Or does it just need slightly more resources?
More specifically, we have quite complex mappings in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from the performance point of view?
Your opinion is appreciated.
Thanks a lot,
Viktor Varga
Hi,
In SXMB_ADM you can set the timeout higher for sync processing.
Go to Integration Processing in SXMB_ADM and set the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more waiting now: SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
Make sure that your calling system does not have a timeout below what you set in XI; otherwise XI will carry on and finish while your partner times out and may end up sending the message twice.
When you go for BPM, the whole workflow has to come into action. For example, when your mapping lasts < 1 sec without BPM, in a BPM the transformation step can last 2 seconds plus one second for the mapping (that's just an example). So the workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than the same scenario without BPM.
See the links below:
http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
BPM Performance tuning
BPM Performance issue
BPM performance question
BPM performance- data aggregation persistance
Regards
Chilla.. -
SCM liveCache performance tuning
I got a new project where I have to work on SCM liveCache, performance tuning, and integrating ECC with SCM.
If you have any documents, can you please help me?
Thanks,
Try these
https://websmp103.sap-ag.de/~sapidb/011000358700000567062006E
https://websmp103.sap-ag.de/~sapidb/011000358700002213412003E
https://websmp103.sap-ag.de/~sapidb/011000358700000715082008E
https://websmp103.sap-ag.de/~sapidb/011000358700007382642002E
https://websmp103.sap-ag.de/~sapidb/011000358700008748092002E
Rgds,
DB49 -
Swing performance question: CPU-bound
Hi,
I've posted a Swing performance question to the java.net performance forum. Since it is a Swing performance question, I thought readers of this forum might also be interested.
Swing CPU-bound in sun.awt.windows.WToolkit.eventLoop
http://forums.java.net/jive/thread.jspa?threadID=1636&tstart=0
Thanks,
Curt
You obviously don't understand the results, and the first reply to your posting on java.net clearly explains what you missed.
The event queue is using Thread.wait to sleep until it gets more events to dispatch. You have incorrectly diagnosed the sleep waiting as your performance bottleneck. -
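(The point about the Swing event queue sleeping in Thread.wait is easy to demonstrate outside Swing: a thread blocked in Object.wait() is parked by the JVM and consumes no CPU, so a profiler showing the event loop there just means the queue is idle. A minimal illustration, unrelated to the Swing internals themselves:)

```java
// A thread blocked in Object.wait() sits in the WAITING state: it is parked
// by the scheduler and burns no CPU until another thread calls notify().
class WaitIsNotBusy {
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait();                 // sleeps here, using no CPU
                } catch (InterruptedException ignored) { }
            }
        });
        waiter.start();
        Thread.sleep(200);                       // give the thread time to block
        System.out.println(waiter.getState());   // WAITING, not RUNNABLE

        synchronized (lock) { lock.notify(); }   // wake it up
        waiter.join();
    }
}
```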
Xcontrol: performance question (again)
Hello,
I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans/second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I also have only 0 to 1% CPU load.
Is there a way to reduce the cpu-load when using xcontrols?
If there isn't and if this is not a problem with my installation but a known issue, I think this would be a potential point for NI to fix in a future update of LV.
Regards,
soranito
Message Edited by soranito on 04-04-2010 08:16 PM
Message Edited by soranito on 04-04-2010 08:18 PM
Attachments:
XControl_performance_test.zip 60 KB
soranito wrote:
Hello,
I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans/second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I also have only 0 to 1% CPU load.
Okay, I think I understand the question now. You want to know why an equivalent XControl boolean consumes 10x more CPU resource than the base LabVIEW boolean?
Okay, try opening the project I posted in my reply yesterday. I don't have access to LabVIEW at my desk, so let's try this. Open up your XControl facade.vi. Notice how I separated your data event into two events? Go to the data change event: when looping back the action, set isDataChanged (part of the data change cluster) to FALSE; for the data input (the one displayed on your facade.vi front panel), set isDataChanged to TRUE. This will limit the number of times the facade loops. It will not drop your CPU from 10% to 0%, but it should drop a little, enough to give you a short-term solution. If that doesn't work, just play around with the loopback statement; I can't remember the exact method.
Yeah, I agree an XControl shouldn't over-consume system resources. I think XControl is still in a primitive form, and I'm not sure if NI is planning to invest more time in bug fixes or enhancements. IMO, XControl isn't quite ready for primetime yet; too many issues need improvement.
Message Edited by lavalava on 04-06-2010 03:34 PM -
Performance question - Caching data of a big table
Hi All,
I have a general question about caching, I am using an Oracle 11g R2 database.
I have a big table, about 50 million rows, that is accessed very often by my application. Some queries run slow and some are OK. But (obviously) when the data of this table is already in the cache (basically when a user requests the same thing twice or more), it runs very quickly.
Does somebody have any recommendations about caching the data of a table this size?
Many thanks.
Chiwatel wrote:
With better formatting (I hope), sorry I am not used to the new forum !
Plan hash value: 2501344126
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| Pstart| Pstop | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | | 7232 (100)| | | 68539 |00:14:20.06 | 212K| 87545 | | | |
| 1 | SORT ORDER BY | | 1 | 7107 | 624K| 7232 (1)| | | 68539 |00:14:20.06 | 212K| 87545 | 3242K| 792K| 2881K (0)|
| 2 | NESTED LOOPS | | 1 | | | | | | 68539 |00:14:19.26 | 212K| 87545 | | | |
| 3 | NESTED LOOPS | | 1 | 7107 | 624K| 7230 (1)| | | 70492 |00:07:09.08 | 141K| 43779 | | | |
|* 4 | INDEX RANGE SCAN | CM_MAINT_PK_ID | 1 | 7107 | 284K| 59 (0)| | | 70492 |00:00:04.90 | 496 | 453 | | | |
| 5 | PARTITION RANGE ITERATOR | | 70492 | 1 | | 1 (0)| KEY | KEY | 70492 |00:07:03.32 | 141K| 43326 | | | |
|* 6 | INDEX UNIQUE SCAN | D1T400P0 | 70492 | 1 | | 1 (0)| KEY | KEY | 70492 |00:07:01.71 | 141K| 43326 | | | |
|* 7 | TABLE ACCESS BY GLOBAL INDEX ROWID| D1_DVC_EVT | 70492 | 1 | 49 | 2 (0)| ROWID | ROWID | 68539 |00:07:09.17 | 70656 | 43766 | | | |
Predicate Information (identified by operation id):
4 - access("ERO"."MAINT_OBJ_CD"='D1-DEVICE' AND "ERO"."PK_VALUE1"='461089508922')
6 - access("ERO"."DVC_EVT_ID"="E"."DVC_EVT_ID")
7 - filter(("E"."DVC_EVT_TYPE_CD"='END-GSMLOWLEVEL-EXCP-SEV-1' OR "E"."DVC_EVT_TYPE_CD"='STR-GSMLOWLEVEL-EXCP-SEV-1'))
Your user has executed a query to return 68,000 rows - what type of user is it, a human being cannot possibly cope with that much data and it's not entirely surprising that it might take quite some time to return it.
One thing I'd check is whether you're always getting the same execution plan - Oracle's estimates here are out by a factor of about 95 (7,100 rows predicted vs. 68,500 returned) perhaps some of your variation in timing relates to plan changes.
If you check the figures you'll see about half your time came from probing the unique index, and half came from visiting the table. In general it's hard to beat Oracle's caching algorithms, but indexes are often much smaller than the tables they cover, so it's possible that your best strategy is to protect this index at the cost of the table. Rather than trying to create a KEEP cache the index, though, you MIGHT find that you get some benefit from creating a RECYCLE cache for the table, using a small percentage of the available memory - the target is to fix things so that table blocks you won't revisit don't push index blocks you will revisit from memory.
Another detail to consider is that if you are visiting the index and table completely randomly (for 68,500 locations), it's possible that you end up re-reading blocks several times in the course of the visit. If you order the intermediate result set from the driving table first, you may find that you're walking the index and table in order and don't have to re-read any blocks. This is something only you can know, though. The code would have to change to include an inline view with a no_merge and no_eliminate_oby hint.
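(To make the two suggestions concrete, here is a hedged sketch. Object names are taken from the plan above, except `cm_maint`, which stands in for whatever table CM_MAINT_PK_ID indexes; the KEEP/RECYCLE pool sizes are configured separately via db_keep_cache_size and db_recycle_cache_size; verify the exact syntax against your version's documentation.)

```sql
-- Protect the hot index at the expense of the big table:
ALTER INDEX d1t400p0   STORAGE (BUFFER_POOL KEEP);
ALTER TABLE d1_dvc_evt STORAGE (BUFFER_POOL RECYCLE);

-- Walk the index/table in order by sorting the driving rowids first.
-- no_merge stops the view being merged away; no_eliminate_oby keeps the ORDER BY.
SELECT /*+ no_merge(v) */ e.*
FROM  (SELECT /*+ no_eliminate_oby */ ero.dvc_evt_id
       FROM   cm_maint ero
       WHERE  ero.maint_obj_cd = 'D1-DEVICE'
       AND    ero.pk_value1    = '461089508922'
       ORDER  BY ero.dvc_evt_id) v
JOIN  d1_dvc_evt e
  ON  e.dvc_evt_id = v.dvc_evt_id;
```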
Regards
Jonathan Lewis -
Use of bind variables with the Oracle db - to improve library cache performance
Dear Friends,
We are using an Oracle 9.0.1.1.1 db server. The performance of the db was pathetic, and upon investigation it was revealed that the library cache was overloaded with SQL hard parses generated by not using bind variables. We are using VB as a front end with Oracle, and our connection object in VB is created using OLE DB for Oracle provided by Oracle (installed from the Oracle Client custom - programmer option).
I would appreciate it if anybody could tell me how we can use bind variables in VB to connect to Oracle such that the hard parses become soft parses.
Your effort to bring some peace to my life would be commendable, and I would be very obliged for your time and help.
Thanks a lot.
Bye
Take care.
qj
Generally, you would use bind variables by changing statements that are written like this
select * from emp where empno=6678 and ename='Jones'
so that they're written like
select * from emp where empno=? and ename=?
How you then bind these question marks to the particular values you want depends on the API you're using (ADO? OLE DB directly?, etc). If you have a support contract, there are plenty of examples for any API on metalink.oracle.com
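(I can't speak to the exact VB/OLE DB binding syntax, but the reason this turns hard parses into soft parses is that the shared SQL area is keyed by the statement text. A toy model makes the effect visible; this is Java and purely illustrative, not Oracle's actual mechanism.)

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a library cache keyed by SQL text. Literal SQL yields a new
// text per value (a hard parse each time); bind-variable SQL reuses one
// cached statement (one hard parse, then cache hits).
class LibraryCacheDemo {
    final Map<String, Object> sharedSql = new HashMap<>();
    int hardParses = 0;

    void execute(String sqlText) {
        sharedSql.computeIfAbsent(sqlText, s -> {
            hardParses++;                  // statement not cached: hard parse
            return new Object();           // stand-in for the parsed plan
        });
    }

    public static void main(String[] args) {
        LibraryCacheDemo literals = new LibraryCacheDemo();
        for (int empno = 1; empno <= 100; empno++)
            literals.execute("select * from emp where empno=" + empno);
        System.out.println("literal SQL, hard parses: " + literals.hardParses); // 100

        LibraryCacheDemo binds = new LibraryCacheDemo();
        for (int empno = 1; empno <= 100; empno++)
            binds.execute("select * from emp where empno=?");  // value sent separately
        System.out.println("bind SQL, hard parses: " + binds.hardParses);       // 1
    }
}
```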
Justin -
STATSPACK Performance Question / Discrepancy
I'm trying to troubleshoot a performance issue and I'm having trouble interpreting the STATSPACK report. It seems like the STATSPACK report is missing information that I expect to be there. I'll explain below.
Header
STATSPACK report for
Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
2636235846 testdb 1 30-Jan-11 16:10 11.2.0.2.0 NO
Host Name Platform CPUs Cores Sockets Memory (G)
~~~~ ---------------- ---------------------- ----- ----- ------- ------------
TEST Microsoft Windows IA ( 4 2 0 3.4
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- ------------------
Begin Snap: 3427 01-Feb-11 06:40:00 65 4.4
End Snap: 3428 01-Feb-11 07:00:00 66 4.1
Elapsed: 20.00 (mins) Av Act Sess: 7.3
DB time: 146.39 (mins) DB CPU: 8.27 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 192M 176M Std Block Size: 8K
Shared Pool: 396M 412M Log Buffer: 10,848K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ ------------------ ----------------- ----------- -----------
DB time(s): 7.3 2.0 0.06 0.04
DB CPU(s): 0.4 0.1 0.00 0.00
Redo size: 6,366.0 1,722.1
Logical reads: 1,114.6 301.5
Block changes: 35.8 9.7
Physical reads: 44.9 12.1
Physical writes: 1.5 0.4
User calls: 192.2 52.0
Parses: 101.5 27.5
Hard parses: 3.6 1.0
W/A MB processed: 0.1 0.0
Logons: 0.1 0.0
Executes: 115.1 31.1
Rollbacks: 0.0 0.0
Transactions: 3.7
As you can see, a significant amount of time was spent in database calls (DB time) with relatively little time on CPU (DB CPU). Initially that made me think there were some significant wait events.
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
log file sequential read 48,166 681 14 7.9
CPU time 484 5.6
db file sequential read 35,357 205 6 2.4
control file sequential read 50,747 23 0 .3
Disk file operations I/O 16,518 18 1 .2
-------------------------------------------------------------
However, looking at the Top 5 Timed Events I don't see anything out of the ordinary given my normal operations. The log file sequential read may be a little slow, but it doesn't make up a significant portion of the execution time.
Based on an Excel/VB spreadsheet I wrote, which converts STATSPACK data to graphical form, I suspected that there was a wait event not listed here. So I decided to query the data directly. Here is the query and result.
SQL> SELECT wait_class
2 , event
3 , delta/POWER(10,6) AS delta_sec
4 FROM
5 (
6 SELECT syev.snap_id
7 , evna.wait_class
8 , syev.event
9 , syev.time_waited_micro
10 , syev.time_waited_micro - LAG(syev.time_waited_micro) OVER (PARTITION BY syev.event ORDER BY syev.snap_id) AS delta
11 FROM perfstat.stats$system_event syev
12 JOIN v$event_name evna ON evna.name = syev.event
13 WHERE syev.snap_id IN (3427,3428)
14 )
15 WHERE delta > 0
16 ORDER BY delta DESC
17 ;
WAIT_CLASS EVENT DELTA_SEC
Idle SQL*Net message from client 21169.742
Idle rdbms ipc message 19708.390
Application enq: TM - contention 7199.819
Idle Space Manager: slave idle wait 3001.719
Idle DIAG idle wait 2382.943
Idle jobq slave wait 1258.829
Idle smon timer 1220.902
Idle Streams AQ: qmn coordinator idle wait 1204.648
Idle Streams AQ: qmn slave idle wait 1204.637
Idle pmon timer 1197.898
Idle Streams AQ: waiting for messages in the queue 1197.484
Idle Streams AQ: waiting for time management or cleanup tasks 791.803
System I/O log file sequential read 681.444
User I/O db file sequential read 204.721
System I/O control file sequential read 23.168
User I/O Disk file operations I/O 17.737
User I/O db file parallel read 14.536
System I/O log file parallel write 7.618
Commit log file sync 7.150
User I/O db file scattered read 3.488
Idle SGA: MMAN sleep for component shrink 2.461
User I/O direct path read 1.621
Other process diagnostic dump 1.418
... snip ...
So based on the above, it looks like a significant amount of time was spent on enq: TM - contention.
Question 1
Why does this wait event not show up in the Top 5 Timed Events section? Note that this wait event is also not listed in any of the other wait events sections either.
Moving on, I decided to look at the Time Model Statistics
Time Model System Stats DB/Inst: testdb /testdb Snaps: 3427-3428
-> Ordered by % of DB time desc, Statistic name
Statistic Time (s) % DB time
sql execute elapsed time 8,731.0 99.4
PL/SQL execution elapsed time 1,201.1 13.7
DB CPU 496.3 5.7
parse time elapsed 26.4 .3
hard parse elapsed time 21.1 .2
PL/SQL compilation elapsed time 2.8 .0
connection management call elapsed 0.6 .0
hard parse (bind mismatch) elapsed 0.5 .0
hard parse (sharing criteria) elaps 0.5 .0
failed parse elapsed time 0.0 .0
repeated bind elapsed time 0.0 .0
sequence load elapsed time 0.0 .0
DB time 8,783.2
background elapsed time 87.1
background cpu time 2.4
Great, so it looks like I spent >99% of DB time in SQL calls. I decided to scroll to the SQL ordered by Elapsed time section. The header information surprised me.
SQL ordered by Elapsed time for DB: testdb Instance: testdb Snaps: 3427 -3
-> Total DB Time (s): 8,783
-> Captured SQL accounts for 4.1% of Total DB Time
-> SQL reported below exceeded 1.0% of Total DB Time
If I'm spending >99% of my time in SQL, I would have expected the captured % to be higher.
Question 2
Am I correct in assuming that a long running SQL that started before the first snap and is still running at the end of the second snap would not display in this section?
Question 3
Would that answer my wait event question above? That is, are wait events not reported until the action that is waiting (execution of a SQL statement, for example) is complete?
So I looked a few snaps past what I have posted here. I still haven't determined why the enq: TM - contention wait is not displayed anywhere in the STATSPACK reports. I did end up finding an interesting PL/SQL block that may have been causing the issues. Here is the SQL ordered by Elapsed time for a snapshot that was taken an hour after the one I posted.
SQL ordered by Elapsed time for DB: testdb Instance: testdb Snaps: 3431 -3
-> Total DB Time (s): 1,088
-> Captured SQL accounts for ######% of Total DB Time
-> SQL reported below exceeded 1.0% of Total DB Time
Elapsed Elap per CPU Old
Time (s) Executions Exec (s) %Total Time (s) Physical Reads Hash Value
26492.65 29 913.54 ###### 1539.34 480 1013630726
Module: OEM.CacheModeWaitPool
BEGIN EMDW_LOG.set_context(MGMT_JOB_ENGINE.MODULE_NAME, :1); BEG
IN MGMT_JOB_ENGINE.process_wait_step(:2);END; EMDW_LOG.set_conte
xt; END;
I'm still not sure if this is the problem child or not.
I just wanted to post this to get your thoughts on how I correctly/incorrectly attacked this problem and to see if you can fill in any gaps in my understanding.
Thanks!
Centinul wrote:
I'm still not sure if this is the problem child or not.
I just wanted to post this to get your thoughts on how I correctly/incorrectly attacked this problem and to see if you can fill in any gaps in my understanding.
I think you've attacked the problem well.
It has prompted me to take a little look at what's going on, running 11.1.0.6 in my case, and something IS broken.
The key predicate in statspack for reporting top 5 is:
and e.total_waits > nvl(b.total_waits,0)
In other words, an event gets reported if total_waits increased across the period.
So I've been taking snapshots of v$system_event and looking at 10046 trace files at level 8. The basic test was as simple as:
<ul>
Session 1: lock table t1 in exclusive mode
Session 2: lock table t1 in exclusive mode
</ul>
About three seconds after session 2 started to wait, v$system_event incremented total_waits (for the "enq: TM - contention" event). When I committed in session 1 the total_waits figure did not change.
Now do this after waiting across a snapshot:
We start to wait, after three seconds we record a wait, a few minutes later perfstat takes a snapshot.
30 minutes later "session 1" commits and our wait ends, but we do not increment total_waits, but we record 30+ minutes wait time.
30 minutes later perfstat takes another snapshot
The total_waits has not changed between the start and end snapshot even though we have added 30 minutes to the "enq: TM - contention" in the interim.
The statspack report loses our 30 minutes from the Top N.
It's a bug - raise an SR.
Edit: The AWR will have the same problem, of course.
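(The filtering described above can be sketched in a few lines; this is illustrative Java, not the statspack code itself:)

```java
// Sketch of the statspack Top-5 filter: an event is shown only if total_waits
// increased between the begin and end snapshots. A wait that starts before the
// first snapshot and ends after it adds wait *time* but no new wait *count*,
// so its 30 minutes vanish from the Top N.
class TopNPredicateDemo {
    static class Snapshot {
        final long totalWaits, timeWaitedMs;
        Snapshot(long totalWaits, long timeWaitedMs) {
            this.totalWaits = totalWaits;
            this.timeWaitedMs = timeWaitedMs;
        }
    }

    // Mirrors "e.total_waits > nvl(b.total_waits, 0)"
    static boolean reportedInTopN(Snapshot begin, Snapshot end) {
        return end.totalWaits > begin.totalWaits;
    }

    public static void main(String[] args) {
        // total_waits was bumped ~3s into the wait, before snapshot 1;
        // 30 minutes of wait time then accrue between snapshots 1 and 2.
        Snapshot begin = new Snapshot(101, 5_000);
        Snapshot end   = new Snapshot(101, 5_000 + 30L * 60 * 1000);
        System.out.println(reportedInTopN(begin, end));   // false -- the time is lost
    }
}
```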
Regards
Jonathan Lewis
Edited by: Jonathan Lewis on Feb 1, 2011 7:07 PM -
Hey,
I'm new to AIR mobile development, although I have around a decade's experience with Flash; with AIR for mobile I'm a bit lost. So I was hoping someone can answer these questions for me...
1. Is there currently no way to use a phone's vibration with AIR? I remember seeing people asking this on these forums last year, and it still looks as if this extremely simple thing cannot be done? And if it can't, are there plans to have it in AIR 2.7?
2. When I select GPU acceleration, does it cache all bitmaps automatically? Or do I still have to manually select "Cache as Bitmap" for everything I want cached? (I'm using Flash Pro CS5.5.) It seems to have no effect on frame rates whatever I do here as long as GPU render mode is selected.
3. Would I get better performance using vectors or bitmaps for graphics?
4. What's the texture size limit for Android and iOS before it can no longer be cached?
5. Does it matter performance-wise if JPGs and PNGs are used for graphics instead of bitmaps (when I've not selected Cache as Bitmap)?
6. What are the differences between AIR for Android and AIR for iOS? I've noticed that Android supports use of the camera while iOS doesn't. Is there anything else?
7. How can I embed an HD video in an AIR for Android app? I've seen this thread: http://forums.adobe.com/thread/849395?tstart=0
but it's for AIR on iOS. I wouldn't think it's any different for Android, but I'm getting a blank screen when I use the code from that thread (and I'm testing on an Android phone too, not in Flash > Test Movie).
Or if anyone has ANY tips for performance at all, then I'd love to hear them.
Hello,
I think that Colin may have answered a lot of your questions but here it is:
1. There is currently no way to make the phone vibrate from within AIR. You can read this post on how to create an Android/AIR hybrid application, which gives you extension access to some native functionality. Warning: it is somewhat advanced and not supported by Adobe:
http://www.jamesward.com/2011/05/11/extending-air-for-android/?replytocom=163323
You can expect to have this functionality in AIR at some point in the future, perhaps not in 2.7
2. in AIR for Android, there is no need to set GPU if you are only using bitmaps. Cache As Bitmap is only needed for vector art.
3. Bitmaps would be better but performance is not the only consideration. Remember that memory is limited so bitmaps can use it all so vectors may be a good option if you have a lot of art. A great advantage of using AIR is that you have the option between the two. My recommendation would be that if your art is going to be animated, create it as bitmap or use cacheAsBitmap if it only moves on x and y axis and use CacheAsBitmapMatrix if it transforms in other way (rotation, size).
4. The limit on Android was 1,024×1,024 last time I checked.
5. JPGs are smaller in size but PNGs look better. A side note, because the PPI is so high on devices, everything looks great so you can get away with lesser quality assets. Test all options.
6. The differences between the two platforms are minimal. Camera first, but Android also lets you access the media library (where all the images taken by the device are stored) where iOS does not. In general Android is more open.
7. Video is a very large topic. If you embed the video, you will have a black screen until it is fully loaded. I should say that I did a project with a 1 GB movie and it loaded right away. I have not used StageWebView to display the movie. Keep in mind that the way the video is encoded matters a great deal (use the baseline profile).
Here is some sample code:
import flash.net.NetConnection;
import flash.net.NetStream;
import flash.media.Video;
import flash.events.NetStatusEvent;
var connection:NetConnection;
var video:Video;
video = new Video();
video.width = 480;
video.height = 320;
connection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, netConnectionEvent);
connection.connect(null);
function netConnectionEvent(event:NetStatusEvent):void {
    event.target.removeEventListener(NetStatusEvent.NET_STATUS, netConnectionEvent);
    if (event.info.code == "NetConnection.Connect.Success") {
        var stream:NetStream = new NetStream(connection);
        stream.addEventListener(NetStatusEvent.NET_STATUS, netStreamEvent);
        var client:Object = new Object();
        client.onMetaData = onMetaData;
        stream.client = client;
        // attach the stream to the video to display
        video.attachNetStream(stream);
        stream.play("someVideo.flv");
        addChild(video);
    }
}
function netStreamEvent(event:NetStatusEvent):void {} // stub handler; not shown in the original post
function onMetaData(info:Object):void {}
I would recommend looking at my book. I cover a lot of these topics in detail (and went through the same hurdles as you):
http://oreilly.com/catalog/0636920013884
(you can get it on Amazon too). -
Distributed cache performance?
Hi,
I have a question about the performance of a cluster using a distributed cache:
A distributed cache is available in the the cluster, using the expiry-delay functionality. Each node first inserts new entries in the cache and then periodically updates the entries as long as the entry is needed in the cluster (entries that are no longer periodically updated will be removed due to the expiry-delay).
I performed a small test using a cluster with two nodes that each inserted ~2000 entries in the distributed cache. The nodes then periodically update their entries at 5-minute intervals (using the Map.put(key, value) method). The nodes never access the same entries, so there will be no synchronization issues.
The problem is that the CPU load on the machines running the nodes are very high, ~70% (and this is quite powerful machines with 4 CPUs running Linux). To be able to find the reason for the high CPU load, I used a profiler tool on the application running on one of the nodes. It showed that the application spent ~70% of the time in com.tangosol.coherence.component.net.socket.UdpSocket.receive. Is this normal?
Since each node has a lot of other things to do, it is not acceptable that 70% of the CPU is used only for this purpose. Can this be a cache configuration issue, or do I have to find some other approach to perform this task?
Regards
Andreas
Hi Andreas,
Can you provide us with some additional information. You can e-mail it to our support account.
- JProfiler snapshot of the profiling showing high CPU utilization
- multiple full thread dumps for the process taken a few seconds apart, these should be taken when running outside of the profiler
- Your override file (tangosol-coherence-override.xml)
- Your cache configuration file (coherence-cache-config.xml)
- logs from the high CPU event, please also include -verbose:gc in the logs, directing the output to the coherence log file
- estimates on the sizes of the objects being updated in the cache
As this is occurring even when you are not actively adding data to the cache, can you describe what else your application is doing at this time. It would be extremely odd for Coherence to consume any noticeable amount of CPU if you are not making heavy use of the cache.
Note that when using the Map.put method the old value is returned to the caller, which for a distributed cache means extra network load, you may wish to consider switching to Map.putAll() as this does not need to return the old value, and is more efficient even if you are only operating on a single entry.
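(Mark's put-vs-putAll point can be shown with plain Map code, since Coherence's NamedCache implements java.util.Map; this is a sketch, and the network-cost difference only materializes on a real distributed cache, not the HashMap stand-in used here.)

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// put() must return the previous value, which on a distributed cache means an
// extra network hop to fetch it; putAll() returns nothing, so a single-entry
// putAll avoids that cost. Works on any Map, including Coherence's NamedCache.
class PutAllDemo {
    static <K, V> void putWithoutReturn(Map<K, V> cache, K key, V value) {
        cache.putAll(Collections.singletonMap(key, value));
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = new HashMap<>(); // stand-in for a NamedCache
        putWithoutReturn(cache, "entry-1", 42);
        System.out.println(cache.get("entry-1"));     // 42
    }
}
```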
thanks,
Mark -
Hello Experts,
I am new to liveCache and APO and am trying to understand the backup process, because I need to perform a complete SCM 7.0 system copy. I have done system copies in the past and am well aware of the procedures.
Here it goes: presently we are on SCM 7.0.
Question: I see our liveCache is installed on one machine and APO on another. I am not aware of how liveCache was integrated with APO.
I am trying to understand how to take a backup of liveCache and then restore it on the target system.
In LC10
LCA --> Problem Analysis --> Logs --> DBA History
I can see that the liveCache backups are completing successfully every day.
I see the Backup Label as DAT_000000<xxx> and the Backup Template as bac214<xxx>.
Where can I see where this Backup Template is defined (the path)?
Can someone provide some insight regarding liveCache backups?
I have checked SAP Note 457425.
--> After the system copy, how would I integrate liveCache and APO?
Thanks in Advance.
Medha

Hello Medha,
1.
a)
Please review all recommended steps in the SAP note:
886103 System Landscape Copy for SAP SCM
when you are planning to do the SCM system copy.
For SAP liveCache documentation please see the SAP note 767598.
b)
SAP link:
SAP liveCache technology
-> Best Practices for Solution Management: mySAP SCM
review:
"Checklist for Recovery of SAP APO liveCache >= 7.4"
< It will be helpful to practice the steps of the document on the test liveCache instance first >
2.
Sessions: http://maxdb.sap.com/training/
=>
Session 2: Basic Administration with Database Studio
Session 3: CCMS Integration into the SAP System
Session 11: SAP MaxDB Backup and Recovery
Those sessions will help you. If you still have some questions after that => update the thread.
3.
Take classes on MaxDB database administration, or get SAP consulting support.
Regards, Natalia Khlopina -
Coherence performance question
Hi,
I have 3-4 JMS consumers that put entries in parallel into the same cache. In parallel there are twice as many threads that delete entries (no gets for now). While investigating the overall performance of the application I frequently spot the threads below. Can somebody comment on whether this is normal and whether it is causing any contention or synchronisation? Running on Solaris with 8 Intel cores. The cache is distributed write-through; the cache store is just a mock.
"jmsContainer-1" prio=10 tid=0x0968dad0 nid=0x61 in Object.wait() [0x9fb23000..0x9fb23c38]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:474)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.waitPolls(DistributedCache.CDB:17)
- locked <0xf0fd3c18> (a java.util.HashSet)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.putAll(DistributedCache.CDB:84)
at com.tangosol.util.ConverterCollections$ConverterMap.putAll(ConverterCollections.java:1344)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap.putAll(DistributedCache.CDB:1)
at com.tangosol.coherence.component.util.SafeNamedCache.putAll(SafeNamedCache.CDB:1)
at com.foo.util.jobbus.impl.JobBusCoherenceImpl.addTail(JobBusCoherenceImpl.java:135)
at com.foo.bar.input.MDP_NoEsper.process(MDP_NoEsper.java:129)
at com.foo.bar.input.MDP_NoEsper.onMessage(MDP_NoEsper.java:71)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:531)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:466)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:435)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:316)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:255)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:927)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:851)
at java.lang.Thread.run(Thread.java:595)
"jmsContainer-1" prio=10 tid=0x0b5265a0 nid=0x60 in Object.wait() [0x9fb64000..0x9fb64db8]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:474)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.waitPolls(DistributedCache.CDB:17)
- locked <0xf106f538> (a java.util.HashSet)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.putAll(DistributedCache.CDB:84)
at com.tangosol.util.ConverterCollections$ConverterMap.putAll(ConverterCollections.java:1344)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap.putAll(DistributedCache.CDB:1)
at com.tangosol.coherence.component.util.SafeNamedCache.putAll(SafeNamedCache.CDB:1)
at com.foo.util.jobbus.impl.JobBusCoherenceImpl.addTail(JobBusCoherenceImpl.java:135)
at com.foo.bar.input.MDP_NoEsper.process(MDP_NoEsper.java:129)
at com.foo.bar.input.MDP_NoEsper.onMessage(MDP_NoEsper.java:71)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:531)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:466)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:435)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:316)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:255)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:927)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:851)
at java.lang.Thread.run(Thread.java:595)
"jmsContainer-1" prio=10 tid=0x08857c68 nid=0x5f in Object.wait() [0x9fba5000..0x9fba5d38]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:474)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.waitPolls(DistributedCache.CDB:17)
- locked <0xf1185680> (a java.util.HashSet)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.putAll(DistributedCache.CDB:84)
at com.tangosol.util.ConverterCollections$ConverterMap.putAll(ConverterCollections.java:1344)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap.putAll(DistributedCache.CDB:1)
at com.tangosol.coherence.component.util.SafeNamedCache.putAll(SafeNamedCache.CDB:1)
at com.foo.util.jobbus.impl.JobBusCoherenceImpl.addTail(JobBusCoherenceImpl.java:135)
at com.foo.bar.input.MDP_NoEsper.process(MDP_NoEsper.java:129)
at com.foo.bar.input.MDP_NoEsper.onMessage(MDP_NoEsper.java:71)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:531)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:466)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:435)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:316)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:255)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:927)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:851)
at java.lang.Thread.run(Thread.java:595)
"jmsContainer-1" prio=10 tid=0x0a636180 nid=0x5e in Object.wait() [0x9fbe6000..0x9fbe6ab8]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:474)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.waitPolls(DistributedCache.CDB:17)
- locked <0xf0ef28e8> (a java.util.HashSet)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.putAll(DistributedCache.CDB:84)
at com.tangosol.util.ConverterCollections$ConverterMap.putAll(ConverterCollections.java:1344)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap.putAll(DistributedCache.CDB:1)
at com.tangosol.coherence.component.util.SafeNamedCache.putAll(SafeNamedCache.CDB:1)
at com.foo.util.jobbus.impl.JobBusCoherenceImpl.addTail(JobBusCoherenceImpl.java:135)
at com.foo.bar.input.MDP_NoEsper.process(MDP_NoEsper.java:129)
at com.foo.bar.input.MDP_NoEsper.onMessage(MDP_NoEsper.java:71)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:531)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:466)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:435)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:316)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:255)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:927)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:851)
at java.lang.Thread.run(Thread.java:595)

Additional information and questions: on a Solaris T1 machine I see times like 80 ms per put() when configured with 3 concurrent writer threads (a distributed cache with only one node). This is a very severe limitation: the throughput of these 3 writers is practically limited to 30 messages/sec. Is this a normal time for a put() operation?
How fast should I expect a put operation to be for an object with 20 string fields of 10 chars each implementing ExternalizableLite? How many concurrent writers does it typically make sense to have in one cluster node?
What are the best practices to implement fast writers to a partitioned cache?
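For what it's worth, one common pattern for fast writers is to buffer entries locally and flush them in bulk with putAll(), amortizing the per-operation round trip. The BatchingWriter class below is a hypothetical sketch of mine, using a plain java.util.Map in place of a real NamedCache (both share the Map interface); the batch size and flush policy would need tuning for a real workload.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical batching writer: buffers entries locally and flushes
// them to the target map with one putAll call, trading N per-entry
// round trips for a single bulk operation.
public class BatchingWriter {
    private final Map<String, String> target;
    private final Map<String, String> buffer = new HashMap<>();
    private final int batchSize;

    public BatchingWriter(Map<String, String> target, int batchSize) {
        this.target = target;
        this.batchSize = batchSize;
    }

    public void write(String key, String value) {
        buffer.put(key, value);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    public void flush() {
        if (!buffer.isEmpty()) {
            target.putAll(buffer); // one bulk operation instead of N puts
            buffer.clear();
        }
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        BatchingWriter writer = new BatchingWriter(cache, 3);
        for (int i = 0; i < 7; i++) {
            writer.write("key" + i, "value" + i);
        }
        writer.flush(); // flush the remainder below batch size
        System.out.println("entries: " + cache.size());
    }
}
```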
BR,
Georgi -
MBP with 27" Display performance question
I'm looking for advice on improving the performance, if possible, of my MacBook Pro and new 27" Apple display combination. I'm using a 13" MacBook Pro 2.53GHz with 4GB RAM and an NVIDIA GeForce 9400M graphics card, and I have 114GB of the 250GB of HD space available. What I'm really wondering is whether this is enough spec to run the 27" display easily. Apple says it is… and it does work, but I suspect that I'm working at the limit of what my MBP is capable of. My main applications are Photoshop CS5 with Camera RAW and Bridge. Everything works, but I sometimes get lock-ups and things are basically a bit jerky. Is the bottleneck my 2.53GHz processor or the graphics card? I have experimented with the OpenGL settings in Photoshop and tried closing all unused applications. Does anyone have any suggestions for tuning things, and is there a feasible upgrade for the graphics card if such a thing would make a difference? I have recently started working with 21MB RAW files, which I realise isn't helping. Any thoughts would be appreciated.
Matt.

I just added a gorgeous 24" LCD to my MBP setup (the G5 is not happy). The answer to your question is yes. Just go into Display Preferences and drag the menu bar over to the 24"; this will make the 24" the primary display and the MBP the secondary when connected.
-
Performance question about 11.1.2 forms at runtime
hi all,
Currently we are investigating a forms/reports migration from 10 to 11.
Initially we were using v. 11.1.1.4 as the baseline for the migration. Now we are looking at 11.1.2.
We have the impression that the performance has decreased significantly between these two releases.
To give an example:
A wizard screen contains an image alongside a number of items to enter details. In 11.1.1.4 this screen shows up immediately. In 11.1.2 you see the image rolling out on the canvas whilst the properties of the items seem to be set during this event.
I saw that a number of features were added to be able to tune performance, which ... need processing too.
I get the impression that a large number of events are communicated over the network during the 'build' of the client-side view of the screen. If I recall correctly, during the migration from 6 to 9, events were bundled for transmission over the network so that delays couldn't come from network round trips. I have the impression that this has been reversed, and things are communicated between the client and server as they arrive rather than being bundled.
My questions are:
- is anyone out there experiencing the same kind of behaviour?
- if so, is there some kind of property(ies) that exist to control the behaviour and improve performance?
- are there properties for performance monitoring that are set but which cause the slowness as a kind of side effect, and can maybe be unset?
Your feedback will be dearly appreciated,
Greetings,
Jan.

The profile can't be changed, although I suspect that if there was an issue then banding the line would be something they could utilise, if you were happy to do so.
It's all theoretical right now until you get the service installed. Don't forget there are over 600,000 customers now on FTTC and only a very small percentage of them have faults. It might seem like a lot looking at this forum, but that's only because forums are where people tend to come to complain.