Flex 4 - rendering many rows + cache
Hi all,
Currently I am working on a long list that contains many (custom) rows.
Since rendering a long list takes a lot of resources, we're using a custom-built ItemRenderer technique (note that this is not the default ItemRenderer mechanism available in e.g. DataGroups). Basically, our list renderer checks which part of the list is currently visible in the viewport and renders only those rows. When the user scrolls down, so that a new part of the list becomes visible, it starts rendering those rows, and so on. Also, when the user does not scroll for a little while, it starts auto-rendering rows outside the viewport until, eventually, everything is rendered.
Everything works brilliantly, except for one thing:
rendering a row takes approximately half a second, since the rows are quite complex and a lot of other things are going on in the application at the same time.
Each row consists of several skinnable containers, each with a style applied. Furthermore, each row contains several labels to display text, and these also have a custom style applied to them (to color the text, for example).
Since 90% of the rows have the same background (the background is basically a set of several styled skinnable containers), I was wondering if it is possible to render the background once, cache that result somehow, and then reuse the cached background for all the other rows. This way I can prevent thousands of rows from doing the same rendering over and over again.
Do any of you guys have a tip or hint about caching/optimizing this process?
Thanks a lot in advance.
Please ask if anything is unclear.
I thought Flex only rendered what was visible anyway, and to save resources it automatically reuses item renderers as they scroll out of view.
Maybe just let Flex do its thing and don't try to do it manually.
Also, how are the item renderers built? I know that in mobile projects it's recommended to build them in ActionScript rather than MXML, and not to nest layout containers but to lay them out by calculating positions and sizes in ActionScript.
I've seen Tour de Flex on Android download and render 20,000 lines in seconds. Maybe not with item renderers as complex as yours, but 20,000 lines on a mobile device is pretty impressive.
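The recycling behaviour this reply alludes to - a fixed set of renderers rebound to whichever items are visible - can be sketched as follows (Python for illustration only; the class and method names are hypothetical, not a Flex API):

```python
class RendererPool:
    """Keep only as many renderer objects as fit in the viewport and
    rebind them to new data as the list scrolls, instead of creating
    one renderer per row."""

    def __init__(self, visible_count):
        self.renderers = [{"data": None} for _ in range(visible_count)]

    def scroll_to(self, items, first_index):
        # Rebind the existing renderers; no new renderer objects are created.
        for offset, renderer in enumerate(self.renderers):
            renderer["data"] = items[first_index + offset]
        return self.renderers
```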
Similar Messages
-
Performance issues; waited too long for a row cache enqueue lock!
hi Experts,
OS: Oracle Solaris on SPARC (64-bit)
DB version:
SQL> select * from V$VERSION;
BANNER
Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Solaris: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> We have seen 100% CPU usage and a high database load, so I checked the instance and found many blocking sessions and more than 71 sessions running the same SELECT:
SELECT tablespace_name AS tbsname
FROM (SELECT tablespace_name, SUM(bytes)/1024/1024 free_mb, 0 total_mb, 0 max_mb
        FROM dba_free_space GROUP BY tablespace_name
      UNION
      SELECT tablespace_name, 0 current_mb, SUM(bytes)/1024/1024 total_mb,
             SUM(DECODE(maxbytes, 0, bytes, maxbytes))/1024/1024 max_mb
        FROM dba_data_files GROUP BY tablespace_name)
GROUP BY tablespace_name
HAVING ROUND((SUM(total_mb)-SUM(free_mb))/SUM(max_mb)*100) > 95
The blocking sessions are running queries like this:
SELECT * FROM MYTABLE WHERE MYCOL = :1 FOR UPDATE;
These SELECT queries come from a cron job that runs every 10 minutes to check the tablespaces, so I first killed (kill -9 pid) those SELECT statements, which brought the load down to 13% CPU usage. The blocking sessions were still there, and I didn't kill them while waiting for confirmation from the app guys... after a few hours the CPU usage still had not dropped below 13%, and I saw many errors:
WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=... System State dumped to trace file ...trc
After that, we decided to restart the DB to release the locks!
I would like to understand why, during these loads, we were not able to run those SELECT statements, why the scheduled Statspack snapshot reports could not finish, and likewise the automatic database statistics... why did 5 FOR UPDATE statements lock the whole DB?
user12035575 wrote:
SELECT FOR UPDATE will only lock the table row until the transaction is completed.
"WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK" happens when it needs to acquire a lock on data dictionary. Did you check the trace file associated with the statement?The trace file is too long, which information I need to focus more? -
"latch: row cache objects" and high "VERSION_COUNT"
Hello,
we are facing a situation where the database spends most of its time waiting for latches in the shared pool (as seen in the AWR report).
All statements issued by the application use bind variables, but what we can see in v$sql is that even though the statements use bind variables, some of them have a relatively high version_count (> 300) and many invalidations (100-200), even though the tables involved are very small (some no more than 3 or 4 rows).
Here is some (hopefully enough) information about the environment
Version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production (on RedHat EL 5)
Parameters:
cursor_bind_capture_destination memory+disk
cursor_sharing EXACT
cursor_space_for_time FALSE
filesystemio_options none
hi_shared_memory_address 0
memory_max_target 12288M
memory_target 12288M
object_cache_optimal_size 102400
open_cursors 300
optimizer_capture_sql_plan_baselines FALSE
optimizer_dynamic_sampling 2
optimizer_features_enable 11.2.0.2
optimizer_index_caching 0
optimizer_index_cost_adj 100
optimizer_mode ALL_ROWS
optimizer_secure_view_merging TRUE
optimizer_use_invisible_indexes FALSE
optimizer_use_pending_statistics FALSE
optimizer_use_sql_plan_baselines TRUE
plsql_optimize_level 2
session_cached_cursors 50
shared_memory_address 0
The shared pool size (according to AWR) is 4,832M
The buffer cache is 3,008M
Now, my question: is a version_count of > 300 a problem? (We have about 10-15 such statements out of a total of ~7,000 statements in v$sqlarea.) Those are also the statements listed at the top of the AWR report in the sections "SQL ordered by Version Count" and "SQL ordered by Sharable Memory".
Is it possible that those statements are causing the latch contention in the shared pool?
I went through https://blogs.oracle.com/optimizer/entry/why_are_there_more_cursors_in_11g_for_my_query_containing_bind_variables_1
The tables involved are fairly small and all the execution plans for each cursor are identical.
I can understand some of the invalidations that happen, because we have 7 schemas that have identical tables, but from my understanding that shouldn't cause such a high invalidation number. Or am I mistaken?
I'm not that experienced with Oracle tuning at that level, so I would appreciate any pointers on how I can find out where exactly the latch problem occurs.
After flushing the shared pool, the problem seems to go away for a while. But apparently that is only fighting symptoms, not fixing the root cause of the problem.
Some of the statements in question:
SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE TRIGGER_NAME = :2 AND TRIGGER_GROUP = :3 AND TRIGGER_STATE = :4
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE JOB_NAME = :2 AND JOB_GROUP = :3 AND TRIGGER_STATE = :4
SELECT TRIGGER_STATE FROM QRTZ_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_SIMPLE_TRIGGERS SET REPEAT_COUNT = :1, REPEAT_INTERVAL = :2, TIMES_TRIGGERED = :3 WHERE TRIGGER_NAME = :4 AND TRIGGER_GROUP = :5
DELETE FROM QRTZ_TRIGGER_LISTENERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
So all of them are using bind variables.
I have seen that the columns used in the WHERE clauses all have histograms. Would removing them reduce the number of invalidations?
Unfortunately I did not save the information from v$sql_shared_cursor before the shared pool was flushed, but most of the mismatches occurred in the ROLL_INVALID_MISMATCH column, if that is of any help. There are also some reported for AUTH_CHECK_MISMATCH and TRANSLATION_MISMATCH, but to my understanding those are caused by executing the statement against different schemas.
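As a purely illustrative toy model (not Oracle's actual implementation) of why a high version_count makes soft parses costlier: every environment mismatch adds a child cursor under the same parent, and each later parse has to scan the child list while holding the latch.

```python
class ToySharedPool:
    """Toy model: one parent per SQL text, one child cursor per distinct
    'environment'; a soft parse scans the children looking for a match."""

    def __init__(self):
        self.parents = {}         # sql_text -> list of child environments
        self.scans = 0            # work done while 'holding the latch'

    def parse(self, sql_text, env):
        children = self.parents.setdefault(sql_text, [])
        for child in children:    # scanned under the latch
            self.scans += 1
            if child == env:
                return "shared existing child"
        children.append(env)      # mismatch -> new child, version_count + 1
        return "built new child"

    def version_count(self, sql_text):
        return len(self.parents.get(sql_text, []))
```

With 300 children per parent, every parse of that statement scans up to 300 entries under the latch, which is where contention like this tends to show up.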
Looking at v$latch_misses, most of the waits for parent = 'row cache objects' are for "kqrpre: find obj" and "kqreqd: reget".
> In the AWR report, what does the Dictionary Cache Stats section say?
Here they are:
Dictionary Cache Stats
Cache Get Requests Pct Miss Scan Reqs Mod Reqs Final Usage
dc_awr_control 65 0.00 0 2 1
dc_constraints 729 33.33 0 729 1
dc_global_oids 60 23.33 0 0 31
dc_histogram_data 7,397 10.53 0 0 2,514
dc_histogram_defs 21,797 9.83 0 0 5,239
dc_object_grants 4 25.00 0 0 12
dc_objects 27,683 2.29 0 223 2,581
dc_profiles 1,842 0.00 0 0 1
dc_rollback_segments 1,634 0.00 0 0 39
dc_segments 7,335 6.94 0 360 1,679
dc_sequences 139 5.76 0 139 19
dc_table_scns 53 100.00 0 0 0
dc_tablespace_quotas 1,956 0.10 0 0 4
dc_tablespaces 17,488 0.00 0 0 11
dc_users 58,013 0.03 0 0 164
global database name 4,261 0.00 0 0 1
outstanding_alerts 54 0.00 0 0 9
sch_lj_oids 4 0.00 0 0 2
Library Cache Activity
Namespace Get Requests Pct Miss Pin Requests Pct Miss Reloads Invalidations
ACCOUNT_STATUS 3,664 0.03 0 0 0
BODY 560 2.14 2,343 0.60 0 0
CLUSTER 52 0.00 52 0.00 0 0
DBLINK 3,668 0.00 0 0 0
EDITION 1,857 0.00 3,697 0.00 0 0
INDEX 99 19.19 99 19.19 0 0
OBJECT ID 68 100.00 0 0 0
SCHEMA 2,646 0.00 0 0 0
SQL AREA 32,996 2.26 1,142,497 0.21 189 226
SQL AREA BUILD 848 62.15 0 0 0
SQL AREA STATS 860 82.09 860 82.09 0 0
TABLE/PROCEDURE 17,713 2.62 26,112 4.88 61 0
TRIGGER 1,704 2.00 6,737 0.52 1 0 -
ROW CACHE ENQUEUE LOCK / library cache load lock leads to database hang
We faced a database hang on a 3-node 11i ERP 9i RAC database.
We saw "library cache load lock" timed-out events reported in the alert log.
Then a few ORA-600 errors, and later "ROW CACHE ENQUEUE LOCK" timed-out events. Eventually the database was hung and we had to bounce the services.
We created support SR 7845542.992 for RCA.
Support says to increase the shared pool size to avoid shared pool fragmentation and reloads, and additionally to upgrade to a 10g database.
I am not convinced that adding more shared pool, or upgrading to 10g, would solve this; furthermore, even 10g has such issues reported.
I saw a couple of bugs mentioning that such an issue can happen due to a deadlock of sessions holding latches.
Kindly let me know your view on this issue.
If required I can attach the Statspack reports for more information.
Many thanks, I was keen to have your update.
There are 8 CPUs on each node. Reloads were very high during that time period, but normally reloads are not high.
Statspack details for 3 nodes
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
PROD 21184234 PROD1 1 9.2.0.8.0 YES npi-or-db-p-
11.npi.corp
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 149817 30-Oct-09 13:00:09 574 #########
End Snap: 149837 30-Oct-09 14:00:17 602 #########
Elapsed: 60.13 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 8,192M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 10,240K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 122,414.93 11,449.13
Logical reads: 69,550.76 6,504.89
Block changes: 928.41 86.83
Physical reads: 196.24 18.35
Physical writes: 28.65 2.68
User calls: 343.97 32.17
Parses: 558.61 52.25
Hard parses: 43.48 4.07
Sorts: 467.24 43.70
Logons: 0.63 0.06
Executes: 2,046.99 191.45
Transactions: 10.69
% Blocks changed per Read: 1.33 Recursive Call %: 97.59
Rollback per transaction %: 5.07 Rows per Sort: 15.85
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.72 In-memory Sort %: 100.00
Library Hit %: 96.79 Soft Parse %: 92.22
Execute to Parse %: 72.71 Latch Hit %: 99.77
Parse CPU to Parse Elapsd %: 60.10 % Non-Parse CPU: 78.07
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
db file sequential read 249,234 0 1,537 6 6.5
db file scattered read 61,776 0 769 12 1.6
row cache lock 780,098 10 566 1 20.2
library cache lock 697,849 157 432 1 18.1
latch free 127,926 4,715 387 3 3.3
global cache cr request 370,770 3,091 309 1 9.6
PL/SQL lock timer 59 58 112 1903 0.0
wait for scn from all nodes 303,572 18 103 0 7.9
library cache pin 26,231 2 100 4 0.7
global cache null to x 17,717 716 92 5 0.5
buffer busy waits 5,388 18 74 14 0.1
db file parallel read 5,245 0 69 13 0.1
log file sync 20,407 29 66 3 0.5
enqueue 52,200 70 60 1 1.4
buffer busy global CR 4,845 33 55 11 0.1
CGS wait for IPC msg 412,512 407,106 50 0 10.7
ksxr poll remote instances 1,279,565 483,046 48 0 33.2
log file parallel write 160,040 0 42 0 4.1
library cache load lock 1,491 2 29 20 0.0
global cache open x 19,507 344 28 1 0.5
buffer busy global cache 957 0 22 23 0.0
global cache s to x 16,516 180 20 1 0.4
db file parallel write 11,120 0 12 1 0.3
log file sequential read 618 0 11 18 0.0
DFS lock handle 23,768 0 10 0 0.6
control file sequential read 8,563 0 4 0 0.2
KJC: Wait for msg sends to c 1,549 57 4 3 0.0
lock escalate retry 76 76 4 52 0.0
SQL*Net break/reset to clien 12,546 0 3 0 0.3
SQL*Net more data to client 85,773 0 3 0 2.2
control file parallel write 1,265 0 2 1 0.0
global cache null to s 648 23 1 2 0.0
global cache busy 200 0 1 5 0.0
global cache open s 1,493 28 1 1 0.0
log file switch completion 12 0 1 61 0.0
PX Deq Credit: send blkd 161 70 1 4 0.0
kksfbc child completion 119 118 1 5 0.0
PX Deq: reap credit 5,948 5,456 0 0 0.2
PX Deq: Execute Reply 83 29 0 3 0.0
process startup 8 0 0 25 0.0
LGWR wait for redo copy 992 12 0 0 0.0
IPC send completion sync 450 450 0 0 0.0
PX Deq: Parse Reply 100 28 0 1 0.0
undo segment extension 10,380 10,372 0 0 0.3
PX Deq: Join ACK 146 65 0 1 0.0
buffer deadlock 222 221 0 0 0.0
async disk IO 1,179 0 0 0 0.0
wait list latch free 2 0 0 16 0.0
PX Deq: Msg Fragment 112 28 0 0 0.0
Library Cache Activity for DB: PROD Instance: PROD1 Snaps: 149817 -149837
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 116,007 1.1 133,347 19.9 24,338 0
CLUSTER 4,224 0.6 5,131 1.0 0 0
INDEX 15,048 24.1 13,798 26.4 2 0
JAVA DATA 82 0.0 692 39.6 136 0
JAVA RESOURCE 66 39.4 206 25.2 12 0
PIPE 1,140 0.5 1,160 0.5 0 0
SQL AREA 1,197,908 12.6 13,517,660 1.5 111,833 73
TABLE/PROCEDURE 3,847,439 0.8 4,230,265 7.9 142,200 0
TRIGGER 8,444 2.4 8,657 18.5 1,274 0
GES Lock GES Pin GES Pin GES Inval GES Invali-
Namespace Requests Requests Releases Requests dations
BODY 1 1,234 1,258 985 0
CLUSTER 3,222 25 25 25 0
INDEX 13,792 3,641 3,631 3,629 0
JAVA DATA 0 0 0 0 0
JAVA RESOURCE 0 26 25 0 0
PIPE 0 0 0 0 0
SQL AREA 0 0 0 0 0
TABLE/PROCEDURE 857,137 13,130 13,264 10,762 0
TRIGGER 0 200 202 200 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
PROD 21184234 PROD2 2 9.2.0.8.0 YES npi-or-db-p-
12.npi.corp
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 149847 30-Oct-09 14:00:05 493 #########
End Snap: 149857 30-Oct-09 15:00:02 432 #########
Elapsed: 59.95 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 8,192M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 10,240K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 71,853.44 32,058.65
Logical reads: 273,904.84 122,207.36
Block changes: 889.13 396.70
Physical reads: 40.40 18.03
Physical writes: 20.97 9.35
User calls: 153.74 68.60
Parses: 66.19 29.53
Hard parses: 2.66 1.19
Sorts: 25.70 11.47
Logons: 0.16 0.07
Executes: 726.41 324.10
Transactions: 2.24
% Blocks changed per Read: 0.32 Recursive Call %: 92.41
Rollback per transaction %: 4.84 Rows per Sort: 193.55
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.99
Buffer Hit %: 99.99 In-memory Sort %: 100.00
Library Hit %: 99.35 Soft Parse %: 95.97
Execute to Parse %: 90.89 Latch Hit %: 99.99
Parse CPU to Parse Elapsd %: 36.55 % Non-Parse CPU: 98.28
Wait Events for DB: PROD Instance: PROD2 Snaps: 149847 -149857
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
enqueue 65,823 33,667 90,459 1374 8.2
row cache lock 38,996 560 1,795 46 4.8
PX Deq Credit: send blkd 522 499 1,223 2344 0.1
PX Deq: Parse Reply 466 416 987 2117 0.1
db file sequential read 50,130 0 421 8 6.2
library cache lock 78,842 172 210 3 9.8
db file scattered read 6,904 0 152 22 0.9
global cache cr request 84,801 575 113 1 10.5
latch free 8,096 736 65 8 1.0
log file sync 5,676 27 41 7 0.7
wait for scn from all nodes 18,891 10 24 1 2.3
CGS wait for IPC msg 394,678 392,142 21 0 49.0
library cache pin 1,339 0 17 13 0.2
global cache null to x 2,145 48 16 8 0.3
global cache s to x 3,242 32 16 5 0.4
buffer busy waits 366 10 15 40 0.0
ksxr poll remote instances 70,990 31,295 14 0 8.8
db file parallel read 359 0 11 31 0.0
global cache open x 2,708 55 10 4 0.3
async disk IO 3,474 0 8 2 0.4
global cache open s 3,470 10 6 2 0.4
log file parallel write 13,076 0 5 0 1.6
global cache busy 58 40 5 90 0.0
PL/SQL lock timer 1 1 5 4877 0.0
DFS lock handle 3,362 0 5 1 0.4
log file sequential read 412 0 4 10 0.1
db file parallel write 2,774 0 3 1 0.3
library cache load lock 59 0 3 58 0.0
buffer busy global CR 722 0 3 4 0.1
control file sequential read 6,398 0 3 0 0.8
SQL*Net break/reset to clien 16,078 0 2 0 2.0
name-service call wait 26 0 2 67 0.0
control file parallel write 1,248 0 2 1 0.2
process startup 24 0 1 49 0.0
KJC: Wait for msg sends to c 3,491 4 1 0 0.4
SQL*Net more data to client 23,724 0 1 0 2.9
buffer busy global cache 23 0 0 19 0.0
global cache null to s 114 0 0 4 0.0
PX Deq: reap credit 5,646 5,509 0 0 0.7
log file switch completion 4 0 0 58 0.0
lock escalate retry 54 54 0 1 0.0
IPC send completion sync 119 118 0 0 0.0
direct path read 2,820 0 0 0 0.3
direct path read (lob) 3,632 0 0 0 0.5
PX Deq: Join ACK 88 37 0 0 0.0
direct path write 2,470 0 0 0 0.3
kksfbc child completion 6 6 0 6 0.0
buffer deadlock 3 3 0 11 0.0
global cache quiesce wait 4 4 0 8 0.0
Library Cache Activity for DB: PROD Instance: PROD2 Snaps: 149847 -149857
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 27,353 0.5 28,091 6.5 1,643 0
CLUSTER 203 1.0 269 1.5 0 0
INDEX 526 9.9 271 19.9 0 0
JAVA DATA 18 0.0 120 6.7 4 0
JAVA RESOURCE 20 45.0 56 26.8 3 0
JAVA SOURCE 1 100.0 1 100.0 0 0
PIPE 999 0.4 1,043 0.4 0 0
SQL AREA 131,793 7.6 3,406,577 0.4 7,012 0
TABLE/PROCEDURE 926,987 0.2 1,907,993 1.0 8,845 0
TRIGGER 1,519 0.1 1,532 4.9 69 0
GES Lock GES Pin GES Pin GES Inval GES Invali-
Namespace Requests Requests Releases Requests dations
BODY 1 129 277 117 0
CLUSTER 168 2 2 2 0
INDEX 271 52 56 52 0
JAVA DATA 0 0 0 0 0
JAVA RESOURCE 0 9 6 0 0
JAVA SOURCE 0 1 1 1 0
PIPE 0 0 0 0 0
SQL AREA 0 0 0 0 0
TABLE/PROCEDURE 89,523 764 868 460 0
TRIGGER 0 2 14 2 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
DB Name DB Id Instance Inst Num Release Cluster Host
PROD 21184234 PROD3 3 9.2.0.8.0 YES npi-or-db-p-
13.npi.corp
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 149808 30-Oct-09 14:00:00 31 #########
End Snap: 149809 30-Oct-09 15:00:02 34 11,831.4
Elapsed: 60.03 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 8,192M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 10,240K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 1,518.14 36,700.35
Logical reads: 1,333.43 32,235.02
Block changes: 5.09 123.01
Physical reads: 54.31 1,312.88
Physical writes: 3.91 94.44
User calls: 1.46 35.40
Parses: 2.24 54.21
Hard parses: 0.04 0.93
Sorts: 0.84 20.28
Logons: 0.06 1.45
Executes: 3.11 75.23
Transactions: 0.04
% Blocks changed per Read: 0.38 Recursive Call %: 94.31
Rollback per transaction %: 45.64 Rows per Sort: 215.97
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 96.21 In-memory Sort %: 100.00
Library Hit %: 99.07 Soft Parse %: 98.29
Execute to Parse %: 27.94 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 69.88 % Non-Parse CPU: 97.92
Wait Events for DB: PROD Instance: PROD3 Snaps: 149808 -149809
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
enqueue 19,510 7,472 15,509 795 130.9
PX Deq: Parse Reply 1,152 1,071 2,577 2237 7.7
row cache lock 2,202 518 1,579 717 14.8
db file scattered read 31,556 0 354 11 211.8
db file sequential read 17,272 0 67 4 115.9
db file parallel read 1,722 0 34 20 11.6
global cache cr request 53,754 91 32 1 360.8
wait for scn from all nodes 1,897 13 10 5 12.7
CGS wait for IPC msg 403,358 401,478 10 0 2,707.1
DFS lock handle 4,753 0 8 2 31.9
direct path read 1,248 0 6 5 8.4
PX Deq: Execute Reply 110 38 6 51 0.7
global cache open s 160 10 5 31 1.1
control file sequential read 6,442 0 3 0 43.2
name-service call wait 26 0 2 78 0.2
latch free 129 109 2 13 0.9
KJC: Wait for msg sends to c 153 24 1 9 1.0
control file parallel write 1,245 0 1 1 8.4
buffer busy waits 199 0 1 6 1.3
process startup 20 0 1 44 0.1
global cache null to x 74 2 1 9 0.5
global cache null to s 19 0 1 29 0.1
global cache open x 268 1 1 2 1.8
library cache lock 1,150 0 0 0 7.7
PX Deq: Join ACK 129 48 0 3 0.9
log file parallel write 1,157 0 0 0 7.8
async disk IO 219 0 0 1 1.5
direct path write 1,024 0 0 0 6.9
ksxr poll remote instances 6,740 4,595 0 0 45.2
PX Deq: reap credit 6,580 6,511 0 0 44.2
buffer busy global CR 73 0 0 2 0.5
log file sequential read 11 0 0 10 0.1
log file sync 100 0 0 1 0.7
global cache s to x 282 2 0 0 1.9
db file parallel write 95 0 0 1 0.6
library cache pin 142 0 0 0 1.0
SQL*Net break/reset to clien 28 0 0 1 0.2
IPC send completion sync 81 81 0 0 0.5
PX Deq: Signal ACK 32 14 0 1 0.2
PX Deq Credit: send blkd 3 1 0 7 0.0
SQL*Net more data to client 841 0 0 0 5.6
PX Deq: Msg Fragment 37 17 0 0 0.2
log file single write 4 0 0 1 0.0
db file single write 1 0 0 1 0.0
SQL*Net message from client 4,213 0 13,673 3246 28.3
gcs remote message 214,784 75,745 7,016 33 1,441.5
wakeup time manager 233 233 6,812 29237 1.6
PX Idle Wait 2,338 2,294 5,686 2432 15.7
PX Deq: Execution Msg 2,151 1,979 4,796 2229 14.4
Library Cache Activity for DB: PROD Instance: PROD3 Snaps: 149808 -149809
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 1,290 0.0 1,290 0.0 0 0
CLUSTER 18 0.0 8 0.0 0 0
SQL AREA 4,893 2.0 36,371 0.5 2 0
TABLE/PROCEDURE 1,555 3.9 3,834 4.9 71 0
TRIGGER 286 0.0 286 0.0 0 0
GES Lock GES Pin GES Pin GES Inval GES Invali-
Namespace Requests Requests Releases Requests dations
BODY 1 0 0 0 0
CLUSTER 4 0 0 0 0
SQL AREA 0 0 0 0 0
TABLE/PROCEDURE 863 224 42 42 0
TRIGGER 0 0 0 0 0
------------------------------------------------------------- -
Result set does not fit; it contains too many rows
Dear All,
We are on BI 7 and running reports in Excel 2007. Even though the row limit in Excel 2007 is more than 1 million, when I execute a report with more than 65k records of output, the system generates output for only 65k rows, with the message "Result set does not fit; it contains too many rows".
Our Patch levels:
GUI - 7.10
Patch level is 11
Is there any way to generate more than 65,000 rows in BEx?
Thanks in advance...
regards,
Raju
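Not a BEx setting, but if the backend really caps output at 65,000 rows, the generic workaround is to split the result into chunks below the limit (for example, one chunk per sheet). A minimal sketch, assuming the 65,000-row limit quoted in this thread:

```python
def chunk_rows(rows, limit=65000):
    """Split a result set into pieces that each fit under the row limit."""
    return [rows[i:i + limit] for i in range(0, len(rows), limit)]
```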
Dear Gurus,
Could you please shed some light on this issue?
thanks and regards,
Raju
Edited by: VaraPrasadraju Potturi on Apr 14, 2009 3:13 AM
Vara Prasad,
This has been discussed on the forums - for reasons of backward compatibility I do not think BEx supports more than 65,000 rows. I am still not sure, since I have not tried a query with more than 65K rows in Excel 2007, but I think this is not possible... -
How to know how many rows (including headers and footers) a subreport has from the Main Report
Hi, we are struggling with subreports. The main report has 3 subreports; each subreport is implemented in a group header (3 subreports and 3 group headers). We would like to print a group header under each subreport as a column header on every page, and we need a page break when the group number changes. This report is exported to an MS Excel (97-2003) file.
In main report, [New After Page] is checked under Group Header #1d from [Section Expert]
In each subreport, [Repeat Group Header On Each Page] is checked under the highest group from [Group Expert]
Here are two issues;
Since Crystal Reports fits more rows per page than Excel, the column header in each subreport is being printed in the middle of the page. It should be printed at the top of the page.
When a subreport has many rows and has to be printed on more than one page, a page break is automatically inserted before the column header. It should be printed right below the column header, which is Group Header #1.
We have been trying to pass row counts(count of group header because group header is used as the details) using a shared variable from Subreport 1 to Subreport 2 via main report since Subreport2 cannot predict how many rows Subreport 1 has.
Here is what we are trying, but we are getting an error, "A constant expression is required here", in the main report:
- In Subreport 1
whileprintingrecords;
shared numbervar SubGroupCount := DistinctCount({Table.Field});
- In Mainreport
shared numbervar SubGroupCount;
if PageNumber = 1
then (SubGroupCount)
else 50
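The handoff being attempted above boils down to arithmetic: once a subreport knows how many rows each earlier subreport produced, it can compute the absolute output row where its own column header lands. A sketch of that arithmetic (Python for illustration only; the one-row-header assumption is hypothetical):

```python
def header_positions(subreport_row_counts, header_rows=1):
    """Absolute output row on which each subreport's column header
    starts, given the row counts of the subreports before it."""
    positions, next_row = [], 1
    for count in subreport_row_counts:
        positions.append(next_row)       # this subreport's header row
        next_row += header_rows + count  # skip its header plus its rows
    return positions
```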
Are there any solutions or better ways than the above?
Thank you,
Main Report: Group Header #1a --> as Page Header
Run Date: mm/dd/yyyy Report Name
Main Report Group Header #1b --> Subreport 1
Header 1
Header 2
Header 3
Header 4
Header 5
Header 6
Main Report Group Header #1c --> Subreport 2
Header 1
Header 2
Header 3
Header 4
Header 5
Header 6
Main Report Group Header #1d --> Subreport 3
Header 1
Header 2
Header 3
Header 4
Header 5
Header 6
Thank you for your reply, and sorry for my complicated explanation. The report contains confidential information, so I replaced it with some fake data, but I believe you can still see what I am trying to do.
Main Report
Subreport 2
Output1
Output2: the following page
--> more rows are printed..
We have two problems;
1. The column header in Output2 is supposed to be printed right below the last row in Output2; however, a page break is automatically inserted. Yet even in the same output document, it works for some groups when they have only a few rows.
2. Since Crystal Reports prints more rows per page than MS Excel does, the column header is not printed at the top of the page.
I tried the way you advised me, but it did not work. -
Is there an easy way to see how many rows in a table? (selected or unselected)
Hi all,
Forgive me if this is a REALLY dumb question, but I would love to know if there is an easy way to see how many rows there are in a table in InDesign?
(And I bet I am really going to kick myself when I hear the answer and how simple it probably is... lol!)
I am working on a huge catalog and am dealing with LOTS of tables...very long tables too at times. I am also doing a lot of copying and pasting back and forth between InDesign and Excel and it would REALLY help if I knew how many rows there are in a table without having to manually count them (TIRESOME!!).
Also, is there a way to see how many rows I have selected at any one time? It would be SO WONDERFUL if the info box could also provide this information.
Thank you SO MUCH in advance for your help:))
Christine
**UPDATE**
Oh boy, I AM going to kick myself! Why only NOW that I wrote this question did I suddenly notice that the Table palette shows the number of rows and columns? lol.
Okay, then is there a way to see how many rows I have selected at any given time?
@Christine – try the following ExtendScript (JavaScript):
// Bail out unless a cell range or a whole table is selected.
if(app.selection.length === 0
    || (app.selection[0].constructor.name !== "Cell"
    && app.selection[0].constructor.name !== "Table")){
    exit(0);
}
var sel = app.selection[0];
if(sel.constructor.name === "Table"){
    alert("All "+sel.rows.everyItem().getElements().length+" rows selected in current table.");
    exit(0);
}
// sel is a Cell (or cell range); its parent is the table.
var tableRowLength = sel.parent.rows.everyItem().getElements().length;
var numberOfRowsSelected = sel.rows.length;
var indexOfSelectedRows = sel.rows.everyItem().index;
var startRowSel = indexOfSelectedRows[0]+1;
var endRowSel = indexOfSelectedRows.length+indexOfSelectedRows[0];
alert(numberOfRowsSelected +" row(s) selected.\r"+startRowSel+" to "+endRowSel+" out of "+tableRowLength+" of current table.");
You need not select whole rows, just one or a couple of cells.
Then run the script.
An alert message is telling you how many rows belong to the cell range you have selected and:
which rows of the table are selected…
A typical message would be:
6 row(s) selected.
3 to 8 out of 20 of current table.
The script does some basic checks of the selection.
If no cell or table is selected, it will just do nothing…
Uwe
Message was edited by: Laubender | Some language cosmetics in the alert message -
Copying many rows from one table to another
Could anyone tell me the best way to copy many rows (~1,000,000) from one table to another?
I have supplied a snippet of the code currently being used in our application. I know this is probably the slowest way to copy the data, but I am not sure how best to proceed. I was thinking that using BULK COLLECT would be better, but I do not know what would happen to the ROLLBACK segment if I did that. Also, should I disable the indexes while the copy takes place and re-enable them after it completes?
Sample of code currently being used:
PROCEDURE Save_Data
IS
CURSOR SCursor IS
SELECT ROWID Row_ID
FROM TMP_SALES_ORD_COST_SUMR tmp
WHERE NOT EXISTS
(SELECT 1
FROM SALES_ORD_COST_SUMR
WHERE sales_ord_no = tmp.sales_ord_no
AND cat_no = tmp.cat_no
AND cost_method_cd = tmp.cost_method_cd);
BEGIN
FOR SaveRec IN SCursor LOOP
INSERT INTO SALES_ORD_COST_SUMR
SELECT *
FROM TMP_SALES_ORD_COST_SUMR
WHERE ROWID = SaveRec.Row_ID;
RowCountCommit(); -- Performs a COMMIT every xxxx rows
END LOOP;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
RAISE;
END Save_Data;
This type of logic is used to copy data for about 8 different tables, each containing approximately 1,000,000 rows of data.
Your best bet is
Insert into SALES_ORD_COST_SUMR
select * from TMP_SALES_ORD_COST_SUMR;
commit;
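If a single INSERT ... SELECT generates too much undo in one transaction, the BULK COLLECT approach the poster mentions amounts to processing in fixed-size batches with a commit per batch. The batching itself, sketched language-neutrally (Python; the batch size in the test is hypothetical):

```python
def batched(rows, batch_size):
    """Yield the rows in fixed-size batches; in PL/SQL each yielded
    batch would correspond to one FETCH ... BULK COLLECT LIMIT,
    a FORALL insert, and a COMMIT."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:                 # final partial batch
        yield batch
```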
Read this
http://asktom.oracle.com/pls/ask/f?p=4950:8:15324326393226650969::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:5918938803188
VG -
List of Values: Best practice when there are too many rows.
Hi,
I am working in JDev 12c. Imagine the following scenario: we have an employee table with organization_id as one of its attributes. I want to set up an LOV for this attribute. From what I understand, if the Organization table contains too many rows (say 3,000), a plain LOV will create a lot of overhead, and it would also be impossible to scroll through such a simple LOV. So I have decided on the obvious option: to use the LOV as a Combo Box with List of Values. Great so far.
That LOV will be used by every user, but it doesn't really depend on the user, and the list of organizations will rarely change. I have a sharedApplicationModule that I am using to retrieve lookup values from the DB. Do you think it would be OK to put my ORGANIZATION VO in there and create the View Accessor for my LOV in the Employees view?
What considerations should I take in term of TUNING the Organization VO?
Regards
Hi Raghava,
as I said, "Preparation Failed" may occur (if I recall correctly) as early as the HTTP request to fetch the document for indexing. If that request is not possible for TREX, then of course the indexing fails.
What I suggested was a manual reproduction. So log on to the TREX host (preferably with the user that TREX uses to access the documents) and simply try to open one of the docs with the "failed" status by pasting its address into the browser. If this does not work, you have a pretty good idea of what is happening.
Unfortunately, if that were the case, it would be some issue in network communications or in ticketing and authorizations, which I cannot tell you from here how to solve.
In any case, I would advise opening a support message with SAP - probably under the portal component rather than under TREX, as I do not assume that this stage of a queue error has anything to do with the actual engine.
Best,
Karsten -
Row cache lock acquired for more than 1 hour
Hi, could someone please let me know what a ROW CACHE LOCK is and in what situations it happens? Also, what does the dc_histogram_defs enqueue mean - what is happening internally?
I am facing a problem in our DB (11g R1) with a piece of code that has been running for more than 1 hour, but nothing is actually happening in our objects. The only info I can see is a ROW CACHE LOCK held for more than 3000 seconds:
select p1text,p1,p2text,p2,p3text,p3 from v$session where event = 'row cache lock' and sid=37
P1TEXT P1 P2TEXT P2 P3TEXT P3
cache id 16 mode 0 request 3
select type,parameter,count,usage,gets,getmisses,scans,scanmisses,flushes,dlm_requests from v$rowcache where cache#=16
TYPE PARAMETER COUNT USAGE GETS GETMISSES SCANS SCANMISSES FLUSHES DLM_REQUESTS
PARENT dc_histogram_defs 4,497 4,497 12,426,122 1,446,845 0 0 210,040 1,706,801
SUBORDINATE dc_histogram_data 1,965 1,965 8,995,128 500,660 0 0 91,463 0
SUBORDINATE dc_histogram_data 297 297 3,500,090 46,371 0 0 6,591 0
Hi,
could you take a look at this topic
row cache lock
regards, -
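For reference, the two lookups above (v$session for the waiters, then v$rowcache by cache#) can be combined into one query that maps each waiting session's P1 value ("cache id") straight to its dictionary-cache name. A sketch against the standard v$ views:
```sql
-- Map each 'row cache lock' waiter's P1 (cache id) to the
-- dictionary cache it is waiting on, e.g. dc_histogram_defs.
SELECT s.sid,
       s.p1        AS cache_id,
       s.p3        AS requested_mode,
       r.parameter AS cache_name
FROM   v$session  s,
       v$rowcache r
WHERE  s.event  = 'row cache lock'
AND    r.cache# = s.p1
AND    r.type   = 'PARENT';  -- avoid duplicates from subordinate caches
```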
ADF: Best way to find out how many rows are fetched?
Hello,
I have overridden the executeQueryForCollection method of ViewObject, in which I call super.executeQueryForCollection and afterwards want to find out how many rows were fetched during the execution.
If I use getFetchedRowCount I always get "0", and if I use getEstimatedRowCount, the query is re-executed.
What method is better to use for that?
Thank you,
Veniamin Goldin
Forbis, Ltd. -
How to find out how many rows and columns exist in the Excel sheet
Hi gurus,
At present I am uploading data from the presentation server to the application server. When I use GUI_UPLOAD, the data comes in a non-readable format, so I used the ALSM_EXCEL_TO_INTERNAL_TABLE FM instead. The problem is that the user can supply any kind of Excel file, so I need to know how many rows and columns exist in that Excel sheet.
Is there any possibility to get those values (total rows and total columns)?
Please help.
If anyone answers, I will appreciate it with reward points.
Thanks and regards,
Venu.T
See, you have to come to an agreement with the other system before starting development.
Please don't do unnecessary coding for impractical things. You may solve this, but it is not a good way of working with ERP packages.
At the least, you can get a final list of all columns, and agree which of them can be blank or non-blank; then you can code for that scenario.
Regards,
Message was edited by:
Madan Gopal Sharma -
To find out how many rows are processed/updated?
Hi Gurus,
I have issued an update statement with parallelism. The data to be processed is huge: around 50 million records.
I know that from OEM we can find out how many rows it has processed or completed. I would like to know the dictionary view/query to find out the same in SQL*Plus.
Thanks
Cherrish Vaidiyan
I have a 'home-made' view called RBS, whose definition is this:
create view RBS as
select /*+ RULE */ substr(s.username,1,10) oracle,
substr(case when s.osuser like 'oramt%'
then nvl(upper(s.client_info),'client_info not set')
else substr(s.machine,instr(s.machine, '\')+1,length(s.machine))||':'||s.osuser
end
,1,20) CLIENT
, substr(''''||s.sid||','||s.serial#||''''||decode(s.status,'KILLED','*',''),1,12) kill_id
, lpad(to_char(t.xidusn),4) rbs#
, lpad(to_char(t.used_ublk),4) ublk
, lpad(to_char(t.used_urec),8) urecords
, i.block_gets
, lpad(to_number(round((sysdate - to_date(t.start_time,'MM/DD/YY HH24:MI:SS')) * 60 * 60 * 24)),9) time
, upper(substr(s.program,1,20)) PROGRAM
, to_char(s.LOGON_TIME,'HH24:MI:SS DD-MON') LOGIN_TIME
from sys.v_$transaction t
, sys.v_$session s
, sys.v_$sess_io i
, sys.v_$process p
where s.saddr = t.ses_addr
and i.sid = s.sid
and p.addr = s.paddr
/
By monitoring the URECORDS column value of the row that corresponds to my session's transaction, I can see how it progresses.
Toon -
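Besides the undo-based approach above, the dictionary also exposes progress directly through v$session_longops. A sketch; note the caveat that Oracle only records operations it expects to run for more than a few seconds, and a parallel UPDATE may appear as one row per parallel slave (the table scans driving it) rather than as a single row:
```sql
-- Progress of currently running long operations, per session.
-- SOFAR/TOTALWORK are in the units given by the UNITS column
-- (typically blocks for scans driving the update).
SELECT sid, serial#, opname, sofar, totalwork,
       ROUND(100 * sofar / totalwork, 1) AS pct_done
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;
```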
Many rows in DBA_UNDO_EXTENTS
Hi,
in my DBA_UNDO_EXTENTS I have many, many rows with status = UNEXPIRED.
I'm reading this link:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/undo.htm
and the Oracle® Database 2 Day DBA guide, but I don't know how I can delete these rows.
If I want to enforce the retention period, I can set RETENTION GUARANTEE on the undo tablespace, right?
Thanks a lot, as usual
Lain
Hello,
You can shrink the datafiles of the undo tablespace like any other datafiles, but you cannot go below the HWM (high-water mark).
Please find below a query to get the HWM position (in MB) of every datafile of the undo tablespace:
col file_name for A60
select B.file_name, (A.block_id + A.blocks)*8/1024 "MB"
from dba_undo_extents A, dba_data_files B
where A.file_id = B.file_id
and A.block_id = (select max(C.block_id)
                  from dba_undo_extents C
                  where C.file_id = A.file_id)
order by A.file_id;
NB: the 8 is the block size of your database in KB (here 8 KB).
You get a result like that:
FILE_NAME MB
E:\ORACLE\ORADATA\PRD\LOG\UNDOTBS01.DBF 30.0703125
E:\ORACLE\ORADATA\PRD\LOG\UNDOTBS02.DBF 19.0703125
E:\ORACLE\ORADATA\PRD\LOG\UNDOTBS03.DBF 40.0703125
So, in this example, you cannot resize the first datafile below ~31 MB.
Hope this helps.
Best regards,
Jean-Valentin
Edited by: Lubiez Jean-Valentin on Jan 25, 2010 8:33 PM -
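For completeness, the retention-guarantee setting the original poster asks about is a single ALTER TABLESPACE command. A sketch; the tablespace name undotbs1 is an assumption:
```sql
-- Enforce the retention period: unexpired undo is preserved even if
-- that means active DML fails for lack of undo space.
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

-- Revert to the default best-effort behaviour.
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
```
Note that UNEXPIRED extents are normal and are reused automatically once they expire; they do not need to be (and cannot be) deleted manually.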
Exception too many rows...
Hi
I am getting two different outputs with the following code, depending on whether I declare the variable the first way or the second way.
When I declare the variable v_empno as NUMBER(10) and the TOO_MANY_ROWS exception is raised, the variable is NULL when I print it afterwards with DBMS_OUTPUT.
But when I declare the same variable as table.column%TYPE and the same scenario happens, the variable is not NULL when I print it; rather, it holds the first value from the query's output.
declare
--v_empno number(10);
v_empno emp.empno%type;
begin
dbms_output.put_line('before '||v_empno );
select empno into v_empno from emp;
dbms_output.put_line('first '||v_empno);
exception when too_many_rows then
dbms_output.put_line('second '||v_empno);
dbms_output.put_line('exception'||sqlerrm);
end;
Is there any specific reason for this?
Your comments please.
Thanks
Sidhu
In 9i:
SQL> declare
2 --v_empno number(10);
3 v_empno emp.empno%type;
4 begin
5 dbms_output.put_line('before '||v_empno );
6 select empno into v_empno from emp;
7 dbms_output.put_line('first '||v_empno);
8 exception when too_many_rows then
9 dbms_output.put_line('second '||v_empno);
10 dbms_output.put_line('exception'||sqlerrm);
11 end;
12 /
before
second 7369
exceptionORA-01422: exact fetch returns more than requested number of rows
PL/SQL procedure successfully completed.
SQL> declare
2 v_empno number;
3 --v_empno emp.empno%type;
4 begin
5 dbms_output.put_line('before '||v_empno );
6 select empno into v_empno from emp;
7 dbms_output.put_line('first '||v_empno);
8 exception when too_many_rows then
9 dbms_output.put_line('second '||v_empno);
10 dbms_output.put_line('exception'||sqlerrm);
11 end;
12 /
before
second
exceptionORA-01422: exact fetch returns more than requested number of rows
PL/SQL procedure successfully completed.
SQL> edit
Wrote file afiedt.buf
1 declare
2 v_empno number(10);
3 --v_empno emp.empno%type;
4 begin
5 dbms_output.put_line('before '||v_empno );
6 select empno into v_empno from emp;
7 dbms_output.put_line('first '||v_empno);
8 exception when too_many_rows then
9 dbms_output.put_line('second '||v_empno);
10 dbms_output.put_line('exception'||sqlerrm);
11* end;
SQL> /
before
second 7369
exceptionORA-01422: exact fetch returns more than requested number of rows
PL/SQL procedure successfully completed.
In 10g:
SQL> declare
2 v_empno number(10);
3 --v_empno emp.empno%type;
4 begin
5 dbms_output.put_line('before '||v_empno );
6 select empno into v_empno from emp;
7 dbms_output.put_line('first '||v_empno);
8 exception when too_many_rows then
9 dbms_output.put_line('second '||v_empno);
10 dbms_output.put_line('exception'||sqlerrm);
11 end;
12 /
before
second 7369
exceptionORA-01422: exact fetch returns more than requested number of rows
PL/SQL procedure successfully completed.
SQL> edit
Wrote file afiedt.buf
1 declare
2 v_empno number;
3 --v_empno emp.empno%type;
4 begin
5 dbms_output.put_line('before '||v_empno );
6 select empno into v_empno from emp;
7 dbms_output.put_line('first '||v_empno);
8 exception when too_many_rows then
9 dbms_output.put_line('second '||v_empno);
10 dbms_output.put_line('exception'||sqlerrm);
11* end;
SQL> /
before
second 7369
exceptionORA-01422: exact fetch returns more than requested number of rows
PL/SQL procedure successfully completed.
SQL> edit
Wrote file afiedt.buf
1 declare
2 --v_empno number;
3 v_empno emp.empno%type;
4 begin
5 dbms_output.put_line('before '||v_empno );
6 select empno into v_empno from emp;
7 dbms_output.put_line('first '||v_empno);
8 exception when too_many_rows then
9 dbms_output.put_line('second '||v_empno);
10 dbms_output.put_line('exception'||sqlerrm);
11* end;
SQL> /
before
second 7369
exceptionORA-01422: exact fetch returns more than requested number of rows
PL/SQL procedure successfully completed.
Anyhow, you should not rely on the fact that Oracle fetches the first value into the variable and keeps it when the exception is raised.
Tom Kyte discusses the SELECT INTO issue here:
http://asktom.oracle.com/pls/ask/f?p=4950:8:7849913143702726938::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:1205168148688
Rgds.
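As an aside, if only one row is actually wanted, TOO_MANY_ROWS can be avoided rather than handled. A minimal sketch against the same EMP table (choosing empno as the ORDER BY column is an assumption):
```sql
DECLARE
   v_empno emp.empno%TYPE;
BEGIN
   -- Constrain the query so it can return at most one row;
   -- the inner ORDER BY makes the chosen row deterministic.
   SELECT empno
   INTO   v_empno
   FROM   (SELECT empno FROM emp ORDER BY empno)
   WHERE  ROWNUM = 1;
   dbms_output.put_line(v_empno);
END;
/
```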