Amount of Undos
Hello...
I want to know if there's any way to increase the amount of undos I can make in Illustrator CS5.
Thanks in advance.
I wonder why anyone would need more than 200 undos... Anyway, if you're adventurous, try hacking the preference file. I found these two entries in there:
/undoDepth 5
/maxUndoDepth 200
I don't know what they do; let us know how it goes, please.
Similar Messages
-
How to find out the amount of undo generated for a given session in 10g and 9i
Hi All,
I am on v10.2 on Linux. How can I find out the amount of undo generated in my session?
I tried this
SQL> select a.STATISTIC#, a.VALUE, b.NAME from v$mystat a , v$statname b
2 where a.statistic# = b.statistic# and (b.name like '%undo%' or b.name like '%rollback%') ;
STATISTIC# VALUE NAME
5 0 user rollbacks
75 0 DBWR undo block writes
176 132 undo change vector size
177 0 transaction tables consistent reads - undo records applied
178 0 transaction tables consistent read rollbacks
179 0 data blocks consistent reads - undo records applied
182 0 rollbacks only - consistent read gets
183 0 cleanouts and rollbacks - consistent read gets
188 0 rollback changes - undo records applied
189 0 transaction rollbacks
200 0 auto extends on undo tablespace
202 0 total number of undo segments dropped
220 0 global undo segment hints helped
221 0 global undo segment hints were stale
222 0 local undo segment hints helped
223 0 local undo segment hints were stale
224 0 undo segment header was pinned
226 0 SMON posted for undo segment recovery
229 0 SMON posted for undo segment shrink
236 0 IMU undo retention flush
241 0 IMU CR rollbacks
242 488 IMU undo allocation size
22 rows selected.
SQL>
SQL> create table temp1 as select * from dba_objects where 1=2 ;
Table created.
SQL> select a.STATISTIC#, a.VALUE, b.NAME from v$mystat a , v$statname b
2 where a.statistic# = b.statistic# and (b.name like '%undo%' or b.name like '%rollback%') ;
STATISTIC# VALUE NAME
5 0 user rollbacks
75 0 DBWR undo block writes
176 30280 undo change vector size
177 0 transaction tables consistent reads - undo records applied
178 0 transaction tables consistent read rollbacks
179 8 data blocks consistent reads - undo records applied
182 8 rollbacks only - consistent read gets
183 0 cleanouts and rollbacks - consistent read gets
188 0 rollback changes - undo records applied
189 0 transaction rollbacks
200 0 auto extends on undo tablespace
202 0 total number of undo segments dropped
220 0 global undo segment hints helped
221 0 global undo segment hints were stale
222 0 local undo segment hints helped
223 0 local undo segment hints were stale
224 0 undo segment header was pinned
226 0 SMON posted for undo segment recovery
229 0 SMON posted for undo segment shrink
236 0 IMU undo retention flush
241 0 IMU CR rollbacks
242 560 IMU undo allocation size
22 rows selected.
SQL> insert /*+ APPEND */ into temp1 select * from dba_objects ;
91057 rows created.
SQL> commit ;
Commit complete.
SQL> select a.STATISTIC#, a.VALUE, b.NAME from v$mystat a , v$statname b
2 where a.statistic# = b.statistic# and (b.name like '%undo%' or b.name like '%rollback%') ;
STATISTIC# VALUE NAME
5 0 user rollbacks
75 0 DBWR undo block writes
176 171356 undo change vector size
177 0 transaction tables consistent reads - undo records applied
178 0 transaction tables consistent read rollbacks
179 166 data blocks consistent reads - undo records applied
182 91 rollbacks only - consistent read gets
183 0 cleanouts and rollbacks - consistent read gets
188 0 rollback changes - undo records applied
189 10 transaction rollbacks
200 0 auto extends on undo tablespace
202 0 total number of undo segments dropped
220 0 global undo segment hints helped
221 0 global undo segment hints were stale
222 0 local undo segment hints helped
223 0 local undo segment hints were stale
224 0 undo segment header was pinned
226 0 SMON posted for undo segment recovery
229 0 SMON posted for undo segment shrink
236 0 IMU undo retention flush
241 0 IMU CR rollbacks
242 1352 IMU undo allocation size
22 rows selected.
What exactly is "undo change vector size" ?
Also, if I am on v9.2, this statistic ("undo change vector size") is not there. What can be used in v9.2?
Thanks in advance.
Hi..
>
SET LINESIZE 200
COLUMN username FORMAT A15
SELECT s.username,
s.sid,
s.serial#,
t.used_ublk,
t.used_urec,
rs.segment_name,
r.rssize,
r.status
FROM v$transaction t,
v$session s,
v$rollstat r,
dba_rollback_segs rs
WHERE s.saddr = t.ses_addr
AND t.xidusn = r.usn
AND rs.segment_id = t.xidusn
ORDER BY t.used_ublk DESC;
>
HTH
Anand -
Hello all,
For the life of me, I can't figure out where to set the undo/redo amount. I had thought it was under Edit > Preferences > Units & Undo, but when I go to Edit, then Preferences, I do not see the undo options.
I had to reinstall CS2 on my computer, but I had thought there was a way you could set the number of undos you're allowed.
Win / CS2
Sometimes I'd lose my head if it weren't attached.
Thanks in advance.
Don't recall what 10 was/is, but I believe CS1's limit was 200. I do not recall whether that was or could be user-defined. I just remember that the number of undos available, shown at the bottom left near the taskbar, would stop increasing once I hit 200.
Haven't bothered with it in CS3 and while I had CS2 on my previous system before the MB fried, I never looked into that.
So is the number of undos available dependent on memory in CS4, too? -
How to measure undo at a session level
Below is what are trying to do.
We are trying to implement Oracle's table block compression feature.
In doing so, in one of our tests we discovered that the session performing the DML (inserts) generated almost 30x the undo.
We measured this undo using the query below (before the transaction committed).
SELECT a.sid, a.username, used_ublk, used_ublk*8192 bytes
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr
However, the above is at a transaction level; since the transaction is not yet committed, we would lose this value once it is either committed or rolled back. For this reason, we are trying to find an equivalent statistic at the session level.
1. We are trying to find out whether an equivalent session-level statistic exists to measure the amount of undo generated.
2. Is the undo generated always in terms of "undo blocks?"
3. When querying v$statname for name like '%undo%' we came across several statistics, the closest one
undo change vector size -in bytes?
4. desc test_table;
Name Type
ID NUMBER
sql> insert into test_table values (1);
5. However when we run the query against:
SELECT s.username,sn.name, ss.value
FROM v$sesstat ss, v$session s, v$statname sn
WHERE ss.sid = s.sid
AND sn.statistic# = ss.statistic#
AND s.sid =204
AND sn.name ='undo change vector size'
SID USERNAME NAME BYTES
204 NP4 undo change vector size 492
6. Query against: v$transaction
SELECT a.sid, a.username, used_ublk, used_ublk*8192 bytes
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr
SID USED_UBLK BYTES
204 1 8192
What we are trying to understand is:
1. How can we or what is the correct statistic to determine how many undo blocks were generated by particular session?
2. What is the statistic "undo change vector size"? What does it really mean or measure?
Any transaction that generates undo will use undo blocks in multiples of one, i.e. the minimum allocation on disk is 8KB.
Furthermore, an undo record does not translate to a table row. The undo has to capture changes to indexes, block splits, and other actions. Multiple changes to the same table/index block may be collapsed into one undo record/block, etc.
Therefore, a transaction that generated 492 bytes of Undo would use 8KB of undo space because that is the minimum allocation.
You need to test with larger transactions.
SQL> update P_10 set col_2='ABC2' where mod(col_1,10)=0;
250000 rows updated.
SQL>
SQL> @active_transactions
SID SERIAL# SPID USERNAME PROGRAM XIDUSN USED_UBLK USED_UREC
143 542 17159 HEMANT sqlplus@DG844 (TNS V1-V3) 6 5176 500000
Statistic : db block changes 1,009,903
Statistic : db block gets 1,469,623
Statistic : redo entries 502,507
Statistic : redo size 117,922,016
Statistic : undo change vector size 41,000,368
Statistic : table scan blocks gotten 51,954
Statistic : table scan rows gotten 10,075,245
Hemant K Chitale -
Hi guys.
Our database has 50GB of undo tablespace. I decided to create a second undo tablespace and switch to the new one. Since doing that yesterday, the size of the old undo is still 49GB (I was thinking the value would drop to zero) and the new tablespace keeps increasing in size! It is now about 20GB. I have the following questions.
a) If I restart the database, is the value of the old undo going to fall to zero?
b) undo_retention=86400. Is setting this to a lower value, say 800 seconds, going to affect the performance of the database? Is it going to release the space in the old undo?
Thanks and any help is appreciated.
David
The undo tablespace will not automatically shrink, since you have a new undo tablespace in place. You can drop the old one if you don't plan to use it.
Setting a lower undo_retention will certainly help to contain undo space usage. However, you should query v$undostat and v$rollstat to estimate the amount of undo space required for the current workload, then size the undo tablespace accordingly. Turn off autoextend on the undo tablespace.
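One rough sketch of that sizing estimate (this is an illustration only: it assumes an 8KB undo block size and reads the recent sampling intervals kept in v$undostat):

```sql
-- Estimate required undo tablespace size in MB:
-- peak undo block generation rate (blocks/sec) times the
-- undo_retention target (sec), converted assuming 8KB blocks.
SELECT MAX(undoblks / ((end_time - begin_time) * 86400))
         * (SELECT TO_NUMBER(value)
            FROM   v$parameter
            WHERE  name = 'undo_retention')
         * 8192 / 1024 / 1024 AS est_undo_mb
FROM   v$undostat;
```

If the estimate comes out well above the current tablespace size, either grow the tablespace or lower undo_retention, as discussed above.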
Query abort with ora-30036 after more than 20 hours and 180g of undo
Dear all,
A developer sent me a query. It fails after more than 20 hours with an undo tablespace of 180GB (I changed undo_retention and the size of the undo tablespace, without results). The query does a lot of inserts. How can I rewrite it to be more performant (my database is on 10.2.0.3 and I can't change that)?
Here's the query:
set serveroutput on size unlimited
set pages 0
set trims on
set lines 1000
set feed off
set pagesize 50
set linesize 1000
set head off
set echo off
set verify off
set feedback off
WHENEVER SQLERROR EXIT SQL.SQLCODE
DECLARE
v_annee VARCHAR(4) := '2012';
v_dkm_id NUMBER := '108';
v_entretien NUMBER;
v_nb_feuilles_cr NUMBER := 0;
v_nb_etats_cr NUMBER := 0;
v_action_id NUMBER;
v_rm_id NUMBER;
v_personne_id NUMBER;
CURSOR c_evaluation IS
SELECT E.ID# AS E_ID, W.ID# AS WF_ID , E.NATURE_ECHELON AS ECHELON
FROM T_EVALUATION E
JOIN T_DKM_LOCALE L ON L.ID#=E.DKM_LOCALE_ID
JOIN T_WORKFLOW W ON (W.CODE=E.CODE_WORKFLOW_INITIAL AND W.ANNEE=v_annee )
WHERE L.DKM_NAT_ID=v_dkm_id;
r_evaluation c_evaluation%ROWTYPE;
BEGIN
SELECT ID#
INTO v_personne_id
FROM T_PERSONNE
WHERE ID_FONCTIONNEL = 'herve.collin';
dbms_output.put_line('===== MAJ evaluations / statut_harmo_shd =============');
dbms_output.put_line('===== Creation des feuilles ==========================');
SELECT ID# MOTIF_ID
INTO v_entretien
FROM T_REF_MOTIF_TENUE_ENTRETIEN
WHERE CODE='PLA';
OPEN c_evaluation;
LOOP
FETCH c_evaluation INTO r_evaluation;
EXIT WHEN c_evaluation%NOTFOUND;
IF r_evaluation.ECHELON = 'T'
THEN
SELECT ID#
INTO v_rm_id
FROM T_REF_REDUCMAJO
WHERE ANNEE = v_annee
AND CATEGORIE_GRADE = 'ET'
AND CODE = 'V1';
END IF;
IF r_evaluation.ECHELON = 'F' OR r_evaluation.ECHELON = 'V'
THEN
SELECT ID#
INTO v_rm_id
FROM T_REF_REDUCMAJO
WHERE ANNEE = v_annee
AND CATEGORIE_GRADE = 'FV'
AND CODE = 'R1';
END IF;
UPDATE T_EVALUATION
SET STATUT_HARMO_SHD = 'C' , REF_REDUCMAJO_PROP_SHD_ID = v_rm_id
WHERE DKM_LOCALE_ID IN ( SELECT ID# FROM T_DKM_LOCALE WHERE DKM_NAT_ID = v_dkm_id );
INSERT INTO T_FEUILLE(ID#, REF_MOTIF_TENUE_ENT_ID, EVALUATION_ID, WORKFLOW_ID)
VALUES (S_FEUILLE.NEXTVAL , v_entretien , r_evaluation.E_ID, r_evaluation.WF_ID);
v_nb_feuilles_cr := v_nb_feuilles_cr + 1;
END LOOP;
CLOSE c_evaluation;
dbms_output.put_line(' -> '||v_nb_feuilles_cr||' feuilles crees');
COMMIT;
END;
set serveroutput off
exit
What is the best choice? Drop the indexes on the table before the insert, or start the insert without fetching the data in a cursor?
nb: sorry for my bad english
Best regards
Catherine Andre
@mail: [email protected]
user4443606 wrote:
Thanks for your reply!
I'll try to grow the undo tablespace, but I remain convinced that the problem is in the query.
You can be convinced & wrong at the same time.
row by row INSERT is slow by slow.
It can be done as a single INSERT; but that won't change the amount of UNDO that is required.
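A minimal sketch of that set-based rewrite, using the table and cursor names from the posted block (the UPDATE is also hoisted out of the loop, since it touches the same rows on every iteration; the per-echelon REF_REDUCMAJO logic is omitted for brevity, so treat this as an untested outline, not the author's code):

```sql
-- Run the mass UPDATE once, outside any loop.
UPDATE T_EVALUATION
SET    STATUT_HARMO_SHD = 'C'
WHERE  DKM_LOCALE_ID IN (SELECT ID# FROM T_DKM_LOCALE WHERE DKM_NAT_ID = 108);

-- Replace the fetch/insert loop with one INSERT ... SELECT.
INSERT INTO T_FEUILLE (ID#, REF_MOTIF_TENUE_ENT_ID, EVALUATION_ID, WORKFLOW_ID)
SELECT S_FEUILLE.NEXTVAL, M.ID#, E.ID#, W.ID#
FROM   T_EVALUATION E
       JOIN T_DKM_LOCALE L ON L.ID# = E.DKM_LOCALE_ID
       JOIN T_WORKFLOW W ON W.CODE = E.CODE_WORKFLOW_INITIAL
                        AND W.ANNEE = '2012'
       CROSS JOIN (SELECT ID#
                   FROM   T_REF_MOTIF_TENUE_ENTRETIEN
                   WHERE  CODE = 'PLA') M
WHERE  L.DKM_NAT_ID = 108;

COMMIT;
```

This removes the row-by-row fetch and the repeated whole-table UPDATE, but as noted, the single INSERT still generates the same order of undo for the rows it creates.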
UNdo error (ora-01555) - Snapshot too old error
Hi,
If undo gets filled and we get a snapshot too old error, what is the solution? Please give a step-by-step solution.
You prevent ORA-01555 errors by
1) Setting UNDO_RETENTION equal to or greater than the length of the longest running query/ serializable transaction in the system
2) Ensuring the UNDO tablespace is large enough to accommodate the amount of UNDO generated over that period of time.
You would need to determine things like the length of your longest running query, the amount of UNDO getting generated, etc. and adjust your UNDO_RETENTION and UNDO tablespace accordingly.
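A sketch of that check (v$undostat records the longest query seen in each sampling interval; the ALTER at the end is illustrative only, not a recommendation for any particular system):

```sql
-- Longest-running query (in seconds) seen in the recent
-- v$undostat history, to compare against undo_retention.
SELECT MAX(maxquerylen) AS longest_query_sec FROM v$undostat;

-- Current retention target (SQL*Plus).
SHOW PARAMETER undo_retention

-- If the longest query exceeds the retention, raise it, e.g.:
-- ALTER SYSTEM SET undo_retention = 3600;
```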
Justin -
Update Multiple Columns when concerned about redo/undo log sizes.
Hi ,
I have update statements that update multiple columns at once if any of them has changed. What I see is that even when a column's value has not changed, the update still increases the redo size.
Below is sample code similar to what is in my application. Basically I check whether there is a difference in any of the columns to be updated and then update all of them.
Is there a way to reduce the redo log size without splitting the update statement into one per column? Redo/undo log size is a concern for us.
For i In 1 .. rec.Count Loop
Update employees e
Set e.first_name = rec(i).first_name, e.last_name = rec(i).last_name
Where e.first_name != rec(i).first_name
Or e.last_name != rec(i).last_name;
End Loop;
My database is 10g.
Muhammed Soyer wrote:
Redo/Undo log size is a concern for us.
You are worried about the wrong thing.
If you are concerned about the amount of undo and redo, you should be less concerned about the small difference between updating 1 or 3 columns, and instead remove the loop, which is contributing to a massive increase in both undo and redo.
Re: global temporary table row order
Name Run1 Run2 Diff
STAT...undo change vector size 240,500 6,802,736 6,562,236
STAT...redo size 1,566,136 24,504,020 22,937,884
Run2 shows what adding a loop to a regular SQL statement will do to undo and redo. It made the redo used 15 times greater and the undo almost 30 times greater.
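A set-based alternative to the posted loop can be sketched like this (new_names and employee_id are hypothetical stand-ins for the rec collection and its key column, so this is an outline, not the poster's code):

```sql
-- One UPDATE instead of a PL/SQL loop: only rows whose values
-- actually differ are touched, and no per-iteration overhead is paid.
UPDATE employees e
SET    (first_name, last_name) =
       (SELECT n.first_name, n.last_name
        FROM   new_names n
        WHERE  n.employee_id = e.employee_id)
WHERE  EXISTS (SELECT 1
               FROM   new_names n
               WHERE  n.employee_id = e.employee_id
               AND   (n.first_name != e.first_name
                      OR n.last_name != e.last_name));
```

The EXISTS predicate keeps unchanged rows out of the update entirely, which is what avoids generating undo and redo for them.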
SQL Statement using Undo Tablespace
Hi
Is there a way to find out which SQL statement is using the largest amount of undo tablespace?
Query v$undostat.
Use column "MAXQUERYID" to identify long-running queries.
Refer Metalink Note 262066.1 for undo information.
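That lookup can be sketched as follows (MAXQUERYID exists in v$undostat from 10g onward; the join to v$sql only resolves statements still in the shared pool, so treat it as a best-effort sketch):

```sql
-- Map the longest-running query in each undo sampling interval
-- back to its SQL text, when it is still cached.
SELECT u.begin_time, u.maxqueryid, u.maxquerylen, s.sql_text
FROM   v$undostat u
       JOIN v$sql s ON s.sql_id = u.maxqueryid
ORDER  BY u.maxquerylen DESC;
```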
Regards
Sethu -
Find out who / what is creating undo
Hi,
Since migrating to Oracle 10g we have a 20-times increase in the amount of undo generated on our database. Previously we had less than 2GB, but since the migration we have more than 40GB!! (Undo retention is set to 24 hours on the new DB, 5 days on the old 9i DB!!!)
Is there a way I can find out who or what application is creating the large amounts of undo on the database? All I can determine at the moment is that there is a lot of it, not who did it.
I really wonder where all this undo is coming from..
thx.
Steve.
One reason could be that you might have turned on all the automatic tuning features of 10g.
Monitor your v$undostat dynamic view.
Also worth reading:
Metalink Note:240746.1
Subject: 10g NEW FEATURE on AUTOMATIC UNDO RETENTION
and
Metalink Note:311615.1
Subject: Oracle 10G new feature - Automatic Undo Retention Tuning
Jaffar
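To see who is generating undo right now, a join like the following is a common sketch (standard v$ views; note that only in-flight, uncommitted transactions show up here):

```sql
-- Sessions with active transactions, ordered by undo blocks consumed.
SELECT s.sid, s.username, s.program, t.used_ublk, t.used_urec
FROM   v$transaction t
       JOIN v$session s ON s.saddr = t.ses_addr
ORDER  BY t.used_ublk DESC;
```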
Segment shrinking and UNDO tablespace
When i issued
alter table <table_name> shrink space cascade;
I got the error 'Unable to extend UNDO ... by 8'. Does segment shrinking consume space from the UNDO tablespace?
Message was edited by:
for_good_reason
As Jonathan said, shrinking a segment generates redo and undo data.
But this phenomenon of undo shortage is not the normal case.
Shrinking a segment may involve continuous DML (deleting and re-inserting),
but its transactions seem to be committed internally at regular intervals.
For this reason, the shrink operation should not hold undo for that long.
But I have no knowledge of the exact behavior of the shrink operation.
Someone else will shed some light.
How big are your table and indexes?
Some cases are reported where shrinking a big segment generates a really large amount of undo data.
This might be related to your problem, but I am not sure.
Visit Metalink note# 3888229.
Hello,
The size of my UNDOTS is 6 GB and undo_retention=21600.
After the last application upgrade the UNDOTS is full and we can't fix it, and it creates a problem. I opened a TAR with Oracle and after two days they have not proposed any solution.
Is there any possibility (an Oracle view) to see all the transactions inside the UNDOTS?
For your information, even when I increase the size to 9 GB and reduce undo_retention to 14400, the result is the same: a full UNDOTS.
Thanks for your suggestions and propositions; this is in production.
Regards,
Message was edited by:
user579652
Message was edited by:
user579652
The following query/report will help you estimate the amount of undo you require, based on your undo_retention, largest transaction, and block size. If the report shows you need more undo space than you have allocated, then you probably need to increase the amount of undo space or reduce the undo_retention. Pay special attention to the occurrences of snapshot too old (ORA-01555) and no space.
== =====================================
set linesize 120
set pagesize 60
alter session set nls_date_format = "dd-Mon-yyyy hh24:mi:ss";
COL TXNCOUNT FOR 99,999,999 HEAD 'Txn. Cnt.'
COL MAXQUERYLEN FOR 99,999,999 HEAD 'Max|Query|Sec'
COL MAXCONCURRENCY FOR 9,999 HEAD 'Max|Concr|Txn'
COL bks_per_sec FOR 99,999,999 HEAD 'Blks per|Second'
COL kb_per_second FOR 99,999,999 HEAD 'KB per|Second'
COL undo_mb_required FOR 999,999 HEAD 'MB undo|Needed'
COL ssolderrcnt FOR 9,999 HEAD 'ORA-01555|Count'
COL nospaceerrcnt FOR 9,999 HEAD 'No Space|Count'
break on report
compute max of txncount -
maxquerylen -
maxconcurrency -
bks_per_sec -
kb_per_second -
undo_mb_required on report
compute sum of -
ssolderrcnt -
nospaceerrcnt on report
SELECT begin_time,
txncount-lag(txncount) over (order by end_time) as txncount,
maxquerylen,
maxconcurrency,
undoblks/((end_time - begin_time)*86400) as bks_per_sec,
(undoblks/((end_time - begin_time)*86400)) * t.block_size/1024 as kb_per_second,
((undoblks/((end_time - begin_time)*86400)) * t.block_size/1024) * TO_NUMBER(p2.value)/1024 as undo_MB_required,
ssolderrcnt,
nospaceerrcnt
FROM v$undostat s,
dba_tablespaces t,
v$parameter p,
v$parameter p2
WHERE t.tablespace_name = UPPER(p.value)
AND p.name = 'undo_tablespace'
AND p2.name = 'undo_retention'
ORDER BY begin_time;
show parameter undo
clear computes -
UNDO exhaustion whilst performing validate_layer_with_context
Trying to track down an ArcSDE problem, I've been performing validation on the layers we have loaded into Oracle Spatial. One table (part of OS MasterMap) containing 400 million features will not complete a validation; after several hours of processing and 24GB of consumed UNDO, the following error is reported.
SQL> execute sdo_geom.validate_layer_with_context('TOPOGRAPHIC_LINE', 'SHAPE',
'TOPOGRAPHIC_LINE_VAL', 100);
--After 8+ hours of processing
BEGIN sdo_geom.validate_layer_with_context('TOPOGRAPHIC_LINE', 'SHAPE', 'TOPOGRA
PHIC_LINE_VAL', 100); END;
ERROR at line 1:
ORA-29400: data cartridge error
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'
ORA-06512: at "MDSYS.SDO_3GL", line 439
ORA-06512: at "MDSYS.SDO_GEOM", line 3860
ORA-06512: at line 1
The topographic_line_val contains no records before or after running the validate procedure.
I'm sure I've missed something fundamental, but I can't see why so much UNDO should be necessary. Nothing is running a long query on the val table, the procedure doesn't modify the topographic_line table, and the commit interval is set to only 100.
Why is so much UNDO being consumed?
Should I just keep increasing the amount of UNDO tablespace available until it completes?
Is it a particularly bad idea to fake validate_layer(_with_context), using validate_geometry and a loop?
Could I be running into a Spatial bug? The database is at patch level 9.2.0.6
With thanks, Alex
IMHO, call support; it sounds like a bug (using it on a larger dataset than it was ever tested with).
Have you thought about using VALIDATE_GEOMETRY_WITH_CONTEXT instead? It might be a hair slower, but since it is called once per row, at least it won't blow up:
CREATE TABLE geom_errors (rowid_text CHAR(18), error_text CLOB);
INSERT INTO geom_errors (
SELECT * FROM
(SELECT a.ROWID, SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(a.shape, 0.5) the_error
FROM topographic_line a)
WHERE the_error != 'TRUE'); -
Guys,
I am updating 1 million rows on an Oracle 10g platform. Normally when I do this on Oracle 9i I run it as a batch process and commit after each batch, obviously to avoid/control undo generation. But on Oracle 10g I am told undo management is automatic and I do not need to run the update as a batch process.
Is this right? Please throw some light on this new feature - automatic undo management.
Thanks
Automatic undo management was available in 9i as well, and my guess is you were probably using it there. However, I'll assume for the sake of this writing that you were using manual undo management in 9i and are now on automatic.
Automatic undo management depends upon UNDO_RETENTION, a parameter that defines how long Oracle should try to keep committed transactions in UNDO. However, this parameter is only a suggestion. You must also have an UNDO tablespace that's large enough to handle the amount of UNDO you will be generating/holding, or you will get ORA-01555: Snapshot too old, rollback segment too small errors.
You can use the UNDO advisor to find out how large this tablespace should be given a desired UNDO retention, or look online for some scripts...just google for: oracle undo size
Oracle 10g also gives you the ability to guarantee undo. This means that instead of throwing an error on SELECT statements, it guarantees your UNDO retention for consistent reads and instead errors your DML that would cause UNDO to be overwritten.
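That guarantee is enabled at the tablespace level; a minimal sketch, assuming the undo tablespace is named UNDOTBS1:

```sql
-- Guarantee unexpired undo is kept for the full undo_retention period;
-- DML that would need to overwrite it errors out, instead of SELECTs
-- later failing with ORA-01555.
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

-- Revert to the default behavior:
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
```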
Now, for your original question...yes, it's easier for the DBA to minimize the issues of UNDO when using automatic undo management. If you set the UNDO_RETENTION high enough with a properly sized undo tablespace you shouldn't have as many issues with UNDO. How often you commit should have nothing to do with it, as long as your DBA has properly set UNDO_RETENTION and has an optimally sized UNDO tablespace. Committing more often will only result in your script taking longer, more LGWR/DBWR issues, and the "where was I" problem if there is an error (if it errors, where did it stop?).
Lastly (and true even for manual undo management), if you commit more frequently, you make it more possible for ORA-01555 errors to occur. Because your work will be scattered among more undo segments, you increase the chance that a single one may be overwritten if necessary, thus causing an ORA-01555 error for those that require it for read consistency.
It all boils down to the size of the undo tablespace and the undo retention, in the end...just as manual management boiled down to the size, amount, and usage of rollback segments. Committing frequently is a peroxide band-aid: it covers up the problem, tries to clean it, but in the end it just hurts and causes problems for otherwise healthy processes. -
Unable to undo!!!!
New problem: my project randomly stops me from using the undo function, from Command-Z to just trying to use it in the Edit menu. Is this happening to anyone else?
Help, how do I get around this problem?!
How about this as an explanation, even if it doesn't solve the problem? This is written by Red Truck in another discussion:
https://discussions.apple.com/thread/3135736?tstart=0
His response comes around August 20.
"FYI: The maximum "open files" limit on Mac OS is 256. You can find this by opening Terminal and typing:
ulimit -n
Whenever you have a program that has an autosave, as well as an unlimited amount of undos, the computer has to keep every version of the file open to that you can return to it when you hit cmd-z. There actually isn't such a thing as "undo". It's a figure of speech. What's really happening is, the program reverts to the most recent "save". Therefore, any action you do in FCPX that is able to be undone creates a new open file. Once that limit reaches 256, you will not be able to save anymore. Sounds like most people are losing about 3 hours at the low end. I would assume, then, that it takes the average FCPX user about 3 hours (maybe less) to execute 256 undoable actions."
His discussion goes further about how you might be able to work around this.
dick glendon