Decode performance.
I am using the DECODE function on several columns in my SELECT query.
Even with only 200 records it is taking a long time to retrieve the data.
What is the best way to eliminate this performance problem?
Can I use a CASE statement instead of DECODE? Is that the best way, or is there another possible solution?
Sorry to correct you Peter, but the article that you referenced states that decode also uses short circuit evaluation.
...however COALESCE, DECODE and both CASE expressions provide short-circuit evaluation - once one expression returns true, there is no need to continue evaluating the superfluous entries; control returns. ...I never tested it, but this fits with my experiences.
Furthermore I remember very vaguely a performance test here in the forum between DECODE and CASE. The result was that DECODE was a tiny little bit faster. However, this was a few years back and might have changed by now. On the other hand, if the OP needs to replace several DECODEs with one single CASE expression, then I would expect a considerable time difference in favor of CASE.
But as Billy already said, the chances are high that the performance issue is not caused by the DECODE function but by something different.
A more recent article: Re: CASE or DECODE - what is faster?
Edited by: Sven W. on Jun 1, 2011 11:05 AM
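For the OP's question, the DECODE-to-CASE rewrite is mechanical: each search/result pair becomes a WHEN/THEN, and the trailing default becomes ELSE. A minimal sketch with made-up table and column names (Python's bundled sqlite3 stands in for Oracle here; sqlite has no DECODE, so the Oracle form appears only as a comment):

```python
import sqlite3

# Made-up table standing in for the OP's query; sqlite3 stands in for Oracle.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (status TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("A",), ("B",), ("C",)])

# Oracle form:  SELECT DECODE(status, 'A', 1, 'B', 2, 0) FROM t
# Equivalent simple CASE (standard SQL, works in Oracle and sqlite alike):
rows = conn.execute("""
    SELECT CASE status
             WHEN 'A' THEN 1
             WHEN 'B' THEN 2
             ELSE 0            -- DECODE's trailing default
           END
    FROM t
    ORDER BY status
""").fetchall()
result = [r[0] for r in rows]
```

As the replies note, the rewrite itself is unlikely to change performance for 200 rows; the slowdown is almost certainly elsewhere in the query.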
Similar Messages
-
New ffmpeg and h264 decoding performance (!)
Is it just my impression, or does the latest mplayer perform incredibly well?
Some months ago I remember I benchmarked a sample 1280x536 h264 video, and CPU usage was very near 100% on a little Atom N280.
Using an mplayer compiled against the latest ffmpeg-mt, today I tried the same sample video on the same CPU and I got these results:
time mplayer 1280x536.mkv -nosound -vo null -benchmark -endpos 60 -lavdopts skiploopfilter=all:threads=2
BENCHMARKs: VC: 27.969s VO: 0.009s A: 0.000s Sys: 0.390s = 28.369s
BENCHMARK%: VC: 98.5920% VO: 0.0329% A: 0.0000% Sys: 1.3751% = 100.0000%
Exiting... (End of file)
real 0m28.459s
user 0m49.507s
sys 0m0.140s
28 seconds for a 60-second stream means less than 50% CPU usage!
Note that the Atom N280 is not a multicore CPU, it just provides hyperthreading. Launching mplayer with threads=1 increases the benchmark time by about 2 seconds, which is great too, and is the result I get with the standard Arch Linux mplayer package.
On a full-frame 1280x720 video, CPU usage reaches about 67%.
I think the new ffmpeg now easily outperforms even CoreAVC. I'm very impressed!
Last edited by kokoko3k (2010-06-16 08:22:08)
Thanks, I missed that article on Phoronix:
"The H.264 and Theora decoders are now significantly faster and the Vorbis decoder has seen important updates."
"significantly" is a euphemism
-edit-
with skiploopfilter=none:threads=2
BENCHMARKs: VC: 28.048s VO: 0.010s A: 0.000s Sys: 0.385s = 28.442s
BENCHMARK%: VC: 98.6129% VO: 0.0340% A: 0.0000% Sys: 1.3531% = 100.0000%
Exiting... (End of file)
real 0m28.547s
user 0m49.497s
sys 0m0.173s
skiploopfilter=none:threads=1
BENCHMARKs: VC: 33.144s VO: 0.008s A: 0.000s Sys: 0.300s = 33.452s
BENCHMARK%: VC: 99.0797% VO: 0.0242% A: 0.0000% Sys: 0.8962% = 100.0000%
Exiting... (End of file)
real 0m33.540s
user 0m33.131s
sys 0m0.197s
I'm very curious to compare those results against a CoreAVC-enabled mplayer, but I haven't got the codec. Anyone?
Last edited by kokoko3k (2010-06-16 08:23:56) -
Which is the best decode or case
Hi,
When you check performance-wise, which is the better one: DECODE or CASE?
Thanks,
> You mean CPU processor speed or Oracle buffer (SGA)?
Neither. CPU architecture. RISC vs CISC vs ..
On a PA-RISC1 CPU a DECODE is just a tad faster than a CASE. On an AMD64 CPU, the reverse is true.
> When I increase memory, will the CASE and DECODE performance increase?
No. A CASE and a DECODE do not need memory to work faster. Each is a set of machine code instructions that needs to compare values to determine a result. It depends on just how fast the CPU can execute this set of machine code instructions.
A faster CPU will make a very significant difference. An AMD64 Opteron CPU is a couple of times faster than a PA-RISC1 CPU.
I had this exact same conversation back in 2006 on this forum - and posted this benchmark (http://forums.oracle.com/forums/thread.jspa?messageID=1346165) to show that the decision of using CASE or DECODE is not a decision that should be based on raw performance. -
Air Parrot performance with Ethernet cable ?
Hello Everyone,
I am very impressed by the mirroring functionality provided by AirPlay on the Apple TV. I have a MacBook Pro purchased in mid-2010. Unfortunately, my Mac doesn't have the Quick Sync hardware support needed for AirPlay mirroring to work in Mountain Lion, so I never upgraded to ML. However, I have been playing around with AirParrot, and I have noticed a lot of choppiness while playing videos.
Both my MacBook and Apple TV are currently using Wi-Fi. I am guessing the lack of smoothness in video mirroring is because of the Wi-Fi. Has anyone tested using Ethernet cables, I mean connecting both the MBP and the ATV to the broadband modem with Ethernet cables? Did it improve video quality?
Thanks in advance.
I got a Mid-2010 iMac (i3) and an ATV3. I faced some performance issues while streaming HD video content from the web (3 Mbit DSL) and transferring it by WiFi (incl. an APExpress as repeater) to the Mac and back to the ATV via AirParrot (mirroring). So I tried Dlan (Devolo 500) as an alternative to WiFi. Performance has improved significantly! Now I am quite happy (but still annoyed that Apple does not support older machines with mirroring on ML).
But I guess my i3 iMac is now quite at the limit of its video coding/decoding performance.
---> give Ethernet (or Dlan) a try.
iMac 21" i3 (mid 2010)
iPad 2
ATV 3
iphone 4s
3x APExpress (for music airplay, 1 also as repeater)
AirportExtreme generating WiFi
Samsung on WXP
Dell on Win7
3Mbit DSL on Speedport700 Router
3x Devolo 500 (compact) for Dlan -
How to Compare Data length of staging table with base table definition
Hi,
I've two tables :staging table and base table.
I'm getting data from flat files into the staging table. As per the requirement, the structures of the staging table and base table differ: the length of every column in the staging table is 25% larger, so data can be loaded without errors. For example, a city column that is VARCHAR2(40) in the staging table is VARCHAR2(25) in the base table. Once data is loaded into the staging table, I want to compare the actual data length of every column in the staging table with the base table definition (DATA_LENGTH for each column from ALL_TAB_COLUMNS), and if any column's length differs I need to update the corresponding row in the staging table, which also has a flag called err_length.
so for this I'm using cursor c1 is select length(a.id),length(a.name)... from staging_table;
cursor c2(name varchar2) is select data_length from all_tab_columns where table_name='BASE_TABLE' and column_name=name;
But we get the data all at once in the first query, whereas with the second cursor I need to fetch each column individually and then compare it with the first.
Can anyone tell me how to get desired results?
Thanks,
Mahender.
This is a shot in the dark, but take a look at the example below:
SQL> DROP TABLE STAGING;
Table dropped.
SQL> DROP TABLE BASE;
Table dropped.
SQL> CREATE TABLE STAGING
2 (
3 ID NUMBER
4 , A VARCHAR2(40)
5 , B VARCHAR2(40)
6 , ERR_LENGTH VARCHAR2(1)
7 );
Table created.
SQL> CREATE TABLE BASE
2 (
3 ID NUMBER
4 , A VARCHAR2(25)
5 , B VARCHAR2(25)
6 );
Table created.
SQL> INSERT INTO STAGING VALUES (1,RPAD('X',26,'X'),RPAD('X',25,'X'),NULL);
1 row created.
SQL> INSERT INTO STAGING VALUES (2,RPAD('X',25,'X'),RPAD('X',26,'X'),NULL);
1 row created.
SQL> INSERT INTO STAGING VALUES (3,RPAD('X',25,'X'),RPAD('X',25,'X'),NULL);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM STAGING;
ID A B E
1 XXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
2 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXX
3 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
SQL> UPDATE STAGING ST
2 SET ERR_LENGTH = 'Y'
3 WHERE EXISTS
4 (
5 WITH columns_in_staging AS
6 (
7 /* Retrieve all the columns names for the staging table with the exception of the primary key column
8 * and order them alphabetically.
9 */
10 SELECT COLUMN_NAME
11 , ROW_NUMBER() OVER (ORDER BY COLUMN_NAME) RN
12 FROM ALL_TAB_COLUMNS
13 WHERE TABLE_NAME='STAGING'
14 AND COLUMN_NAME != 'ID'
15 ORDER BY 1
16 ), staging_unpivot AS
17 (
18 /* Using the columns_in_staging above UNPIVOT the result set so you get a record for each COLUMN value
19 * for each record. The DECODE performs the unpivot and it works if the decode specifies the columns
20 * in the same order as the ROW_NUMBER() function in columns_in_staging
21 */
22 SELECT ID
23 , COLUMN_NAME
24 , DECODE
25 (
26 RN
27 , 1,A
28 , 2,B
29 ) AS VAL
30 FROM STAGING
31 CROSS JOIN COLUMNS_IN_STAGING
32 )
33 /* Only return IDs for records that have at least one column value that exceeds the length. */
34 SELECT ID
35 FROM
36 (
37 /* Join the unpivoted staging table to the ALL_TAB_COLUMNS table on the column names. Here we perform
38 * the check to see if there are any differences in the length if so set a flag.
39 */
40 SELECT STAGING_UNPIVOT.ID
41 , (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_A
42 , (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_B
43 FROM STAGING_UNPIVOT
44 JOIN ALL_TAB_COLUMNS ATC ON ATC.COLUMN_NAME = STAGING_UNPIVOT.COLUMN_NAME
45 WHERE ATC.TABLE_NAME='BASE'
46 ) A
47 WHERE COALESCE(ERR_LENGTH_A,ERR_LENGTH_B) IS NOT NULL
48 AND ST.ID = A.ID
49 )
50 /
2 rows updated.
SQL> SELECT * FROM STAGING;
ID A B E
1 XXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX Y
2 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXX Y
3 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
Hopefully the comments make sense. If you have any questions please let me know.
This assumes the column names are the same between the staging and base tables. In addition as you add more columns to this table you'll have to add more CASE statements to check the length and update the COALESCE check as necessary.
Thanks! -
Hi guys,
I'm trying to render some .r3d 4K files as H.264 Level 5.2, but After Effects doesn't support Level 5.2. I can only choose Level 5.1.
I searched around the web and found some people who told me H.264 Level 5.2 is only supported by After Effects CC.
So I downloaded the After Effects CC trial and tried it there, but again I can only choose H.264 Level 5.1.
Is it possible to render H.264 Level 5.2 in After Effects CS6 or CC?
I would buy a CC subscription, but I have to know whether I can render H.264 Level 5.2!
Please help me.
You can search for the H.264 Level 5.2 specifications:
http://en.wikipedia.org/wiki/H.264/MPEG-4_AVC
There are only a few reasons to use that setting and, from what I can tell, 5.2 is neither practical nor necessary. Not many encoders are able to produce it.
As the term is used in the standard, a "level" is a specified set of constraints that indicate a degree of required decoder performance for a profile. For example, a level of support within a profile specifies the maximum picture resolution, frame rate, and bit rate that a decoder may use. A decoder that conforms to a given level must be able to decode all bitstreams encoded for that level and all lower levels. -
Logging multiple server requests?
We will have a "log server" that will receive data from various webservers.
The purpose is to store every event that takes place on the "log server".
A user clicks on a picture; we send a data record containing the picture name, picture owner, and other data to the "log server", which stores the info in a database. Later, another program will come along and collect the data to generate reports.
My question is, what is the best way to go about this in Coldfusion? I'm looking for the most efficient way with least loss of data. And any pitfalls that others have had implementing a similar design.
Right now we're looking at creating a Web Service that is a template that waits for HTTP post operations and records them in a database.
Is there a better way?
Thanks in advance for any ideas or suggestions!
Mike
In my opinion web services are overkill in this situation. Just create a simple .cfm page that receives its parameters via URL or FORM, validates those parameters, and saves the log entry to the database. A simple HTTP request is lightweight compared to a SOAP request (both in encoding/decoding performance and in length).
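A sketch of that lightweight approach - a plain page that validates its URL/FORM parameters and inserts one log row. Python and sqlite3 stand in for ColdFusion and the real database here, and the field names (pic_name, pic_owner) are made up:

```python
import sqlite3
from urllib.parse import parse_qs

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE event_log (pic_name TEXT, pic_owner TEXT)")

def handle_log_request(query_string):
    """Validate the incoming parameters and store one log row.
    Returns True on success, False if validation fails."""
    params = parse_qs(query_string)
    pic_name = params.get("pic_name", [""])[0].strip()
    pic_owner = params.get("pic_owner", [""])[0].strip()
    if not pic_name or not pic_owner:
        return False  # reject incomplete requests instead of logging junk
    db.execute("INSERT INTO event_log VALUES (?, ?)", (pic_name, pic_owner))
    db.commit()
    return True

ok = handle_log_request("pic_name=sunset.jpg&pic_owner=mike")
bad = handle_log_request("pic_name=sunset.jpg")  # missing owner -> rejected
count = db.execute("SELECT count(*) FROM event_log").fetchone()[0]
```

The key point is the same as in the post: validate, insert, return - no SOAP envelope to build or parse.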
Mack -
Performing math on Decode function tags and moving averages
I have the query shown below to create the columns for my report. Can I use the field names shown in bold to perform math functions? Is there an easy way to do this?
select Data_date,
SUM(DECODE(tag_id,'SEF_F0348I',avg_value,NULL))"TE_Flow_mgd",
SUM(DECODE(tag_id,'L_STEbod_con',avg_value,NULL))"TE_BOD_mgl",
"TE_FLOW_mgd" * "TE_BOD_mgl" * 8.34*.45359 as "KG_BOD"
FROM daily_tag_data
Where data_date >= to_date('12/31/2002','mm/dd/yyyy')
GROUP BY data_date;
Also, how would I perform a seven-day moving average on "KG_BOD"?
Thanks - very new at this.
If you want to avoid a sub-query, try this...
select Data_date,
SUM(DECODE(tag_id,'SEF_F0348I',avg_value,NULL))"TE_Flow_mgd",
SUM(DECODE(tag_id,'L_STEbod_con',avg_value,NULL))"TE_BOD_mgl",
SUM(DECODE(tag_id,'SEF_F0348I',avg_value,NULL)) * SUM(DECODE(tag_id,'L_STEbod_con',avg_value,NULL)) * 8.34*.45359 "KG_BOD"
FROM daily_tag_data
Where data_date >= to_date('12/31/2002','mm/dd/yyyy')
GROUP BY data_date;
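For the seven-day moving average asked about above, the usual approach is an analytic (window) function over the grouped result. A sketch with made-up daily values (Python's sqlite3 stands in for Oracle; the same AVG ... OVER windowing clause works in Oracle):

```python
import datetime
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily (data_date TEXT, kg_bod REAL)")

# Ten consecutive days with KG_BOD values 1.0, 2.0, ..., 10.0 (made up).
start = datetime.date(2003, 1, 1)
conn.executemany(
    "INSERT INTO daily VALUES (?, ?)",
    [((start + datetime.timedelta(days=i)).isoformat(), float(i + 1))
     for i in range(10)],
)

# Average over the current row and the six preceding days.
rows = conn.execute("""
    SELECT data_date,
           AVG(kg_bod) OVER (ORDER BY data_date
                             ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS ma7
    FROM daily
    ORDER BY data_date
""").fetchall()
moving_avgs = [r[1] for r in rows]
```

Note the window shrinks for the first six days (it averages whatever rows exist so far); in Oracle you would wrap the daily GROUP BY query in an inline view and apply the window function outside it.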
just replace the 'column-alias' with the actual arithmetic function... -
2D decode vs. form data integration.export data performance
Hello
We have 2D forms that we accept in printed format or via electronic submission. When they are printed, we use Decode to extract the data before processing. When we get them back electronically, instead of using Decode we use the formDataIntegration Export Data call. We have mapped an XSD to the form to facilitate this. An advantage of using the export data call is that we don't have to deal with print issues such as smudged barcodes, bleeding ink, thin paper, etc.
But I'm wondering: performance-wise, is one better than the other?
A scanned form is 40 KB while our electronic form is 400 KB (mainly because we only scan the barcoded page).
Is one more CPU-intensive? Does one require more RAM and other system resources than the other?
Are they both multithreaded or single-threaded?
I ask because we currently have a 70/30 split of scans/electronic submissions, but we will be outsourcing this and our pilot is showing a 99/1 split (scans/electronic).
DECODE Vs CASE Performance Issue
Hi,
The comparison is on the basis of performance. There are billions of records to be processed and millions to be updated.
Please go through the following queries to get a general idea, because the actual query is two pages long.
DECODE(EXP,1,VAL1,VAL2)
Vs
CASE WHEN EXP=1
THEN VAL1
ELSE
VAL2
END
Update table1
set column=( select query)
where (item) IN (select item from table2 )
Please give inputs in term of performance.
Regards
Nitin Bajaj
Hu... I can understand your point also, but it is not difficult to find the solution in the thread; you don't need to read the whole thread.
Even if you see that there are no entries in the duplicate postings, whenever you are using the search function, those will also be shown.
But put yourself in his position; he might be in need of desperate help...
Think in two ways...
And posting the same problem again and again solves this?
cd -
Performance decission DECODE vs CASE
Hi Gurus,
I have a stored procedure that has to process around 2 million records. The performance is very slow at the moment.
I need advice on the following section :-)
CASE x
WHEN '1' THEN
y := 'A';
WHEN '2' THEN
y := 'B';
WHEN '3' THEN
y := 'C';
WHEN '...' THEN
y := '...';
END CASE;
There are around 25 different cases, of course the values I put here are dummy...
Can I replace it with DECODE, since it's a 1-to-1 comparison/return, like
y := DECODE(x, '1', 'A', '2', 'B', '3', 'C', .... '...', '...');
Is DECODE the faster-executing code, or is CASE better? I know CASE has its own advantages like readability, flexibility, etc., but what about performance for my particular expression set?
Best Regards,
Faisal.
You could also consider the following.
declare
v_dummy VARCHAR2(1);
v_start timestamp;
v_end timestamp;
begin
v_start := systimestamp;
for i in 1..100 loop
for j in (select decode(status,'INVALID','I','VALID','V') s
FROM dba_objects) loop
v_dummy := j.s;
END LOOP;
END LOOP;
v_end := systimestamp;
dbms_output.put_line('time : '||to_char(v_end - v_start,'SSSSS.FF'));
END;
time : +000000 00:00:25.454000000
time : +000000 00:00:25.422000000
time : +000000 00:00:25.515000000
===========================================================================
declare
v_dummy VARCHAR2(1);
v_start timestamp;
v_end timestamp;
begin
v_start := systimestamp;
for i in 1..100 loop
for j in (select case status when 'INVALID' then 'I'
when 'VALID' then 'V' end s
FROM dba_objects) loop
v_dummy := j.s;
END LOOP;
END LOOP;
v_end := systimestamp;
dbms_output.put_line('time : '||to_char(v_end - v_start,'SSSSS.FF'));
END;
time : +000000 00:00:25.187000000
time : +000000 00:00:25.031000000
time : +000000 00:00:25.141000000
===========================================================================
declare
v_dummy VARCHAR2(1);
v_start timestamp;
v_end timestamp;
begin
v_start := systimestamp;
for i in 1..100 loop
for j in (select status s
FROM dba_objects) loop
SELECT decode(j.s,'INVALID','I','VALID','V')
into v_dummy
FROM dual;
END LOOP;
END LOOP;
v_end := systimestamp;
dbms_output.put_line('time : '||to_char(v_end - v_start,'SSSSS.FF'));
END;
time : +000000 00:04:07.688000000
time : +000000 00:04:06.953000000
time : +000000 00:04:07.453000000
===========================================================================
declare
v_dummy VARCHAR2(1);
v_start timestamp;
v_end timestamp;
begin
v_start := systimestamp;
for i in 1..100 loop
for j in (select status s
FROM dba_objects) loop
case j.s when 'INVALID' then v_dummy := 'I';
when 'VALID' then v_dummy := 'V';
end case;
END LOOP;
END LOOP;
v_end := systimestamp;
dbms_output.put_line('time : '||to_char(v_end - v_start,'SSSSS.FF'));
END;
time : +000000 00:00:25.187000000
time : +000000 00:00:25.343000000
time : +000000 00:00:25.172000000
This is on a Windows dual-core laptop with 10gR2.
Regards
Andre -
ILearning - Decode Certification performance status
Hi All,
We want to create an iLearning report that lists users and their certification status, and we are trying to understand which column/table the 'Repeat' and 'Expired' values are decoded from. If we look in the 'Enrollment List' for a Certification Offering, we can see user statuses such as 'Repeat', 'Expired', etc., but we are unsure which table/column to decode this information from. The status field in Certification Performance seems to have the same values as the Performance table ('P','F','I','C').
Thanks
Hi,
The cert status you see in the enrollment list in iLearning is derived from information on the offering enrollment record, including the cert period start date, cert period end date, initial certification days, renewal certification days, certification expire notification date, etc.
The certification_performance record only provides information about the learner's progress in the current certification period. You should be able to determine the learner's status in the certification by comparing the current date against the other information on the offering_enrollment record, as above. For example, if the cert period end date is less than the current date, the user would be expired.
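The date comparison described above can be sketched as follows. This is a deliberate simplification (the real status also depends on renewal days, notification dates, and the other offering enrollment fields), and the function name and status strings are made up for illustration:

```python
import datetime

def cert_status(cert_period_end, today):
    # Simplified rule from the post: once the certification period end
    # date has passed, the learner is expired. The full iLearning logic
    # also considers renewal days, notification dates, etc.
    return "Expired" if cert_period_end < today else "Current"

s1 = cert_status(datetime.date(2011, 1, 31), datetime.date(2011, 6, 1))
s2 = cert_status(datetime.date(2012, 1, 31), datetime.date(2011, 6, 1))
```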
Scott
http://www.seertechsolutions.com -
Decode affecting query performance
Hi. We have a system that holds account numbers in two formats, 8-digit and 12-digit. I have a screen with an input box where the user can enter either style of account number.
So the SELECT I need to do must be conditional based on the format used. I've got it working by checking the length of the account number entered:
select column1 from table1
where decode(length(:P1_ACCNUM),12,accountno,accno_8digit) = upper(v('P1_ACCNUM'))
but the performance has gone from instant to about 3 seconds. In TOAD, the query is unaffected, so maybe it's the substitution of the variable. I've tried it the way coded above, and with v('P1_ACCNUM'), but there is no difference in Apex.
Does anyone have any ideas on this?
Cheers
Carlton
(NR - Business as Usual.)
Hi George. Initially the code only supported the 12-digit account number, but you know what users are like! So the original query was coded to say
accountno = :p1_accnum
The DECODE was necessary because I had to do the comparison on another column. Both columns are indexed.
As I say, putting the length() function (and the UPPER conversion) in hidden fields and using them has made it perform perfectly again. If I had more time, I wouldn't have coded it like this, but it's an urgent throwaway application to perform quick account queries.
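An alternative to the hidden-field workaround is to split the DECODE into explicit OR branches, so each branch compares a bare, indexed column to the bind value and the optimizer can use either index. A sketch with made-up account data (Python's sqlite3 stands in for Oracle and the Apex bind variable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (accountno TEXT, accno_8digit TEXT, name TEXT)")
conn.execute("CREATE INDEX ix_acc12 ON accounts (accountno)")
conn.execute("CREATE INDEX ix_acc8 ON accounts (accno_8digit)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [("123456789012", "12345678", "Alice"),
     ("999999999999", "99999999", "Bob")],
)

def lookup(acc):
    # length() and upper() are applied to the bind value only, never to the
    # indexed columns, so either index remains usable.
    return conn.execute("""
        SELECT name FROM accounts
        WHERE (length(:acc) = 12 AND accountno = upper(:acc))
           OR (length(:acc) <> 12 AND accno_8digit = upper(:acc))
    """, {"acc": acc}).fetchall()

by_12_digit = lookup("123456789012")
by_8_digit = lookup("12345678")
```

In Oracle, a UNION ALL of the two branches achieves the same effect if the optimizer does not expand the OR itself.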
Thanks for your response.
Carlton -
Audigy 2 ZS decoding DTS performance
I posted a message before about resolving IRQ conflicts; that's done and Audigy decoding is working well, although the audio goes out of sync when I pause, rewind, fast-forward, skip a chapter, etc. This is solved by changing PowerDVD from SPDIF to 6 channels, and from 6 channels back to SPDIF again.
If I watch the complete movie there's no problem at all; the sound the Audigy produces decoding AC3 is really amazing compared to PowerDVD decoding.
The problem I have is with DTS decoding. For the first two minutes it works normally (wow, it's even better than AC3: the playback is smooth and the sound louder), but then it pauses and I have to stop and then play to continue. The strange thing is that this happens at different points: two minutes, 5 seconds, or even five minutes in.
These tests I did in ZoomPlayer with SPDIF out, using PowerDVD audio and video decoding.
In WinDVD I tested the same thing and there were pauses too, but it continues automatically; I don't have to do the 'stop and play' as with ZoomPlayer.
I checked the CPU usage and there's an increase with DTS decoding over AC3 decoding; maybe my Pentium III 500 MHz doesn't support it (because there are some peaks of 100% usage when decoding DTS on the Audigy 2 ZS).
I have also tried adjusting (increasing and turning off) hardware acceleration on both the Nvidia card and the Audigy 2 ZS card. I noticed somewhat longer playback, but in the end it pauses anyway.
1) What is the best hardware acceleration setup for both the Nvidia video card and the Audigy sound card?
2) What else do I have to set up to be able to watch a complete movie with Audigy DTS decoding without pauses?
I hate bumping threads like this, but I really need an answer. DTS stutters but AC3 is perfect. I am also using ReClock, in case somebody suggests it.
-
Report Performance - Summary Fields Setting
I have a report data model with 3 groupings. Summary fields are needed for each group. There are two approaches for me to get the summary values:
1. I can set the data source of each summary field to the corresponding column of my base Q_1 SELECT statement, and then set the "Reset At" property for the different data groups.
2. On the other hand, I can set the data source to the summary field of the inner data group.
Could anyone tell me which method will give me better performance? I want the report to run faster.
Puvan,
I figured out the problem myself. It is not a version issue.
I was using a SUM function on a DECODE expression in the SQL command. The % total was applied to that SUM expression. Every time I reopened the report, the SUM expression was getting created in a new group.
When I gave an alias to the sum(decode(xyz)) expression, it started working fine.
I would still consider that as an issue with Reports Developer.
FYI...
Ritendra.