Validate on allowed character set
Hi,
I want to validate the character set. Currently my user interface accepts every character that can be typed from the keyboard, including Japanese or Chinese characters, but I want to restrict input to an allowed character set.
As far as I know, we would need to check the characters manually by writing logic that compares each and every character, but that is a laborious job and also creates performance problems. Is there an alternative way?
Is it possible to do it with Web Dynpro's built-in functionality?
Thanks and Best Regards,
Vijay
What kind of validation did you have in mind? Web Dynpro ABAP will automatically convert all characters to the character set of the ABAP logon language. I probably don't need to tell you this, but the ultimate answer to any character set problem is to run a Unicode system, where all characters are supported.
Although this weblog is BSP specific, you might find some of the discussion useful:
[/people/thomas.jung3/blog/2004/07/13/bsp-150-a-developer146s-journal-part-vii--dealing-with-multiple-languages-english-german-spanish-thai-and-polish|/people/thomas.jung3/blog/2004/07/13/bsp-150-a-developer146s-journal-part-vii--dealing-with-multiple-languages-english-german-spanish-thai-and-polish]
You might be able to use SCP_TRANSLATE_CHARS or one of the other SCP function modules to do some sort of mass validation.
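Outside of the SCP function modules, this kind of mass validation is usually expressed as a single pattern match against a whitelist rather than a hand-written loop over each character. A minimal sketch (the allowed set here is purely illustrative -- widen the character class to whatever your application accepts):

```python
import re

# Hypothetical whitelist: Latin letters, digits, space and a little
# punctuation. Anything outside this class fails validation.
ALLOWED = re.compile(r"[A-Za-z0-9 .,'-]*\Z")

def is_allowed(text: str) -> bool:
    """True if every character of `text` is in the allowed set."""
    return ALLOWED.match(text) is not None
```

A compiled character-class match runs in linear time inside the regex engine, which avoids the per-character interpreter loop the poster was worried about.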
Similar Messages
-
B2B Validation contains characters not listed in the allowed character set
Hi,
Working on EDIFACT AS2 INVOIC D05A. B2B 10g with Document Editor Version: 6.6.0.2801
payload:
UNA:+.?*'
UNB+UNOB:4::1+A1103206+XXTPHOME+20110413:0456+30000000000545'
UNH+30000000000545+INVOIC:D:05A:UN'
BGM+389+5100256812'
DTM+137:20110413:102'
DTM+143:201104:610'
FTX+CUR+++Consignment stock accounting'
RFF+AOG:0'
NAD+BY+A1103206::234++XXXXXXXXX++81379+DE'
RFF+VA:DE260978148'
CTA+PD+GC SSC ARP:Yyyy Xxxx'
COM+49(2871) 91 3481:TE'
[email protected]:EM'
NAD+SU+A1082156::234++XXXXX:::[email protected]+Nnnnn 7+Llll+DE-BY+90443+DE'
RFF+VA:DE256058056'
NAD+UC+0000000100::92++Gggg Gggg:Hhhh+Kkkkk 2+Bocholt++46395+DE'
TAX+7+VAT:::19% VAT+DE:133:86++:::19.00'
CUX+2:EUR:4'
LIN+000001++UAA3595HN/C3:VP'
PIA+1+A5B00075796235:BP'
IMD+B++:::IC DECTUAA3595HN/C3 PA V20810C6323D:670'
QTY+47:6000.000:PCE'
MOA+203:660.00'
CUX'
I try to validate the payload and get 2 errors:
1. Segment COM, type AN, min length 1, max length 512; payload value: [email protected]
Error: Sub-Element COM010-010 (Communication address identifier) contains characters not listed in the allowed character set. Segment COM is defined in the guideline at position 0350.
2. Segment group 27, Sub-Element CUX: the payload value is null; the value exists at header level (Group 7 Sub-Element CUX).
Error: Unrecognized data was found in the data file as part of Loop Group 27. The last known Segment was MOA at guideline position 1260 - Validator error - Extra data was encountered.
Thanks for any help
Adi
We fixed it by changing the character set to UNOC.
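For reference, the first error can be reproduced with a simple membership test. The sets below are approximations (consult the EDIFACT syntax rules for the authoritative lists): the '@' in an e-mail address falls outside the stricter level character sets, while UNOC corresponds to ISO 8859-1 and accepts it, which is why switching to UNOC resolves the error.

```python
# Approximate UNOB repertoire (upper/lower case letters, digits and a
# limited set of punctuation); the exact list lives in the EDIFACT
# syntax specification.
UNOB_APPROX = set(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
    " .,-()/='+:?!\"%&*;<>"
)

def valid_unob(value: str) -> bool:
    return all(ch in UNOB_APPROX for ch in value)

def valid_unoc(value: str) -> bool:
    # UNOC corresponds to ISO 8859-1
    try:
        value.encode("latin-1")
        return True
    except UnicodeEncodeError:
        return False
```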
-
Transport tablespaces between databases with different character sets
Hi everyone:
I have two 10R2 databases on the same hp-ux 64bit server, 1st one with NLS_CHARACTERSET=US7ASCII, 2nd one with
NLS_CHARACTERSET=AL32UTF8.
NLS_NCHAR_CHARACTERSET on both databases is AL16UTF16.
Can I transfer tablespaces from the 1st one to the 2nd. The data could be in English, French & Spanish.
If not what are my options?
Thanks in advance.
First off, if you are storing French and Spanish data in database 1, where the character set is US7ASCII, you've got some serious problems. US7ASCII doesn't support non-English characters (accents, tildes, etc.). If you're storing data this way, you've introduced data corruption that you'd have to resolve before copying the data over to another machine.
Second, technically, the source and target character sets have to be compatible. Since US7ASCII is a strict binary subset of AL32UTF8, you could theoretically transport a US7ASCII tablespace to an AL32UTF8 database. In your case, though, since the data is not really US7ASCII, you'd end up with corruption.
Any of the Oracle built-in replication options is going to require that you resolve the corruption issue. Assuming that you can figure out what character set the source database really is, you could potentially dump the data to flat files (taking care not to allow character set conversion to take place) and SQL*Loader them into the destination system by identifying the proper character set in your control file. That's obviously going to be a rather laborious process, though.
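Justin's point about first measuring the corruption can be sketched outside the database too. Assuming you can fetch the column values as raw bytes (the names here are illustrative), anything with a byte above 127 cannot really be US7ASCII:

```python
def non_ascii_rows(rows):
    """Yield (rowid, value) pairs whose value contains bytes outside
    7-bit ASCII, i.e. data that is not genuinely US7ASCII."""
    for rowid, raw in rows:
        if any(b > 127 for b in raw):
            yield rowid, raw

# 0xE9 is e-acute in ISO 8859-1 -- typical French/Spanish contamination
sample = [(1, b"plain ascii"), (2, b"caf\xe9")]
bad = list(non_ascii_rows(sample))
```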
Justin -
Validate that a string is composed from a defined character set
Hi experts,
I need to validate that a string entered as a parameter is composed of the following character set:
26 letters (in both upper and lower case), an apostrophe (as in O'Brien), and a blank between characters (as in Mc Donald). So the total number of valid characters for any of these fields will be 54.
Could you provide efficient code for this?
Please help. Rewards guaranteed.
Hi,
Check the below code.
data: var(52) type c value 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'.
data: v1(14) type c value 'welcome to SDN'.
data: v2 type i.
data: v3 type i.
v3 = 0.
v2 = strlen( v1 ).
do v2 times.
  if v1+v3(1) = space.
    write:/ 'test contains character space'.
  elseif v1+v3(1) ca var.
    write:/ 'test'.
  elseif v1+v3(1) = '/'.  " was v1(v3) -- a substring-length slip
    write:/ 'test contains character /'.
  endif.
  v3 = v3 + 1.
enddo.
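As an aside, ABAP's CO ("contains only") operator can do the whole whitelist test in one comparison (e.g. IF v1 CO var, bearing in mind that CO also counts trailing blanks in fixed-length fields). For readers outside ABAP, the 54-character rule from the question likewise collapses to a single pattern match; an illustrative sketch:

```python
import re

# 26 letters in both cases (52), the apostrophe (O'Brien) and the
# blank (Mc Donald): 54 valid characters in total.
NAME_CHARS = re.compile(r"[A-Za-z' ]*\Z")

def valid_name(value: str) -> bool:
    return NAME_CHARS.match(value) is not None
```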
Regards,
Shravan G. -
Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster
In hopes that it might be helpful in the future, here's the procedure I followed to fix a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
BACKGROUND
Oracle has multiple character sets, ranging from US7ASCII to AL32UTF8.
US7ASCII, of course, is a cheerful 7-bit character set, holding the basic ASCII characters sufficient for the English language.
However, it also has a handy quirk: character fields under US7ASCII will accept characters with values > 127. If you have a web application, users can type (or paste) Us with umlauts, As with macrons, and quite a few other funny-looking characters.
These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside down question mark.
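The substitution is easy to demonstrate outside Oracle. In this Python sketch the standard decode replacement character plays the role Oracle's CHR(191) plays in a US7ASCII-to-WE8ISO8859P1 conversion:

```python
raw = bytes([78, 67, 174])   # 'N', 'C', then 174 (0xAE, '®' in ISO 8859-1)

# Decoding as strict 7-bit ASCII with substitution: the 174 byte is not
# a legal character, so it is replaced -- the analogue of Oracle turning
# it into 191, the upside-down question mark.
as_ascii = raw.decode("ascii", errors="replace")

# Under a byte-preserving 8-bit view, the same byte decodes cleanly.
as_latin1 = raw.decode("latin-1")
```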
Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. It has been superseded in newer versions by the Database Migration Assistant for Unicode (DMU), which is the recommended tool for 11.2.0.3+.
These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
FIXING THE PROBLEM
How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
(As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for the CLOBs.
In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
alter system set global_names=false scope=memory;
CREATE PUBLIC DATABASE LINK OLD6
CONNECT TO DBUSERNAME
IDENTIFIED BY dbuserpass
USING 'restoreclone:1521/MYSID';
Testing the link...
SQL> select count(1) from users@old6;
COUNT(1)
454
Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
DUMP(TITLE)
Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
By comparison, a dump of that row on PRODCLONE's my_contents gives:
PRODCLONE> select dump(title) from my_contents where pk1=117286;
DUMP(TITLE)
Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
Eventually, I located a clever workaround at this link:
https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
It works like this:
On RESTORECLONE you create a view, vv, with UTL_RAW:
RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
View created.
This turns the title to raw on the RESTORECLONE.
You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
The above works because Oracle on PRODCLONE never knew that our TITLE string on RESTORECLONE was originally US7ASCII, so it was unable to do its transparent character set conversion.
PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
PRODCLONE> select dump(title) from my_contents where pk1=117286;
DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
,101,115,116,105,111,110,115
Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
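The reason the raw cast works can be restated in one sentence: bytes shipped as RAW pass through untouched, and only the final, deliberate decode assigns them a character set. A sketch of the same idea (hypothetical names):

```python
# The title from the dump above, ending in the offending 174 byte.
source_title = bytes([78, 67, 76, 69, 88, 45, 80, 78, 174])

def transfer_raw(raw: bytes) -> bytes:
    """A raw channel carries bytes untouched; a character channel would
    already have replaced the 174 with a substitution character."""
    return raw

received = transfer_raw(source_title)
restored = received.decode("latin-1")   # 174 -> '®'; nothing was lost
```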
Now that we have a method to move the data over, we have to identify which columns/tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
COUNT(1)
533
By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
COUNT(1)
10568
So 10568 rows have characters which were transformed into 191s as part of the original conversion.
[ As an aside, we can't use CONVERT() on LOBs -- for them we will need another approach, outlined further below.
RESTOREDB> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got CLOB ]
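The CONVERT round trip above has a direct analogue in most languages: convert to the target set with substitution, convert back, and compare; any difference flags a value the target character set cannot represent. A minimal sketch:

```python
def lossy_under_ascii(value: str) -> bool:
    """Analogue of  col != CONVERT(CONVERT(col,'WE8ISO8859P1'),'US7ASCII'):
    round-trip through the restrictive set and compare."""
    round_tripped = value.encode("ascii", errors="replace").decode("ascii")
    return round_tripped != value
```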
Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
create or replace procedure find_us7_strings
(table_name varchar2,
fix_col varchar2 )
authid current_user
as
orig_sql varchar2(1000);
begin
orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' != CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
-- Uncomment if debugging:
-- dbms_output.put_line(orig_sql);
execute immediate orig_sql;
end;
And create a table to store the information as to which tables, columns, and rows have the bad characters:
drop table cnv_us7;
create table cnv_us7 (mytablename varchar2(50), myindx number, mycolumnname varchar2(50) ) tablespace myuser_data;
create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
--example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
set head off pagesize 1000 linesize 120
spool runme.sql
select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
where
data_type in ('CHAR','VARCHAR2')
and table_name in (select table_name from user_tab_columns where column_name='PK1' and table_name not in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
and char_length > 10
order by table_name,column_name;
spool off;
set echo on time on timing on feedb on serveroutput on;
spool output_of_runme
@./runme.sql
spool off;
Which eventually gives us the following inserted into CNV_US7:
20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
COUNT(1) MYCOLUMNNAME MYTABLENAME
4 DESCRIPTION MY_FORUMS
21136 TITLE MY_CONTENTS
Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing.
We create our views on RESTOREDB:
create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from forum_main;
create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
And then we can fix it directly via sql:
update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
where pk1 in (
select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
where taborig.pk1=tabnew.pk1
and myindx=tabnew.pk1
and mycolumnname='TITLE'
and mytablename='MY_CONTENTS'
and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
Note this part:
"and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
This checks to verify that the TITLE field on the PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there because if the users have changed TITLE -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes. We make the assumption that as part of the process, they may have changed the bad character on their own.
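That guard is worth restating as pseudologic: a row is safe to overwrite only if its current value is exactly the lossy conversion of the original. A hypothetical Python rendering (Python's '?' replacement stands in for Oracle's CHR(191); the comparison logic is the same):

```python
def rows_safe_to_fix(current: dict, original_raw: dict) -> dict:
    """Return {pk: restored_value} for rows whose current value still
    equals the lossy conversion of the pre-upgrade original."""
    fixed = {}
    for pk, cur in current.items():
        orig = original_raw[pk].decode("latin-1")   # byte-preserving view
        lossy = orig.encode("ascii", errors="replace").decode("ascii")
        if cur == lossy:        # untouched since the conversion: safe
            fixed[pk] = orig
    return fixed

current = {1: "caf?", 2: "edited by a user"}   # row 2 changed post-upgrade
originals = {1: b"caf\xe9", 2: b"caf\xe9"}
safe = rows_safe_to_fix(current, originals)
```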
We can also create a stored procedure which will execute the SQL for us:
create or replace procedure fix_us7_strings
(TABLE_NAME varchar2,
FIX_COL varchar2 )
authid current_user
as
orig_sql varchar2(1000);
TYPE cv_type IS REF CURSOR;
orig_cur cv_type;
begin
orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
where pk1 in (
select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
where taborig.pk1=tabnew.pk1
and myindx=tabnew.pk1
and mycolumnname='''||FIX_COL||'''
and mytablename='''||TABLE_NAME||'''
and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
dbms_output.put_line(orig_sql);
execute immediate orig_sql;
end;
exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
exec fix_us7_strings('MY_CONTENTS','TITLE');
commit;
To validate this before and after, we can run something like:
select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are working with CLOBs in 9i, which has much less CLOB-related functionality built in.
This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
create or replace procedure find_us7_clob
(table_name varchar2,
fix_col varchar2)
authid current_user
as
orig_sql varchar2(1000);
type cv_type is REF CURSOR;
orig_table_cur cv_type;
my_chars_read NUMBER;
my_offset NUMBER;
my_problem NUMBER;
my_lob_size NUMBER;
my_indx_var NUMBER;
my_total_chars_read NUMBER;
my_output_chunk VARCHAR2(4000);
my_problem_flag NUMBER;
my_clob CLOB;
my_total_problems NUMBER;
ins_sql VARCHAR2(4000);
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
open orig_table_cur for orig_sql;
my_total_problems := 0;
LOOP
FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
EXIT WHEN orig_table_cur%NOTFOUND;
my_offset :=1;
my_chars_read := 512;
my_problem_flag :=0;
WHILE my_offset < my_lob_size and my_problem_flag =0
LOOP
DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
my_offset := my_offset + my_chars_read;
IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
THEN
-- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
-- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
my_problem_flag:=1;
END IF;
END LOOP;
IF my_problem_flag=1
THEN my_total_problems := my_total_problems +1;
ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
execute immediate ins_sql;
END IF;
END LOOP;
DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
END;
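The chunked scan generalizes to any LOB-like interface: read a window, round-trip it, and stop at the first failure. Since we are only looking for individual bytes above 127, a chunk boundary can never split a "bad character", so 512-character windows are safe. An illustrative sketch:

```python
def clob_has_non_ascii(read_chunk, lob_size: int, chunk: int = 512) -> bool:
    """Mirror of the PL/SQL scan: read `chunk` characters at a time and
    flag the LOB as soon as one window fails the ASCII round trip."""
    offset = 0
    while offset < lob_size:
        piece = read_chunk(offset, chunk)
        if piece.encode("ascii", errors="replace").decode("ascii") != piece:
            return True
        offset += len(piece)
    return False

doc = "x" * 1000 + "\u00ae" + "y" * 100   # one '®' buried deep in the LOB
reader = lambda off, n: doc[off:off + n]
```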
And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
RESTOREDB> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
exec find_us7_clob('MY_CONTENTS','DATA');
After completion, the CNV_US7 table looked like this:
RESTOREDB> set linesize 120 pagesize 100;
RESTOREDB> select count(1),mytablename,mycolumnname from cnv_us7
where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
where data_type='CLOB' )
group by mytablename,mycolumnname;
COUNT(1) MYTABLENAME MYCOLUMNNAME
69703 MY_CONTENTS DATA
On RESTOREDB, our 9i version, we will use this procedure (found many years ago on the internet):
create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
-- transforming CLOB to BLOB
l_off number default 1;
l_amt number default 4096;
l_offWrite number default 1;
l_amtWrite number;
l_str varchar2(4096 char);
begin
loop
dbms_lob.read ( p_clob, l_amt, l_off, l_str );
l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
utl_raw.cast_to_raw( l_str ) );
l_offWrite := l_offWrite + l_amtWrite;
l_off := l_off + l_amt;
l_amt := 4096;
end loop;
exception
when no_data_found then
NULL;
end;
We can test out the transformation of CLOBs to BLOBs with a single row like this:
drop table my_contents_lob;
Create table my_contents_lob (pk1 number,data blob);
DECLARE
v_clob CLOB;
v_blob BLOB;
BEGIN
SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
clob2blob (v_clob, v_blob);
END;
select dbms_lob.getlength(data) from my_contents_lob;
DBMS_LOB.GETLENGTH(DATA)
329
SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
UTL_RAW.CAST_TO_VARCHAR2(DATA)
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic. Thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
create table my_contents_lob(pk1 number,data blob);
create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
create or replace procedure blob_conversion_my_contents
(table_name varchar2,
fix_col varchar2)
authid current_user
as
orig_sql varchar2(1000);
type cv_type is REF CURSOR;
orig_table_cur cv_type;
my_chars_read NUMBER;
my_offset NUMBER;
my_problem NUMBER;
my_lob_size NUMBER;
my_indx_var NUMBER;
my_total_chars_read NUMBER;
my_output_chunk VARCHAR2(4000);
my_problem_flag NUMBER;
my_clob CLOB;
my_blob BLOB;
my_total_problems NUMBER;
new_sql VARCHAR2(4000);
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
open orig_table_cur for orig_sql;
LOOP
FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
EXIT WHEN orig_table_cur%NOTFOUND;
new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
dbms_output.put_line(new_sql);
execute immediate new_sql;
-- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
-- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
-- dbms_output.put_line(new_sql);
select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
clob2blob(my_clob,my_blob);
END LOOP;
CLOSE orig_table_cur;
DBMS_OUTPUT.PUT_LINE('Completed program');
END;
exec blob_conversion_my_contents('MY_CONTENTS','DATA');
Verify that things work properly:
select dump( utl_raw.cast_to_varchar2(data)) from my_contents_lob where pk1=xxxx;
This should let you see characters with values > 127 (such as our 174). Thus, the method works.
We can now take this data, export it from RESTORECLONE
exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
and import the data on prodclone
imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
For paranoia's sake, double check that it worked properly:
select dump( utl_raw.cast_to_varchar2(data)) from my_contents_lob;
On our 10g PRODCLONE, we'll use these stored procedures:
CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
L_BLOB BLOB;
L_SRC_OFFSET NUMBER;
L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
L_WARNING NUMBER;
L_AMOUNT NUMBER;
BEGIN
DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
L_SRC_OFFSET := 1;
L_DEST_OFFSET := 1;
L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
DBMS_LOB.CONVERTTOBLOB(L_BLOB,
L_CLOB,
L_AMOUNT,
L_SRC_OFFSET,
L_DEST_OFFSET,
1,
V_LANG_CONTEXT,
L_WARNING);
RETURN L_BLOB;
END;
CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
L_CLOB CLOB;
L_SRC_OFFSET NUMBER;
L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
L_WARNING NUMBER;
L_AMOUNT NUMBER;
BEGIN
DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
L_SRC_OFFSET := 1;
L_DEST_OFFSET := 1;
L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
DBMS_LOB.CONVERTTOCLOB(L_CLOB,
L_BLOB,
L_AMOUNT,
L_SRC_OFFSET,
L_DEST_OFFSET,
1,
V_LANG_CONTEXT,
L_WARNING);
RETURN L_CLOB;
END;
And now, for the pièce de résistance, we need a BLOB-to-CLOB conversion that assumes the BLOB data is initially stored in WE8ISO8859P1.
To find the correct CSID for WE8ISO8859P1, we can use this query:
select nls_charset_id('WE8ISO8859P1') from dual;
which gives 31.
create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
L_CLOB CLOB;
L_SRC_OFFSET NUMBER;
L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := 31; -- treat blob as WE8ISO8859P1
V_LANG_CONTEXT NUMBER := 31; -- treat resulting clob as WE8ISO8859P1
L_WARNING NUMBER;
L_AMOUNT NUMBER;
BEGIN
DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
L_SRC_OFFSET := 1;
L_DEST_OFFSET := 1;
L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
DBMS_LOB.CONVERTTOCLOB(L_CLOB,
L_BLOB,
L_AMOUNT,
L_SRC_OFFSET,
L_DEST_OFFSET,
L_BLOB_CSID,
V_LANG_CONTEXT,
L_WARNING);
RETURN L_CLOB;
END;
select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
Now, we can compare these:
select dbms_lob.compare(blob2clob(old.data),new.data) from my_contents new,my_contents_lob old where new.pk1=old.pk1;
DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
0
0
0
Vs
select dbms_lob.compare(blob2clobasc(old.data),new.data) from my_contents new,my_contents_lob old where new.pk1=old.pk1;
DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
-1
-1
-1
update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
Confirms that we're now working properly.
To run across all the _LOB tables we've created:
[oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
[oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
And then on PRODCLONE we can import:
imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
create or replace procedure fix_us7_CLOBS
(TABLE_NAME varchar2,
FIX_COL varchar2 )
authid current_user
as
orig_sql varchar2(1000);
bak_sql varchar2(1000);
begin
dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
execute immediate bak_sql;
orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
where pk1 in (
select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
where a.pk1=b.pk1
and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
-- dbms_output.put_line(orig_sql);
execute immediate orig_sql;
end;
Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
set serveroutput on time on timing on;
exec fix_us7_clobs('MY_CONTENTS','DATA');
commit;
After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.
We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than saying that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
A summary:
1) We replaced the lossy characters by parsing a csscan output file
2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changed the character set, switched the semantics to char, etc.)
3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
Any specific steps I cannot easily answer, I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
Our actual error message:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00210: expected '<' instead of '�Error at line 1
31011. 00000 - "XML parsing failed"
*Cause: XML parser returned an error while trying to parse the document.
*Action: Check if the document to be parsed is valid.
Error at Line: 24 Column: 15
This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
Please advise if more information is needed from my end. -
I have a project that uses shared fonts. The fonts are all contained in a single swf ("fonts.swf"), are embedded in that swf's library, and are set to export for ActionScript and runtime sharing.
The text in the project is dynamic and is loaded in from
external XML files. The text is formatted via styles contained in a
CSS object.
This project needs to be localized into 20 or so different
languages.
Everything works great with one exception: I can’t
figure out how to set which character set gets exported for runtime
sharing. i.e. I want to create a fonts.swf that contains Korean
characters, change the XML based text to Korean and have the text
display correctly.
I’ve tried changing the language of my OS (WinXP) and
re-exporting but that doesn’t work correctly. I’ve also
tried adding substitute font keys to the registry (at:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\FontSubstitutes) as outlined here:
http://www.quasimondo.com/archives/000211.php
but the fonts I added did not show up in Flash's font menu.
I’ve also tried the method outlined here:
http://www.adobe.com/cfusion/knowledgebase/index.cfm?id=tn_16275
to no avail.
I know there must be a simple solution that will allow me to
embed language specific character sets for the fonts embedded in
the library but I have yet to discover what it is.
Any insight would be greatly appreciated.
http://www.quasimondo.com/archives/000211.php
http://www.adobe.com/cfusion/knowledgebase/index.cfm?id=tn_16275
Thanks Jim,
I know that it is easy to specify the language you want to
use when setting the embed font properties for a specific text
field but my project has hundreds of text fields and I'm setting
the font globally by referencing the font symbols in a single swf.
I have looked at the info you've pointed out but wasn't
helped by it. What I'd like to be able to do is to tell Flash to
embed a language specific character-set for the font symbols in the
library. It currently is only embedding Latin characters even
though I know the fonts specified contains characters for other
languages.
For example, I have a font symbol in the library named "Font1". When I look at its properties I can see it is specified as Tahoma. I know the Tahoma font on my system contains the characters for Korean, but when I compile the swf it only contains Latin characters (glyphs) -- this corresponds to the language of my OS (US English). I want to know how to tell Flash to embed the Korean-language characters rather than, or as well as, the Latin characters for any given FONT SYMBOL. If I could do that, then when I enter Korean text into my XML files the correct characters will be available to Flash. As it is now, the characters are not available and thus the text doesn't display.
Make sense?
Many thanks,
Mike -
Java character set error while loading data using iSetup
Hi,
I am getting the following error while migrating setup data from one R12 (12.1.2) instance to another R12 (12.1.2) instance. Both databases have the same DB character set (AL32UTF8).
We get this error while migrating any setup data.
Actual error is
Downloading the extract from central instance
Successfully copied the Extract
Time taken to download Extract and write as zip file = 0 seconds
Validating Primary Extract...
Source Java Charset: AL32UTF8
Target Java Charset: UTF-8
Target Java Charset does not match with Source Java Charset
java.lang.Exception: Target Java Charset does not match with Source Java Charset
at oracle.apps.az.r12.common.cpserver.PreValidator.validate(PreValidator.java:191)
at oracle.apps.az.r12.loader.cpserver.APILoader.callAPIs(APILoader.java:119)
at oracle.apps.az.r12.loader.cpserver.LoaderContextImpl.load(LoaderContextImpl.java:66)
at oracle.apps.az.r12.loader.cpserver.LoaderCp.runProgram(LoaderCp.java:65)
at oracle.apps.fnd.cp.request.Run.main(Run.java:157)
Error while loading apis
java.lang.NullPointerException
at oracle.apps.az.r12.loader.cpserver.APILoader.callAPIs(APILoader.java:158)
at oracle.apps.az.r12.loader.cpserver.LoaderContextImpl.load(LoaderContextImpl.java:66)
at oracle.apps.az.r12.loader.cpserver.LoaderCp.runProgram(LoaderCp.java:65)
at oracle.apps.fnd.cp.request.Run.main(Run.java:157)
Please help in identifying and resolving the issue
Sachin
The Source and Target DB character sets are the same.
Output from the query
------------- Source --------------
SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
VALUE
AL32UTF8
And target Instance
-------------- Target----------------------
SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
VALUE
AL32UTF8
The error is about the Source and Target Java character sets.
I will check the PreValidator XML from the "How to use iSetup" note and update this thread.
Thanks
Sachin -
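The failing check compares Oracle's NLS name for the encoding (AL32UTF8) with Java's canonical name (UTF-8), which are different strings for essentially the same character set. A minimal sketch of normalizing Oracle NLS charset names to Java names before comparing - the mapping table and class name here are illustrative assumptions, not part of iSetup:

```java
import java.nio.charset.Charset;
import java.util.Map;

public class CharsetNameCheck {
    // Illustrative mapping of a few Oracle NLS charset names to Java canonical names.
    // Note: Oracle's "UTF8" is actually a CESU-8 variant; it is treated loosely here.
    private static final Map<String, String> ORACLE_TO_JAVA = Map.of(
            "AL32UTF8", "UTF-8",
            "UTF8", "UTF-8",
            "WE8ISO8859P1", "ISO-8859-1",
            "WE8MSWIN1252", "windows-1252");

    // Returns true if the Oracle NLS name and the Java charset name
    // resolve to the same Charset after normalization.
    public static boolean sameCharset(String oracleName, String javaName) {
        String mapped = ORACLE_TO_JAVA.getOrDefault(oracleName, oracleName);
        return Charset.forName(mapped).equals(Charset.forName(javaName));
    }

    public static void main(String[] args) {
        // A naive string equality check would report a mismatch here.
        System.out.println(sameCharset("AL32UTF8", "UTF-8")); // true
    }
}
```

The sketch only illustrates why "AL32UTF8" and "UTF-8" fail a naive string comparison; the practical fix is usually to align the JVM's file.encoding on the target concurrent tier with the source.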
Character set mismatch in copying from oracle to oracle
I have a set of ODI scripts that are copying from a source JD Edwards ERP database (Oracle 10g) to a BI datamart (Oracle 10g) and all the original scripts work OK.
However I have mapped on to some additional tables in the ERP source database and some new BI tables in the target datamart database (oracle - to - oracle) but get an error when I try ro execute these.
The operator log shows that the error is in the 'INSERT FLOW INTO I$ TABLE' and the error is ORA-12704 character set mismatch.
The character sets for both Oracle databases are the same (and have not changed): the main NLS_CHARACTERSET is AL32UTF8 and the national NLS_NCHAR_CHARACTERSET is AL16UTF16.
But this works for tables containing NCHAR and NUMBER in previous scripts but not for anything I write now.
The only other difference is that there was a recent upgrade of ODI to 10.1.3.5 - the repositories are also upgraded.
Any ideas?
Hi Ravi,
yes, a gateway would help. In 11.2 Oracle offers two kinds of gateways to a SQL Server: a free gateway based on 3rd-party ODBC drivers (you need to get them from a 3rd-party vendor; they are not included in the package), called Database Gateway for ODBC (=DG4ODBC), and a very powerful Database Gateway for MS SQL Server (=DG4MSQL), which also allows you to execute distributed transactions and call remote SQL Server stored procedures. Please keep in mind that DG4MSQL requires a separate license.
As you didn't post which platform you're going to use, please check My Oracle Support (=MOS), where you'll find notes on how to configure each gateway for all supported platforms - just look for DG4MSQL or DG4ODBC.
On OTN you'll find the manuals as well.
DG4ODBC: http://download.oracle.com/docs/cd/E11882_01/gateways.112/e12070.pdf
DG4MSQL: http://download.oracle.com/docs/cd/E11882_01/gateways.112/e12069.pdf
The generic gateway installation for Unix: http://download.oracle.com/docs/cd/E11882_01/gateways.112/e12013.pdf
and for Windows: http://download.oracle.com/docs/cd/E11882_01/gateways.112/e12061.pdf -
How to set or change character set for Oracle 10 XE
Installing via RPM on Linux.
I need to have my database set to use UTF8 and WE8ISO8859P15 as the character set and national character set. (Think those are in the right order. If not, it's the opposite.)
If I do a standard "yum localinstall rpm-file-name," it installs Oracle. I then run the "/etc/init.d/oracle-xe configure" command to set my ports.
Every time I do this, I end up with AL32/AL16 character sets.
I finally hardcoded ISO-8859-15 as the Linux 'locale' character set and set this in the various bash profile config files. Now, I end up with WE8MSWIN1252 as the character set and AL16UTF16 as the national character set.
I've tried editing the createdb.sh script to hard code the character set types and then copied that file over the original while the RPM is still installing. I've tried editing the nls_lang.sh script to hard code the settings there and copied over the original shell script while the RPM is still installing.
Doesn't matter.
HOW can I do this? If I wait until after the RPM is installed and try running the createdb.sh file, then it ends up creating a database but not doing everything properly. I end up missing pfiles or spfiles. Various errors crop up.
If I try to change them from the sql command line, I am told that the new character set must be a superset of the old one. It fails.
I'm new to Oracle, so I'm treading water that's uncharted. In short, I need community help. It's important to the app I'm running and attempting to migrate from to maintain these character sets.
Thanks.
I don't think you can change the Oracle XE character set. When downloading Oracle XE you must choose to download:
- either the Universal Edition using AL32UTF8
- or the Western European Edition using WE8MSWIN1252.
See http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABJACJJ
If you really need UTF8 instead of AL32UTF8 you need to use Oracle Standard Edition or Oracle Enterprise Edition:
these editions allow you to select the database character set at database creation time, which is not really possible with Oracle XE.
Note that changing environment variable NLS_LANG has nothing to do with changing database character set:
http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABBGFIC -
How to get the character set which i want it to be?
I am facing a problem when using JDBC to connect to an Oracle 8i database on Solaris. Because I use getBinaryStream() to read from the database as bytes, it seemingly uses the default character set of the OS or database (such as EUC, etc.), not the one I want on the client. Is there any way to take control of this in code - that is, to change the character set when I get bytes out of the database, so that the bytes use the same NLS_LANG as the SQL*Plus setting in the user environment?
First of all read Note 581312 - Oracle database: licensing restrictions:
As of point 3, it follows that direct access to the Oracle database is only allowed for tools from the areas of system administration and monitoring. If other software is used, the following actions, among other things, are therefore forbidden at database level:
Creating database users
Creating database objects
Querying/changing/creating data in the database
Using ODBC or other SAP external access methods
Are you trying this on the database server itself? If yes, then you need to install the hebrew codepages as well as hebrew fonts in order to display the data correctly.
Markus -
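Setting the licensing discussion aside, the decoding question itself can be handled in client code: read the raw bytes and decode them with an explicitly chosen charset rather than the platform or NLS default. A minimal sketch - the ByteArrayInputStream stands in for the stream returned by ResultSet.getBinaryStream(), and the charset name is whatever encoding the data was actually stored in:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.Charset;

public class ExplicitDecode {
    // Decode a binary stream using an explicitly chosen charset instead of
    // the platform/NLS default. In the real case the stream would come from
    // ResultSet.getBinaryStream(...).
    public static String decode(InputStream in, String charsetName) throws Exception {
        Reader r = new InputStreamReader(in, Charset.forName(charsetName));
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = r.read()) != -1) {
            sb.append((char) c);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Simulate EUC-JP bytes coming back from the database.
        byte[] euc = "テスト".getBytes("EUC-JP");
        System.out.println(decode(new ByteArrayInputStream(euc), "EUC-JP"));
    }
}
```

The key point is that the InputStreamReader's charset argument, not NLS_LANG, decides how the bytes are interpreted on the client side.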
How to define the character set of an outbound EDI batch in BizTalk 2010?
I have some EDIFACT files with a character set of UNOC, so lowercase strings and umlauts should be allowed. These files should be batched into an outbound EDIFACT file, but the batching orchestration of the related send port throws some validation errors.
If I convert the strings to uppercase characters everything's working fine. So it seems that the outbound batching orchestration uses the UNOA character set internally for the validation of the EDIFACT files...
How can I change the character set of the outbound batching orchestration in BizTalk 2010? I have found no settings regarding the character set of the outbound batching orchestration in the party and agreement configuration so far. Thank you
Hi Philipp,
To define a character set in EDIFACT, the UNA segment is used. After defining it, see "How Validation of an EDI Interchange Is Configured" and "Outbound EDI batching in BizTalk Server".
Maheshkumar S Tiwari|User
Page | http://tech-findings.blogspot.com/ -
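For reference, the character repertoire of an EDIFACT interchange is declared by the syntax identifier in the UNB segment (composite S001), as in the UNOB header quoted at the top of this thread. A hedged sketch of a header requesting the UNOC repertoire (which permits lowercase letters and accented characters); the sender/receiver IDs, date/time, and control reference are placeholders:

```
UNA:+.? '
UNB+UNOC:3+SENDERID+RECEIVERID+110413:0456+00000000000001'
```

If the batching orchestration validates against UNOA/UNOB rules, the syntax identifier in the agreement's interchange settings is the value to check.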
DIR7 Character Set Problem / Foreign Language
Hi there,
I am working on an app built using Director 7 that until now has used the standard English (Latin-1) character set. However, I am required to deliver a new version including some elements displayed in a second language, in this case Welsh, which uses characters outside of the normal set. I believe those required are included in Latin-1 Extended, otherwise in Unicode as a whole, obviously.
I am having specific problems with two characters that appear to be missing from Latin-1, which are: ŵ and ŷ (w-circumflex, and y-circumflex [i think!]). In a standard text box I create using Director, I am unable either to paste either character in, or enter it using its ALT+combination, let alone save to the associated database.
I have read that Dir 11 is the first version with full Unicode support - which surprises me - however I would assume that someone would likely have hit this, or a similar issue, before the release of this version, and was wondering if there is a possible solution without upgrade.
My possible thinking is either a declaration that allows change of a charset, as I might do in XHTML for example, or deployment of an Xtra that allows me to use a different character set.
If anyone could shed some light on the matter, it would be very helpful! Thanks in advance!
Rich.
Yes, this was always a problem for years. Back when I was **** this, we had some projects that needed text displayed in various languages. Each language presented its own challenges. Things like Greek weren't too bad, because the Symbol font works for most Greek text. (Only problem was the 's' version of Sigma, which had to switch back to Times New Roman.) Various eastern European languages (Polish, Czech, Hungarian, etc.) posed a problem with some of the accents that were not available in standard font sets. We were forced to live without some of the more exotic accents, but were told that it would still be readable without them, if not exactly correct. This would probably be the closest to your situation, from what little I know about Welsh.
It could be worse, though. Hebrew and Arabic were challenging as they are written right-to-left, and thus had to have code written to input them backwards. Russian was also tough, as the Cyrillic alphabet has more characters than the others, but I was able to find a font to fake it. (It replaced some of the lesser-used standard characters in order to fill in all the letters, which unfortunately meant that in the rare cases where those characters *were* needed, we had to improvise.) The hardest by far were any east Asian languages. In that case, I gave up on trying to display any of the text in text form, and just converted it all to bitmaps. Without Unicode, trying to display Mandarin or Japanese or Korean correctly as text is pretty much impossible.
Character set conversion UTF-8 -- ISO-8859-1 generates question mark (?)
I'm trying to convert an XML-file in UTF-8 format to another file with character set ISO-8859-1.
My problem is that the ISO-8859-1 file generates a question mark (?) and puts it as a prefix in the file.
?<?xml version="1.0" encoding="UTF-8"?>
<ns0:messagetype xmlns:ns0="urn:olof">
<underkat>testv���rde</underkat>
</ns0:messagetype>
Is there a way to do the conversion without getting the question mark?
My code looks as follows:
import java.io.*;

public class ConvertEncoding {
    public static void main(String[] args) {
        String from = "UTF-8", to = "ISO-8859-1";
        String infile = "C:\\temp\\infile.xml", outfile = "C:\\temp\\outfile.xml";
        try {
            convert(infile, outfile, from, to);
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    private static void convert(String infile, String outfile,
                                String from, String to)
            throws IOException, UnsupportedEncodingException {
        // Set up byte streams
        InputStream in = new FileInputStream(infile);
        OutputStream out = new FileOutputStream(outfile);
        // Set up character streams
        Reader r = new BufferedReader(new InputStreamReader(in, from));
        Writer w = new BufferedWriter(new OutputStreamWriter(out, to));
        /* Copy characters from input to output.
         * The InputStreamReader converts from the input encoding to Unicode,
         * and the OutputStreamWriter converts from Unicode to the output
         * encoding. Characters that cannot be represented in the output
         * encoding are output as '?'. */
        char[] buffer = new char[4096];
        int len;
        while ((len = r.read(buffer)) != -1) { // Read a block of input
            w.write(buffer, 0, len);
        }
        r.close();
        w.flush();
        w.close();
    }
}
Yes, the next character is the '<'.
The file that I read from is generated by an integration platform. I send a plain file to it (supposedly in UTF-8 encoding) and it returns another file (in between I call my java class that converts the characterset from UTF-8 to ISO-8859-1). The file that I get back contains the '���' if the conversion doesn't work and '?' if the conversion worked.
My solution so far is to skip the first "junk characters" when reading from the input stream. Something like:

private static final char UTF_BOM = '\uFEFF'; // UTF byte-order mark (shows up as '?' in some editors)

String from = "UTF-8", to = "ISO-8859-1";
if (from != null && from.toLowerCase().startsWith("utf-")) { // Are we reading a UTF-encoded file?
    /* Read the first character of the UTF-encoded file.
     * It will return the BOM in the first position if we are dealing with a
     * UTF encoding; if the BOM is returned we skip this character in the read. */
    try {
        r.mark(1); // Only allow one char to be read so that reset() works
        int i = r.read();
        char c = (char) i;
        if (String.valueOf(UTF_BOM).equalsIgnoreCase(String.valueOf(c))) {
            r.reset(); // Reset to start position
            r.skip(1); // Skip the first character when reading from the stream
        } else {
            r.reset();
        }
    } catch (IOException e) {
        e.getMessage();
    }
}
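As an alternative to manually skipping the BOM and silently accepting '?' substitutions, java.nio's CharsetDecoder and CharsetEncoder can strip the BOM and fail loudly on unmappable characters. A minimal sketch under the thread's UTF-8 to ISO-8859-1 scenario (the class and method names are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class StrictTranscode {
    // Transcode UTF-8 bytes to ISO-8859-1, stripping a leading BOM and
    // throwing (instead of writing '?') on malformed or unmappable input.
    public static byte[] utf8ToLatin1(byte[] utf8) throws CharacterCodingException {
        CharBuffer chars = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .decode(ByteBuffer.wrap(utf8));
        // Strip the byte-order mark if the producer wrote one.
        if (chars.hasRemaining() && chars.charAt(0) == '\uFEFF') {
            chars.get(); // advance past the BOM
        }
        ByteBuffer out = StandardCharsets.ISO_8859_1.newEncoder()
                .onUnmappableCharacter(CodingErrorAction.REPORT)
                .encode(chars);
        byte[] result = new byte[out.remaining()];
        out.get(result);
        return result;
    }

    public static void main(String[] args) throws Exception {
        byte[] in = "\uFEFFtestvärde".getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(utf8ToLatin1(in), "ISO-8859-1")); // testvärde
    }
}
```

REPORT makes any character with no ISO-8859-1 mapping raise a CharacterCodingException, which surfaces data problems instead of hiding them behind question marks.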
Firefox Sometimes Does Not Recognize Character Set
Firefox cannot decode some characters of English text or symbols, and some kind of codes appear instead of the characters - just as if you surfed to a Chinese website without a Chinese character set installed.
[http://img15.imageshack.us/img15/1638/characterseterror.jpg Such as this here]
Why is it so in Firefox? I never saw this happening in Internet Explorer.
Pages that use Unicode (UTF-8) display a little box with the hex code if the character can't be displayed.
That allows you to look up the character in a table.
If you see such a box with a hex code in it, that means that Firefox can't map a character to a specific font, and you have to install a font that covers the affected characters.
In your case you are lacking font support if it happens on sites that use CJK.
You will see something similar on Windows XP if you visit pages that use complex scripts (e.g. Indic and Arabic).
Windows XP only has very basic language support installed by default.
There are a lot of languages in the world, and there are always languages that may need special fonts.
You most likely can't read them, so it is your choice whether to install a font for such pages or just accept the little squares with the hex code.
See http://en.wikipedia.org/wiki/Supplementary_Multilingual_Plane
* http://en.wikipedia.org/wiki/Help:Multilingual_support_%28East_Asian%29 Wiki: Help:Multilingual support (East Asian)
* http://en.wikipedia.org/wiki/Help:Multilingual_support_%28Indic%29 Wiki: Help:Multilingual support (Indic) -
Language Conversion from Unicode (UTF-8) to BIG5 Character Set
Hi,
I am creating a file programmatically containing Vendor Master data (FTP interface).
The vendor name and vendor address are maintained in the local language (Taiwanese) in the SAP system; these characters are in the Unicode (UTF-8) character set.
The Unicode character set should be converted to BIG5 for Taiwanese, and this information should then be sent in the file.
How can I perform this conversion and change the character set of the values I'm retrieving from table LFA1 to character set BIG5?
Is it possible to do this conversion in SAP? Does SAP allow this?
/Mike
Hi Manik,
I am also having a similar requirement, as I need to convert Unicode Chinese characters to GB2312-encoded Chinese characters. I already posted in the forums but didn't get the required solution.
Can you please provide the solution which you implemented and also confirm whether it can be used to solve the above problem.
Hoping for your good reply.
Regards,
Prakash
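For what it's worth, outside of SAP the conversion itself is straightforward in Java, since most JDK distributions ship a Big5 charset; within SAP you would use the system's own code-page conversion facilities instead. A minimal illustrative sketch (class and method names are assumptions, and unmappable characters become '?' by default):

```java
import java.nio.charset.Charset;

public class ToBig5 {
    // Convert a Unicode (Java String) value, e.g. a vendor name read from
    // table LFA1, to Big5-encoded bytes for the outgoing file.
    // Characters with no Big5 mapping are replaced with '?' by default.
    public static byte[] toBig5(String value) {
        return value.getBytes(Charset.forName("Big5"));
    }

    public static void main(String[] args) throws Exception {
        byte[] big5 = toBig5("台北");
        // Round-trip back to a String to confirm the encoding.
        System.out.println(new String(big5, "Big5"));
    }
}
```

The same pattern works for GB2312 by substituting the charset name, provided the JDK in use supports it.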