Invalid Characters shown in UTF-8 character set
There is an XMLP report whose template output character set is ISO-8859-1. The ISO-8859-1 character set is required for this report by the Spanish authorities. When the report is run, the output file is generated in the output directory on the application server. This output file doesn't contain any invalid characters.
But when the output is opened from the SRS window, which opens it in a browser, invalid characters are shown for characters like Ñ, É, etc.
Investigation done:
Found that the output generated on the server has ISO encoding and hence doesn't contain any invalid characters, whereas the output opened from the SRS window is in UTF encoding. So it seems the invalid characters are displayed when the conversion takes place from ISO to UTF-8 format.
I created the eText output from the data XML and template using the BI Publisher tool; the output is in ISO encoding. So if I go and change the encoding to UTF-8 by opening it in Explorer or Notepad++, invalid characters are shown for Ñ, É, etc.
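This matches a classic mis-decoding symptom: bytes written as ISO-8859-1 are later interpreted as UTF-8, and a lone high byte such as Ñ (0xD1) is not a valid UTF-8 sequence, so the viewer shows a replacement/invalid character. A minimal sketch of the effect in plain Java (independent of XMLP; the sample string is arbitrary):

```java
import java.nio.charset.StandardCharsets;

public class IsoVsUtf8 {
    public static void main(String[] args) {
        String text = "AÑO SEÑOR ÉXITO"; // Spanish sample containing Ñ and É

        // Encode the way the server writes the file: ISO-8859-1, one byte per char
        byte[] isoBytes = text.getBytes(StandardCharsets.ISO_8859_1);

        // Decode the way a UTF-8 viewer (e.g. the browser) reads the same bytes
        String misread = new String(isoBytes, StandardCharsets.UTF_8);

        // Ñ and É come out as the Unicode replacement character U+FFFD
        System.out.println(misread);
        System.out.println(misread.contains("\uFFFD")); // true: the bytes are not valid UTF-8
    }
}
```

So the file itself can be perfectly valid ISO-8859-1; the corruption only appears at the point where something decodes it as UTF-8.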
Is there any limitation that output from the SRS window will be shown only in UTF-8 encoding? If not, then please suggest.
Thanks,
Saket
Edited by: 868054 on Aug 2, 2012 3:05 AM
Hi Srini,
When the customer is viewing output from the SRS window, it contains invalid characters because it is in the UTF-8 character set. The customer is on Oracle OnDemand, so they cannot take the output generated on the server. Every time, they have to raise a request to Oracle for the output file. So the concern here is: why doesn't the output from the SRS window show valid characters?
The reason could be the conversion from ISO format to UTF-8. How can this be resolved? Can the SRS window output not be generated in ISO format?
A quick reply will be appreciated, as the customer is chasing for an update.
Thanks,
Saket
Edited by: 868054 on Aug 7, 2012 11:08 PM
Similar Messages
-
UTF/Japanese character set and my application
Fellows...
A simple query about the internationalization of an enterprise application...
I have a considerably large application running as 4 layers.. namely..
1) presentation layer - I have a servlet here
2) business layer - I have an EJB container here with EJBs
3) messaging layer - I have either WebLogic JMS here, in which case it is an
application server, or I will have MQSeries, in which case it will be a
different machine altogether
4) adapter layer - something like a connector layer with some specific or
rather customized modules which can talk to enterprise repositories
The database has a few messages in UTF format, and they are Japanese characters.
My requirement: I need those messages to be picked up from the database by the business layer and passed on to the client screen, which is a web browser, through the presentation layer.
What are the various points to be noted to get this done?
Where all do I need to set the character set, and what would be the ideal character set to use to support the maximum number of characters?
Is there anything specific to be done in my application code regarding this?
Or is it just a matter of setting the character sets in the application servers / web servers / web browsers?
Please enlighten me on these areas, as I am into something similar and trying to figure out what's wrong in my current application. When the data comes to the screen through my application, it looks corrupted. But the same message, when read through a simple servlet, displays without a problem.
I am confused!!
Thanks in advance
Manesh

Hello Manesh,
For the database I would recommend using UTF-8.
As for the character problems, could you elaborate on which version of WebLogic
you are using and what the nature of the problem is?
If your problem is that of displaying the characters from the db and you are
using JSP, you could try putting
<%@ page language="java" contentType="text/html; charset=UTF-8"%> on the
first line,
or if a servlet .... response.setContentType("text/html; charset=UTF-8");
Also to automatically select the correct charset by the browser, you will
have to include
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> in the
jsp.
You could replace the "UTF-8" with other charsets you are using.
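To separate the charset-declaration question from the data itself, a quick round-trip check can be done in plain Java (a minimal sketch; the Japanese sample string is arbitrary): UTF-8 preserves the characters, while a Latin-1 style encoding silently turns them into '?':

```java
import java.nio.charset.StandardCharsets;

public class CharsetRoundTrip {
    public static void main(String[] args) {
        String japanese = "こんにちは"; // "hello" in Japanese

        // UTF-8 can represent the characters, so the round trip is lossless
        String viaUtf8 = new String(japanese.getBytes(StandardCharsets.UTF_8),
                                    StandardCharsets.UTF_8);
        System.out.println(japanese.equals(viaUtf8)); // true

        // ISO-8859-1 cannot represent them; unmappable chars become '?'
        String viaLatin1 = new String(japanese.getBytes(StandardCharsets.ISO_8859_1),
                                      StandardCharsets.ISO_8859_1);
        System.out.println(viaLatin1); // ?????
    }
}
```

If corruption appears somewhere between the db and the browser, one of the hops in the chain is doing the lossy conversion shown in the second half.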
I hope this helps...
David.
"m a n E s h" <[email protected]> wrote in message
news:[email protected]...
Blankfellaws...
a simple query about the internationalization of an enterpriseapplication..
>
I have a considerably large application running as 4 layers.. namely..
1) presentation layer - I have a servlet here
2) business layer - I have an EJB container here with EJBs
3) messaging layer - I have either Weblogic JMS here in which case it isan
application server or I will have MQSeries in which case it will be a
different machine all together
4) adapter layer - something like a connector layer with some specific or
rather customized modules which can talk to enterprise repositories
The Database has few messages in UTF format.. and they are Japanese
characters
My requirement : I need thos messages to be picked up from the database by
the business layer and passed on to the client screen which is a webbrowser
through the presentation layer.
What are the various points to be noted to get this done?
Where and all I need to set the character set and what should be the ideal
character set to be used to support maximum characters?
Are there anything specifically to be done in my application coderegarding
this?
Are these just the matter of setting the character sets in the application
servers / web servers / web browsers?
Please enlighten me on these areas as am into something similar to thisand
trying to figure out what's wrong in my current application. When the data
comes to the screen through my application, it looks corrupted. But theasme
message when read through a simple servlet, displays them without aproblem.
Am confused!!
Thanks in advance
Manesh -
Converting Unicode to UTF-8 character set through Oracle forms(10g)
Hi,
I am working on Oracle Forms (10g), where I need to load files containing a Unicode character set (multilingual characters) into the database.
But while loading the file, junk characters are getting inserted into the database tables.
While reading the file through Forms, I am using the utl_file.fopen_nchar and utl_file.get_line_nchar functions to read the Unicode characters.
The application server and database server character sets are set to the American UTF8 character set.
In fact, when I change the text file's character set to UTF-8 through an editor (Notepad++, etc.), the data is inserted into the database properly (at least for English characters), but not with Unicode.
Any guidance in this regard is highly appreciated.
Thank you in advance,
Sanu

Hi, please check out the following link.
http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
sarah -
Removing non UTF-8 character set from xml in OSB
Hi,
We have an OSB service where we are receiving a lot of special characters (like ~, –) in the data between XML tags. As a result these messages are failing in our downstream EDI systems (though they are processed successfully in our OSB). How do I remove these non-UTF-8 characters when processing an XML message in OSB?
If I set the Request-Encoding of the proxy service that is receiving these messages to UTF-8, would these messages be rejected?
Thanks,
Aditya

Hi,
No silver bullet here... I think you will need a java call in order to clean up the special characters from your message...
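For what it's worth, such a cleanup call can be quite small (a minimal sketch; the class name and the kept character ranges are assumptions to be adjusted to whatever the EDI systems accept):

```java
public class XmlCleaner {
    // Keep printable ASCII plus tab/CR/LF; replace everything else with a space.
    // The ranges kept here are an assumption; widen them if the EDI side accepts more.
    public static String toAscii(String payload) {
        return payload.replaceAll("[^\\x09\\x0A\\x0D\\x20-\\x7E]", " ");
    }

    public static void main(String[] args) {
        String dirty = "Qty 5 \u2013 approved"; // contains an en dash (U+2013)
        System.out.println(toAscii(dirty)); // the en dash becomes a plain space
    }
}
```

A method like this can be exposed to OSB as a Java callout; whether to replace with a space, drop the character, or map it to an ASCII equivalent is a business decision.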
Cheers,
Vlad -
PCG Seed Data Invalid characters displayed in FRC
Hi, I'm not certain whether this is the correct forum.
The customer has PCG 7.3.2 on 11i, using FRC.
Many menus and concurrent program names & messages (all seed data) that use accents are displaying garbled characters.
e.g.: GRC Controls : Règles associées aux écrans
When viewed through TOAD, the data looks the same, yet non-PCG seed data with the same words has the correct French accented characters.
The NLS character set is WE8ISO8859P1, not UTF8 (which is standard in 11i FRC installs).
As per the PCG install guide, I have tried using both files (for UTF8 & WE8ISO8859P1) in $ORACLE_HOME/forms60/admin/resource/FRC, but the results do not change.
I think this may have had something to do with environment settings used during the installation process.
I would like to experiment with this, but cannot find the scripts/files that actually seed the FRC data for seeded menus and concurrent programs.
Can anybody guide me to these? Or provide an alternative or known solution? (I can't find one.)
-
How to change character set to Arabic in Developer Suite Forms 10g
Dear all,
Our company wants to migrate Oracle Forms 4.5/6i applications to Oracle Developer Suite 10g.
They also want their database upgraded from 9i to 10g.
They gave me a test machine, on which Windows XP is installed, and I did the following:
1. Installed the Oracle 10g XE edition database.
2. Installed Oracle Developer Suite 10g (Oracle Forms, Oracle Reports).
3. Configured the connection of Oracle Developer Suite 10g to Oracle Database 10g.
4. Loaded data into the 10g database. *(There are a few columns like DEPARTMENT_NAME_ARB, FUNCTION_NAME_ARB which are supposed to show in Arabic fonts, as they do in the 9i database; now they are showing as some special characters.)*
What I would like to know is: is there a way through which I can set the character set?
Is it in the database that I have to make the character set change?
Is it in the Oracle Developer Suite application that I have to make the character set change?
Is it in the registry that I have to make changes?
Please help.

Hi friends,
It is very encouraging to see your replies. I apologize for the late response. I still have had no success with updating PROPS$.
I religiously followed all the instructions given to me by all of you, as you can see in my previous posts.
Luckily, I am able to insert one row at a time manually, in Arabic and in English, by pressing (ALT+SHIFT).
When I create data blocks in Forms Builder, I do see output in Arabic.
When I create a report in group style, I do see output in Arabic.
I have thousands of rows (GBs of data) which need to be inserted into this new database, 10g XE edition (downloaded from Oracle).
I have attempted inserting the data multiple times by just running a script, or simply copying numerous INSERT statement rows from Notepad into SQL*Plus; unluckily it always stored the special characters rather than the Arabic characters.
Is there a way to insert data into this new Oracle 10g XE edition database via Oracle Developer Suite 10g Forms/Reports?
Do I have to use the built-in data load/unload utilities in Oracle 10g XE edition?
Do I have to install SQL*Loader separately to load the data?
Do you think TOAD can help with this?
Could you please tell me how to add snapshots to this post? (user10947262)
Here are the following details of National Language Parameter Value
Before
NLS_CALENDAR GREGORIAN
NLS_CHARACTERSET AL32UTF8 (IS this multibyte (UTF-8) character set SARAH?)
NLS_COMP BINARY
NLS_CURRENCY $
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_DUAL_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_LANGUAGE AMERICAN
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_NCHAR_CONV_EXCP FALSE
NLS_NUMERIC_CHARACTERS .,
NLS_SORT BINARY
NLS_TERRITORY AMERICA
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
After
NLS_CALENDAR GREGORIAN
NLS_CHARACTERSET AL32UTF8
NLS_COMP BINARY
NLS_CURRENCY ر.س.
NLS_DATE_FORMAT DD/MM/RR
NLS_DATE_LANGUAGE ARABIC
NLS_DUAL_CURRENCY ر.س.
NLS_ISO_CURRENCY SAUDI ARABIA
NLS_LANGUAGE ARABIC
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_NCHAR_CONV_EXCP FALSE
NLS_NUMERIC_CHARACTERS .,
NLS_SORT ARABIC
NLS_TERRITORY SAUDI ARABIA
NLS_TIME_FORMAT HH12:MI:SSXFF PM
NLS_TIMESTAMP_FORMAT DD/MM/RR HH12:MI:SSXFF PM
NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH12:MI:SSXFF PM TZR
NLS_TIME_TZ_FORMAT HH12:MI:SSXFF PM TZR
Certainly Christian, I don't want to screw up my Oracle 10g Database XE edition software and installation, and I agree, and hope, that creating a new database and doing an export and import will work for me. (The XE edition doesn't give the option to create a new database; I would need to install the 10g Release 2 media pack from edelivery.)
However, with the above information provided, is it really needed?
Please help me.
Thanks and Regards -
How to handle all UTF-8 char set in BizTalk?
Can anyone let me know how to handle the UTF-8 character set in BizTalk?
My received file can contain any characters, like ÿÑÜÜŒöäåüÖÄÅÜ. I have to support the whole character repertoire under the umbrella of UTF-8.
But when I am trying to convert the flat file data to XML, it's converting the special characters to '??????????'.
Thanks

That won't work because the content has been modified simply by posting it.
Let's start from the beginning:
No component will ever replace any character with '?', that just doesn't happen.
Some programs will display '?' if the byte value does not fall within the current character set, UTF-x, ANSI, ANSI+Code Page, etc.
You need to open the file with an advanced text editor such as Notepad++.
Please tell us exactly where you are seeing the '?'.
The Code Page is not an encoding itself; it is a special way of interpreting ANSI single-byte char values 0-255 in a way that supports characters beyond the traditional Extended Character Set.
You need to be absolutely sure what encoding and possibly Code Page the source app is sending. Notepad++ is pretty good at sniffing this out or just ask the sender. If you determine that it's really UTF-8, you must leave
the Code Page property blank. -
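The display behaviour described above can be reproduced outside BizTalk (a minimal Java sketch; the byte 0xDC merely stands in for whatever the source app sends): a single Windows-1252 high byte is not valid UTF-8 on its own, so a UTF-8 reader shows a replacement character where the letter was:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CodePageMismatch {
    public static void main(String[] args) {
        byte[] fromSourceApp = { 'A', (byte) 0xDC, 'B' }; // "AÜB" in Windows-1252

        // Correct: decode with the sender's actual code page
        String right = new String(fromSourceApp, Charset.forName("windows-1252"));
        System.out.println(right); // AÜB

        // Wrong: decode the same bytes as UTF-8; 0xDC alone is malformed UTF-8
        String wrong = new String(fromSourceApp, StandardCharsets.UTF_8);
        System.out.println(wrong.contains("\uFFFD")); // true: a replacement char appears
    }
}
```

This is why pinning down the sender's real encoding (and code page, if any) matters more than anything configured on the receiving side.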
Language Conversion from Unicode 8 to Character Set
Hi,
I am creating a file programmatically containing vendor master data (FTP interface).
The vendor name and vendor address are maintained in the local language (Taiwanese) in the SAP system; these characters are in the UTF-8 character set.
The Unicode character set should be converted to BIG5 for Taiwanese, and this information then sent in the file.
How can I perform this conversion and change the character set of the values I'm retrieving from the table (LFA1) to character set BIG5?
Is it possible to do this conversion in SAP? Does SAP allow this?
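Outside SAP, the pure character-set part of this task is a plain re-encoding (a minimal Java sketch for illustration; which conversion classes your SAP release offers is a separate question, and the sample text is arbitrary):

```java
import java.nio.charset.Charset;

public class ToBig5 {
    public static void main(String[] args) {
        String vendorName = "台北"; // "Taipei" in traditional Chinese

        // Re-encode the Unicode string as Big5 bytes for the outbound file
        Charset big5 = Charset.forName("Big5");
        byte[] big5Bytes = vendorName.getBytes(big5);

        System.out.println(big5Bytes.length); // 4: two bytes per character in Big5

        // Round-trip back to verify the characters survived the conversion
        System.out.println(vendorName.equals(new String(big5Bytes, big5))); // true
    }
}
```

The round-trip check matters: characters outside the Big5 repertoire are silently replaced on encoding, so verifying the output catches lossy conversions.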
/Mike

Hi Manik,
I also have a similar requirement, as I need to convert Unicode Chinese characters to GB2312-encoded Chinese characters. I already posted in the forums but didn't get the required solution.
Can you please provide the solution which you implemented, and also confirm whether it can be used to solve the above problem?
Hoping for your good reply.
Regards,
Prakash -
Hi Everyone.
We are evaluating switching from MySQL 4.0.x (native support
via CF) to MySQL 5.0.x (support via JDBC Connector/J) and we are
having some character set issues on our evaluation server.
When we had it configured with MySQL4.0.x using the built in MySQL
driver we always used the connection string to use the UTF-8
character set:
useUnicode=true&characterEncoding=utf-8
We have tried using this with the JDBC driver but it doesn't
appear to have any effect; all the special characters are coming out
as mangled multi-character strings, which is the same as we see if
we connect to the server from the command prompt using the default
"Latin1" character set. If we connect from the command prompt using
UTF-8 everything looks ok, so I'm guessing the connection string
has changed syntax. I've checked the ConnectorJ documentation and
it appears the connection string should now be:
characterEncoding=UTF-8
However, this did not seem to make any difference.
Any ideas?

andrewdixon wrote:
> [snip]
try:
1) add the following to the end of the JDBC URL in the CF Admin DSN config screen for your db:
?useUnicode=true&characterEncoding=utf8&characterSetResults=UTF-8
(note: NOT in the "connection string" box, but at the end of the JDBC URL!)
2) in your Application.cfm file add the following lines right after the <cfapplication> tag:
<cfscript>
SetEncoding("form","utf-8");
SetEncoding("url","utf-8");
</cfscript>
<cfcontent type="text/html; charset=utf-8">
3) on every cfm page in your application add the line:
<cfprocessingdirective pageencoding="utf-8">
as the first line of code
all three, or a combination of two of the above, usually solve the problems with displaying utf-8/unicode-encoded text from the db. Which combination works depends on your db setup...
Azadi Saryev
Sabai-dee.com
Vientiane, Laos
http://www.sabai-dee.com
Oracle Database Character set and DRM
Hi,
I see the below context in the Hyperion EPM Installation document.
We need to install only Hyperion DRM and not the entire Hyperion product suite. Do we really have to create the database in one of the UTF-8 character sets?
Why does it say that we must create the database this way?
Any help is appreciated.
Oracle Database Creation Considerations:
The database must be created using Unicode Transformation Format UTF-8 encoding
(character set). Oracle supports the following character sets with UTF-8 encoding:
l AL32UTF8 (UTF-8 encoding for ASCII platforms)
l UTF8 (backward-compatible encoding for Oracle)
l UTFE (UTF-8 encoding for EBCDIC platforms)
Note: The UTF-8 character set must be applied to the client and to the Oracle database.
Edited by: 851266 on Apr 11, 2011 12:01 AM

Srini,
Thanks for your reply.
I would assume that the ConvertToClob function would understand the byte order mark for UTF-8 in the BLOB and not include any part of it in the CLOB. The byte order mark for UTF-8 consists of the byte sequence EF BB BF. The last byte, BF, corresponds to the upside-down question mark '¿' in ISO-8859-1. To me, it seems as if ConvertToClob is not converting correctly.
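The byte-level claim is easy to verify outside the database (a minimal Java sketch):

```java
import java.nio.charset.StandardCharsets;

public class BomBytes {
    public static void main(String[] args) {
        // The UTF-8 byte order mark is U+FEFF encoded as EF BB BF
        byte[] bom = "\uFEFF".getBytes(StandardCharsets.UTF_8);
        System.out.printf("%02X %02X %02X%n", bom[0], bom[1], bom[2]); // EF BB BF

        // The last BOM byte, 0xBF, is '¿' when interpreted as ISO-8859-1
        String lastByteAsLatin1 = new String(new byte[] { bom[2] },
                                             StandardCharsets.ISO_8859_1);
        System.out.println(lastByteAsLatin1); // ¿
    }
}
```

So a stray '¿' at the start of converted output is a strong hint that a UTF-8 BOM was carried through a byte-for-byte ISO-8859-1 interpretation.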
Am I missing something?
BTW, the database version is 10.2.0.3 on Solaris 10 x86_64
Kind Regards,
Eyðun
Edited by: Eyðun E. Jacobsen on Apr 24, 2009 8:26 PM -
OWB 11.1.0.6.0 with database character set AL32UTF8 is not working
Hi ,
we are working for a project for Turkey.
If we insert Turkish characters into the database, in SQL Developer we are able to see the correct data. But when I load a file from the preprocessor in an OWB process flow, the Turkish characters get changed to different characters in the database. Our database character set is AL32UTF8. Could you please throw some light on this?
Many thanks,
Kiranmai.

Hi,
Yes, we are using the correct character set in the preprocessor configuration. Actually it was a problem with OWB only.
I changed the database character set to WE8ISO8859P9; then I am able to see the correct Turkish characters in the database. I think it warrants an SR with Oracle.
Hi,
I am migrating the database from 9i to 10g. The 9i database is on Windows and the 10g database will be on Solaris. Now, we have some encrypted data, which uses the Windows character set WE8MSWIN1252, moving to AL32UTF8 (Solaris). Could anyone please let me know how I can go about it?
Thanks!

Is the encrypted data stored in VARCHAR2 columns? Or did you store it in RAW columns?
One of the (many) reasons that I would strongly advocate RAW for encrypted data is that you don't have to worry about character set transforms.
If the data is stored in VARCHAR2 columns, you would generally have to decrypt it in the source database, copy it over to the new database in the clear, and re-encrypt it in the destination database. Unless you happen to have chosen an encryption algorithm that guarantees the output to have the same representation in both Windows-1252 and UTF-8 character sets, which would seem exceptionally unlikely.
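The hazard can be demonstrated without any encryption at all (a minimal Java sketch; the three bytes stand in for arbitrary ciphertext): ciphertext bytes are effectively random, and high bytes do not survive a Windows-1252-to-UTF-8 transform unchanged, so the stored value would no longer decrypt:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CiphertextInVarchar2 {
    public static void main(String[] args) {
        // Pretend this is ciphertext; 0x81 is a high byte with no stable mapping
        byte[] ciphertext = { 0x10, (byte) 0x81, 0x7F };

        // Stored in a VARCHAR2 on a WE8MSWIN1252 database, the bytes become characters...
        String asVarchar2 = new String(ciphertext, Charset.forName("windows-1252"));

        // ...and on export to an AL32UTF8 database they are re-encoded as UTF-8
        byte[] afterMigration = asVarchar2.getBytes(StandardCharsets.UTF_8);

        // The byte sequence no longer matches the original: decryption would fail
        System.out.println(Arrays.equals(ciphertext, afterMigration)); // false
    }
}
```

RAW columns sidestep this entirely because character set conversion is never applied to them.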
Justin -
Database character set = UTF-8, but mismatch error on XML file upload
Dear experts,
I am having problems trying to upload an XML file into an XMLType table. The Database is 9.2.0.5.0, with the character set details:
SELECT *
FROM SYS.PROPS$
WHERE name like '%CHA%';
Query results:
NLS_NCHAR_CHARACTERSET UTF8 NCHAR Character set
NLS_SAVED_NCHAR_CS UTF8
NLS_NUMERIC_CHARACTERS ., Numeric characters
NLS_CHARACTERSET UTF8 Character set
NLS_NCHAR_CONV_EXCP FALSE NLS conversion exception
To upload the XML file into the XMLType table, I am using the command:
insert into XMLTABLE
values(xmltype(getClobDocument('ServiceRequest.xml','UTF8')));
However, I get the error:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00200: could not convert from encoding UTF-8 to UCS2
Error at line 1
ORA-06512: at "SYS.XMLTYPE", line 0
ORA-06512: at line 1
Why does it mention UCS2, as I can't see that in the database character set?
Many thanks for your help,
Mark

UCS2 is known as AL16UTF16 (LE/BE) by Oracle...
Try using AL32UTF8 as the character set name.
AFAIK, the main difference between Oracle's UTF8 and AL32UTF8 character sets is that the UTF8 character set does not support those UTF-8 characters that require 4 bytes.
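A minimal Java sketch of the 4-byte point (the emoji is just a convenient supplementary character):

```java
import java.nio.charset.StandardCharsets;

public class FourByteChars {
    public static void main(String[] args) {
        // U+1F600, a supplementary character (outside the Basic Multilingual Plane)
        String smiley = "\uD83D\uDE00";

        // Real UTF-8 (Oracle's AL32UTF8) needs 4 bytes for it
        System.out.println(smiley.getBytes(StandardCharsets.UTF_8).length); // 4

        // 'É' from the Latin-1 range needs only 2 bytes, so it fits either charset
        System.out.println("É".getBytes(StandardCharsets.UTF_8).length); // 2
    }
}
```

Any data containing characters beyond the BMP therefore needs AL32UTF8 rather than the older UTF8 charset.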
-Mark -
Character set conversion UTF-8 -- ISO-8859-1 generates question mark (?)
I'm trying to convert an XML-file in UTF-8 format to another file with character set ISO-8859-1.
My problem is that the ISO-8859-1 file generates a question mark (?) and puts it as a prefix in the file.
?<?xml version="1.0" encoding="UTF-8"?>
<ns0:messagetype xmlns:ns0="urn:olof">
<underkat>testv���rde</underkat>
</ns0:messagetype>
Is there a way to do the conversion without getting the question mark?
My code looks as follows:
import java.io.*;

public class ConvertEncoding {
    public static void main(String[] args) {
        String from = "UTF-8", to = "ISO-8859-1";
        String infile = "C:\\temp\\infile.xml", outfile = "C:\\temp\\outfile.xml";
        try {
            convert(infile, outfile, from, to);
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    private static void convert(String infile, String outfile,
                                String from, String to)
            throws IOException, UnsupportedEncodingException {
        // Set up byte streams
        InputStream in = null;
        OutputStream out = null;
        if (infile != null) {
            in = new FileInputStream(infile);
        }
        if (outfile != null) {
            out = new FileOutputStream(outfile);
        }
        // Set up character streams
        Reader r = new BufferedReader(new InputStreamReader(in, from));
        Writer w = new BufferedWriter(new OutputStreamWriter(out, to));
        /* Copy characters from input to output.
         * The InputStreamReader converts from the input encoding to Unicode,
         * and the OutputStreamWriter converts from Unicode to the output encoding.
         * Characters that cannot be represented in the output encoding
         * are output as '?'. */
        char[] buffer = new char[4096];
        int len;
        while ((len = r.read(buffer)) != -1) { // Read a block of input
            w.write(buffer, 0, len);
        }
        r.close();
        w.flush();
        w.close();
    }
}

Yes, the next character is the '<'.
The file that I read from is generated by an integration platform. I send a plain file to it (supposedly in UTF-8 encoding) and it returns another file (in between, I call my Java class that converts the character set from UTF-8 to ISO-8859-1). The file that I get back contains the '���' if the conversion doesn't work and the '?' if the conversion worked.
My solution so far is to skip the leading "junk characters" when reading from the input stream. Something like:
private static final char UTF_BOM = '\uFEFF'; // the UTF byte order mark (shows up as '?' when mis-decoded)

// r is the BufferedReader from the convert() method above
String from = "UTF-8", to = "ISO-8859-1";
if (from != null && from.toLowerCase().startsWith("utf-")) { // Are we reading a UTF-encoded file?
    /* Read the first character of the UTF-encoded file.
     * It will return the BOM in the first position if the file starts with one.
     * If the BOM is returned, we skip this character in the read. */
    try {
        r.mark(1); // Only allow reading one char, so that reset() works
        int i = r.read();
        char c = (char) i;
        if (String.valueOf(UTF_BOM).equalsIgnoreCase(String.valueOf(c))) {
            r.reset(); // Reset to the start position
            r.skip(1); // Skip the BOM when reading from the stream
        } else {
            r.reset();
        }
    } catch (IOException e) {
        e.getMessage();
        // return null;
    }
}
-
Cannot convert character sets for one or more characters
Hello,
Issue:
Source File: Test.csv
Source file dropped on application server as csv file via FTP
Total records in csv file 102396
I am able to load only 38,000 records; after that I get the error message "cannot convert character sets for one or more characters" while loading into the PSA.
If I load the same CSV file from my local workstation, I do not get any error message and am able to load all 102,396 records.
I am guessing that while FTPing the file to the application server, some invalid codes are being added?
But I FTP 7 files, and out of the 7 I am unable to load only 1 file.
If anybody has faced this kind of problem, please share with me; I will assign full points.
Thanks

I checked the lowercase option for all the IOs.
When I checked the failed PSA log, the number of records processed is "0 of 0"; my question is how, without processing a single record, the system is throwing this error message.
When I load the same file from the local workstation, there is no error message.
I am thinking that when I FTP the file from the AS/400 to the BI application server (Linux), some invalid characters are being added, but how can we track those invalid characters?
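One way to track the invalid characters down (a minimal Java sketch; the expected charset is an assumption) is to decode the transferred file with a strict decoder, which reports the offset where the data stops being valid instead of silently substituting:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.*;

public class FindBadBytes {
    // Returns -1 if all bytes are valid in the charset, else the offset of the first bad byte
    public static int firstInvalidByte(byte[] data, Charset cs) {
        CharsetDecoder dec = cs.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)       // fail instead of substituting
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        ByteBuffer in = ByteBuffer.wrap(data);
        CharBuffer out = CharBuffer.allocate(data.length);
        CoderResult r = dec.decode(in, out, true);
        // On error, the buffer position is left at the start of the bad sequence
        return r.isError() ? in.position() : -1;
    }

    public static void main(String[] args) {
        byte[] good = "plain text".getBytes(StandardCharsets.UTF_8);
        byte[] bad = { 'a', 'b', (byte) 0xFF, 'c' }; // 0xFF can never appear in UTF-8

        System.out.println(firstInvalidByte(good, StandardCharsets.UTF_8)); // -1
        System.out.println(firstInvalidByte(bad, StandardCharsets.UTF_8));  // 2
    }
}
```

Running the transferred file through a check like this (against whatever charset the load expects) pinpoints the exact byte offsets the FTP transfer corrupted, which usually also reveals whether the transfer mode (ASCII vs. binary) is to blame.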
Gurus please share your thoughts on this I will assign full points.
Thanks,