Unicode characters in Java

I have a question about the following code:
public class Attempt
// char a='\u000A';
This code does not compile even though the offending line is commented out.
It gives the following compile-time error:
---------- javac ----------
D:\saurabh\Attempt.java:4: unclosed character literal
^
D:\saurabh\Attempt.java:4: <identifier> expected
^
2 errors
Normal Termination
Output completed (1 sec consumed).
Is this a bug in Java?
Please reply to this.
rajeev

This happens because Unicode escapes are translated before the source is tokenized, even inside comments; \u000a is the LF (line feed) character, so what the compiler actually ends up trying to handle is something like:
// char a='     <- the escape has been replaced by a real line break here
';
The line break terminates the // comment, so the leftover '; is seen as a new line of code, which is what produces the "unclosed character literal" error. It might have been nicer if these escapes were preserved during compilation and substituted at run time, but that would probably cause all kinds of other technical problems.
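For reference, a minimal sketch of what does compile (reusing the Attempt class name from the post):
public class Attempt {
    // The original line-feed escape cannot appear anywhere in the source,
    // not even inside a comment, because escapes are decoded before parsing.
    char a = '\n';       // use the character escape for a line feed instead
    char b = '\u0041';   // Unicode escapes that are not line terminators are fine ('A')
}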

Similar Messages

  • Printing unicode characters in Java - help

    Hi there,
    I want to print Unicode characters from a Java program on a Windows system. For example, I want to print Devanagari characters. I found that '\u0900' to '\u0975' represent Devanagari characters, so I tried the following:
    out = new PrintStream(System.out, true, "UTF-8");
    out.println('\u0911');
    but they print characters like ��� and not the actual devanagari characters. Just to be more clear, devanagari script is used by Hindi, Nepali and similar languages.
    If you knew about it and could give any suggestions, that would be very helpful.
    Thanks in advance!

    priyankabhar wrote:
    I am not sure, it is just a Windows system and I am trying to print to the command line. Please suggest how I can find out if my console supports it.

    Use the CHCP command to find out what code page your console uses. And as already suggested, Google is a good resource if you don't know what a "code page" is.

  • How do I get unicode characters out of an oracle.xdb.XMLType in Java?

    The subject says it all. Something that should be simple and error free. Here's the code...
    String xml = new String("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<x>\u2026</x>\n");
    XMLType xmlType = new XMLType(conn, xml);
    conn is an oci8 connection.
    How do I get the original string back out of xmlType? I've tried xmlType.getClobVal() and xmlType.getString() but these change my \u2026 to 191 (question mark). I've tried xmlType.getBlobVal(CharacterSet.UNICODE_2_CHARSET).getBytes() (and substituted CharacterSet.UNICODE_2_CHARSET with a number of different CharacterSet values), but while the unicode characters are encoded correctly the blob returned has two bytes cut off the end for every unicode character contained in the original string.
    I just need one method that actually works.
    I'm using Oracle release 11.1.0.7.0. I'd mention NLS_LANG and file.encoding, but I'm setting the PrintStream I'm using for output explicitly to UTF-8 so these shouldn't, I think, have any bearing on the question.
    Thanks for your time.
    Stryder, aka Ralph

    I created an analogous test case and executed it with DB 11.1.0.7 (Linux x86), and it seems to work fine.
    Please refer to the execution procedure below:
    * I used AL32UTF8 database.
    1. Create simple test case by executing the following SQL script from SQL*Plus:
    connect / as sysdba
    create user testxml identified by testxml;
    grant connect, resource to testxml;
    connect testxml/testxml
    create table testtab (xml xmltype) ;
    insert into testtab values (xmltype('<?xml version="1.0" encoding="UTF-8"?>'||chr(10)||'<x>'||unistr('\2026')||'</x>'||chr(10)));
    -- chr(10) is a linefeed code.
    commit;
    2. Create QueryXMLType.java as follows:
    import java.sql.*;
    import oracle.sql.*;
    import oracle.jdbc.*;
    import oracle.xdb.XMLType;
    import java.util.*;
    public class QueryXMLType {
         public static void main(String[] args) throws Exception, SQLException {
              DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
              OracleConnection conn = (OracleConnection) DriverManager.getConnection("jdbc:oracle:oci8:@localhost:1521:orcl", "testxml", "testxml");
              OraclePreparedStatement stmt = (OraclePreparedStatement) conn.prepareStatement("select xml from testtab");
              ResultSet rs = stmt.executeQuery();
              OracleResultSet orset = (OracleResultSet) rs;
              while (rs.next()) {
                   XMLType xml = XMLType.createXML(orset.getOPAQUE(1));
                   System.out.println(xml.getStringVal());
              }
              rs.close();
              stmt.close();
         }
    }
    3. Compile QueryXMLType.java and execute QueryXMLType.class as follows:
    export PATH=$ORACLE_HOME/jdk/bin:$PATH
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib
    export CLASSPATH=.:$ORACLE_HOME/jdbc/lib/ojdbc5.jar:$ORACLE_HOME/jlib/orai18n.jar:$ORACLE_HOME/rdbms/jlib/xdb.jar:$ORACLE_HOME/lib/xmlparserv2.jar
    javac QueryXMLType.java
    java QueryXMLType
    -> Then you will see U+2026 character (horizontal ellipsis) is properly output.
    My Java code came from "Oracle XML DB Developer's Guide 11g Release 1 (11.1) Part Number B28369-04" with some modification of:
    - Example 14-1 XMLType Java: Using JDBC to Query an XMLType Table
    http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/xdb11jav.htm#i1033914
    and
    - Example 18-23 Using XQuery with JDBC
    http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/xdb_xquery.htm#CBAEEJDE

  • Java class names which contain unicode characters

    I need to create, compile and load java classes which have class names that contain unicode characters.
    I am using Win2k but will need to support unix* in the future.
    When I try to create a fooXbar.jar, where X is a Unicode character which is not ASCII, I get an error when creating the file.
    My question is: how do I map Java class names and package names which contain non-ASCII characters into names that the file systems will accept AND that the Java VM will use when trying to load the .class file from the class path?
    For example, what would the .java and .class files be for the following class?
    class \u6587\u66f8 {

    You could construct names for the .java and .class files that the filesystem can handle, e.g. prepend a % followed by the hex digits of each Unicode character. The problem is then how to compile the class, and how to load the class.
    You can load the class with a custom classloader which translates the Unicode class name to the escaped file name (using %); see the sketch below.
    The problem is then reduced to how you compile your code (you have to map the file name to the class name somehow). I think it can be done, but I don't know the solution to that.
    Alternatively you can use meaningful names for the classes, and then use an obfuscator that rewrites the bytecode so the class names are changed to obscure Unicode names. Perhaps there are already obfuscators out there that will use Unicode characters.
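    A rough sketch of the classloader half of that idea, using present-day APIs; the %XXXX escaping scheme and the classDir layout are invented for illustration, not an existing convention:
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    public class EscapingClassLoader extends ClassLoader {
        private final Path classDir;
        public EscapingClassLoader(Path classDir, ClassLoader parent) {
            super(parent);
            this.classDir = classDir;
        }
        // Map a class name such as \u6587\u66f8 to an ASCII-safe file name like %6587%66f8.class
        static String escape(String className) {
            StringBuilder sb = new StringBuilder();
            for (char c : className.toCharArray()) {
                if (c <= 0x7F) sb.append(c);
                else sb.append('%').append(String.format("%04x", (int) c));
            }
            return sb.toString();
        }
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            try {
                byte[] bytes = Files.readAllBytes(classDir.resolve(escape(name) + ".class"));
                return defineClass(name, bytes, 0, bytes.length);
            } catch (IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }
    Compiling is the harder half, as the reply says: javac writes the .class file under the class's real name, so the escaping/renaming step has to happen after compilation.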

  • Java Editor and Unicode Characters

    I'm trying to use unicode characters in strings with the Java Editor.
    String myOption = "\u2666 " + myDirName;
    This works nicely, displaying a diamond in my web application. The source HTML contains a &#9830; entity.
    However, if I make changes in the Visual Designer (e.g., add a button), the \u2666 in my Java code changes to a diamond symbol. When that happens, I get a "?" in my web application.
    How do I prevent Java Editor from switching to the display character?

    Thanks for your response.
    After more research, I found that the .java file is corrupted. This means the problem is not related to the browser.
    When I have "\u2666" in the Java Editor window, I see "\u2666" in the .java file in WordPad. This is properly compiled and the entity &#9830; appears in the HTML.
    When the diamond symbol is in the Java Editor window, I see an ASCII ? (character 0x3F) and I get a ? in the browser. It is really a question mark (0x3F), not an unknown-character glyph.
    So it appears that when the JSP is changed by adding a button, the .java code is rewritten to disk but the Unicode characters are not translated properly, or something like that. If I could keep the ASCII string "\u2666" from being converted to the diamond, I would be all set.
    I can edit the Java code, close, save and reopen all I want and the Unicode character doesn't get translated to the symbol. This ONLY happens when the JSP is modified.
    Any ideas?

  • Do java supports all unicode characters including telugu..??

    Hello everybody!
    I am rather new to Java technologies and I want to know whether Java supports the Telugu language. If it does, how can we use it, and are there any system requirements?
    Please kindly consider this query. Thanks in advance.

    Yes, Java supports all Unicode characters. If you want to know more about Unicode, have a look at its website:
    http://www.unicode.org/
    Telugu characters are in their code charts here:
    http://www.unicode.org/charts/PDF/U0C00.pdf
    You don't have to "implement" anything. System requirements would include a font that can render those characters. I don't know about keyboards.
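    If it helps, a quick way (not from the thread) to see which installed fonts can render Telugu; canDisplayUpTo returns -1 when every character in the sample is covered:
    import java.awt.Font;
    import java.awt.GraphicsEnvironment;
    public class TeluguFontCheck {
        public static void main(String[] args) {
            String sample = "\u0C24\u0C46\u0C32\u0C41\u0C17\u0C41"; // the word "Telugu" in Telugu script
            for (Font f : GraphicsEnvironment.getLocalGraphicsEnvironment().getAllFonts()) {
                if (f.canDisplayUpTo(sample) == -1) {
                    System.out.println(f.getFontName() + " can display the sample");
                }
            }
        }
    }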

  • Insert Unicode Characters Into Oracle 8.1.5

    Hello,
    First off, here are the specs:
    Oracle 8.1.5
    JDK 1.2.1
    Oracle8i 8.1.6.2.0 JDBC Drivers for use with JDK 1.2.x for Solaris
    I'm running into a problem with inserting Unicode characters into Oracle via the JDBC driver. As you can see above, I am using the Oracle 8.1.6.2.0 JDBC driver because it is the first driver which supports JDK 1.2.x, so I think I should be okay.
    I can retrieve data with special characters from Oracle by calling the getBytes() method on the ResultSet, with all special characters intact. I am using getBytes because calling getString() would throw the following exception: "java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv". However, the value that I just retrieved, or any other data with special (Unicode) characters which I try to insert into Oracle, does not get converted properly.
    What appears to be happening is that data with special characters (unicode), are not being treated as a single double byte character, but rather two single byte characters. Thus, R|ckschlagventil becomes RC<ckschlagventil once it is inserted. (Hopefully, my example will be rendered properly).
    According to all documentation that I have found, the JDBC driver should not have any problem with converting UCS2 Java Strings to Oracle's UTF8 character set.
    I have set Oracle's NLS_NCHAR_CHARACTERSET to UTF8. I am also setting the environment variable NLS_LANG to AMERICAN_AMERICA.UTF8. Perhaps there is some other environment setting in which I am missing?
    Any help would be appreciated,
    Christian

    Import has a lot of options, so it depends on what you want to do.
    C:\> imp help=y
    will show you all possible options. An example of full import :
    C:\> imp <username>/<password>@<TNS alias> file=<DMP file> full=y log=<LOG file>
    Message was edited by:
    Paul M.
    ...and there is always the documentation: http://download-uk.oracle.com/docs/cd/F49540_01/DOC/index.htm
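    For what it's worth, a sketch of the byte-level workaround the original poster describes, assuming the database character set really is UTF8 as in their setup (the parts table and column are made up): bind the string as a parameter on insert, and if getString() fails with the UTF8/UCS2 conversion error, fetch the raw bytes and decode them explicitly.
    import java.sql.*;
    public class Utf8RoundTrip {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:oci8:@localhost:1521:orcl", "scott", "tiger");
            PreparedStatement ins = conn.prepareStatement("insert into parts (name) values (?)");
            ins.setString(1, "R\u00FCckschlagventil");   // the umlaut as a single Java char, not two bytes
            ins.executeUpdate();
            Statement sel = conn.createStatement();
            ResultSet rs = sel.executeQuery("select name from parts");
            while (rs.next()) {
                String name = new String(rs.getBytes(1), "UTF-8"); // decode the UTF8 column bytes ourselves
                System.out.println(name);
            }
            conn.close();
        }
    }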

  • Unicode support in java.io.File.listFiles()

    I am trying to list all the files in a given directory using the File.listFiles() method, yet for some reason the File objects returned have invalid paths when a file has unicode characters in its filename.
    example .. a test directory has these files
    tiga-dj_kicks.mp3
    tr\374by_trio-elevator_music.mp3
    if i call dir.listFiles() i will get ...
    tiga-dj_kicks.mp3
    tr?by_trio-elevator_music.mp3
    why is java converting my unicode into "?"s?
    i have seen other java apps that don't have this problem ... does anyone know why this is happening?
    In terms of my system, I am on Linux 2.4.20 in the US and I have tried many JDK versions, including 1.3.1 and the latest 1.4.2, as well as Blackdown 1.4.1.

    It's true that System.out may not like unicode characters, so I instead I have been using a JTextArea and I know that supports my unicode.
    **eek ... after a preview it seems that my unicode isn't going to show up here either :( i'll wrap brackets around what was actually displayed as unicode in my environment.
    here is another test I have run ... println(String) just appends to the text area
    String original = new String("A" + "\u00ea" + "\u00f1" + "\u00fc" + "C");
    println(original);
    File f = new File(original);
    println(f.getPath());
    f.createNewFile();
    String[] files = new File(System.getProperty("user.dir")).list();
    for (int i = 0; i < files.length; i++) {
        println(files[i]);
    }
    Output ...
    [[A???C]]
    [[home/ag92114/workspace/test/unicode-test/A???C]]
    Test.java
    Test.java~
    Test.class
    A???C
    As you can see ... the text area correctly displayed the unicode for the String and File objects that I constructed myself, but when I list back the file I just created then my unicode is lost.
    listing the directory on the shell yields ...
    [ag@home:~/workspace/test/unicode-test ] ls
    ./ ../ A???C Test.class Test.java Test.java~
    So possibly the file is not even written to the native filesystem with the unicode??
    Finally I tried opening xemacs and touching a file with a unicode filename and it works fine and displays in the shell just fine, yet when I list the dir contents in java then the unicode is lost. I touched the file [[A?]] using xemacs.
    shell view ...
    [ag@home:~/workspace/test/unicode-test ] ls
    ./ A???C Test.class Test.java~
    ../ [[A?]] Test.java
    java lists files as ...
    Test.java
    Test.java~
    Test.class
    A???C
    A?
    normally this is where I would concede that java just isn't capable of handling unicode in the filesystem, but I know that isn't true because I have tried other applications (jEdit, limewire) that both seem capable of listing and displaying directories and files that contain unicode. i just wish i knew how they were doing it.
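    For what it's worth, the usual suspect on Linux is the locale the JVM was started under; a quick check of the relevant settings (property names as used by Sun JDKs, and sun.jnu.encoding may not exist on very old releases):
    public class EncodingCheck {
        public static void main(String[] args) {
            // Both are derived from the locale (LANG / LC_ALL) in effect when the JVM started;
            // under a plain POSIX/C locale, non-ASCII file names tend to collapse to '?'.
            System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
            System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
        }
    }
    Starting the JVM from a shell whose LANG matches the encoding of the file names is presumably how jEdit and LimeWire end up seeing them intact; that is an assumption, not something stated in the thread.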

  • Displaying unicode characters

    Dear all,
    I am currently dealing with a character displaying problem on the MAM.
    We will soon go live in China. Until now we only had European countries, with a Latin alphabet.
    Now however this changes, so we need to use Unicode to display all characters correctly.
    Therefore I have converted all our custom language files to language files with Unicode escape characters.
    e.g.:
    EQUIPMENTS_EQU_MAT_NR=设备材料号码
    Now the strange thing is that when we login in Chinese, everything is displayed correctly, but when we login in German or Polish (countries which also have some special characters), we don't see everything displayed correctly.
    This is the code how we display an entry from the language file on the screen:
    <%@page language="java" contentType="text/html; charset=UTF-8"%>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <jsp:useBean id="res" class="com.sap.ip.me.api.services.MEResourceBundle" scope="session"></jsp:useBean>
    <%=PageUtil.ConvertISO8859toUTF8(res.getString("CONFIRMATIONS_HEADER_DETAIL"))%>
    For Chinese language, the characters are displayed correctly in this way.
    e.g.: 最后一次同步时间
    However Polish characters and German characters are not (always) displayed correctly.
    e.g.: Wskaźnik pierwszego usuniÄu2122cia usterki
    The only 'difference' that I can see is that for China, every character in the language file has a special Unicode notation, while for Polish and German characters, only the special characters are displayed in special Unicode notation.
    e.g.:
      EQUIPMENTS_EQU_MAT_NR=Numer materia\u00c5‚u sprz\u00c4™tu
    FYI, I've converted the files to Unicode escape characters with the java util native2ascii.exe.
    Is there anyone who knows how to solve this issue?
    Thanks already in advance!
    Best regards,
    Diederik
    Edited by: Diederik Van Wassenhove on Jul 6, 2009 2:34 PM

    Dear all,
    I've found the reason for this problem.
    Thanks anyway for your time!
    The problem was that when converting the language files to Unicode escape characters, the source format was wrong. The files were saved as UTF-8, but the Java tool native2ascii does not take UTF-8 as its default input format, so the resulting file did not contain the correct Unicode escapes.
    I've re-generated the language files with the parameter -encoding UTF-8, and now everything is displayed correctly!
    Have a good day!
    Diederik
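    For reference, the corrected conversion looks something like this (the file names are placeholders):
    native2ascii -encoding UTF-8 messages_zh.properties messages_zh_escaped.properties
    Without the -encoding option, native2ascii assumes the platform default encoding, which is why the UTF-8 source files came out mangled.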

  • Query regarding Handling Unicode characters in XML

    All,
    My application reads a flat file as a series of bytes, and I create an XML document out of the data. The data contains Unicode characters.
    I use an XSLT to create the XML file. While creating it I don't face any issues,
    but later, if I try to parse the constructed XML file, I get a SAX parsing exception
    (Caused by: org.xml.sax.SAXParseException: Character reference _"<not visible clearly in Browser>"_ is an invalid XML character.)
    Can some one advice on how to tackle this.
    regards,
    D
    Edited by: user9165249 on 07-Jan-2011 08:10

    How to tackle it? Don't allow your transformation to produce characters which are invalid in XML. The XML Recommendation specifies what characters are allowed and what characters aren't, in section 2.2: http://www.w3.org/TR/REC-xml/#charsets. The invalid characters can't come from the XML which you are transforming so they must be coming from code in your transformation.
    And if you can't tell what the invalid characters are by using your browser, then send the result of the transformation to a file and use a hex editor to examine it.
    By the way, this isn't a question about Unicode characters in XML, since all characters in Java are Unicode and XML is defined in terms of Unicode. So saying that your data contains Unicode characters is a tautology. It couldn't do anything else. If your personal definition of Unicode is "weird stuff that I don't understand" then do yourself a favour, take a couple of days out and learn what Unicode is.
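    A sketch of the kind of pre-filter the reply implies, based on the Char production in section 2.2 of the XML 1.0 Recommendation (the class and method names are made up):
    public class XmlCharFilter {
        // True if the code point is allowed by XML 1.0:
        // #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
        static boolean isValidXmlChar(int cp) {
            return cp == 0x9 || cp == 0xA || cp == 0xD
                    || (cp >= 0x20 && cp <= 0xD7FF)
                    || (cp >= 0xE000 && cp <= 0xFFFD)
                    || (cp >= 0x10000 && cp <= 0x10FFFF);
        }
        // Drop anything the parser would later reject.
        static String stripInvalid(String s) {
            StringBuilder sb = new StringBuilder(s.length());
            for (int i = 0; i < s.length(); ) {
                int cp = s.codePointAt(i);
                if (isValidXmlChar(cp)) {
                    sb.appendCodePoint(cp);
                }
                i += Character.charCount(cp);
            }
            return sb.toString();
        }
    }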

  • Exporting unicode characters to PDF using JRC not working.

    We have a requirement to support Unicode characters (Russian) in our reports. We are using the JRC with the R2 release. When I view the report in the viewer, the characters are correct, but when I export to PDF, they show as ???'s. Is this a bug? When I export from the report designer to PDF, they show correctly, but I have heard it uses a different reporting engine from the JRC.

    The solution is quite simple, don't worry too much about it.
    The JRC PDF export engine only supports the windows-1252 encoding scheme. If your character set uses an encoding other than windows-1252 you will get a bunch of ????. There is a simple way to convert between encodings in Java.
    As an example, Arabic text uses the windows-1256 character scheme, and we can convert this to the JRC-supported windows-1252 by:
    JRCSupportedCharacterString = new String((InputCharacterString.toString()).getBytes("windows-1256"), "windows-1252");
    InputCharacterString - windows-1256 encoded
    JRCSupportedCharacterString - windows-1252 encoded (JRC supported)
    Now JRC will correctly process your character string.
    Note: make sure to set the font of the fields in your report template to a font that covers the relevant script (e.g. Arabic, Chinese or whatever)
    Java encoding names and more information about conversion are available at
    http://mindprod.com/jgloss/encoding.html#CONVERSION
    Happy coding............

  • Passing command line unicode argument to Java code

    I have to pass a command-line argument containing Japanese text to a Java main method. If I type the Unicode characters in the command-line window, it displays '?????', which is OK, but the value passed to the Java program is also '?????'. How do I get the correct value of the argument passed from the command window? Below is a sample program which writes the value supplied as the command-line argument to a file.
    public static void main(String[] args) {
        String input = args[0];
        try {
            String filePath = "C:/Temp/abc.txt";
            File file = new File(filePath);
            OutputStream out = new FileOutputStream(file);
            byte buf[] = new byte[1024];
            int len;
            InputStream is = new ByteArrayInputStream(input.getBytes());
            while ((len = is.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
            out.close();
            is.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    To clarify a little:
    If the "command line" means a console opened from the operating system, then the problem is that your machine's operating system can't handle the Unicode characters you want (at least, not in the console mode). If you can't configure the machine to accept Unicode on the command line, you'll have to investigate some other means of passing the argument to your app, as the other poster mentioned.

  • How to write to file Unicode characters

    I have PDF files that I need to copy some strings out of and put them in various fields in a Postgres database. The goal is a Java front end to the database, where I mark and copy the PDF text, paste it into a field in a Swing window, and from there into the database.
    I am unsuccessful at reading a PDF file, so I have opted to cut and paste the PDF file into an MS Word file. This results in errors in certain Unicode characters. I am trying to rectify them with a simple program, the start of which is below, by replacing the erroneous char with the proper Unicode symbol. As shown by the following, I cannot figure out how to write out a Unicode character. Do I need to wrap (which I don't know much about yet)? Or do I have a file problem? (I have a Vista machine.) I don't think it should be impossible to write Unicode into a file, as I am able to write phonological symbols, Russian, and Sanskrit into MS Word files. So, it must be in the Java.
    P.S.: I am reading Schildt's Java: A Beginner's Guide and am through chapter 10, but the remaining chapters are on threads, enumerations, autoboxing, static import, annotations, generics, applets, events, miscellaneous topics, and, finally, Swing. Maybe it's in the autoboxing?
    Any help would be most appreciated.
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.io.IOException;
    public class CopyCharacters {
        public static void main(String[] args) throws IOException {
            FileReader inputStream = null;
            //FileWriter outputStream = null;
            PrintWriter outputStream = null;
            char longa = 0x0101;
            int longc = 0x0101;
            char capA = 0x0041;
            char longb = 0x0111;
            // Unicode for uppercase Greek omega character
            char uniChar = '\u039A';
            // Character ca = new Character('0x0101'); // illegal
            Character cb = new Character('\u0101');
            Character cc = '\u0101';
            int c;
            try {
                inputStream = new FileReader("Cardona1.txt");
                outputStream = new PrintWriter(new FileWriter("characteroutput.txt"));
                outputStream.println("character1 " + capA); // yields A
                outputStream.println("character2 " + longa); // yields ?
                outputStream.println("character3 " + '\u0101'); // yields ?
                outputStream.println("character4 " + longc); // yields 257
                outputStream.println("character5 " + "S\u00ED Se\u00F1or"); // yields Sí Señor
                outputStream.println("character6 " + "S'\u00ED' Se\u00F1or"); // yields S'í' Señor
                outputStream.println("character7 " + "S\u0121 Se\u00F1or"); // yields S? Señor
                outputStream.println("character8 " + "S'\u0121' Se\u00F1or"); // yields S'?' Señor
                outputStream.println("character9 " + uniChar); // yields ?
                outputStream.println("character10 " + '\u00FF'); // yields ÿ but fails on \u0100 -- only 0-255!!
                outputStream.println("character11 " + cc); // yields ?
                outputStream.println("character12 " + cb); // yields ?
                outputStream.println("character13 ?"); // yields ?
                while ((c = inputStream.read()) != -1) {
                    outputStream.write(c); // the original writeln(c) does not exist; write(c) copies each character
                }
            } finally {
                if (inputStream != null) {
                    inputStream.close();
                }
                if (outputStream != null) {
                    outputStream.close();
                }
            }
        }
    }

    I am unsuccessful at reading a PDF file, so I have opted to cut and paste the PDF file into an MS word file. This results in errors in certain unicode characters.
    Stop right there. You are digging a hole. Stop digging. Fix the problems with reading the PDF file.
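    Separately, the reason most of the lines above come out as '?' is that FileWriter always uses the platform default encoding; a minimal sketch of writing the same characters through an explicit UTF-8 writer (the output file name is reused from the post):
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.PrintWriter;
    public class CopyCharactersUtf8 {
        public static void main(String[] args) throws IOException {
            PrintWriter out = new PrintWriter(
                    new OutputStreamWriter(new FileOutputStream("characteroutput.txt"), "UTF-8"));
            out.println("character2 " + '\u0101');   // a-macron survives the round trip
            out.println("character9 " + '\u039A');   // so does the Greek capital letter
            out.close();
            // The result must then be opened in an editor that reads the file as UTF-8.
        }
    }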

  • Performance of JEditorPane with unicode characters

    Hi,
    I'm using a JEditorPane to edit rather large (> 15000 words) but simple HTML files. Everything is fine until I add even a single Unicode character with a character code higher than 255, like a Greek omega (\u03A9), to the text. With the Unicode character, the control starts to take an incredibly long time to redraw (sometimes minutes) when you resize it, for instance. The strangest thing is that removing the character again does not restore performance. Can anyone explain why this is happening?
    import javax.swing.*;
    import javax.swing.text.html.HTMLEditorKit;
    public class EditorPaneTest {
        public static void main(String[] args) {
            StringBuffer html = new StringBuffer();
            html.append("<html><body>");
            // Uncomment next line, run and resize frame to see problem
            // html.append("<p>\u03A9</p>");
            for (int i = 0; i < 2000; i++) {
                html.append("<p>Testing, testing, testing...</p>");
            }
            html.append("</body></html>");
            JFrame jFrame = new JFrame("Test");
            jFrame.setSize(300, 300);
            JEditorPane jEditorPane = new JEditorPane();
            jEditorPane.setEditorKit(new HTMLEditorKit());
            jFrame.add(new JScrollPane(jEditorPane));
            jFrame.setDefaultCloseOperation(JInternalFrame.EXIT_ON_CLOSE);
            jFrame.setVisible(true);
            jEditorPane.setText(html.toString());
        }
    }
    Any help would be much appreciated.
    Thanks,
    Rasmus

    In the meantime, I had to solve my problem one way or another, and the only thing that came up to my mind was to use JavaMail API.
    It is not quite what I was hoping for, because it doesn't provide opening of default e-mail client on local machine, but at least it can send e-mail with Unicode characters in the subjects line, recipient addresses, etc.
    Make a new message using JavaMail and then set it's properties in a fairly simple manner, like this:
    message.setSubject( MimeUtility.encodeText("+ ... some Unicode text with Cyrillic symbols ... +", "UTF-8", "B") );
    I'd still like to see if there are any suggestions on how to do the similar thing with java.awt.Desktop.
    Regards,
    PS

  • Unicode Characters in J2ME.....Urgent

    Hello...
    I want to develop a simple text message service application. It's a simple SMS application, but I want to translate characters into another language. For example, when the button M is pressed on the keypad, the corresponding Ethiopian letter should show up. Does anybody know how to use Java Unicode characters for this? I want to map English characters to the Ethiopian alphabet.
    Thanks in Advance

    Hi
    Read this post: http://www.j2meforums.com/yabbse/index.php?board=2;action=display;threadid=9553;start=msg43827#msg43827
    Mihai
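    A very small sketch of the mapping idea; the Ethiopic code points below (U+1200 block) are placeholders rather than a real transliteration table, and plain arrays are used because CLDC has no generics:
    public class EthiopicMapper {
        // English key -> Ethiopic syllable; extend with the full table.
        private static final char[] LATIN    = { 'm', 'h', 'l' };
        private static final char[] ETHIOPIC = { '\u1218', '\u1200', '\u1208' };
        public static char map(char key) {
            for (int i = 0; i < LATIN.length; i++) {
                if (LATIN[i] == key) {
                    return ETHIOPIC[i];
                }
            }
            return key; // no mapping: leave the character as typed
        }
    }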
