Add non-Roman characters to PDEText

Hi everyone,
Is there a way other than PDETextAddGlyphs to add non-Roman text to a PDEText object? I've been trying to add Arabic and Cyrillic characters to a PDEText object by adding the glyphs, with no success. I'll list the method here, so maybe someone can tell me where I'm going wrong. Most of this is straight out of the PDETextAddGlyphs snippet.
1. Create a PDEFont from a system font that supports the characters I need. I'm using Arial, since its character map has the characters I want. For example, I'm using a Cyrillic character with Unicode code point 0434; Arial supports that character, so I embed or subset Arial using the createUnicode flag.
2. Create all of the objects I need to make this work, like the graphics state, text matrix, and text state.
3. I used the createGlyphRun method from the snippet, exactly as given, to create the glyph run.
4. Stuff the glyph run, text state, text matrix, and graphics state into a PDEText object using PDETextAddGlyphs. That's where the cloud of smoke comes in: the call returns "Bad Parameter".
If I use the same parameters with the PDETextAdd method instead, it works fine. That leads me back to the createGlyphRun method, but I got that from Adobe's SDK and it should work. I'm a little stuck and would really appreciate some help.
Joe
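
For context, the PDETextAdd path that does work looks roughly like the sketch below, modelled on the SDK's CreatePDEContent-style samples. This is a sketch only: the flag and macro names are quoted from memory and should be verified against the current PERCalls.h / PDExpT.h headers, and the PDETextAddGlyphs variant simply replaces the byte string and length with a populated PDEGlyphRun, whose layout is defined in PDExpT.h.

    /* Sketch only: header setup and error checking omitted; assumes an
       Acrobat plug-in or PDF Library environment where the PDFEdit HFTs
       are available. */
    #include "ASCalls.h"
    #include "PDCalls.h"
    #include "PERCalls.h"   /* PDFEdit read  */
    #include "PEWCalls.h"   /* PDFEdit write */
    #include <string.h>

    static PDEText AddTextWithEmbeddedFont(void)
    {
        /* Step 1: locate a system font and build a PDEFont from it,
           embedded/subsetted and with a ToUnicode CMap (flag names as in
           the SDK samples; verify against PDExpT.h). */
        PDEFontAttrs attrs;
        memset(&attrs, 0, sizeof(attrs));
        attrs.name = ASAtomFromString("Arial");
        attrs.type = ASAtomFromString("TrueType");

        PDSysFont sysFont = PDFindSysFont(&attrs, sizeof(attrs), 0);
        PDEFont pdeFont = PDEFontCreateFromSysFont(
            sysFont,
            kPDEFontCreateEmbedded | kPDEFontWillSubset | kPDEFontCreateToUnicode);

        /* Step 2: default graphics state and a 24 pt text matrix at (72, 72). */
        PDEGraphicState gState;
        PDEDefaultGState(&gState, sizeof(PDEGraphicState));

        ASFixedMatrix textMatrix;
        textMatrix.a = ASInt16ToFixed(24);  textMatrix.b = fixedZero;
        textMatrix.c = fixedZero;           textMatrix.d = ASInt16ToFixed(24);
        textMatrix.h = ASInt16ToFixed(72);  textMatrix.v = ASInt16ToFixed(72);

        /* Step 4, PDETextAdd variant: PDETextAddGlyphs takes the same
           trailing parameters but a PDEGlyphRun in place of the byte
           string and length. */
        PDEText pdeText = PDETextCreate();
        static const char kStr[] = "Hello";
        PDETextAdd(pdeText, kPDETextRun, 0,
                   (ASUns8 *)kStr, (ASInt32)strlen(kStr),
                   pdeFont,
                   &gState, sizeof(PDEGraphicState),
                   NULL, 0,                 /* default text state */
                   &textMatrix, NULL);      /* no stroke matrix   */
        return pdeText;
    }

The sketch passes NULL and 0 for the text state to take the defaults, and omits the PDERelease calls that real plug-in code would make.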

I haven't looked at that yet. I was researching earlier today how to change the encoding from WinAnsi to some other encoding, but which one, and how, I haven't figured out yet. I need something that can support as many characters as possible, since we are looking to support Arabic, Cyrillic, and Greek for now, and then branch out to CJK later. CJK looks a bit more cut and dried, judging by the snippet. I guess I'll have to find a matching encoding for now. If you have any suggestions, I'd appreciate them.
Thanks
Joe

Similar Messages

  • How to add non English characters

    I installed WebLogic 10.3.3.0, SOA Suite 11.1.1.2, and SOA Suite 11.1.1.3 on Windows 2008 (English version). The location in Regional and Language Options is set to my local region, and I also set the language for non-Unicode programs.
    I can add new holiday rules using non-English characters in the BPM Workspace. Then I shut down my computer as normal. However, after I restart the computer, the WebLogic domain, and SOA Suite 11g, the non-English characters become ??????.
    How do I configure the WebLogic domain and SOA Suite to show non-English characters?

    Are both the English and Korean letters in the same column, without any space or separator between them? Please re-check.

  • Non-Roman Characters (Korean/Japanese/Chinese) appearing as boxes

    Hey,
    I've had this problem before, but it has seemed to fix itself in the past. Ever since the release of iTunes 9, though, I've had a constant problem with any Kanji, Kana, or Hangul in my library: instead of the foreign song titles, the names appear only as squares or little boxes.
    When I load the songs onto my iPod, the problem does not seem to persist there. After doing a lot of research, I believe this is a problem having to do with Unicode. I know for a fact that my computer is capable of displaying Korean/Japanese, since it works in other programs such as Office 2007, and even Windows Media Player.
    On another note, right-clicking these problematic songs while browsing my library will bring up "Play 'suchandsuchsong'" with the correct characters. These songs show up normally in Windows Explorer. However, right-clicking one in Windows Explorer and selecting Properties gives me the little boxes as well.
    I hope I've given enough information to explain my problem, and I will be very grateful for a solution to this problem which many people seem to have.
    Edit: Additionally, solutions like changing the ID3 tags have not helped, but this might be because they were rather ambiguous in their directions (such as the solution given by iTunes in the iTunes Help).
    Message was edited by: svarogexodeus

    I didn't have the problem of showing boxes, but I had a similar problem with Korean characters showing as symbols. To fix it, I (1) right-clicked on the song, (2) chose Convert ID3 Tags, and (3) checked the "Reverse Unicode" box, then hit "OK". You can highlight as many songs as you want at once and do the same thing, to save you time.
    Hope this helps you.

  • Text Messaging in Japanese (or other language with non-Roman script)

    Is it possible to send/receive text messages written in non-Roman characters with Verizon? 
    More specifically, I'm using a Droid 2 with Verizon, and I'm trying to text a friend in Japanese.  She has an iPhone (AT&T), and is physically in the U.S. (so I'm not asking about international text messaging).  I've already installed apps on my Droid (e.g. OpenWNN, or Simeji) to input Japanese, which work fine in and of themselves, and the Droid of course displays Japanese text or other languages just fine on the Browser or in E-mails.
    However, I'm having lots of trouble with Text Messaging.  When I send a message containing Japanese text (typed in perfectly fine with OpenWNN or whatever), she either never gets the message, or the text comes out unreadable (as ???????).  Messages with a mix of Roman and non-Roman characters typically show the Roman characters ok, but the non-Roman ones (i.e. Japanese) are garbled.  If she sends me a message containing any non-Roman characters, I generally don't get the message at all (i.e. not even garbled-- just no message at all).   This deficiency seems to be specific to Text Messaging, as far as I can tell.  I can send and receive e-mails containing Japanese just fine, read foreign language web pages, and type in Japanese into search boxes on those web pages and so on with the Droid.  However, sending an e-mail message to (myphone#)@vtext.com, predictably, results in any Japanese text being garbled, though Roman characters come through just fine.
    Is this perhaps something specific to Verizon's network?  Or is text messaging in non-Roman scripts just inherently impossible?  My friend says she knows others who can successfully text in non-Roman scripts (e.g. Korean Hangul, Japanese Kana or Kanji), and claims some of these folks are Verizon customers too.  A search on droidforums.com yielded someone who supposedly could text in Korean within Verizon.  So, I'm hoping this is possible, and I'm just missing something. However, this is perhaps all secondhand information and rumor.
    (Incidentally, I know for certain that messaging with non-Roman scripts is possible in general.  During my last vacation in Japan, I rented a phone there, and could send messages in English or Japanese scripts.  However, I believe the phones there actually have their own e-mail addresses, and so the service there is more like regular e-mail-- my understanding is that text messaging as we know it in the U.S. is a distinct technology, though I could well be wrong.)
    So, to restate my question: is it possible to text message with non-Roman scripts on Verizon? Has anyone out there done this successfully? If so, what did you do, or what app did you use? Or is it in fact impossible? (e.g. maybe text messaging here only uses 7 or 8 bits per character, instead of the 16 needed to encode all the various Asian, European, and other scripts?)
    Thanks for any help available.

    I'm trying to figure this out as well, but my understanding on the matter is that Verizon's CDMA network uses a Unicode encoding that will display Roman characters, Lao, Thai, and several other scripts, but not Hiragana, Kanji, Katakana, or Simplified or Traditional Chinese. Romaji should work, however. It would be nice to hear some input from a Verizon rep on the subject, because like you, my info is all secondhand.
    If my understanding is correct, the simple answer is no, it's not possible.  

  • How to retrieve non-english characters from a query

    Hello,
    My apologies if this post is not in its proper place, but I was a bit confused where to add it.
    I'm running a query in SQL Developer on a table that contains company names from many different countries, and one of the checks I need to make to ensure data consistency is to find all rows in which the company name contains special or non-English characters (like ç, ã, ä, for example).
    I don't know what I can use to do this. I tried collating with NLS_SORT, but it didn't work.
    Is there some way to select only the rows that contain these special or non-English characters, excluding from the results the rows that have only English characters? Please bear in mind that we have many languages in this table.
    The field I would like to make the conditions on is VARCHAR2.
    Please let me know if there is any extra information I should provide you so that you can help me.
    Thank you in advance for the help.
    Regards,
    Luís

    Hi Luis,
    > My apologies if this post is not in its proper place, but I was a bit confused where to add it.
    This is the forum for the SQL Developer Data Modeler product.
    I suggest you try using the SQL and PL/SQL Forum: PL/SQL
    David

  • Non-english characters in file names show as question marks

    It's probably an iocharset=utf8 question, but it's not only the CD-ROM: native partitions mounted with the "defaults" options behave no better. What is the correct solution? The current locale is en_US.UTF-8, which should be OK.

    native partitions with "defaults" options behave no better
    What filesystems?
    For ntfs read http://wiki.archlinux.org/index.php/HAL
    Policies
    NOTE: this is deprecated from hal >= 0.5.10
    and
    mount.ntfs linking
    As of hal >= 0.5.10 the above policy may not work. This is a workaround forcing hal to use the ntfs-3g driver instead of the standard ntfs driver. Please note that this method will use the ntfs-3g driver for all NTFS drives on your system! As root, create a symbolic link from mount.ntfs to mount.ntfs-3g.
    # ln -s /sbin/mount.ntfs-3g /sbin/mount.ntfs
    Possible issues using this method:
    if mount is called with "-i" option it doesn't work
    possible issues with the kernels ntfs module
    Locale issues
    If you use KDE, you may have problems with filenames containing non-Latin characters. This happens because KDE's mounthelper does not parse the policies and locale option correctly. There is a workaround for this:
    1) Remove the "/sbin/mount.ntfs-3g" which is a symlink. code: rm /sbin/mount.ntfs-3g
    2) Replace it with a new bash script containing:
    #!/bin/bash
    /bin/ntfs-3g $1 $2 -o locale=en_US.UTF-8 #put your own locale here
    3) Make it executable: chmod +x /sbin/mount.ntfs-3g
    There is only a problem with partition labels containing spaces, so if you have such a label, replace the space with an underscore, otherwise when you try to mount it you will get an error.
    4) Add NoUpgrade=sbin/mount.ntfs-3g to pacman.conf.
    I think you understand Russian. Welcome to http://linuxforum.ru/index.php?showtopic=53488 and
    http://archlinux.org.ru/

  • Problem with the Non-English Characters

    Hello,
    I have been using Adobe Illustrator, but I have a huge problem with non-English characters in the Standard fonts. With the Pro fonts I have no problem with them. But when I'm using any Standard font in the Font Folio library, I cannot type "ğ", "İ", or "ş". I can add those letters in FontLab using the glyphs (scedilla, idotaccent, gbreve); most of the fonts already have those letters prepared, so I don't even have to redraw them. But I can't add those glyphs to every single font, because I don't have that kind of time and patience. Is there a better solution for this? Or is there a Font Folio pack in which all the fonts are Pro?
    I'm looking forward to your answers
    Thanks.

    Joel wrote: I'm told that this is the exact difference between Adobe's Standard and Pro fonts — the Pro fonts have additional glyphs, including those necessary for extended Latin script.
    Exactly. The Pro fonts have at a minimum the Adobe Western 3 character set, which is essentially western European + Adobe CE.
    > Standard fonts just have the basic English character set, with maybe a bit of help for Spanish and French.
    A lot more than that!
    > You're doing Turkish, right? Adobe's coverage for Turkish in its fonts is not great - some of the Pro fonts have Turkish coverage, many do not.
    This is false. Every single Adobe Pro font supports Turkish.
    To be clear:
    All Adobe Standard fonts support the following languages: Afrikaans, Basque, Breton, Catalan, Danish, Dutch, English, Finnish, French, Gaelic, German, Icelandic, Indonesian, Irish, Italian, Norwegian, Portuguese, Sami, Spanish, Swahili and Swedish.
    Adobe Pro fonts support those languages, plus AT LEAST: Croatian, Czech, Estonian, Hungarian, Latvian, Lithuanian, Polish, Romanian, Serbian (Latin), Slovak, Slovenian and Turkish. Some Pro fonts have more language support than this, such as Greek and/or Cyrillic, and additional extended Latin.
    See: http://www.adobe.com/type/browser/info/charsets.html
    Cheers,
    T

  • Flash player will not play flvs from folder with name containing non-Roman Chars

    We're trying to play Flash video as part of a program that can be installed to locations whose paths include non-Roman characters (European, Arabic, Chinese, etc.). When the installation path includes any non-Roman characters, video will not play: the non-Roman characters are garbled when Flash tries to load the FLVs. The correct non-Roman characters are: ńśćłł. When loading video, the path gets translated into a garbled form.
    We've found this happens whether we are trying to access the video from the xulrunner app or by taking the activities out and running them within an html frame.
    Audio on the other hand is no problem
    The path information for the xulrunner app from Process monitor is:
    Video not found from the video player.
    10:41:25.3337898    xulrunner.exe    2316    CreateFile    C:\Pro\ńśćłł\Flash dll Test\.shared\content\assets\video\VI-d9e37af511faa\Engplus1.flv    PATH NOT FOUND    Desired Access: Generic Read, Disposition: Open, Options: Sequential Access, Synchronous IO Non-Alert, Non-Directory File, Attributes: n/a, ShareMode: Read, AllocationSize: n/a
    10:41:25.3339452    xulrunner.exe    2316    CreateFile    C:\Pro\ńśćłł\Flash dll Test\.shared\content\assets\video\VI-d9e37af511faa\Engplus1.flv    PATH NOT FOUND    Desired Access: Generic Read, Disposition: Open, Options: Sequential Access, Synchronous IO Non-Alert, Non-Directory File, Attributes: n/a, ShareMode: Read, Write, AllocationSize: n/a
    Audio found.
    10:41:59.7659648    xulrunner.exe    2316    ReadFile    C:\Pro\ńśćłł\Flash dll Test\.shared\content\assets\audio\AU-fa6ed5d31f577\LG0CD1Track06.mp3    SUCCESS    Offset: 0, Length: 4,096, Priority: Normal
    10:41:59.7660482    xulrunner.exe    2316    ReadFile    C:\Pro\ńśćłł\Flash dll Test\.shared\content\assets\audio\AU-fa6ed5d31f577\LG0CD1Track06.mp3    SUCCESS    Offset: 4,096, Length: 4,096
    Video not found from a flash interactive
    10:43:50.0804572    xulrunner.exe    2316    CreateFile    C:\Pro\ńśćłł\Flash dll Test\.shared\content\assets\flash_interactive\content\FL-123456780l0l\assets\video\video.flv    PATH NOT FOUND    Desired Access: Generic Read, Disposition: Open, Options: Sequential Access, Synchronous IO Non-Alert, Non-Directory File, Attributes: n/a, ShareMode: Read, AllocationSize: n/a
    10:43:50.0806413    xulrunner.exe    2316    CreateFile    C:\Pro\ńśćłł\Flash dll Test\.shared\content\assets\flash_interactive\content\FL-123456780l0l\assets\video\video.flv    PATH NOT FOUND    Desired Access: Generic Read, Disposition: Open, Options: Sequential Access, Synchronous IO Non-Alert, Non-Directory File, Attributes: n/a, ShareMode: Read, Write, AllocationSize: n/a
    Audio found from a flash interactive
    10:43:50.1928413    xulrunner.exe    2316    ReadFile    C:\Pro\ńśćłł\Flash dll Test\.shared\content\assets\flash_interactive\content\FL-123456780l0l\assets\audio\journalist.mp3    SUCCESS    Offset: 0, Length: 4,096, Priority: Normal
    10:43:50.1929148    xulrunner.exe    2316    ReadFile    C:\Pro\ńśćłł\Flash dll Test\.shared\content\assets\flash_interactive\content\FL-123456780l0l\assets\audio\journalist.mp3    SUCCESS    Offset: 0, Length: 25,206, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal

    Not exactly sure what you are doing, but it is obvious that some part of the program you are running does not support multibyte characters like Unicode.
    If you are developing that application, make sure it is Unicode enabled.  If it is Flash (or Flash Player) that doesn't understand it, then you may not be able to do anything about it.
    If I were to encounter this situation, I would try using the DOS (8.3) path names: for example, C:\Documents and Settings is normally C:\DOCUME~1, and C:\Documents and Settings\All Users is C:\DOCUME~1\ALLUSE~1, etc.
    Use the DIR /X command to find out what the DOS folder names are.
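    If the short-name lookup has to happen from code rather than interactively with DIR /X, the Win32 call behind those 8.3 names is GetShortPathName. A minimal sketch (the path shown is hypothetical, and 8.3 name generation can be disabled on a volume, in which case no short form exists to fall back on):

        /* Print the DOS (8.3) form of a folder whose name contains non-ASCII
           characters. The path must already exist, because the short name is
           read back from the file system rather than computed. */
        #include <windows.h>
        #include <stdio.h>
        #include <wchar.h>

        int main(void)
        {
            /* hypothetical install path; \u0144\u015b\u0107\u0142\u0142 == "ńśćłł" */
            const wchar_t *longPath = L"C:\\Pro\\\u0144\u015b\u0107\u0142\u0142\\Flash dll Test";
            wchar_t shortPath[MAX_PATH];

            DWORD len = GetShortPathNameW(longPath, shortPath, MAX_PATH);
            if (len == 0 || len >= MAX_PATH) {
                fwprintf(stderr, L"GetShortPathNameW failed (error %lu)\n", GetLastError());
                return 1;
            }
            /* prints the 8.3 equivalents, e.g. ...\FLASHD~1 */
            wprintf(L"%ls -> %ls\n", longPath, shortPath);
            return 0;
        }

    Feeding the short form to the player avoids the non-ASCII characters entirely, at the cost of depending on 8.3 names being present on the volume.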

  • PDF generation for Non English Characters from ADF

    Hi
    We are using the piece of code below to generate a PDF from an ADF managed bean. It works fine. However, for non-English characters (e.g. Japanese, Vietnamese, Arabic) it puts '???' in place of the characters.
    I found a few blogs on this, e.g.:
    https://blogs.oracle.com/BIDeveloper/entry/non-english_characters_appears
    However, we are not using the BI Publisher product; we are using its APIs.
    Can anyone tell me where we need to set up fonts: within ADF, WebLogic, or the server?
    Input parameters are:
    a) XML data
    b) InputStream, i.e. the RTF template
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import oracle.apps.xdo.XDOException;
    import oracle.apps.xdo.template.FOProcessor;
    import oracle.apps.xdo.template.RTFProcessor;
        public static byte[] genPdfRep(String pOutFileType, byte[] pXmlOut, InputStream pTemplate) {
            byte[] dataBytes = null;
            try {
                // Process the RTF template to convert it to XSL-FO format
                RTFProcessor rtfp = new RTFProcessor(pTemplate);
                ByteArrayOutputStream xslOutStream = new ByteArrayOutputStream();
                rtfp.setOutput(xslOutStream);
                rtfp.process();
                // Use the XSL template and the data from the VO to generate the report
                // and return the report bytes
                ByteArrayInputStream xslInStream = new ByteArrayInputStream(xslOutStream.toByteArray());
                FOProcessor processor = new FOProcessor();
                ByteArrayInputStream dataStream = new ByteArrayInputStream(pXmlOut);
                processor.setData(dataStream);
                processor.setTemplate(xslInStream);
                ByteArrayOutputStream pdfOutStream = new ByteArrayOutputStream();
                processor.setOutput(pdfOutStream);
                byte outFileTypeByte = FOProcessor.FORMAT_PDF;
                processor.setOutputFormat(outFileTypeByte); // or FOProcessor.FORMAT_HTML
                processor.generate();
                dataBytes = pdfOutStream.toByteArray();
            } catch (XDOException e) {
                e.printStackTrace();
            }
            return dataBytes;
        }
    Appreciate your help.
    Thanks,
    Abhijit

    Fonts are defined in the template you use to generate the PDF. Your application adds the data, and both are processed by the FO processor. Now there are two possible causes of the '???':
    1. the data you sent to the template contains the '???' already
    2. the template can't digest the data (the special characters) and puts '???' in the pdf.
    Before going on, you have to find out which one is your problem. If the second one is the problem, you had better ask about it in a FOP forum, as you have to solve it by changing the template.
    Timo

  • Word Replacements for Non- English Characters

    Hi
    Does anyone have an idea on implementing Word Replacements for non-English characters in TCA DQM 11i?
    We are trying to identify, capture, and cleanse common accented characters like à, â, ê.
    However, the default language for replacement is American English, so even if we add these to the existing lists, it will not take effect.
    Is creating a new Word Replacement list for every language the solution? Any patch recommendations?
    Thanks in advance

    It seems that this is an issue that has popped up in various forums before, here's one example from last year:
    http://forum.java.sun.com/thread.jspa?forumID=16&threadID=490722
    This entry has some suggestions for handling mnemonics in resource bundles, and they would take care of translated mnemonics - as long as the translated values are restricted to the values contained in the VK_XXX keycodes.
    And since those values are basically the English (ASCII) character set plus a bunch of function keys, it doesn't solve the original problem: how to specify mnemonics that are not part of the English character set. The more I look at this, the less I understand the reason for making setMnemonic(char mnemonic) obsolete and making setMnemonic(int mnemonic) the default. If anything, this has made the method more difficult to use.
    I also don't understand the statement in the API about setMnemonic (char mnemonic):
    "This method is only designed to handle character values which fall between 'a' and 'z' or 'A' and 'Z'."
    If the type is "char", why would the character values be restricted to values between 'a' and 'z' or 'A' and 'Z'? I understand the need for the value to be restricted to one keystroke (eliminating the possibility of using ideographic characters), but why make it impossible to use all the Latin-1 and Latin-2 characters, for instance? (And is that in fact the case?) It is established practice on other platforms to be able to use accented characters, for instance.
    And if changes were made, why not enable the simple way of specifying a mnemonic that other platforms have implemented, by adding an '&' in front of the character?
    Sorry if this disintegrated into a rant - didn't mean to... :-) I'm sure there must be good reasons for the changes, would love to understand them.

  • Hexadecimal Non-Allowed Characters in a Unicode System

    We have a function module that we've written to replace non-permitted characters with a space in transfer rules. We see a lot of invisible hexadecimal characters coming in on free-form text fields. This works great for English. However, we have a Unicode system with other languages installed, and we are also getting the hex characters in other character sets.
    Has anyone dealt with this issue, and if so, what was your solution?
    Thanks!
    Al

    Hello aLaN,
    how r u ?
    Hey, we have faced a problem with hexadecimal characters, but not the same issue. In our case the problem was in the source system: in the DB tables we had some unwanted characters that were causing errors during data loading, particularly ERROR 18.
    So we resolved it by changing the source system data.
    I have already posted about the hexadecimal issue; the replies were:
    I think this is related to invalid-character issues, or the spaces setting in RSKC.
    Maybe you want to look at the following posts:
    Re: invalid characters
    /people/siegfried.szameitat/blog/2005/07/18/text-infoobjects-part-1
    Example:
    let us say..
    1. Check in RSKC for allowed characters..
    2. Add code in the update rule to restrict what the texts contain.
    If !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ are the allowed characters in the RSKC transaction, then any character other than these is 'invalid', including lowercase letters, and will throw the hex error.
    Since this comes from a database system, even a 'NULL' value from there is not visible to the eye and can cause the failure.
    Hope this helps
    Best Regards....
    Sankar Kumar
    91 98403 47141

  • [SOLVED] KDEmod - problem with mounting b/c of non-ASCII characters

    Hi guys!
    I finally set aside a few gigabytes for Arch Linux: it is no longer in a virtual machine. So far I've managed to configure everything with the excellent wiki. It's runnin' and kickin'. I ran across only one problem:
    When I insert a CD with a label that has non-ASCII characters (some Polish ones in my case) and I click on its icon in Konqueror, I get the message that "file such-and-such doesn't exist", and the Polish characters are clearly garbled (it is not a font problem; I double-checked). I can access the folder either via the console or via Konqueror if I go to the /media folder, though.
    Any ideas how I can fix it? If you need more info, let me know.
    Last edited by JeremyTheWicked (2008-05-31 14:46:07)

    You're welcome. Now it's advisable for you to edit the title of your initial post and add [SOLVED]. Perhaps clearer wording would be in order, too, for the benefit of the search engine. The problem seems a trifle in retrospect, but somehow it takes some effort to find the solution, doesn't it?

  • How can I remove non-numeric characters from a cell?

    I have an RTF file that I can open in Numbers. It puts each line in a separate cell. Each cell contains non-numeric and numeric characters. I'd like to delete the non-numeric characters so that I can add the numbers together. Is there a way to do this easily in Numbers that doesn't require doing it manually?
    Thanks,
    David

    Ok, David,
    This solution will work for values up to 99,000, and if there is a space in front of your amount. There are two parts for clarity, but you could wrap them up into one formula if you wanted to.
    B2 =FIND(" ",A2,LEN(A2)−9)
    C2 =MID(A2,B2,10)
    If there is a return before your amount (certain cells in your screenshot got me wondering), then the formula in column B is:
    =FIND("
    ",A2,1)
    It looks funny because it is finding the return.
    Let me know if this works for you.
    quinn

  • Servlet Displaying Quotation Marks as Non-Printable Characters

    I have a servlet which is reading an HTML file and displaying its contents. My problem is that, in the output, quotation marks in the source HTML (" and ') are being reproduced as non-printable characters. Furthermore, the same servlet prints the quotation marks fine under the Linux OS and Apache web server, but does not under the Windows (2000) OS and IIS web server (running j2sdk-1_3_0_02-win). Any suggestions would be appreciated. The code in question is below; "str" is the line from the file:
         FileReader freader = new FileReader(filePath);
         BufferedReader breader = new BufferedReader(freader);
         String str = null;
         while ((str = breader.readLine()) != null) {
             document = document + str + "\n";
         }
         freader.close();

    Technically, you don't need to add the "\n" in there anyway. Newlines mean nothing to an HTML file if all you're doing is displaying that file. The lack of a carriage return, when the HTML is parsed, is completely irrelevant.
    Also, when handling large String concatenations, it's always going to be more efficient to use StringBuffer.
    StringBuffer sbDocument = new StringBuffer();
    while ((str = breader.readLine()) != null) {
        sbDocument.append(str);
    }
    String document = sbDocument.toString();

  • Removing non printable characters from an excel file using powershell

    Hello,
    Does anyone know how to remove non-printable characters from an Excel file using PowerShell?
    thanks,
    jose.

    To add: Excel is a binary file. It cannot be managed easily via external methods. You can write a macro that can do this. Post in the Excel forum, explain what you are seeing, and get the MVPs there to show you how to use the macro facility to edit cells. Outside of cell text, "unprintable" characters are a normal part of Excel.
    ¯\_(ツ)_/¯
