Unicode support, change encoding setting

Hello,
I want to know how I can use Unicode support. When I type activities in my local language (Farsi), after scheduling I can't see the original characters; I see unknown characters (????? ???? ??? ????????).
I added the URIEncoding="UTF-8" parameter in the connector settings and changed the font in Primavera Web Access, but I didn't see any improvement.
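For what it's worth, a run of plain question marks usually means the Farsi characters were already pushed through a character set that cannot represent them somewhere between the client and the database, so the connector's URIEncoding and the web font alone may not be enough; the character set used by the database/connection matters too. A minimal illustration in plain Python (not Primavera code; "latin-1" merely stands in for any single-byte Western character set):

    # Forcing Farsi text through a character set that has no Farsi letters turns
    # every character into '?'; once stored that way, the original text is gone.
    word = "برنامه"  # a Farsi word, six letters
    print(word.encode("latin-1", errors="replace"))  # b'??????'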


Similar Messages

  • Trying to put two videos onto 1 disc, change encoder setting problems

    Videotaped two different events; trying to put them on the same disc. The first project, in iMovie on my MacBook Pro, was short (1 min. 27 sec.). The second project was long, 184.31 (med), Dimensions 640x360, size 2.3 GB (this is what it displays for the long movie). These videos weren't done in HD.
    When trying to put them on the same disc (standard disc, 4.7 GB) I get this message: "Your project exceeds the maximum content duration. To burn your DVD, change the encoder setting in the Project Info window." I tried to put the long one by itself onto a disc and still get the same message. Help please.

    Update: I just made sure that the APs go through the main router (AEBS) and do not talk to each other directly (which is what they were trying to do). I managed to restrict access to them so that only that AEBS (and the other computers through the AEBS, not directly) talks to them. After this everything was smooth sailing.

  • Where to change encoding setting?

    Here are my custom encoding settings for video: 03.06.2015-17.27.37 - HOLLY_HOUSE's library
    I wish to change bitrate encoding from VBR1 to VBR2. I'm sure it's possible but I can't find where to do that. Can you direct me there?
    thanks in advance!

    BillyDBrown
    This is a duplicate question which has already been answered in your other thread.
    how to access editing of a publishing preset
    What you were seeking was the Bitrate Settings/VBR, 2 pass in Premiere Elements 12/12.1 for
    Publish+Share/Computer/AVCHD with what appears to be Presets = MP4 - H.264 1920 x 1080p30.
    That setting was found under the Advanced Button/Video Tab of the preset.
    In Premiere Elements 13/13.1, Bitrate Settings/VBR, 2 pass is not offered.
    ATR

  • When I load certain websites the writing is all squashed up. I correct this by changing the character encoding setting. I am using the latest Apple Mac machine. Thanks in advance

    When I load certain websites the writing is all squashed up. I correct this by changing the character encoding setting. I am using the latest Apple Mac machine. Thanks in advance

    Thanks for that information!
    I'm sure I will be calling AppleCare, but the problem is, they charge for the phone calls, don't they? Because I don't have money to spend on being on the phone with a support service.
    On another note, it seemed like the only time my MacBook was working was when I had Snow Leopard without the 10.6.8 update that was supposed to be installed to prepare for OS X Lion.
    When I look at the information for my HD it says that I have 10.6.8, but that was the install that claimed to have failed and caused me to restart, resulting in all of the repeated problems.
    Also, because my computer is currently down and I've lost all my files, how would that affect the use of my iPhone? Because if it doesn't get fixed by the time OS 5 is released, how would I be able to upgrade?!

  • Changes to Unicode support in LabVIEW 8.6

    There seems to have been a change in the way LabVIEW handles alternative text input between 8.5.1 and 8.6.
    In 8.6, if I try to type something in Hebrew (alt+shift in Windows changes the input language), the words come out in the wrong order (the first word appears on the left instead of on the right). The actual order of the letters in each word appears correctly (right to left), although sometimes the first letter in the text gets stuck on the left side.
    If I try to use Hebrew in a label, I get a message telling me that I can't type a Unicode string into labels and enums.
    In all of these cases, copying a string from Notepad and pasting it into LabVIEW displays it correctly (including in labels).
    I have tried doing this with the UseUnicode INI key set both to T and to F, and it happens in both cases.
    This SUCKS. Big time. Really.
    I know that LabVIEW doesn't officially have support for right to left languages, but at least in previous versions it would sort of behave. I'm pretty sure this isn't what people who were expecting more Unicode support were thinking about.
    Does anyone have any idea for a workaround? The only two I can currently think of are copying and pasting, or writing a small VI which will reverse the text on demand, both of which are bad solutions.
    If there's a patch planned, fixing this will definitely get my vote as something that deserves going in there.
    P.S. I'm not sure why the Hebrew input is treated as Unicode. If I'm not mistaken, changing the input language in Windows should still result in ASCII characters, and in previous versions it did (the Hebrew chars in the code page LabVIEW uses started at E0 in 7.0, while in 8.6 the first letter comes out as D005 and the next letter as D105).
    P.P.S. Did I mention this sucks?
    Try to take over the world!
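    For what it's worth, the byte values quoted in the P.S. are consistent with the input arriving as UTF-16 rather than as Hebrew code-page bytes. A minimal sketch in plain Python (outside LabVIEW), using cp1255 as the Windows Hebrew code page:

        # Aleph is 0xE0 in the Hebrew code page but U+05D0 in Unicode; written as
        # UTF-16 little-endian bytes and read low byte first, aleph and bet come
        # out as "D005" and "D105" -- the values quoted above.
        print("א".encode("cp1255").hex())      # e0
        print("אב".encode("utf-16-le").hex())  # d005d105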

    I'm going to suggest something really strange as a workaround, but, trust me, I have historical precedent for suggesting this...
    Try editing the keyboard shortcuts for LabVIEW and change Quick Drop to be something other than ctrl+spacebar. 
    Why might this help the problem? When I developed the alignment grid, I thought that ctrl+# was a very intuitive shortcut key for toggling the grid on and off (since the hash symbol looks like a grid). What we found was that when the OS language was set to French, users could no longer type many of the extended grammar characters. The reason was that a critical scan code for the extended character sets overlapped with the scancode for ctrl+shift+3. In LV 8.6, we added ctrl+space, which is the first non-alphabetic shortcut key we've added since ctrl+#. It might be that whatever keyboard scancode is used for Hebrew happens to overlap, and the code is being translated into a shortcut key for LV instead of a typed character.
    I have no evidence that this is what is happening, but it is the only analogous situation that I know of, so I suggest it as something to try.

  • [SOLVED] Termite terminfo not reporting proper unicode support?

    I've just switched from urxvt to termite due to its proper support for fontconfig. However, I'm noticing an issue in some ncurses programs that make use of the "extended ASCII character set" printing functionality.
    According to this question, ncurses determines what characters to print for a specific extended ASCII character based on the current terminal type.
    I suspect that termite isn't properly reporting Unicode support, since when an ncurses program wants to print arrows (for example) it will print (<, ^, v, >) instead of (←, ↑, ↓, →).
    I've been trying to read terminfo(5) to understand what I might need to change in the terminfo file (xterm-termite) for it to properly report that it supports unicode, but it's fairly cryptic to me.
    Last edited by EvanPurkhiser (2013-05-16 07:53:26)

    I've made a little bit of progress on determining why Termite doesn't report support for _some_ extended ASCII characters.
    Looking at Termite's terminfo file, there is an option in there named 'acsc', which describes the terminal's support for extended ASCII characters. There is some documentation on the options available for this, and I noticed that the arrow characters are not included in Termite's terminfo.
    If I switch my $TERM to screen-256color (which DOES print the extended ASCII arrows properly) and run `infocmp` I see that it does include these characters
    acsc=++\,\,--..00``aaffgghhiijjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~,
    I thought I might be able to simply add the proper characters to Termite's terminfo acsc option and it would fix it, but it didn't. Instead, now when it tries to print arrows, instead of printing the wedges (^, v) or the Unicode characters (like I want), it just prints the corresponding extended ASCII mapping characters (-, .).
    The next thing I tried was copying ALL of the screen-256color terminfo options into the source termite.terminfo file. This had the same effect as before.
    I also tried completely replacing the (already compiled) xterm-termite terminfo file with the (already compiled) screen-256color terminfo file (`cp /usr/share/terminfo/s/screen-256color /usr/share/terminfo/x/xterm-termite`). This DID work, but obviously it is no different than just `export TERM=screen-256color`.
    What am I missing? Is there some kind of character map that gets encoded into the terminfo file during compilation?
    Last edited by EvanPurkhiser (2013-05-16 07:05:02)
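    As a quick way to see the behaviour outside any particular ncurses program, here is a minimal sketch using Python's curses binding (the ACS_* names are standard curses symbols): the very same program prints real arrows or the <, ^, v, > fallbacks depending on which terminfo entry $TERM points at (and on the locale), not on the program itself.

        # Draw the four ACS arrows; how they render is decided by ncurses from the
        # active terminfo entry (acsc and related capabilities), not by this code.
        import curses
        import locale

        locale.setlocale(locale.LC_ALL, "")  # let ncursesw pick up the UTF-8 locale

        def main(stdscr):
            arrows = (curses.ACS_LARROW, curses.ACS_UARROW,
                      curses.ACS_DARROW, curses.ACS_RARROW)
            for i, ch in enumerate(arrows):
                stdscr.addch(0, i * 2, ch)
            stdscr.getch()  # wait for a key so the output stays visible

        curses.wrapper(main)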

  • Change character set

    Hi
    Can anyone tell me how to change the character set?
    I tried with ALTER SESSION but it doesn't work.
    Thanks

    Article from Metalink
    Doc ID:      Note:66320.1
    Subject:      Changing the Database Character Set or the Database National Character Set
    Type:      BULLETIN
    Status:      PUBLISHED
         Content Type:      TEXT/PLAIN
    Creation Date:      23-OCT-1998
    Last Revision Date:      12-DEC-2003
    PURPOSE
    =======
    To explain how to change the database character set or national character set of an existing Oracle8(i) or Oracle9i database without having to recreate the database.
    1. SCOPE & APPLICATION
    ======================
    The method described here is documented in the Oracle 8.1.x and Oracle9i documentation. It is not documented but it can be used in version 8.0.x. It does not work in Oracle7.
    The database character set is the character set of CHAR, VARCHAR2, LONG, and CLOB data stored in the database columns, and of SQL and PL/SQL text stored in the Data Dictionary. The national character set is the character set of NCHAR, NVARCHAR2, and NCLOB data. In certain database configurations the CLOB and NCLOB data are stored in the fixed-width Unicode encoding UCS-2. If you are using CLOB or NCLOB please make sure you read section "4. HANDLING CLOB AND NCLOB COLUMNS" below in this document.
    Before changing the character set of a database make sure you understand how Oracle deals with character sets. Before proceeding please refer to [NOTE:158577.1] "NLS_LANG Explained (How Does Client-Server Character Conversion Work?)". See also [NOTE:225912.1] "Changing the Database Character Set - an Overview" for a general discussion about the various methods of migration to a different database character set. If you are migrating an Oracle Applications instance, read [NOTE:124721.1] "Migrating an Applications Installation to a New Character Set" for specific steps that have to be performed. If you are migrating from 8.x to 9.x please have a look at [NOTE:140014.1] "ALERT: Oracle8/8i to Oracle9i Using New "AL16UTF16"" and other referenced notes below.
    Before using the method described in this note it is essential to do a full backup of the database and to use the Character Set Scanner utility to check your data. See the section "2. USING THE CHARACTER SET SCANNER" below.
    Note that changing the database or the national character set as described in this document does not change the actual character codes, it only changes the character set declaration. If you want to convert the contents of the database (character codes) from one character set to another you must use the Oracle Export and Import utilities. This is needed, for example, if the source character set is not a binary subset of the target character set, i.e. if a character exists in the source and in the target character set but not with the same binary code. All binary subset-superset relationships between character sets recognized by the Oracle Server are listed in [NOTE:119164.1] "Changing Database Character Set - Valid Superset Definitions".
    Note: Varying width character sets (like UTF8) are not supported as national character sets in Oracle8(i) (see [NOTE:62107.1]). Thus, changing the national character set from a fixed width character set to a varying width character set is not supported in Oracle8(i). NCHAR types in Oracle8 and Oracle8i were designed to support special Oracle-specific fixed-width Asian character sets, which were introduced to provide higher performance processing of Asian character data. Examples of these character sets are: JA16EUCFIXED, JA16SJISFIXED, ZHT32EUCFIXED. For a definition of varying width character sets see also section "4. HANDLING CLOB AND NCLOB COLUMNS" below.
    WARNING: Do not use any undocumented Oracle7 method to change the database character set of an Oracle8(i) or Oracle9i database. This will corrupt the database.
    2. USING THE CHARACTER SET SCANNER
    ==================================
    Character data in the Oracle 8.1.6 and later database versions can be efficiently checked for possible character set migration problems with the help of the Character Set Scanner utility. This utility is included in the Oracle Server 8.1.7 software distribution and the newest Character Set Scanner version can be downloaded from the Oracle Technology Network site, http://otn.oracle.com
    The Character Set Scanner on OTN is available for a limited number of platforms only, but it can be used with databases on other platforms in the client/server configuration -- as long as the database version matches the Character Set Scanner version and the platforms are either both ASCII-based or both EBCDIC-based. It is recommended to use the newest Character Set Scanner version available from the OTN site.
    The Character Set Scanner is documented in the following manuals:
    - "Oracle8i Documentation Addendum, Release 3 (8.1.7)", Chapter 3
    - "Oracle9i Globalization Support Guide, Release 1 (9.0.1)", Chapter 10
    - "Oracle9i Database Globalization Support Guide, Release 2 (9.2)", Chapter 11
    Note: The Character Set Scanner coming with Oracle 8.1.7 and Oracle 9.0.1 does not have a separate version number. It reports the database release number in its banner. This version of the Scanner does not check for illegal character codes in a database if the FROMCHAR and TOCHAR (or FROMNCHAR and TONCHAR) parameters have the same value (i.e. you simulate migration from a character set to itself). The Character Set Scanner 1.0, available on OTN, reports its version number as x.x.x.1.0, where x.x.x is the database version number. This version adds a few bug fixes and it supports FROMCHAR=TOCHAR provided it is not UTF8. The Character Set Scanner 1.1, available on OTN and with Release 2 (9.2) of the Oracle Server, reports its version number as v1.1 followed by the database version number. This version adds more bug fixes and full support for FROMCHAR=TOCHAR. None of the above versions of the Scanner can correctly analyze CLOB or NCLOB values if the database or the national character set, respectively, is multibyte. The Scanner reports such values randomly as Convertible or Lossy. Version 1.2 of the Scanner will mark all such values as Changeless (as they are always stored in the Unicode UCS-2 encoding and thus they do not change when the database or national character set is changed from one multibyte to another). Character Set Scanner 2.0 will correctly check CLOBs and NCLOBs for possible data loss when migrating from a multibyte character set to its subset.
    To verify that your database contains only valid codes, specify the new database character set in the TOCHAR parameter and/or the new national character set in the TONCHAR parameter. Specify FULL=Y to scan the whole database. Set the ARRAY and PROCESS parameters depending on your system's resources to speed up the scanning. FROMCHAR and FROMNCHAR will default to the original database and national character sets.
    The Character Set Scanner should report only Changeless data in both the Data Dictionary and in application data. If any Convertible or Exceptional data are reported, the ALTER DATABASE [NATIONAL] CHARACTER SET statement must not be used without further investigation of the source and type of these data.
    In situations in which the ALTER DATABASE [NATIONAL] CHARACTER SET statement is used to repair an incorrect database character set declaration rather than to simply migrate to a new wider character set, you may be advised by Oracle Support Services analysts to execute the statement even if Exceptional data are reported. For more information see also [NOTE:225912.1] "Changing the Database Character Set - a short Overview".
    3. CHANGING THE DATABASE OR THE NATIONAL CHARACTER SET
    ======================================================
    Oracle8(i) introduces a new documented method of changing the database and national character sets. The method uses two SQL statements, which are described in the Oracle8i National Language Support Guide:
       ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>
       ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>
    The database name is optional. The character set name should be specified without quotes, for example:
       ALTER DATABASE CHARACTER SET WE8ISO8859P1
    To change the database character set perform the following steps. Note that some of them have been erroneously omitted from the Oracle8i documentation:
    1. Use the Character Set Scanner utility to verify that your database contains only valid character codes -- see "2. USING THE CHARACTER SET SCANNER" above.
    2. If necessary, prepare CLOB columns for the character set change -- see "4. HANDLING CLOB AND NCLOB COLUMNS" below. Omitting this step can lead to corrupted CLOB/NCLOB values in the database. If SYS.METASTYLESHEET (STYLESHEET) is populated (9i and up only) then see [NOTE:213015.1] "SYS.METASTYLESHEET marked as having convertible data (ORA-12716 when trying to convert character set)" for the actions that need to be taken.
    3. Make sure the parallel_server parameter in INIT.ORA is set to false or is not set at all.
    4. Execute the following commands in Server Manager (Oracle8) or sqlplus (Oracle9), connected as INTERNAL or "/ AS SYSDBA":
       SHUTDOWN IMMEDIATE; -- or NORMAL
       <do a full database backup>
       STARTUP MOUNT;
       ALTER SYSTEM ENABLE RESTRICTED SESSION;
       ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
       ALTER SYSTEM SET AQ_TM_PROCESSES=0;
       ALTER DATABASE OPEN;
       ALTER DATABASE CHARACTER SET <new_character_set>;
       SHUTDOWN IMMEDIATE; -- OR NORMAL
       STARTUP RESTRICT;
    5. Restore the parallel_server parameter in INIT.ORA, if necessary.
    6. Execute the following commands:
       SHUTDOWN IMMEDIATE; -- OR NORMAL
       STARTUP;
       The double restart is necessary in Oracle8(i) because of an SGA initialization bug, fixed in Oracle9i.
    7. If necessary, restore CLOB columns -- see "4. HANDLING CLOB AND NCLOB COLUMNS" below.
    To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both statements together if you wish.
    Error Conditions
    ----------------
    A number of error conditions may be reported when trying to change the database or national character set. In Oracle8(i) the ALTER DATABASE [NATIONAL] CHARACTER SET statement will return:
       ORA-01679: database must be mounted EXCLUSIVE and not open to activate
    - if you do not enable restricted session
    - if you startup the instance in PARALLEL/SHARED mode
    - if you do not set the number of queue processes to 0
    - if you do not set the number of AQ time manager processes to 0
    - if anybody is logged in apart from you.
    This error message is misleading. The command requires the database to be open, but only one session, the one executing the command, is allowed.
    For the above error conditions Oracle9i will report one of the errors:
       ORA-12719: operation requires database is in RESTRICTED mode
       ORA-12720: operation requires database is in EXCLUSIVE mode
       ORA-12721: operation cannot execute when other sessions are active
    Oracle9i can also report:
       ORA-12718: operation requires connection as SYS
    if you are not connected as SYS (INTERNAL, "/ AS SYSDBA").
    If the specified new character set name is not recognized, Oracle will report one of the errors:
       ORA-24329: invalid character set identifier
       ORA-12714: invalid national character set specified
       ORA-12715: invalid character set specified
    The ALTER DATABASE [NATIONAL] CHARACTER SET command will only work if the old character set is considered a binary subset of the new character set. Oracle Server 8.0.3 to 8.1.5 recognizes US7ASCII as the binary subset of all ASCII-based character sets. It also treats each character set as a binary subset of itself. No other combinations are recognized. Newer Oracle Server versions recognize additional subset/superset combinations, which are listed in [NOTE:119164.1]. If the old character set is not recognized as a binary subset of the new character set, the ALTER DATABASE [NATIONAL] CHARACTER SET statement will return:
    - in Oracle 8.1.5 and above: ORA-12712: new character set must be a superset of old character set
    - in Oracle 8.0.5 and 8.0.6: ORA-12710: new character set must be a superset of old character set
    - in Oracle 8.0.3 and 8.0.4: ORA-24329: invalid character set identifier
    You will also get these errors if you try to change the character set of a US7ASCII database that was started without a (correct) ORA_NLSxx parameter. See [NOTE:77442.1].
    It may be necessary to switch off the superset check to allow changes between formally incompatible character sets, to solve certain character set problems, or to speed up migration of huge databases. Oracle Support Services may pass the necessary information to customers after verifying the safety of the change for the customers' environments.
    If in Oracle9i an ALTER DATABASE NATIONAL CHARACTER SET is issued and there are N-type columns which contain data, then this error is returned:
       ORA-12717: Cannot ALTER DATABASE NATIONAL CHARACTER SET when NCLOB data exists
    The error only speaks about NCLOB, but NCHAR and NVARCHAR2 are also checked; see [NOTE:2310895.9] for bug [BUG:2310895].
    4. HANDLING CLOB AND NCLOB COLUMNS
    ==================================
    Background
    ----------
    In a fixed width character set the codes of all characters have the same number of bytes. Fixed width character sets are: all single-byte character sets and those multibyte character sets which have names ending with 'FIXED'. In Oracle9i the character set AL16UTF16 is also fixed width. In a varying width character set the codes of different characters may have a different number of bytes. All multibyte character sets except those with names ending with FIXED (and except the Oracle9i AL16UTF16 character set) are varying width.
    Single-byte character sets are character sets with names of the form xxx7yyyyyy and xxx8yyyyyy. Each character code of a single-byte character set occupies exactly one byte. Multibyte character sets are all other character sets (including UTF8). Some -- usually most -- character codes of a multibyte character set occupy more than one byte.
    CLOB values in a database whose database character set is fixed width are stored in this character set. CLOB values in an Oracle 8.0.x database whose database character set is varying width are not allowed. They have to be NULL.
    CLOB values in an Oracle >= 8.1.5 database whose database character set is varying width are stored in the fixed width Unicode UCS-2 encoding. The same holds for NCLOB values and the national character set. The UCS-2 storage format of character LOB values, as implemented in Oracle8i, ensures that calculation of character positions in LOB values is fast. Finding the byte offset of a character stored in a varying width character set would require reading the whole LOB value up to this character (possibly 4GB). In fixed width character sets the byte offsets are simply character offsets multiplied by the number of bytes in a character code. In UCS-2, byte offsets are simply twice the character offsets. As the Unicode character set contains all characters defined in any other Oracle character set, there is no data loss when a CLOB/NCLOB value is converted to UCS-2 from the character set in which it was provided by a client program (usually the NLS_LANG character set).
    CLOB Values and the Database Character Set Change
    -------------------------------------------------
    In Oracle 8.0.x CLOB values are invalid in varying width character sets. Thus you must delete all CLOB column values before changing the database character set to a varying width character set.
    In Oracle 8.1.5 and later CLOB values are valid in varying width character sets, but they are converted to Unicode UCS-2 before being stored. But the UCS-2 encoding is not a binary superset of any other Oracle character set. Even the codes of basic ASCII characters are different, e.g. the single-byte code for "A"=0x41 becomes the two-byte code 0x0041. This implies that even if the new varying width character set is a binary superset of the old fixed width character set, and thus VARCHAR2/LONG character codes remain valid, the fixed width character codes in CLOB values will no longer be valid in UCS-2. As mentioned above, the ALTER DATABASE [NATIONAL] CHARACTER SET statement does not change character codes. Thus, before changing a fixed width database character set to a varying width character set (like UTF8) in Oracle 8.1.5 or later, you first have to export all tables containing non-NULL CLOB columns, then truncate these tables, then change the database character set and, finally, import the tables back into the database. The import step will perform the required conversion.
    If you omit the steps above, the character set change will succeed in Oracle8(i) (Oracle9i disallows the change in such a situation) and the CLOBs may appear to be correctly legible, but as their encoding is incorrect they will cause problems in further operations. For example, CREATE TABLE AS SELECT will not correctly copy such CLOB columns. Also, after installation of the 8.1.7.3 server patchset the CLOB columns will no longer be legible.
    LONG columns are always stored in the database character set and thus they behave like CHAR/VARCHAR2 with respect to the character set change. BLOBs and BFILEs are binary raw datatypes and their processing does not depend on any Oracle character set setting.
    NCLOB Values and the National Character Set Change
    --------------------------------------------------
    The above discussion about changing the database character set and exporting and importing CLOB values is theoretically applicable to the change of the national character set and to NCLOB values.
    But as varying width character sets are not supported as national character sets in Oracle8(i), changing the national character set from a fixed width character set to a varying width character set is not supported at all.
    Preparing CLOB Columns for the Character Set Change
    ---------------------------------------------------
    Take a backup of the database. If using Advanced Replication or the deferred transactions functionality, make sure that there are no outstanding deferred transactions with CLOB parameters, i.e. the DEFLOB view must have no rows with a non-NULL CLOB_COL column; to make sure that the replication environment remains consistent, use only recommended methods of purging the deferred transaction queue, preferably quiescing the replication environment. Then:
    - If changing the database character set from a fixed width character set to a varying width character set in Oracle 8.0.x, set all CLOB column values to NULL -- you are not allowed to use CLOB columns after the character set change.
    - If changing the database character set from a fixed width character set to a varying width character set in Oracle 8.1.5 or later, perform a table-level export of all tables containing CLOB columns, including SYSTEM's tables. Set NLS_LANG to the old database character set for the Export utility. Then truncate these tables.
    Restoring CLOB Columns after the Character Set Change
    -----------------------------------------------------
    In Oracle 8.1.5 or later, after changing the character set as described above (steps 3. to 6.), restore the CLOB columns exported in step 2. by importing them back into the database. Set NLS_LANG to the old database character set for the Import utility to avoid IMP-16 errors and data loss.
    RELATED DOCUMENTS
    =================
    [NOTE:13856.1] V7: Changing the Database Character Set -- This note has limited distribution, please contact Oracle Support
    [NOTE:62107.1] The National Character Set in Oracle8
    [NOTE:119164.1] Changing Database Character Set - Valid Superset Definitions
    [NOTE:118242.1] ALERT: Changing the Database or National Character Set Can Corrupt LOB Values
    [NOTE:158577.1] NLS_LANG Explained (How Does Client-Server Character Conversion Work?)
    [NOTE:140014.1] ALERT: Oracle8/8i to Oracle9i Using New "AL16UTF16"
    [NOTE:159657.1] Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i (incl. 9.2)
    [NOTE:124721.1] Migrating an Applications Installation to a New Character Set
    Oracle8i National Language Support Guide
    Oracle8i Release 3 (8.1.7) Readme - Section 18.12 "Restricted ALTER DATABASE CHARACTER SET Command Support (CLOB and NCLOB)"
    Oracle8i Documentation Addendum, Release 3 (8.1.7) - Chapter 3 "New Character Set Scanner Utility"
    Oracle8i Application Developer's Guide - Large Objects (LOBs), Release 2 - Chapter 2 "Basic Components"
    Oracle8 Application Developer's Guide, Release 8.0 - Chapter 6 "Large Objects (LOBs)", Section "Introduction to LOBs"
    Oracle9i Globalization Guide, Release 1 (9.0.1)
    Oracle9i Database Globalization Guide, Release 2 (9.2)
    For further NLS / Globalization information you may start here: [NOTE:150091.1] Globalization Technology (NLS) Library index.
    Joel Pérez

  • Unicode support

    It seems there is no way to display UTF-8 encoded ePub files correctly in DE 1.5. If uncompressed, the HTML content looks fine in a browser. In DE all common accented characters are substituted with the ? character.
    There are two similar topics on this forum:
    http://www.adobeforums.com/webx/.59b6343e
    http://www.adobeforums.com/webx/.59b60d2d
    My ePub file was generated via the DocBook XSL Stylesheets and there is no problem displaying it in FBReader.
    I suppose it is caused by the default font, which probably contains a very limited range of characters. Unfortunately there is no way to select a better one (as is possible in FBReader).
    Something about changing the default font was "discussed" here: http://www.adobeforums.com/webx/.59b5e8b7
    If I had met this issue 10 years ago, I would understand. It is hard to believe that anybody these days would offer software for reading text with no support for non-Roman languages.
    What I would suggest is to deliver DE with a very complete font, for example MinionPro (Latin, Cyrillic, CE, Greek, etc.). If this is impossible, please add a feature for selecting the default font.
    If I am totally wrong, clarification of this issue is welcome.
    OS - Win XP Pro SP2
    Browser - Firefox 3.0.1

    Hi Jim--
    Thanks very much for the explanation on this...
    You know, Adobe could save us all a lot of time if the default faux fonts included more Unicode support....
    (I've spent probably a day overall trying to figure out what the issue is, and now I will spend the rest of the day learning/testing embedding a custom font in an epub, just for Digital Editions...)
    Even Mobipocket and Kindle are getting these entities right in the three books I'm working on, with their default fonts. (We're not talking about super-obscure characters. Try "Milosevic" with the proper characters, and you get a ? in DE.)
    Is there any chance Adobe can get the default fonts smarter in 1.8..?
    Thanks for any help/thoughts on this front...I'm sure I don't understand all the issues...it just seems like such an obvious straightforward thing to fix.
    --Kate

  • HT1645 Whether I use a DL DVD or just a DVD-R, it tells me there was an error during multiplexing. I have tried changing every setting (best performance, professional quality, etc.) and still get the same error message every time I try to burn to DVD.

    Whether I use a DL DVD or just a DVD-R, it tells me there was an error during multiplexing. I have tried changing every setting (best performance, professional quality, etc.) and still get the same error message every time I try to burn to DVD.

    Welcome to the forums.
    Looks like that error message does refer to encoding (see http://support.apple.com/kb/HT1645) but a problem could be that your video has to be reloaded into iDVD if you started the encoding process and then switched settings.  You could start a new iDVD project, or try deleting encoded assets.
    Let's start from the beginning -- what length of video are you trying to fit on the DVD? (this will indicate what encoding options you have).  Double layer DVDs can be finicky and are used for video more than 2 hrs long.
    Use good quality media -- Verbatim is often recommended around here, and I've used Sony DVD-R with good results.
    For a detailed treatise, see:  https://discussions.apple.com/thread/3926901?tstart=0
    John

  • Unicode support - unsupported items?

    Hello everyone,
    I've been looking around for some detailed information concerning the support of Unicode in LabVIEW.
    There is some valuable information out there, workarounds and whatnot, but there are still things that need to be covered.
    Some valuable LabVIEW & Unicode information can be found here:
    A List of Tips and Tools for using Unicode in LabVIEW
    After reading the document above, I came to realize that there are some LabVIEW items that are not fully supported.
    Some of these items are essential for application development, and therefore someone must have run into this and come up with a solution.
    I would like to gather all those solutions here, and this will hopefully help others in the future.
    For now, I really hope that someone can enlighten me with some of their valuable "tricks" since I need to deploy my application soon.
    Here's a list of items that I've had trouble with when it comes to localizing them:
    - Run-time menus: As far as I know, there is no Unicode support for them. Is there a way to display multi-byte characters in them? I have even tried changing my system locale settings, and nothing. My run-time menu is generated dynamically.
    - Page Captions (names of the tabs in a tab control): I found ONE workaround for this item, overlapping transparent string controls over the page captions, but is that the only way around? I have lost the link to this solution; if anyone can spare me the link to the document, it'd be great!
    - Window titles: Even when changing the locale settings, bizarre characters appear instead of the correct value. Even when the value doesn't have special characters (like in the picture below, where the title should be "Configuration Panel"), only the letter "C" appears. I wonder if this is related to the "spacing" between the Unicode characters? LabVIEW displays "C o n f i g u r a t i o n  P a n e l " in a string control when I read the Unicode data from my .xml file. Apparently, someone has had the same problem: Unicode supportable VI title and 'Tip strips'. I would like to know more about the "tweaking the language settings of your host computer" method. (See the byte-order sketch just after this post.)
    - Tips and Descriptions of ANY control & indicator: I found nothing here, but I am probably blindfolded.
    - Ring control "drop-down" list: This might just be a misconception on my part, but take a look at the attached picture: apparently, the drop-down list on the ring control only displays the Chinese language correctly, and displays only 1 letter for the other languages (English, Français and Español). My application loads the available languages depending on the localization files that it found in a specific directory. In this case, Chinese.xml, English.xml, French.xml and Spanish.xml were in the folder at the time of the screenshot. The xml files are encoded in UCS-2 Little Endian. I have tried prepending the "FFFE" BOM to each value, with no success.
    If any of you have dealt with the problems listed above, I would GREATLY appreciate your input. Also, if there are any other controls that I didn't mention and are not fully supported by LabVIEW when Unicode is enabled, please let me know, and I'll add them to the list!
    Things to know:
    My application is currently being developed in LabVIEW 2010 SP1
    I'm using Arial Unicode MS as my default font everywhere on my application.
    Other posts & references:
    How to make my application programmatically switch between English and Russian
    Thanks for your time,
    Jorge
    Attachments:
    Language selector.png ‏32 KB
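    A note on the window-title item above: both symptoms (only "C" showing up, and "C o n f i g u r a t i o n  P a n e l" in a string control) are what UCS-2/UTF-16-LE text looks like when something reads it as single-byte text. A minimal sketch in plain Python (not LabVIEW) to show the effect:

        # Every ASCII letter in UTF-16-LE is followed by a 0x00 byte: a reader that
        # stops at the first NUL keeps only "C", and one that renders each NUL as a
        # blank shows the spaced-out string.
        raw = "Configuration Panel".encode("utf-16-le")
        print(raw[:8])                                     # b'C\x00o\x00n\x00f\x00'
        print(raw.split(b"\x00")[0].decode("ascii"))       # C
        print(raw.decode("latin-1").replace("\x00", " "))  # C o n f i g u r a t i o n   P a n e l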

    Thanks for your reply Josh,
    I had already looked at that article. That's where I got the idea of building my run-time menu dynamically instead of having multiple ones (one per language). However, it is not relevant to the Unicode issue, I'm afraid. As far as I know, run-time menus simply DO NOT support Unicode (for now).
    My question is, how did others work around this issue? Did they decide not to include any RTM in their application? Did they give up and try a different approach? If so, which?
    I am mainly looking for solutions available today, preferably from people that have had the same problems. We all know that LabVIEW might eventually support Unicode completely (let's hope it will) but in the meantime, I am sure there are some good "tricks" out there.
    Jorge

  • The "always allow" button is grayed out in settings regarding cookies, and I can not find where to change the setting.  (Restrictions are not on.)

    The "always allow" button is grayed out in settings regarding cookies, and I can not find where to change the setting.  (Restrictions are not on.)  Do you know where I go to change the setting to allow me to "always allow" cookies?

    Hi lisaarnett111,
    If you are having issues turning on Always Allow for cookies in Safari on your iPad, you may want to check to make sure that you don't have Private Browsing enabled, as noted in the following article:
    Turn Private Browsing on or off on your iPhone, iPad, or iPod touch - Apple Support
    Regards,
    - Brenden

  • I am trying to find out if I can change a setting of the calendar in my iPhone.   When I view calendar, in month, I would like to view it with the starting day of the week being Monday, not Sunday.  Is it possible to make this change? SS

    I am trying to find out if I can change a setting of the calendar on my iPhone.
    When I view the calendar in month view, I would like the starting day of the week to be Monday, not Sunday. Is it possible to make this change?

    Hello SMEvans32
    You can use iCloud to share the Calendar, that way she will always be up to date on that particular section of your work calendar. If you want to use iCloud, I would recommend backing up so you have a safe copy of your data.
    iCloud: Calendar sharing overview
    http://support.apple.com/kb/PH2689
    iCloud Setup
    http://www.apple.com/icloud/setup/
    Thanks for using Apple Support Communities.
    Regards,
    -Norm G.

  • Viewing Chinese Characters / Encoding setting in SQL Developer

    Hi all,
    I am new to SQL Developer 1.1. I have just downloaded the tool yesterday.
    I have a table where there "should" be Chinese characters in an NVARCHAR2 column. But I see only inverted question marks when displaying that data in SQL Developer.
    I know from a resource on the web that SQL Developer is able to display Chinese characters (see http://awads.net/wp/2006/07/06/sql-developer-and-utf8/ ).
    The NLS_NCHAR_CHARACTERSET is set to AL16UTF16; the NLS_CHARACTERSET is set to WE8ISO8859P1.
    The encoding under Tools->Preferences->Environment is set to "Cp1252".
    What influence does this Encoding setting actually have, and do I have to change it to view the data?
    In addition, I have to admit that we do not really know whether the data entered the DB correctly, that is, as Chinese characters. Conversion errors may have occurred in the application that writes the data into the database. Actually, I want to verify that. So, if I have the right settings configured for seeing Chinese characters and I still see only inverted question marks, I can conclude that the data actually entered the database corrupted, and the error is not a display issue with SQL Developer but rather an error in the application that writes the data to the database.
    Thanks to any answers in advance!
    Regards,
    Philipp Hinnah
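    One way to separate a display problem from data that was already lost on the way in is to look at the stored bytes themselves (Oracle's DUMP function shows them); inverted question marks are often a replacement character that was stored as real data during such a conversion. As a plain illustration of lossless versus lossy storage, a minimal Python sketch (no Oracle involved; latin-1 stands in for WE8ISO8859P1 and UTF-16-BE for AL16UTF16):

        # AL16UTF16, the national character set here, can hold Chinese; a Western
        # single-byte character set cannot, so any hop through the latter replaces
        # the characters for good -- no later font or encoding setting brings them back.
        s = "中文"
        print(s.encode("utf-16-be").hex())            # 4e2d6587 -- real code points
        print(s.encode("latin-1", errors="replace"))  # b'??' -- lost before storage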

    Hi,
    I am able to view Chinese characters in VARCHAR2 columns, FUNCTIONS & PROCEDURES.
    My settings are:
    1. Developer encoding is X-ORACLE-AL32UTF8.
    2. Control Panel:
       Regional & Language Options:
          Standard & Format = English (US)
          Location = US
          Under Advanced = China (PRC)
       System:
          Environment Variables:
             Variable Name = NLS_LANG
             Variable Value = SIMPLIFIED CHINESE_CHINA.ZHS16GBK
    3. Windows Registry for Oracle's NLS_LANG: all using SIMPLIFIED CHINESE_CHINA.ZHS16GBK.
    4. Oracle database created using SIMPLIFIED CHINESE_CHINA.ZHS16GBK and AL32UTF8.
    HTH
    Zack

  • Problem: slow connection. I have an iMac (2007) running on Mac OS X 10.6.8. I just had my internet speed increased from 1 to 3 Mbps but my connection slowed down considerably. Do I have to change any setting?

    Problem: slow connection. I have an iMac (2007) running on Mac OS X 10.6.8. I just had my internet speed increased from 1 to 3 Mbps but my connection slowed down considerably. Do I have to change any setting on the computer, and how?

    One possibility is that you may need to upgrade your Cable/DSL modem if you have an older unit. ISPs that provide your Internet service have in recent years been switching from DOCSIS 2.0 modems to DOCSIS 3.0 modems, which handle the new higher data streaming speeds the ISPs are offering. I would suggest you call your ISP's technical support and inquire about this possibility.

  • My Hard Disk setting has been changed to no access for everyone and I can't open my Mac. Please tell me how I can log in as an admin to change the setting, because I have a lot of data on my hard drive.

    My Hard Disk setting has been changed to no access for everyone and I can't open my Mac. Please tell me how I can log in as an admin to change the setting, because I have a lot of data on my hard drive.

    Read and follow Apple Support Communities contributor Niel's User Tip: kmosx: I accidentally set a disk's permissions to No Access

Maybe you are looking for

  • Script Alert "You should only run scripts from a trusted source."

    I created a small javascript which opens Photoshop and resizes some images. I want to be able to double-click the .JSX file from Windows Explorer, and have Photoshop execute the script. Likewise, I want to be able to run the .JSX file from the comman

  • DML in stored procedure

    Greetings: If I simply want run a DML statement containing a variable, do I need to use the DBMS_SQL package or is there an easier way. For example: PROCEDURE test_procedure IS data_table_name VARCHAR2(20) := 'Table1'; BEGIN DELETE FROM data_table_na

  • Remove log in password

    I have just updated to Yosemite and in the setup it puts a password in to open your Mac. I would like to know how to remove this please.

  • Format utility in Solaris 10

    During the preparation of my SCSA exam I found the following: Place each format utility by the right function Updates the disks VTOC ---- Partition Reads and displays labels ---- Disk Allows you to select a new disk ---- Label Saves disk and slice inf

  • How to get KM Layout Set Description

    Hello, Can we get the Layout Set description through KM API? We are able to get all existing Layout sets using String[] ids=layoutService.getAllLayoutSetIDs(); but we want descriptions also for these Layout Sets Please let me know if anybody have an