Single and multi-byte settings

Hello,
We are trying to implement multi-byte character loading and I have a few questions:
1) Our current character encoding is UTF-8. What character encoding should we use for multi-byte loading?
2) In DDL, a column can be declared with BYTE or CHAR semantics, such as VARCHAR2(20 CHAR). For multi-byte data, we can either increase the size of the column or change the column definition from BYTE to CHAR. Which is the better implementation?
3) Are there any other setting changes we need to be aware of when moving from a single-byte to a multi-byte implementation?
Regards

First off, I'm a bit confused. If your database's character set is UTF-8, you already have a multi-byte character set, so I'm not sure what it is that you're converting in this case.
As to changing the table definition: that depends primarily on your application(s). Generally, I find it easier to declare a field with character length semantics, which gives users in every language certainty about the number of characters a field can support. There are probably people who think the other way because they're allocating memory in a client application based on bytes and want to ensure that the definitions on the client and the server match.
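For example, the difference looks like this (a minimal sketch; the table and column names are illustrative, and the byte math assumes an AL32UTF8 database where Japanese characters take 3 bytes each):

    CREATE TABLE name_test (
      name_b VARCHAR2(20 BYTE),  -- up to 20 bytes: only 6 three-byte Japanese characters fit
      name_c VARCHAR2(20 CHAR)   -- up to 20 characters, regardless of bytes per character
    );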
Since I don't quite understand what it is that you're converting, I'm hard pressed to come up with what "other setting changes" might be appropriate.
Justin

Similar Messages

  • JDBC2.0 API and Multi-Bytes Characters

    I use the JDBC 2.0 API with the thin Driver816 for JDK 1.2.x.
    It works well with English characters,
    but I get wrong results with multi-byte characters.
    Does anyone know the reason?
    Thanks in advance.

    I have the same problem!!!!!!!!!!!

  • Handling unicode and multi-byte/ANSI strings in same application

    I'm creating my environment handle using OCIEnvNlsCreate, so all strings passed to/from Oracle are supposed to be in wide-string format.
    This is fine until I want to bind a variable that contains an ANSI character string; my application can use mixed string types.
    Which SQLT_ type do I use to bind my ANSI character string? What's the difference between SQLT_STR and SQLT_AVC?
    Or do I have to convert each ANSI string to a wide string before I bind?
    The SQL Server ODBC API handles this problem without any trouble by specifying the data type when binding to SQL_VARCHAR or SQL_WVARCHAR.
    Any help greatly appreciated as I'm totally stuck!
    Thanks,
    John

    Here's the relevant para from the documentation:
    Specifying Character Sets in OCI
    Use the OCIEnvNlsCreate function to specify client-side database and national character sets when the OCI environment is created. This function allows users to set character set information dynamically in applications, independent of the NLS_LANG and NLS_NCHAR initialization parameter settings. In addition, one application can initialize several environment handles for different client environments in the same server environment.
    Any Oracle character set ID except AL16UTF16 can be specified through the OCIEnvNlsCreate function to specify the encoding of metadata, SQL CHAR data, and SQL NCHAR data. Use OCI_UTF16ID in the OCIEnvNlsCreate function to specify UTF-16 data.
    Can somebody please tell me what I can set charset or ncharset to, apart from OCI_UTF16ID or zero, so that the call to OCIEnvNlsCreate returns OCI_SUCCESS?
    Thanks,
    John

  • Handling Tab Delimited File generation in non-unicode for multi byte lang

    Hi,
    Requirement:
    We are generating a tab-delimited file in different languages (single-byte and multi-byte) and placing the files on the application server.
    Problem:
    Our system is a non-Unicode system, so we are facing problems generating the tab-delimited file for multi-byte languages like Russian, Japanese, Chinese, etc.
    I am currently using DATA: d_tab TYPE x VALUE '09', but it doesn't work for multi-byte. I can't see a correct tab-delimited file at the application server path.
    Any thoughts about how to proceed on this issue? Please let me know.
    Thanks & Regards,
    Pavan

    >
    Pavan Ravikanti wrote:
    > Thanks for your answer, but do you reckon cl_abap_char_utilities will be a workaround for DATA: d_tab TYPE x VALUE '09'?
    > Pavan.
    On a non-Unicode system the X variant works, but not on a Unicode system; there you must use the class. On the other hand, you can use the class on a non-Unicode system too, and your character variable will always be correct (one byte or two bytes, depending on the system your report is running on).
    What you are planning to do is to put a file with a larger set of possible characters into a system that supports a smaller set. That cannot work.
    What you can do is build a multi-code-page system where the code page is bound to the user or to the logon language. There you can read and process text files in several code pages, but not a text file in Unicode. You have to convert the Unicode text file into a non-Unicode text file before processing it.
    Remember that SAP no longer supports multi-code-page systems, and such systems will result in much more work when converting the system to Unicode.
    Even non-Unicode systems will not be maintained by SAP in the near future.
    What you are encountering here are the problems Unicode was developed to solve. A Unicode system can handle non-Unicode text files, but the other way round will always lead to problems that cannot be solved.

  • Oracle Multi-Bytes vs Single-Byte

    Hi,
    We have to add Japanese to our application. I have successfully added Japanese data to our single-byte database,
    so why should we use a multi-byte DB?
    What is the gain of a multi-byte DB vs a single-byte one?
    Does interMedia work with Japanese in single-byte?
    Is UTF8 the best way to have an international DB?
    We will have to add a lot of other character sets in the future.
    Thanks

    > so why should we use a multi-byte DB?
    > What is the gain of a multi-byte DB vs a single-byte one?
    What you are doing is storing invalid multi-byte characters in a single-byte database, so each double-byte Japanese character is being treated as 2 separate single-byte characters. You are using an unsupported but common garbage-in, garbage-out approach, so in that sense you are using Oracle as a garbage container. :)
    Let's look at some of the issues that you are going to have:
    All SQL functions are based on the properties of the single-byte database character set WE8ISO8859P1, so LENGTH(), SUBSTR(), INSTR(), UPPER(), NLS_UPPER, etc. will yield incorrect results. For example, a column with one Japanese character and one ASCII character will return a length of 3 characters rather than 2. And if you want to locate a specific character in a mixed ASCII and Japanese string using SUBSTR(), it will be very difficult, because to Oracle the string consists of all single-byte characters; it will not skip 2 bytes for a Japanese character. Even if you don't have mixed strings, you will need to write one routine for handling ASCII-only strings and another for Japanese strings.
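    To make the LENGTH() point concrete (a sketch; it assumes double-byte Japanese text stored in the single-byte database):

        SELECT LENGTH('漢A') FROM dual;  -- '漢' stored as 2 bytes, 'A' as 1 byte
        -- single-byte DB (WE8ISO8859P1): 3  -- every byte is counted as a character
        -- multi-byte DB:                 2  -- characters are counted correctly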
    Invalid data conversion: if you need to talk to another DB, over a dblink say, all character conversion will be based on the mapping from the single-byte character set to the target database character set, so the receiver will lose all the source Japanese characters and will get 2 single-byte characters for each Japanese character instead.
    Export and Import will have identical problems: character set conversions are performed during these operations, so all Japanese characters will be lost. This also means that you cannot load correctly encoded Japanese data into your current single-byte DB using IMPORT or SQL*Loader without data corruption...
    > Does interMedia work with Japanese in single-byte?
    No.
    > Is UTF8 the best way to have an international DB?
    Yes.

  • Faster way to migrate from Single byte to Multi byte

    Hello,
    We are in the process of migrating from a 9i single-byte DB to a 10g multi-byte DB. The size of our DB is roughly 125 GB. We have fixed everything in the source database (9i) needed to migrate seamlessly from single-byte to multi-byte. The only issue is the migration window: currently we are doing an export/import, since there is a character set migration involved, and it's taking 20+ hours to do the import into 10g. Management wants to cut this down to less than 10 hours, if that's possible. I know the duration of the import depends on many factors like the system/OS configuration, SAN, etc., but I wanted to know what, in theory, is considered the fastest method of migrating a database from single-byte to multi-byte.
    Has anybody here gone through this before?
    Thanks,
    Shaji

    If the percentage of user tables containing some convertible data (I am assuming you will not have any truncation or lossy data) is low, you can export only those tables, truncate them, and rescan the database. This should report no convertible data, except some CLOBs in the Data Dictionary. Such a database can be migrated to AL32UTF8 using csalter.plb. After the migration, you import only the previously exported subset of tables.
    Note: for this process to work, no convertible VARCHAR2, CHAR, or LONG data can be present in the Data Dictionary.
    The process can be refined by dropping and recreating indexes on the exported tables, as recreating an index is faster than updating it during import. You should also disable triggers so that they do not interfere with the migration (for example, they should not update any "last_updated" timestamp columns).
    If the number and size of affected tables is low compared to the overall size of the database, the time saved may be significant.
    There may also be tables that require an even more sophisticated approach. Let's say you have a multi-gigabyte table that stores pictures or documents in a BLOB column. The table also has a single text column that keeps some non-ASCII descriptions of the stored entities. Exporting/truncating/importing such a table may still be very expensive. A possible optimization is to offload the description column to an auxiliary table (together with ROWIDs), update the original column to NULL, export the auxiliary table, drop it, rescan the database, migrate with csalter.plb, re-import the auxiliary table, and restore the original column. If pictures alone occupy, for example, 30% of the whole database, such an approach should yield significant time savings.
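    In outline, the fast path looks something like this (a rough sketch only; the table names are placeholders and the exact exp/csscan/imp parameters depend on your versions):

        -- 1) export only the tables holding convertible data (host command):
        --      exp system/... TABLES=(app.t1,app.t2) FILE=conv_tables.dmp
        TRUNCATE TABLE app.t1;
        TRUNCATE TABLE app.t2;
        -- 2) rescan (host command); it should now report no convertible
        --    VARCHAR2, CHAR, or LONG data:
        --      csscan system/... FULL=Y TOCHAR=AL32UTF8 LOG=rescan
        -- 3) migrate the character set, connected as SYS:
        @csalter.plb
        -- 4) re-import the exported subset (host command):
        --      imp system/... FILE=conv_tables.dmp FULL=Y IGNORE=Y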
    -- Sergiusz

  • How to do data migration between single node and multi node HANA systems ?

    How is data migration done between single-node and multi-node HANA systems?
    What are the limitations?
    What are the best practices?

  • Plz help!  Multi-byte input method and JPasswordField in java 1.4

    Hello
    I have a JTextField and a JPasswordField in one dialog. I enter multi-byte text into the text field using Microsoft IME, switch focus to the password field without changing the input method, and now all the text entered in the password field appears in the previously visited text field. This issue occurs only in Java 1.4 and disappears in Java 1.5. Does anyone know what I can do to fix it in Java 1.4? Thanks!

    kajbj wrote:
    > Stromberg wrote:
    > > VishalKothari wrote:
    > > > Help me out regarding the datatype of 'x' in the switch(x) statement. Is it only byte and char?
    > > Any primitive type.
    > No, not boolean, double or float.
    oooops
  • PLS-00497: cannot mix between single row and multi-row (BULK) in INTO list

    Hi,
    I have a requirement to send table data through mail,
    so I am opening an SMTP connection and executing the following PL/SQL code, which is failing to compile successfully.
    My code goes like this:
        0            10            20           30           40            50
    1  CREATE OR REPLACE PROCEDURE SEND_TABLE_DATA( FROMAD IN VARCHAR2,
    2   TOAD IN VARCHAR2,
    3   SUBJECT IN VARCHAR2,
    4   MESSAGE IN VARCHAR2,
    5   DOCID IN VARCHAR2,
    6   DOCDT IN DATE,
    7   PRODOAID IN NUMBER )
    8   AS
    9   BATCHNO  VARCHAR2(32767);
    10  PCSBOX  NUMBER;
    11  AMOUNT  NUMBER;
    12  SMTPHOST VARCHAR2(255) := 'XXX.XXX.X.XXX' ;
    13  A UTL_SMTP.CONNECTION ;
    14  BEGIN
    15  A :=UTL_SMTP.OPEN_CONNECTION(SMTPHOST,25);
    16  UTL_SMTP.HELO(A,SMTPHOST);
    17  UTL_SMTP.MAIL(A,FROMAD);
    18  UTL_SMTP.RCPT(A,TOAD);
    19  UTL_SMTP.OPEN_DATA(A);
    20  UTL_SMTP.WRITE_DATA(A, CHR(13) ||CHR(13) || CHR(13) );
    21  UTL_SMTP.WRITE_DATA (A,'Date: '|| TO_CHAR(SYSDATE,'DD/MM/YYYY HH24:MI:SS') || CHR(13) );
    22  UTL_SMTP.WRITE_DATA(A,'From: '||FROMAD|| CHR(13) );
    23  UTL_SMTP.WRITE_DATA(A, 'To: '||TOAD|| CHR(13) );
    24  UTL_SMTP.WRITE_DATA(A, 'Subject: '|| SUBJECT || CHR(13) );
    25  UTL_SMTP.WRITE_DATA(A,MESSAGE||DOCID||' Documented on '||DOCDT||CHR(13) );
    26  UTL_SMTP.WRITE_DATA(A,CHR(13) || CHR(13) || CHR(13) );
    27  UTL_SMTP.WRITE_DATA(A,'This is for your information'||CHR(13) );
    28  UTL_SMTP.WRITE_DATA (A,' BATCHNO '|| ' -- '||' PCSBOX '||' -- '||' AMOUNT '||CHR(13) );
    29  EXECUTE IMMEDIATE
    30         'SELECT
    31       A.BATCHNO,B.PCSBOX,B.AMOUNT
    32        FROM
    33      SCHEMA1.TABLEX A,SCHEMA2.TABLEY B
    34        WHERE
    35       A.BATCHID=B.BATCHNO AND B.PRODOAID='|| PRODOAID
    36     BULK COLLECT INTO BATCHNO,PCSBOX,AMOUNT;
    37  FOR indx IN 1..BATCHNO.COUNT
    38   LOOP
    39    UTL_SMTP.WRITE_DATA (A,BATCHNO(indx)|| ' -- '||PCSBOX(indx)||' -- '||AMOUNT(indx)||CHR(13) );
    40   END LOOP;
    41  UTL_SMTP.WRITE_DATA( A,CHR(13) || CHR(13) || CHR(13) );
    42  UTL_SMTP.CLOSE_DATA(A);
    43  UTL_SMTP.QUIT(A);
    44  EXCEPTION
    45  WHEN OTHERS THEN
    46  UTL_SMTP.QUIT(A);
    47  RAISE;
    48  END;
    49  /
    SELECT * FROM USER_ERRORS
    NAME             TYPE       SEQUENCE  LINE  POSITION  TEXT                                                                         ATTRIBUTE  MESSAGE_NUMBER
    SEND_TABLE_DATA  PROCEDURE  3         37    1         PL/SQL: Statement ignored                                                    ERROR      0
    SEND_TABLE_DATA  PROCEDURE  2         37    24        PLS-00487: Invalid reference to variable 'BATCHNO'                           ERROR      487
    SEND_TABLE_DATA  PROCEDURE  1         36    25        PLS-00497: cannot mix between single row and multi-row (BULK) in INTO list   ERROR      497
    Thanks In Advance
    Regards
    Pradeep.

    > 29  EXECUTE IMMEDIATE
    > 30         'SELECT
    > 31       A.BATCHNO,B.PCSBOX,B.AMOUNT
    > 32        FROM
    > 33      SCHEMA1.TABLEX A,SCHEMA2.TABLEY B
    > 34        WHERE
    > 35       A.BATCHID=B.BATCHNO AND B.PRODOAID='|| PRODOAID
    > 36     BULK COLLECT INTO BATCHNO,PCSBOX,AMOUNT;
    The variables BATCHNO, PCSBOX and AMOUNT are defined as scalar variables. Check their definitions:
    > 9   BATCHNO  VARCHAR2(32767);
    > 10  PCSBOX  NUMBER;
    > 11  AMOUNT  NUMBER;
    You cannot use BULK COLLECT on scalar variables. The variables must be defined as a COLLECTION TYPE in order to perform bulk collect.
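    A minimal sketch of the declaration change (keeping the rest of the procedure as posted; the collection type names are illustrative):

        TYPE BATCHNO_TT IS TABLE OF VARCHAR2(32767);
        TYPE NUMBER_TT  IS TABLE OF NUMBER;
        BATCHNO BATCHNO_TT;
        PCSBOX  NUMBER_TT;
        AMOUNT  NUMBER_TT;

    With collection variables, the EXECUTE IMMEDIATE ... BULK COLLECT INTO BATCHNO, PCSBOX, AMOUNT clause and the FOR indx IN 1..BATCHNO.COUNT loop compile as intended. Consider also passing PRODOAID as a bind variable with USING rather than concatenating it into the SQL string.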

  • DIFFERENCE BETWEEN SINGLE LEVEL AND MULTI LEVEL COSTING

    Sir,
    Please explain the difference between single-level and multi-level costing in detail.
    Thanking you

    While using material ledger, we can use the single level or multi-level price determination for valuation of inventory.
    Procurement processes are used in Product Cost Controlling to determine procurement costs and to present those costs. Purchase Order, for example, is single-level procurement and Production is multilevel procurement.
    Single-level material price determination calculates the periodic unit price for a material. The standard price and the cumulative single-level differences of the period are taken into account. Single-level material price determination takes into account the differences that arise directly when a material is procured.
    Multilevel price determination calculates the periodic unit price for a material. The standard price, the single-level differences cumulated in the period, the differences between planned and actual prices, as well as input material differences (multilevel differences) are all taken into account.
    Single-level material price determination is a prerequisite for multilevel price determination.

  • DEFECT: (Serious!) Truncates display of data in multi-byte environment

    I have an oracle 10g database set up with the following nls parameters:
    NLS_CALENDAR      GREGORIAN
    NLS_CHARACTERSET      AL32UTF8
    NLS_COMP      LINGUISTIC
    NLS_CURRENCY      $
    NLS_DATE_FORMAT      DD-MON-YYYY
    NLS_DATE_LANGUAGE      AMERICAN
    NLS_DUAL_CURRENCY      $
    NLS_ISO_CURRENCY      AMERICA
    NLS_LANGUAGE      AMERICAN
    NLS_LENGTH_SEMANTICS      CHAR
    NLS_NCHAR_CHARACTERSET      UTF8
    NLS_NCHAR_CONV_EXCP      TRUE
    NLS_NUMERIC_CHARACTERS      .,
    NLS_RDBMS_VERSION      10.2.0.3.0
    NLS_SORT BINARY
    NLS_TERRITORY      AMERICA
    NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM
    NLS_TIMESTAMP_TZ_FORMAT      DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_TIME_FORMAT      HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT      HH.MI.SSXFF AM TZR
    I am querying a view in sqlserver 2000 via an odbc database link.
    When I query a 26-character-wide column in the view in SQL Developer, it will only return up to 13 characters of the data.
    When I query the exact same view in the exact same SQL Server database from the exact same Oracle database using the exact same ODBC database link using SQL Navigator, I get the full 26 characters' worth of data.
    It also works just fine from the SQL command-line tool in 10g Express.
    Apparently, SQL Developer is confused about how to handle multi-byte data. If you ask it the length of the data in the column, it will tell you 26, but it will only show you 13.
    I have found a VERY PAINFUL workaround: doing a cast(column_name as varchar2(26)) when I query it. But I've got hundreds of views and queries...
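    For reference, the workaround looks like this (a sketch; the view, column, and link names are placeholders):

        SELECT CAST(description AS VARCHAR2(26)) AS description
        FROM   remote_view@mssql_link;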

    In all other respects, the settings I have appear to be working correctly.
    I can enter multi-byte characters into the sql worksheet to create a package, save it, and re-open the package with the multi-byte characters still visible.
    I'm using a fallback directory for my jdk with the correct font installed, so I can see and edit multi-byte data in the data grids.
    In this case, I noticed the problem on a column that only contains the standard ascii letters and digits.
    Environment->Encoding = UTF-16
    All the fonts are set to a font that properly displays western and ge'ez characters. The font has been in use for years, and is working correctly in all other circumstances.
    The Database->NLS Parameters tab under sql developer preferences shows:
    language: American
    territory : American
    sort: binary
    comp: binary
    length: char (I've also tried byte)
    If there are other settings that you think might be relevant, please let me know.
    I've done some more testing. I created an Oracle table with a single column and did an INSERT INTO ... SELECT FROM statement across the database link. The correct, full-length data appeared in the Oracle table.
    So it's not a matter of whether the data is being returned or not; it is. It is simply not being displayed correctly. It appears that SQL Developer is making some unwarranted decisions about the data coming across the database link when it decides to display it, because SQL*Plus and SQL Navigator have no such issues.
    This is really a very serious problem, because if I cannot trust the data the tool shows me, I cannot trust the tool.
    It is also an invitation to make an error based upon the erroneous data display.

  • Handling Multi-byte/Unicode (Japanese) characters in Oracle Database

    Hello,
    How do I handle the Japanase characters with Oracle database?
    I have a Java application which retrieves some values from the database, makes some changes to them (e.g., changes the value of a status column, adds comments to a VARCHAR2 column) and then performs an UPDATE back to the database.
    Everything works fine for English, but NOT for Japanese, which uses multi-byte/Unicode characters. The Japanese characters are garbled after performing the database UPDATE.
    I verified that Java by default uses UTF-16 encoding, so there shouldn't be any problem with Java/JDBC.
    What do I need to change at #1, the Oracle (database) side, or #2, the OS (Linux) side?
    I tried changing the NLS_LANG value in the OS and the NLS_SESSION_PARAMETERS settings in the database and tried a test insert from SQL*Plus. But SQL*Plus converts all Japanese characters to a question mark (?), so I could not test it via SQL*Plus on my XP (English) edition.
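    For reference, the database and national character sets involved can be checked with a standard data dictionary query:

        SELECT parameter, value
        FROM   nls_database_parameters
        WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');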
    Any help will be really appreciated.
    Thanks

    Hello Sergiusz,
    Here are the values before & after Update:
    --BEFORE update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,65,74,61,6c,69,6e,6b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    --AFTER Update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,45,54,41,4c,49,4e,4b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    So the values BEFORE & AFTER Update are the same!
    The problem is that sometimes, the Japanese data in VARCHAR2 (abstract) column gets corrupted. What could be the problem here? Any clues?

  • Multi-byte characters are garbled in SQL Server Business Intelligence Development Studio (Visual Studio) 2008

    Hi,
    I'm revising an existing report which was developed by my predecessor. Though it works fine in the production environment, when I open the .rdl file with SQL Server Business Intelligence Development Studio (Visual Studio) 2008 on my client PC, I find all the multi-byte characters are garbled. When I open it with BIDS (the same version) on the server, it shows everything correctly.
    The font for the controls (labels) is Tahoma, which originally covers only Latin alphabets, but multi-byte characters are supposed to be displayed in MSGOTHIC via Font Link, as they are displayed correctly on the server.
    Could anyone advise me how to solve this issue? I know I can fix it by changing the fonts from Tahoma to MSGOTHIC for all the controls, but I don't want to do that.
    Environment:
    My PC:Windows7 64bit /Visual Studio 9.0.30729.1 / .NET Framework 3.5 SP1
    Server:Windows Server 2003 R2 /Visual Studio 9.0.30729.1 / .NET Framework 3.5 SP1
    Garbled characters sample:
    FontLink - SystemLink
    Please let me know if you need any more information. I would appreciate your advice!

    Hi nino_miya,
    According to your description, the multi-byte characters are garbled when you display the report on the client side.
    In your scenario, please check whether the Language setting is the same as for the report on the production server. Also please check whether the Tahoma data in the registry on the client PC is the same as on the server. If those two settings are the same, please specify the font of each control as MSGOTHIC manually on the client PC.
    If you have any questions, please feel free to ask.
    Best regards,
    Qiuyun Yu
    TechNet Community Support

  • My ipad does not have the amount of GB on the back and in the settings it says 56gb! Also there is no setting called regulatory. It is an ipad 4 retina w wifi that i got as a Christmas present. Could it be a fake? How can i tell?

    My iPad does not have the amount of GB on the back, and in Settings it says 56 GB! Also there is no setting called Regulatory. It is an iPad 4 Retina with Wi-Fi that I got as a Christmas present. Could it be a fake? How can I tell?

    The amount of storage shown on the box and the amount shown in Settings are calculated differently: the packaging uses 1 billion bytes as a gig (i.e., decimal), whereas Settings uses the binary definition: http://support.apple.com/kb/TS2419
    Also, some space is used by iOS and the built-in apps, and some space is lost to formatting.
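    A worked example, assuming this is the 64 GB model (the iOS/formatting overhead is approximate):

        64,000,000,000 bytes / 1,073,741,824 bytes per binary GB ≈ 59.6 GB
        59.6 GB - roughly 3-4 GB for iOS and formatting ≈ 56 GB, as shown in Settings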

  • How do I center a text field when input can be single or multi-line?

    I'm creating a form in Adobe Acrobat Pro XI and have almost everything the way I want it.  One of my last problems is trying to get the text in a text field centered.  The input is sometimes a single line and sometimes multi-line.  If I set it up so the multi-line entries are centered, then the single line looks off ... and vice versa.  Is there any way to have the text automatically centered in the text field regardless of whether it's single or multi-line?

    Unfortunately, there's no way to set up a field so that the text is guaranteed to be vertically centered in both cases. If you set it up so that rich text formatting is enabled, it's possible for a user to vertically center, but it's not something you can preconfigure so that it will remain in effect when the field is cleared. For a user to do this, with the focus set to the field they'd have to display the Properties toolbar (Ctrl+E), click the "More..." > Paragraph > Alignment > Text Middle [button]
