Avant Garde Gothic - Extended character sets?

I am working on a project for a client in Poland. We have been given regular and bold weights of Avant Garde Gothic with extended character sets that cover the Polish language. Now however, we want to use the medium weight as well, but have no idea where one goes to find a special extended character set. I have tried numerous type websites with no luck. Any help would be great :)

There's an Avant Garde CE Gothic Demi, available from Linotype. It appears to be Adobe's with the Central European character set added. (I don't understand that at all!)
URW may also have a Medium version. I have one whose font name may be AvantGarGotItcTEEMed. I don't know if it's still available.
- Herb

Similar Messages

  • Inbound MQ with extended character sets

    Hi
    We are trying to send data containing Swedish characters to PI, in both XML and non-XML payloads.
    The message is placed on an MQ queue (version 6.0.2.3) with a JMS header that has a ccsid of 1208 specified.
    The PI adapter is specified as JMS | WebsphereMQ (non-JMS) | JMS Compliant and the payload module has
    AF/Modules/MessageTransformerBean | Plain2XML | Transform.ContentType | text/xml;charset=utf-8
    The received characters are not displaying correctly, which is a theme of several past threads, but I've been unable to determine the solution.
    I am more familiar with the MQ side so please excuse my bias. I already send extended character sets to other applications using JMS over MQ, and we've tried using the same values on the MQ side to no avail.
    In MQ we set the MQ header to the queue manager default, but there is a JMS-specific additional header preceding the payload that specifies that the payload is UTF-8.
    From my perspective I can't see that PI is reading the JMS header at all (in fact, if I remove it, it has no effect), but we want it there in order to set some extended metadata properties.
    When I look at the data on my queue as it leaves MQ, it looks correct both on inspection and in hex.
    How do I get PI to recognise the JMS properties I've specified (it's known as an MQRFH header in MQ)?
    Any advice, guidance, documentation to a PI novice would be most welcome.
    Tim

    Thank you for the replies Sarvesh and Stefan.
    I had read your previous replies on this subject, but was still stuck.
    The delay in replying is because we were waiting for a reply from the SAP Support team.
    They have now acknowledged that there may be a fault in the MessageTransformBean.
    It's still only a "may", but at the moment all your other suggestions have been tried without success.
    I'll update again when I get further information.
    Tim

  • Avant Garde Gothic BT - what does the BT stand for?

    I am producing some work for a company whose style guide specifies 'ITV Avant Garde Gothic BT'. I have Avant Garde Gothic, but cannot find any decent reference to what the BT might mean. Is it a different version of the font? Any help on this would be greatly appreciated.

    I have been looking into this further - according to MyFonts.com, "This ITC font, digitized by Bitstream, is no longer available due to ITC's termination of licensing agreements with many resellers in January 2001. We leave it on display at MyFonts for completeness, since it has been a Bitstream font for several years. However, Adobe offers the original font for sale at MyFonts"
    Does this mean that Adobe's version is identical to Bitstream's? What are the differences between these and Monotype's version? Does anyone know how many versions of Avant Garde Gothic exist? Gosh, it's all very complicated...

  • Extended character set

    I've just had the results of CS3 pages on a PC, packaged and sent to be opened using the ID2Q plug-in on a Mac running Quark 6.
    The multiplication symbol 0215 (× if you can see it) has come across as a tall skinny diamond.
    As the files were packaged my fonts have been used, so I'm guessing that the Mac didn't like my use of the extended character set.
    Can anyone shed light, and is this likely to happen with all extended characters?
    k

    I can get a multi sign (ALT 0215) on Quark 4, so I don't think this is a Unicode issue. AFAIK, there is no diamond in the extended (or regular) set, which leads me to believe ID2Q decided your multi sign should be formatted in a different font, Symbol maybe?
    Are you sure they used *your* fonts (not their fonts which they think are exactly like your fonts)?
    Kenneth Benson
    Pegasus Type, Inc.
    www.pegtype.com

  • Find/Replace Extended Character Set characters in filenames in one pipeline

    Hello all,
    I have to work with some very bored people. Instead of putting a dash (hex 2D) into a filename, they opt for something from this set of extended characters, which makes my regular expressions fail.
    Is there a way I can efficiently find & replace anything outside the standard character set in one pipeline, without finding and replacing a character at a time?
    So, I'd like something like:
    get-childitem * | where-object { $_.name -match '\x99' } | rename-item -newname { $_.name -replace '\x99','=' }
    but covering hex 80 to hex FF, rather than a for-each.
    Thanks.

    The answer depends on the way you want to replace. It's easier if you want to replace any char in the set with a selected char:
    # create a test file whose name contains characters 0xB4-0xBE
    $Name = -join (180..190 | % { [char]$_ })
    New-Item -ItemType File -Name $Name
    # replace any character in that range with the selected char
    Get-ChildItem * | Rename-Item -NewName {
        [regex]::Replace(
            $_.Name,
            '[\xB4-\xBE]',
            '-'
        )
    } -WhatIf
    But if you want it more complicated, you may do that too, e.g. by defining a hashtable that can be used to replace individual characters:
    $Replacer = @{}
    foreach ($Char in (180..190 | % { [char]$_ })) {
        $Replacer.Add(
            [string]$Char,
            (Write-Output '_', '-', '=', '.' | Get-Random)
        )
    }
    $Replacer # show the generated mapping
    Get-ChildItem * | Rename-Item -NewName {
        [regex]::Replace(
            $_.Name,
            '[\xB4-\xBE]',
            { $Replacer[$args[0].Value] } # scriptblock runs as a MatchEvaluator
        )
    } -WhatIf
    Using this syntax makes it possible to include some logic in the replace. E.g. you could easily use switch to decide what to do with a given match:
    Get-ChildItem * | Rename-Item -NewName {
        [regex]::Replace(
            $_.Name,
            '[\xB4-\xBE]',
            {
                switch ($args[0].Value) {
                    'º' { "0" }
                    'µ' { "u" }
                    '¹' { "1" }
                    '¸' { "," }
                    Default { "_" }
                }
            }
        )
    } -WhatIf

  • Reading in Latin Extended-A character set from a text file

    Hello all,
    I am writing a small program that reads in a text file containing special characters (beyond the ASCII char set) and converts it into "regular" characters. For example, I would read in a u with an accent and replace it with a u.
    Now I realize that Unicode support is built into Java from ground up but it goes only so far, you actually have to have the relevant character set to read it. My code is as follows:
    import java.io.*;
    InputStreamReader inStreamReader = new InputStreamReader(new FileInputStream("input.txt"), "ISO-8859-1");
    BufferedReader bufferedReader = new BufferedReader(inStreamReader);
    String line = null;
    StringBuffer buff = new StringBuffer();
    while ((line = bufferedReader.readLine()) != null) {
        char[] charArray = line.toCharArray();
        for (int i = 0; i < charArray.length; i++) {
            int x = (int) charArray[i];
            switch (x) {
                case 224: // this is agrave .. we need to replace it with a
                    buff.append('a');
                    break;
                case 230: // this is aelig .. we need to replace it with ae
                    buff.append("ae");
                    break;
                ///////// and so on
            }
        }
    }
    Since I am reading in as ISO-8859-1, this works up to unicode 255. For the rest of the characters, apparently I need a Latin Extended-A and Latin Extended-B character set. How can I get that installed on my Windows OS machine? I am using jdk 1.4.1 on Windows XP. Any help is appreciated.
    Thanks,
    -vk4t

    vkat wrote:
    Since I am reading in as ISO-8859-1, this works up to unicode 255. For the rest of the characters, apparently I need a Latin Extended-A and Latin Extended-B character set. How can I get that installed on my Windows OS machine? I am using jdk 1.4.1 on Windows XP. Any help is appreciated.
    If your file has characters outside of 8859-1's range (0-255), then it isn't ISO-8859-1 encoded. You need to know what encoding was used to store the file. It sounds like it may actually be Unicode text, in which case you need to know which encoding (UTF-8, UTF-16, etc.) was used.
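    If the file does turn out to be Unicode, another approach avoids hard-coding one case per character: decompose the accented letters and strip the combining marks. A minimal sketch of that idea (not your code; it assumes a UTF-8 file, and java.text.Normalizer requires Java 6+, so it won't run on jdk 1.4.1):
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.InputStreamReader;
    import java.text.Normalizer;
    public class StripDiacritics {
        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("input.txt"), "UTF-8"));
            StringBuilder buff = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                // NFD splits an accented letter into its base letter plus a
                // combining accent; \p{M} then strips the combining marks.
                String folded = Normalizer.normalize(line, Normalizer.Form.NFD)
                                          .replaceAll("\\p{M}", "");
                buff.append(folded).append("\r\n");
            }
            in.close();
            System.out.print(buff);
        }
    }
    Note this maps à to a but does not expand ligatures such as æ to ae; those still need an explicit mapping.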

  • Do you have any fonts with the Latin Extended Additional character set?

    I would like to know what fonts in the Adobe catalog support the Latin Extended Additional character set and/or display all of the following diacritics & characters:
    Macrons: ā  ī ū
    Dot below: ṭ ḍ ṇ ḷ ṃ
    Dot above: ṅ
    Tilde: ñ
    Thanks,
    MZ

    All those characters seem to be included in the Adobe Latin 4 character set.
    I believe that currently the only families that support the characters you're looking for are:
    Source Sans Pro
    Hypatia Sans Pro
    Trajan Pro 3
    Trajan Sans Pro
    Adobe Text Pro

  • HOW can I enter text using Japanese character sets?

    The "Text, Plates, Insets" section of the LOOKOUT(6.01) Help files states:
    "Click the » button to the right of the Text field to expand the field for multiple line entries. You can enter text using international character sets such as Chinese, Korean, and Japanese."
    Can someone please explain HOW to do this? Note, I have NO problem inputting Hiragana, Katakana, and Kanji into MS WORD; the keyboard emulates the Japanese layout and characters (Romaji is default), the IME works fine converting Romaji, and I can also select characters directly from the IME Pad. I have tried several different fonts with success and am currently using MS UI Gothic.ttf as default. Again, everything is normal and working in a predictable manner within Word.
    I cannot get these texts into Lookout. I can't cut/paste from HTML pages or from text editors, even though both display properly. Within Lookout, with JP selected as language/keyboard, when trying to type directly into the text field the IME CORRECTLY displays Hiragana until <enter> is pressed, at which point all text reverts to question marks (?? ???? ? ?????). If I use the IME Pad, it does pretty much the same. I managed to get the "Yen" symbol to display, though, if that's relevant. As I said, the font selected (in text/plate font options) is MS UI Gothic with Japanese as the selected script. Oddly enough, at this point the "sample" window is showing me the exact Hiragana character I want displayed in Lookout, but it won't display. I've also tried staying in English and copying Unicode characters from the Windows Character Map. Same results (Yen sign works, Hiragana WON'T).
    Help me!
    JW_Tech

    JW_Tech,
    Have you changed the regional setting to Japanese?
    Doug M
    Applications Engineer
    National Instruments
    For those unfamiliar with NBC's The Office, my icon is NOT a picture of me
    Attachments:
    language.JPG 50 KB

  • DIR7 Character Set Problem / Foreign Language

    Hi there,
    I am working on an app built using Director 7 that until now
    has used the standard English (latin-1) character set.
    However, I am required to deliver a new version including
    some elements displayed in a second language, in this case Welsh,
    which uses characters outside of the normal set. I believe those
    required are included in Latin-1 Extended, otherwise in Unicode as
    a whole, obviously.
    I am having specific problems with two characters that appear
    to be missing from Latin-1, which are: ŵ and ŷ
    (w-circumflex, and y-circumflex [i think!]).
    In a standard text box I create using Director, I am unable to paste either character in, or to enter it using its ALT+ combination, let alone save it to the associated database.
    I have read that Dir 11 is the first version with full
    Unicode support - which surprises me - however I would assume that
    someone would likely have hit this, or a similar issue before the
    release of this version and was wondering if there is a possible
    solution without upgrade.
    My possible thinking is either a declaration that allows
    change of a Charset, as I might do in XHTML for example, or
    deployment of an Xtra that allows me to use a different character
    set.
    If anyone could shed some light on the matter, it would be
    very helpful! Thanks in advance!
    Rich.

    Yes, this was always a problem for years. Back when I was **** this, we had some projects that needed text displayed in various languages. Each language presented its own challenges. Things like Greek weren't too bad, because the Symbol font works for most Greek text. (Only problem was the 's' version of Sigma, which had to switch back to Times New Roman.) Various eastern European languages (Polish, Czech, Hungarian, etc.) posed a problem with some of the accents that were not available in standard font sets. We were forced to live without some of the more exotic accents, but were told that it would still be readable without them, if not exactly correct. This would probably be the closest to your situation, from what little I know about Welsh.
    It could be worse, though. Hebrew and Arabic were challenging as they are written right-to-left, and thus had to have code written to input them backwards. Russian was also tough, as the Cyrillic alphabet has more characters than the others, but I was able to find a font to fake it. (It replaced some of the lesser-used standard characters in order to fill in all the letters, which unfortunately meant that in the rare cases where those characters *were* needed, we had to improvise.) The hardest by far were any east Asian languages. In that case, I gave up on trying to display any of the text in text form, and just converted it all to bitmaps. Without Unicode, trying to display Mandarin or Japanese or Korean correctly as text is pretty much impossible.

  • Character Set Migration - Arabic & English Language Support

    Hi,
    Software Specifications:
    OS Version : Windows 2003 EE Server, SP2, 32-Bit
    DB Version : 9.2.0.1
    Application : Lotus Domino 6.5
    Existing Set Up:
    DB CHAR SET : WE8MSWIN1252
    National Character Set : AL16UTF16
    NLS_LANG : NA
    Now the customer has extended their business to Egypt.
    They need the existing database to support the Arabic & English languages.
    Kindly let me know how to do this character set migration and achieve the client specification.
    Regards
    Suresh

    Check Metalink
    Note:179133.1
    Subject:      The correct NLS_LANG in a Windows Environment
    Note:187739.1
    Subject:      NLS Setup in a Multilingual Database Environment
    Note:260023.1
    Subject:      Difference between AR8MSWIN1256 and AR8ISO8859P6 characterset
    Also, please list all the steps you have performed till now

  • Character set conversion

    I have had problems with extended characters in a database table not being represented correctly by clients. I believe that the FAQ concerning "Why do I see question marks..." identified the problems.
    The database is set to Latin4 and clients at Latin1. I am seeing inverted question marks for characters that don't match when displaying the table on a client, whether SQL*Plus under NT 4.0, SQL*Plus under Solaris 8, or even ODBC to MS Access.
    My questions are:
    1) How do the database and clients know that the character sets are different? We at first assumed that only the bit patterns were seen, so we might see different characters for the same 8 bits.
    2) How are the character sets compared?
    3) If a character is moved to a different bit pattern, is this recognized and handled properly? Or is it only matching characters with the same bit pattern?
    Answers will be greatly appreciated after weeks of asking questions outside this forum and searching the WWW.
    Thanks,
    Dave

    Hi,
    You didn't mention what your client's NLS_LANG is set to. Your NLS_LANG setting for Windows should reflect your current code page. In general, two scenarios can occur when data is sent from the client to the database: if the database character set and client NLS_LANG match, then no conversion takes place; otherwise the data is automatically converted from the client code page to the database character set, and vice versa. In either scenario, if NLS_LANG is set improperly (not reflecting the current client OS code page), corruption can occur.
    In the scenario you are describing, have you entered non-Latin1 data into the database? If so, how? If you have, and it was entered properly, you will still have difficulties displaying the data in SQL*Plus on a Latin1 client, as it will not know about those characters. Another useful tactic is the DUMP function, to see whether your Latin4 characters are stored properly in the database. An example would be something like:
    SELECT DUMP(col, 1016) FROM table;

  • Character set conversion problem during upgrade

    Dear Friends,
    I am trying to upgrade one of my Windows databases from version 9.2.0.5 to 10.2.0.4 on Unix, using exp/imp. During import I am seeing the following errors for a couple of tables:
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column
    IMP-00058: ORACLE error 1461 encountered
    ORA-01461: can bind a LONG value only for insert into a LONG column
    This may be due to a character set issue, since the database on Windows has WE8MSWIN1252 and on Unix it has UTF8.
    Please let me know how I can resolve this issue.
    Regards.
    Mahdu

    Hello,
    It's better that your target database is created with the same character set as the source one.
    This is an option you can choose at database creation.
    If you have to stay in UTF8 on your target database then you'll have to extend the column sizes or use the CHAR option (as Unicode may use up to *4 bytes* for one character instead of *1 byte* for WE8MSWIN1252).
    To use the CHAR option you may specify it on the column datatype, for instance:
    col1 VARCHAR2(100 CHAR)
    Without this option, VARCHAR2(100) means 100 bytes (which may store as few as 25 characters in Unicode).
    You also have the parameter NLS_LENGTH_SEMANTICS that you can set to CHAR, but the export/import utility doesn't manage it well.
    So, the safest way is to create your target database with the same character set as the source one, unless you want to migrate to Unicode.
    Hope this helps.
    Best regards,
    Jean-Valentin
    Edited by: Lubiez Jean-Valentin on Mar 3, 2010 10:11 PM

  • Write XML to file - character set problem

    I have a package that generates XML from relational data using SQLX. I want to write the resulting XML to the unix file system. I can do this with the following code :
    DECLARE
      v_xml xmltype;
      doc   dbms_xmldom.DOMDocument;
    BEGIN
      v_xml := create_my_xml; -- returns an XMLType value
      doc := dbms_xmldom.newDOMDocument(v_xml);
      dbms_xmldom.writeToFILE(doc, '/mydirectory/myfile.xml');
    END;
    This creates the file, but characters such as å, ä and ö are getting 'corrupted' and the resultant XML is invalid. (I've checked the XML within SQL*Plus and the characters are OK.)
    I assume the character set of the unix operating system doesn't support these characters. How can I overcome this ?

    Hi,
    Do you mean that you would like to write output to an external file somewhere on flash disk, or perhaps even inside the directory where the MIDlet is located? To be able to do so you will need manufacturer-specific APIs extending FileConnection (a JSR, don't know the number right now...). The default MIDP I/O library does not support direct action on a file.
    However, such a FileConnection method invocation requires an import statement that is manufacturer-specific...
    To keep your MIDlet CLDC/MIDP compliant you can try using RMS, with which you can write data that will be stored in a 'database' within the 'res' directory inside your MIDlet suite. If you're new to RMS, please check the web for tutorials, etc.
    Cheers for now,
    Jasper
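    For what it's worth, the RMS route Jasper mentions comes down to a few calls. A minimal sketch, assuming a record store named "xmlStore" (javax.microedition.rms is part of MIDP):
    import javax.microedition.rms.RecordStore;
    import javax.microedition.rms.RecordStoreException;
    public class RmsSketch {
        // Persist a byte payload in the MIDlet suite's record store.
        public static int save(byte[] data) throws RecordStoreException {
            RecordStore rs = RecordStore.openRecordStore("xmlStore", true); // true = create if missing
            try {
                return rs.addRecord(data, 0, data.length); // returns the new record's id
            } finally {
                rs.closeRecordStore();
            }
        }
    }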

  • Unrecognised char in GB2312 character set using Java InputStreamReader??

    Reading the following Chinese GB2312 HTML file from
    http://news.xinhuanet.com/local/2007-02/13/content_5732705.htm
    using InputStreamReader with GB2312 encoding, as shown below:
    import java.io.*;
    public class ReadGB2312HtmlFile {
        static StringBuffer TmpText = new StringBuffer();
        public static void main(String[] args) {
            try {
                FileInputStream is = new FileInputStream(args[0]);
                BufferedReader br = new BufferedReader(
                        new InputStreamReader(is, "GB2312"));
                String strLine;
                while ((strLine = br.readLine()) != null) {
                    TmpText.append(strLine);
                    TmpText.append("\r\n");
                }
                br.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    The TmpText variable does not display the last character in the article properly: （记者夏珩） comes out as （记者夏?B） instead.
    Inside the HTML file the unrecognised character is represented by �B. Why is this so?
    ���������B��
    In the internet browser it is displayed and recognised as a Chinese GB2312 character, so why is it not recognised by Java's InputStreamReader?
    Any help or explanation would be much appreciated

    Yes, it is not a GB2312 character.
    The �B character is AC40 in hex format, which is outside of the GB2312 character range; it is in GBK.
    Copied from Wikipedia:
    GBK is an extension of the GB2312 character set for simplified Chinese characters, used in the People's Republic of China. GB stands for National Standard, while K stands for Extension. GBK not only extended the old standard GB2312 with Traditional Chinese characters, but also with Chinese characters that were simplified after the establishment of GB2312 in 1981. With the arrival of GBK, certain names with characters formerly unrepresentable, like the "rong" (镕) character in former Chinese Premier Zhu Rongji's name, are now representable.
    Thanks a lot, I will use the GBK charset to read all GB2312 files, since GB2312 is a subset of GBK.
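    For reference, switching the decoder is a one-line change to the reading loop; a minimal sketch (file path from the command line, as before):
    import java.io.*;
    public class ReadGbk {
        public static void main(String[] args) throws IOException {
            // GBK is a superset of GB2312, so this also reads plain GB2312 files.
            BufferedReader br = new BufferedReader(
                    new InputStreamReader(new FileInputStream(args[0]), "GBK"));
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
            br.close();
        }
    }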

  • Magnetic Swipe (Keyboard wedge) character set problems

    Hi,
    I'm writing an app that handles scanner and mag stripe input into a JTextField. The scanner works fine; the wedge has some character set/encoding problems.
    Windows XP, both USB, and both work fine in Notepad etc. The wedge doesn't even present the same data on each scan (simple numeric codes with ; start and ? end chars),
    e.g. ;123456? reads fine in Notepad, but in my app, and a few other unrelated Java apps I used for test purposes, I get strings like the following:
    W“|‡4*?
    ;“|3456?
    ;“|‡4¥—
    SSÓ234*É
    all from the same card, which scans into this window correctly as ;123456?
    It almost gets it right sometimes, which makes me think Java is chopping the bits into the wrong lengths to encode as characters, or something like that??
    I tried subclassing the JTextField (snippet appended) and overriding the insertString function in the doc, but it didn't work; I think because this is after the problem has occurred.
    Has anyone managed to get a wedge working in Java? Or have any ideas?
    Thanks in advance,
    Mark
    protected Document createDefaultModel() {
        return new ASCIIDocument();
    }
    static class ASCIIDocument extends PlainDocument {
        public void insertString(int offs, String str, AttributeSet a) throws BadLocationException {
            if (str == null)
                return;
            try {
                String str2 = new String(str.getBytes(), "UTF-8"); // and various others, ascii, iso8859, etc
                super.insertString(offs, str, a); // note: inserts the original str, never str2
            } catch (Exception e) {}
        }
    }
    ...

    To find out what is actually being sent, stop using a Swing application to do that. (You're right, you want to avoid the conversion of bytes to chars which is happening.)
    Write a command line application which reads bytes from System.in and displays the values of those bytes. Don't convert them to chars in any way. Then run it and use the wedge to scan your data. You might have to press the Enter key to get System.in's buffer sent to your code.
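    A minimal sketch of that diagnostic (the class name is made up; run it in a console, scan a card, then press Enter to flush the buffer):
    import java.io.IOException;
    public class ByteDump {
        public static void main(String[] args) throws IOException {
            // Print every raw byte from stdin in hex, with no byte-to-char conversion.
            int b;
            while ((b = System.in.read()) != -1) {
                System.out.printf("0x%02X ", b);
            }
        }
    }
    Comparing the dumped bytes against what the wedge is supposed to send will show whether the corruption happens before Java ever converts bytes to chars.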
