Identity-H Encoding Glyph 13 returning Glyph 10

Hi,
I am using Identity-H encoding on a text string in a PDF file. This encoding specifies that the character IDs in the string map one-to-one onto the glyph IDs used to look up glyphs in the font. That is, if I specify a character ID of 3 in the content stream, the PDF reader will extract the glyph with ID (index) 3 from the TrueType font file. This works for all glyph IDs (indexes) except glyph ID (index) 13, which returns the glyph with ID (index) 10 when requested. Does Identity-H encoding automatically map character ID 13 to glyph ID (index) 10? I have checked the font with a font editor and can confirm that glyph 13 does exist in the TrueType font file and is not the same as glyph 10.
Any assistance would be gratefully received.

Thank you for your reply!
In the content stream the character IDs are stored between round parentheses. This works for all glyphs except number 13.
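For reference, under Identity-H each character code in the string is simply the two-byte, big-endian glyph ID, so glyph 13 is the byte pair 00 0D. The small Java sketch below (not part of the original post; the class and method names are invented) just builds that byte sequence for a few glyph IDs and prints the equivalent hex-string form of a Tj operand:

    public class IdentityHCids {
        // Build the two-byte, big-endian code sequence for a list of glyph IDs,
        // as it would appear in a text-showing operand for an Identity-H encoded font.
        static byte[] toCidBytes(int... glyphIds) {
            byte[] out = new byte[glyphIds.length * 2];
            for (int i = 0; i < glyphIds.length; i++) {
                out[2 * i]     = (byte) (glyphIds[i] >> 8);   // high byte
                out[2 * i + 1] = (byte) (glyphIds[i] & 0xff); // low byte (glyph 13 -> 0x0D)
            }
            return out;
        }

        public static void main(String[] args) {
            byte[] cids = toCidBytes(3, 13, 10);
            StringBuilder hex = new StringBuilder("<");
            for (byte b : cids) hex.append(String.format("%02X", b & 0xff));
            hex.append("> Tj");
            System.out.println(hex);   // prints <0003000D000A> Tj
        }
    }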

Similar Messages

  • How to get the identity claim encoding types of windows and forms authentication providers using API?

    Hi,
    We need to get all the claims providers associated with a web application, along with each provider's identity claim encoding type, using the API.
    For example:
    If the identity claim of Windows authentication is the user name and the user name is a string, then we should get "i:0#.w".
    If the identity claim of forms authentication is the email address and the provider name is "fba", then we should get "i:0!.f|fba|".
    The link below shows how to get all claims providers associated with a web application, but how do we get the identity claim encoding type of each provider?
    http://msdn.microsoft.com/en-us/library/gg650432(v=office.14).aspx#SP_WCP_Tip3
    using (SPSite theSite = new SPSite("http://someContosoUrl"))
    {
        // Get the web application.
        SPWebApplication wa = theSite.WebApplication;
        // Get the zone for the site.
        SPUrlZone theZone = theSite.Zone;
        // Get the settings that are associated with the zone.
        SPIisSettings theSettings = wa.GetIisSettingsWithFallback(theZone);
        // Get the list of authentication providers that are associated with the zone.
        foreach (SPAuthenticationProvider prov in
            theSettings.ClaimsAuthenticationProviders)
        {
            // Need to get the identity claims encoding type using the SPAuthenticationProvider
        }
    }
    Is Windows authentication's identity claim encoding type always "i:0#.w", or is the identity claim always the user name?
    Thanks & Regards,
    Kalai.

    If the requirement is to convert claims identities to Windows identities that can be used with other LOB/legacy applications that still rely on NTLM/Windows authentication, then I would recommend exploring C2WTS (Claims to Windows Token Service).
    Here are some references:
    http://msdn.microsoft.com/en-us/library/office/ee539739(v=office.14).aspx
    http://blah.winsmarts.com/2013-11-Use_C2WTS_to_get_a_classic_windows_identity_from_a_claims_identity.aspx
    http://henrymcclain.blogspot.in/2013/05/claims-to-windows-token-service-c2wts.html
    http://blogs.msdn.com/b/rodneyviana/archive/2011/02/20/claims-to-windows-token-service-c2wts-may-not-start-automatically-when-you-reboot-your-server-don-t-blame-sharepoint-for-that.aspx
    http://blogs.msdn.com/b/russmax/archive/2010/05/27/understanding-sharepoint-2010-claims-authentication.aspx
    Thanks!
    These postings are provided "AS IS" with no warranties, and confers no rights.

  • Having trouble with a file that includes CID or Identity-H encoded fonts

    I am not able to use the Ctrl+F find feature on one of my PDF files because the fonts are CID or Identity-H encoded.
    Can you please tell me how I can change the font or the file so that I can use the Ctrl+F find feature?

    LabVIEW has a sound and vibration toolkit.  Do you have that?  Do you have LabVIEW?
    The forum is made up of volunteers who will help others with their LabVIEW problems.  But you need to have a specific problem and ask clear questions.
    What you are asking for is very unclear.  Why do you even want to do a "thorough analysis of the audio"?  No one is going to do your project for you.

  • How do I convert identity-h encoding to regular text?

    I am currently working on this project where I am running this program called Swish-e.
    Here is the link for Swish-e: Swish-e :: Home Page
    This is used to index files. I have noticed that it is only able to index certain PDF files, but not, for example, PDF files that are in Chinese.
    I am using Nitro PDF3 reader to read my PDF files, if that makes any difference.
    What I would like to know is: what would be the best Linux command or piece of software to convert PDF files that are in Identity-H encoding to regular text files? Could this software be incorporated into Swish-e, and if so, how? Is there even a way to do this? Any help would be greatly appreciated.
    What would I have to put in my swish-e configuration file to index it?

    jdam wrote:
    Here are 8 words that comprise the following message   KNID/T192023
    19278, 18756, 12116, 12601, 12848, 12851, 8224, 8224
    Your 8 words should give 16 characters.  The last four must be spaces because there are only 12 characters in KNID/T192023.  Also you stated that the data is in hex, but your 8 words are obviously in decimal.  Anyway, here is a vi to decipher the words.
    Once again Altenbach has to come up with a simpler way to do things.  Maybe I should send him all my work for review.
    Message Edited by tbob on 06-20-2006 04:16 PM
    - tbob
    Inventor of the WORM Global
    Attachments:
    DecimalStringsToAsciiChars.vi ‏21 KB

  • Creating a form field with Arial MS Unicode and Identity-H encoding

    I need to create a text field on an Adobe Acrobat form that uses the Arial Unicode MS font with the font encoding set to Identity-H. However, when I use Adobe Acrobat 9 Standard (v9.5.1) to do this, the encoding is always set to UniKS-UTF16-H. I can't find a way of changing the encoding to Identity-H. Any suggestions on how I can achieve this?

    I have a third-party form-filling app that needs the encoding set to Identity-H to support the range of characters that could be input. I could ask the vendor to change the app to fix the problem, but there would be a significant process involved in that. I have a small demo showing that creating a form with the encoding set to Identity-H resolves the issue. The demo was created with iText, but as the real form is considerably more complicated I'd rather not use iText if I don't have to. Since the form was originally created with Acrobat, the question remains, as this would be the "least painful" route to a resolution.

  • SSRS 2005 Encoding Problem

    Hi, I am using SSRS 2005 for generating the PDF reports. The reports are being generated with different character encodings: if I go into the report properties, it shows two different character encodings, i.e. ANSI and Identity-H.
    Is there a fix for this?

    Hi pradip nayak,
    According to your description, you are experiencing this issue when you generate the PDF report in SSRS 2005: the reports are generated with different kinds of encoding, ANSI and Identity-H, and the Identity-H encoding is causing some issues, right?
    Generally the PDF rendering extension supports ANSI characters. Identity-H was developed to support pictographic East Asian character sets, as these comprise many more characters than the Latin, Greek and Cyrillic writing systems. If characters are turned into a CID font in the PDF, which uses the Identity-H encoding, it means that some text content in the PDF document is written out using glyph IDs instead of ANSI characters.
    If you are having an issue with the Identity-H encoding, this is a known issue and there is already a hotfix for it; please install the related hotfix according to your current version:
    That is part of a change we had to make in order to correctly support complex script characters (http://support.microsoft.com/kb/944358). These are used in languages such as Gujarati or Hindi.
    This fix is part of the GDI+ GDR (http://support.microsoft.com/kb/954607), which is part of SSRS 2005 CU9.
    For text content written out using glyph IDs (Identity-H encoding), copy/paste is not supported for documents generated by the current PDF renderer. In order to support this, the PDF renderer would need to write out a map from glyph IDs back to Unicode characters. That is on the list for a future release of SQL Reporting Services.
    Would it be possible for you to deploy and render one of your reports on SSRS 2008? Full RichText support was added to SSRS 2008 as part of SQL Server 2008 CU1, which in this case might give you a more granular separation of which text content is written out using ANSI characters and which using glyph IDs.
    Article about font embedding  for your reference:
    http://technet.microsoft.com/en-us/library/dd255291(v=sql.105).aspx
    Similar threads in which the issue was resolved, for your reference:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/eb43f026-9066-47c9-a9cf-5c2368b3fcb5/report-footer-text-all-unreadable-jumbled-when-exported-to-pdf-after-installed-hotfix-3282?forum=sqlreportingservices
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/eb867e23-726a-4dd5-b522-c69e562d6dc6/reporting-services-2005-now-rendering-pdfs-differently?forum=sqlreportingservices
    If your problem still exists, please feel free to ask
    Regards
    Vicky Liu

  • How to detect encoding file in ANSI, UTF8 and UTF8 without BOM

    Hi all,
    I am having a problem detecting the encoding of a .txt/.csv file. I need to detect whether a file is ANSI, UTF-8, or UTF-8 without BOM, but the problem is that ANSI and UTF-8 without BOM are detected as the same encoding. I checked the function below and saw that ANSI and UTF-8 without BOM give the same result. So how can I detect a UTF-8 file without a BOM? I need to handle this case in my code.
    Thanks.
    public Encoding GetFileEncoding(string srcFile)
    {
        // *** Use Default of Encoding.Default (Ansi CodePage)
        Encoding enc = Encoding.Default;
        // *** Detect byte order mark if any - otherwise assume default
        byte[] buffer = new byte[10];
        FileStream file = new FileStream(srcFile, FileMode.Open);
        file.Read(buffer, 0, 10);
        file.Close();
        if (buffer[0] == 0xef && buffer[1] == 0xbb && buffer[2] == 0xbf)
            enc = Encoding.UTF8;
        else if (buffer[0] == 0xfe && buffer[1] == 0xff)
            enc = Encoding.Unicode;
        else if (buffer[0] == 0 && buffer[1] == 0 && buffer[2] == 0xfe && buffer[3] == 0xff)
            enc = Encoding.UTF32;
        else if (buffer[0] == 0x2b && buffer[1] == 0x2f && buffer[2] == 0x76)
            enc = Encoding.UTF7;
        else if (buffer[0] == 0xFE && buffer[1] == 0xFF)
            // 1201 unicodeFFFE Unicode (Big-Endian)
            enc = Encoding.GetEncoding(1201);
        else if (buffer[0] == 0xFF && buffer[1] == 0xFE)
            // 1200 utf-16 Unicode
            enc = Encoding.GetEncoding(1200);
        return enc;
    }

    What you want is to detect UTF-8 without a BOM, which can only be detected if the file contains special characters, so do the following:
    public Encoding GetFileEncoding(string srcFile)
    {
        // *** Use Default of Encoding.Default (Ansi CodePage)
        Encoding enc = Encoding.Default;
        // *** Detect byte order mark if any - otherwise assume default
        byte[] buffer = new byte[10];
        FileStream file = new FileStream(srcFile, FileMode.Open);
        file.Read(buffer, 0, 10);
        file.Close();
        if (buffer[0] == 0xef && buffer[1] == 0xbb && buffer[2] == 0xbf)
            enc = Encoding.UTF8;
        else if (buffer[0] == 0xfe && buffer[1] == 0xff)
            enc = Encoding.Unicode;
        else if (buffer[0] == 0 && buffer[1] == 0 && buffer[2] == 0xfe && buffer[3] == 0xff)
            enc = Encoding.UTF32;
        else if (buffer[0] == 0x2b && buffer[1] == 0x2f && buffer[2] == 0x76)
            enc = Encoding.UTF7;
        else if (buffer[0] == 0xFE && buffer[1] == 0xFF)
            // 1201 unicodeFFFE Unicode (Big-Endian)
            enc = Encoding.GetEncoding(1201);
        else if (buffer[0] == 0xFF && buffer[1] == 0xFE)
            // 1200 utf-16 Unicode
            enc = Encoding.GetEncoding(1200);
        else if (ValidateUtf8WithoutBom(srcFile))
            enc = new UTF8Encoding(false);
        return enc;
    }

    private bool ValidateUtf8WithoutBom(string fileSource)
    {
        bool bReturn = false;
        string TextUTF8 = "", TextANSI = "";
        // read the file as UTF-8
        StreamReader srFileUtf8 = new StreamReader(fileSource);
        TextUTF8 = srFileUtf8.ReadToEnd();
        srFileUtf8.Close();
        // read the file as ANSI
        StreamReader srFileAnsi = new StreamReader(fileSource, Encoding.Default, false);
        TextANSI = srFileAnsi.ReadToEnd();
        srFileAnsi.Close();
        // if the file contains special characters, the UTF-8 text read as ANSI shows garbled signs
        if (TextANSI.Contains("Ã") || TextANSI.Contains("±"))
            bReturn = true;
        return bReturn;
    }
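    For completeness: the same idea (detect UTF-8 without a BOM by checking whether the bytes are well-formed UTF-8) can be done by strict decoding rather than searching for particular characters. Below is a rough sketch of that approach in Java (not from this thread; the class and method names are invented); the .NET equivalent would be a strict UTF8Encoding with throwOnInvalidBytes set. Note that a file containing only ASCII bytes is valid UTF-8 as well, so ANSI and UTF-8 cannot be told apart in that case.
        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.charset.CharacterCodingException;
        import java.nio.charset.CodingErrorAction;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class EncodingSniffer {
            // Returns true if the bytes decode cleanly as UTF-8 (no BOM required).
            static boolean isValidUtf8(byte[] data) {
                try {
                    StandardCharsets.UTF_8.newDecoder()
                        .onMalformedInput(CodingErrorAction.REPORT)
                        .onUnmappableCharacter(CodingErrorAction.REPORT)
                        .decode(ByteBuffer.wrap(data));
                    return true;
                } catch (CharacterCodingException e) {
                    return false;   // not well-formed UTF-8, treat as ANSI/other
                }
            }

            public static void main(String[] args) throws IOException {
                byte[] data = Files.readAllBytes(Paths.get(args[0]));
                boolean hasBom = data.length >= 3
                    && (data[0] & 0xff) == 0xef && (data[1] & 0xff) == 0xbb && (data[2] & 0xff) == 0xbf;
                System.out.println(hasBom ? "UTF-8 with BOM"
                    : isValidUtf8(data) ? "UTF-8 without BOM (or pure ASCII)" : "ANSI/other");
            }
        }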

  • Encoding problem with xml

    Hello,
    I'm trying to save to disk an XML document encoded in UTF-8 and compressed with the "gzip" algorithm. For some reason I have to use a BufferedWriter, so I must convert the byte[] to a String.
    Afterwards, I read the document back, try to decompress it, and store it in a String variable.
    If I do this whole process using UTF-8 encoding, the decompression throws an exception: Not in GZIP format.
    But if I use ISO-8859-1, everything works OK.
    I don't understand why using ISO works and why using UTF-8 does not (when the web service specification says that the XML documents are sent in UTF-8).
    The code is the following (the static "myCharset" is the key: when I set ISO it works, and setting UTF-8 does not work):
    public class testCompress
    {
        public static String xmlOutput = "<?xml version=\"1.0\" encoding=\"utf-8\"?><soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\"><soap:Body><consultaCiudadesPorPaisResponse xmlns=\"http://tempuri.org/\"><consultaCiudadesPorPaisResult><xml funcion=\"ConsultaCiudadesPorPais\" xmlns=\"\"><ROK>TRUE</ROK></xml></consultaCiudadesPorPaisResult></consultaCiudadesPorPaisResponse></soap:Body></soap:Envelope>";
        public static String myCharset = "iso-8859-1";

        // Writes the compressed document to disk.
        public static void writeDocument() throws Exception
        {
            BufferedWriter writer = null;
            try
            {
                // Compress the "xmlOutput" converting this string to bytes using "utf-8" as specification says (in "compress" method)
                // Afterwards, I convert this byte[] to String using "myCharset".
                String compressedFile = new String(CompressionService.compress(xmlOutput, "utf-8", "gzip", 8), myCharset);
                // And write to disk using "myCharset" as the encoding used by "OutputStreamWriter".
                writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("c:/cache"), myCharset), 8192);
                writer.write(compressedFile);
            }
            catch (Exception e) { throw e; }
            finally
            {
                if (writer != null)
                    try { writer.close(); writer = null; } catch (IOException ioe) {}
            }
        }

        // Reads the compressed document from disk.
        public static byte[] readDocument() throws Exception
        {
            BufferedReader reader = null;
            StringBuilder sb = new StringBuilder();
            char[] buffer = new char[8192];
            try
            {
                // Open the file using "myCharset" for reading chars.
                reader = new BufferedReader(new InputStreamReader(new FileInputStream("c:/cache"), myCharset));
                int numchars = 0;
                while ((numchars = reader.read(buffer, 0, 8192)) >= 0) sb.append(buffer, 0, numchars);
                // And return the result as a byte[] encoded with "myCharset".
                return (sb.toString().getBytes(myCharset));
            }
            catch (Exception e) { throw e; }
            finally
            {
                if (reader != null)
                    try { reader.close(); } catch (IOException ioe) {}
            }
        }

        public static void main(String[] args) throws Exception
        {
            writeDocument();
            byte[] file = readDocument();
            // DECOMPRESS FAILS IF myCharset = "utf-8", and works if myCharset = "iso-8859-1"
            System.out.println(com.vpfw.proxy.services.compress.CompressionService.decompress(file, "utf-8", "gzip", 8));
        }
    }

    Okay, here's what's happening. You created a byte[] by encoding some text as UTF-8, then you ran that byte[] through a gzip deflater. The result is binary data that can only be understood by a gzip inflater; to any other software it just looks like garbage. Now you're taking a randomly-chosen encoding and pretending the binary data is really text that was encoded with that encoding.
    Most encodings have limits on what kinds of input they can accept. For example, US-ASCII only uses the low-order seven bits of each byte; any byte with a value larger than 127 is invalid. When the decoder encounters such a byte, it inserts the standard replacement character, U+FFFD, in that spot. When you then re-encode the string as US-ASCII, the replacement character is what you get in that position; the original byte value is lost. In UTF-8, the bytes have to conform to certain patterns (see http://en.wikipedia.org/wiki/UTF-8#Description); for example, any byte with a value greater than 127 has to be part of a valid two-, three- or four-byte sequence.
    ISO-8859-1 is different. It's a single-byte encoding like ASCII, but it uses all eight bits of every byte. Furthermore, every possible byte value (0..255) maps to a character, so you can throw any random byte at it and tell it the byte represents a character, and it will believe you. Some of those values may map to control characters that would look like garbage if you displayed them, but they're valid. That means you can re-encode the string as ISO-8859-1 and get back the exact byte sequence you started with.
    So that's why your code "works" when you use ISO-8859-1, but I strongly recommend that you find another way; making binary data masquerade as text is dangerously fragile. Why do you have to use a Writer anyway? Is it for transmission over a medium that only accepts text data? If so, you should use a Base64 encoder or similar tool that's designed for that purpose.
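    To make the Base64 suggestion concrete, here is a minimal sketch (the class name is invented, and this is not the poster's CompressionService) that gzips the UTF-8 bytes and wraps the binary result in Base64 text, so it can safely travel through any Writer/Reader; it uses only java.util.zip and java.util.Base64 (Java 8+):
        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import java.util.zip.GZIPInputStream;
        import java.util.zip.GZIPOutputStream;

        public class GzipBase64Example {
            // gzip the UTF-8 bytes of the XML, then wrap the binary result in Base64 text.
            static String compressToText(String xml) throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                    gz.write(xml.getBytes(StandardCharsets.UTF_8));
                }
                return Base64.getEncoder().encodeToString(bos.toByteArray());
            }

            // Reverse the process: Base64 text -> gzip bytes -> UTF-8 XML.
            static String decompressFromText(String text) throws IOException {
                byte[] gzipped = Base64.getDecoder().decode(text);
                try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = gz.read(buf)) != -1) out.write(buf, 0, n);
                    return new String(out.toByteArray(), StandardCharsets.UTF_8);
                }
            }

            public static void main(String[] args) throws IOException {
                String text = compressToText("<root>ok</root>");   // safe to write with any Writer
                System.out.println(decompressFromText(text));      // prints <root>ok</root>
            }
        }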

  • Same functions with different return types in C++

    Normally the following two functions would be considered the same:
    int getdata ( char *s, int i )
    long getdata ( char *s, int i )
    Every other compiler we use would resolve both of these to the same function. In fact, it is not valid C++ code otherwise.
    We include some 3rd party source in our build which sometimes messes with our typedefs causing this to happen. We have accounted for all of the function input types but never had a problem with the return types. I just installed Sun ONE Studio 8, Compiler Collection and it is generating two symbols in our libraries every time this occurs.
    Is there a compiler flag I can use to stop it from doing this? I've got over 100 unresolved symbols and I'd rather not go and fix all of them if there is an easier way.

    Normally the following two functions would be considered the same:
    int getdata ( char *s, int i )
    long getdata ( char *s, int i )
    Not at all. Types int and long are different types, even if they are implemented the same way.
    Reference: C++ Standard, section 3.9.1 paragraph 10.
    For example, you can define two functions
    void foo(int);
    void foo(long);
    and they are distinct functions. The function that gets called depends on function overload resolution at the point of the call.
    Overloaded functions must differ in the number or the type of at least one parameter. They cannot differ only in the return type. A program that declares or defines two functions that differ only in their return types is invalid and has undefined behavior. Reference: C++ Standard section 13.1, paragraph 2.
    The usual way to implement overloaded functions is to encode the scope and the parameter types, and maybe the return type, and attach the encoding to the function name. This technique is known as "name mangling". The compiler generates the same mangled name for the declaration and definition of a given function, and different mangled names for different functions. The linker matches up the mangled names, and can tell you when no definition matches a reference.
    Some compilers choose not to include the return type in the mangled name of a function. In that case, the declaration
    int foo(char*);
    will match the definition
    long foo(char*) { ... }
    But it will also match the definitions
    myClass foo(char*) { ... }
    double foo(char*) { ... }
    You are unlikely to get good results from such a mismatch. For that reason, and because a pointer-to-function must encode the function return type, Sun C++ always encodes the function return type in the mangled name. (That is, we simplify things by not using different encodings for the same function type.)
    If you built your code for a 64-bit platform, it would presumably link using your other compilers, but would fail in mysterious ways at run time. With the Sun compiler, you can't get into that mess.
    To make your program valid, you will have to ensure your function declarations match their definitions.

  • LiveCycle DS, can't get the return value of fill(arg) from Assembler class

    Hi all!
    I have a small problem, can anyone help me? Please.
    I made a project which is very similar to the Product project (in the examples).
    1) The Assembler class: it works correctly
    package flex.samples.stock;
    import java.util.List;
    import java.util.Collection;
    import java.util.Map;
    import flex.data.DataSyncException;
    import flex.data.assemblers.AbstractAssembler;
    public class StockAssembler extends AbstractAssembler {

        public Collection fill(List fillArgs) {
            StockService service = new StockService();
            System.out.print(fillArgs.size());
            return service.getStocks();
        }

        public Object getItem(Map identity) {
            StockService service = new StockService();
            return service.getStock(((Integer) identity.get("StockId")).intValue());
        }

        public void createItem(Object item) {
            StockService service = new StockService();
            service.create((Stock) item);
        }

        public void updateItem(Object newVersion, Object prevVersion, List changes) {
            StockService service = new StockService();
            boolean success = service.update((Stock) newVersion);
            if (!success) {
                int stockId = ((Stock) newVersion).getStockId();
                throw new DataSyncException(service.getStock(stockId), changes);
            }
        }

        public void deleteItem(Object item) {
            StockService service = new StockService();
            boolean success = service.delete((Stock) item);
            if (!success) {
                int stockId = ((Stock) item).getStockId();
                throw new DataSyncException(service.getStock(stockId), null);
            }
        }
    }
    The other required classes are OK.
    2) I configure in data-management-config.xml
    <destination id="stockinventory">
    <adapter ref="java-dao" />
    <properties>
    <source>flex.samples.stock.StockAssembler</source>
    <scope>application</scope>
    <metadata>
    <identity property="StockId"/>
    </metadata>
    <network>
    <session-timeout>20</session-timeout>
    <paging enabled="false" pageSize="10" />
    <throttle-inbound policy="ERROR" max-frequency="500"/>
    <throttle-outbound policy="REPLACE"
    max-frequency="500"/>
    </network>
    </properties>
    </destination>
    3) My client app:
    I use :
    <mx:ArrayCollection id="stocks"/>
    <mx:DataService id="ds" destination="stockinventory"/>
    ds.fill(stocks); --> Problem here
    When I run this app, the StockAssembler on the server works correctly (I use a printout to debug). But the stocks variable doesn't get the return value; it is an empty list.
    Please help me!

    Hi,                                                             
    The executeQueryAsync method is “asynchronous, which means that the code will continue to be executed without waiting for the server to respond”.
    The error message “The collection has not been initialized” suggests that the object in use hadn’t been initialized after the execution of the executeQueryAsync method.
    I suggest you debug the code in browser to see which line of code will throw this error, then you can reorganize the logic of the code to avoid using the uninitialized object.
    A documentation from MSDN about Using the F12 Developer Tools to Debug JavaScript Errors
    for your reference:
    http://msdn.microsoft.com/en-us/library/ie/gg699336(v=vs.85).aspx
    Best regards
    Patrick Liang
    TechNet Community Support

  • PackedInteger encoding does not preserve order - but it could

    Currently, when viewed under the default unsigned lexical ordering, "packed" integers (and longs) do not sort numerically. This is because for values requiring two or more bytes, the "value" bytes are stored little endian.
    For example:
    65910 -> 7a ff 00 01
    65911 -> 7a 00 01 01
    However, it's possible to encode packed integers such that they still sort correctly. You can do this by modifying the existing PackedInteger algorithm as follows:
    1. Invert the high-order bit on the first byte
    2. Reverse the order of the second and subsequent bytes (if any)
    3. If the original number was negative, invert (one's complement) the bits in the second and subsequent bytes (if any)
    Here are some examples showing the two encodings:
                  NUMBER             ENCODING         NEW ENCODING
    -9223372036854775808   8189ffffffffffff7f   018000000000000076
             -2147483648           8589ffff7f           0580000076
                  -65911             86000101             06fefeff
                  -65910             86ff0001             06feff00
                  -65655             86000001             06feffff
                  -65654               87ffff               070000
                  -65400               8701ff               0700fe
                  -65399               8700ff               0700ff
                    -375               870001               07feff
                    -374                 88ff                 0800
                    -120                 8801                 08fe
                    -119                   89                   09
                      -1                   ff                   7f
                       0                   00                   80
                       1                   01                   81
                     119                   77                   f7
                     120                 7801                 f801
                     374                 78ff                 f8ff
                     375               790001               f90100
                     631               790002               f90200
                   65399               7900ff               f9ff00
                   65400               7901ff               f9ff01
                   65654               79ffff               f9ffff
                   65655             7a000001             fa010000
                   65910             7aff0001             fa0100ff
                   65911             7a000101             fa010100
              2147483647           7b88ffff7f           fb7fffff88
    9223372036854775807   7f88ffffffffffff7f   ff7fffffffffffff88
    So the question is: any particular reason this wasn't done? Would this alternate encoding be a useful addition? (I'm sure it's not possible to replace the current algorithm at this point.)
    Thanks.

    greybird wrote:
    Thanks Archie. I appreciate this and I know you have good intentions. Unfortunately, we won't be able to incorporate any code from your svn repository. We can only incorporate submissions if the source code is posted here in our OTN forum.
    Not a problem... Here is the new class:
    package org.dellroad.sidekar.util;
    import com.sleepycat.bind.tuple.TupleInput;
    import com.sleepycat.bind.tuple.TupleOutput;
    import java.util.Arrays;
    * An improved version of Berkeley DB's {@link com.sleepycat.util.PackedInteger} class that packs values in such
    * a way that their order is preserved when compared using the default binary unsigned lexical ordering.
    * @see <a href="http://forums.oracle.com/forums/thread.jspa?messageID=4126254">Berkeley DB Java Edition forum posting</a>
    public final class PackedLong {
         * Maximum possible length of an encoded value.
        public static final int MAX_ENCODED_LENGTH = 9;
         * Minimum value for the first byte of a single byte encoded value. Lower values indicate a multiple byte encoded negative value.
        public static final int MIN_SINGLE_BYTE_ENCODED = 0x08;        // values 0x00 ... 0x07 prefix negative values
         * Maximum value for the first byte of a single byte encoded value. Higher values indicate a multiple byte encoded positive value.
        public static final int MAX_SINGLE_BYTE_ENCODED = 0xf7;        // values 0xf8 ... 0xff prefix positive values
         * Adjustment applied to single byte encoded values before encoding.
        public static final int ZERO_ADJUST = 127;                     // single byte value that represents zero
         * Minimum value that can be encoded as a single byte.
        public static final int MIN_SINGLE_BYTE_VALUE = MIN_SINGLE_BYTE_ENCODED - ZERO_ADJUST;          // -119
         * Maximum value that can be encoded as a singel byte.
        public static final int MAX_SINGLE_BYTE_VALUE = MAX_SINGLE_BYTE_ENCODED - ZERO_ADJUST;          // 120
         * Adjustment applied to multiple byte encoded negative values before encoding.
        public static final int NEGATIVE_ADJUST = -MIN_SINGLE_BYTE_VALUE;                               // 119
         * Adjustment applied to multiple byte encoded positive values before encoding.
        public static final int POSITIVE_ADJUST = -(MAX_SINGLE_BYTE_VALUE + 1);                         // -121
        // Cutoff values at which the encoded length changes (this field is package private for testing purposes)
        static final long[] CUTOFF_VALUES = new long[] {
            0xff00000000000000L - NEGATIVE_ADJUST,      // [ 0] requires 8 bytes
            0xffff000000000000L - NEGATIVE_ADJUST,      // [ 1] requires 7 bytes
            0xffffff0000000000L - NEGATIVE_ADJUST,      // [ 2] requires 6 bytes
            0xffffffff00000000L - NEGATIVE_ADJUST,      // [ 3] requires 5 bytes
            0xffffffffff000000L - NEGATIVE_ADJUST,      // [ 4] requires 4 bytes
            0xffffffffffff0000L - NEGATIVE_ADJUST,      // [ 5] requires 3 bytes
            0xffffffffffffff00L - NEGATIVE_ADJUST,      // [ 6] requires 2 bytes
            MIN_SINGLE_BYTE_VALUE,                      // [ 7] requires 1 byte
            MAX_SINGLE_BYTE_VALUE + 1,                  // [ 8] requires 2 bytes
            0x0000000000000100L - POSITIVE_ADJUST,      // [ 9] requires 3 bytes
            0x0000000000010000L - POSITIVE_ADJUST,      // [10] requires 4 bytes
            0x0000000001000000L - POSITIVE_ADJUST,      // [11] requires 5 bytes
            0x0000000100000000L - POSITIVE_ADJUST,      // [12] requires 6 bytes
            0x0000010000000000L - POSITIVE_ADJUST,      // [13] requires 7 bytes
            0x0001000000000000L - POSITIVE_ADJUST,      // [14] requires 8 bytes
            0x0100000000000000L - POSITIVE_ADJUST,      // [15] requires 9 bytes
        private PackedLong() {
         * Write the encoded value to the output.
         * @param output destination for the encoded value
         * @param value value to encode
        public static void write(TupleOutput output, long value) {
            output.makeSpace(MAX_ENCODED_LENGTH);
            int len = encode(value, output.getBufferBytes(), output.getBufferLength());
            output.addSize(len);
         * Read and decode a value from the input.
         * @param input input holding an encoded value
         * @return the decoded value
        public static long read(TupleInput input) {
            long value = decode(input.getBufferBytes(), input.getBufferOffset());
            input.skipFast(getReadLength(input));
            return value;
         * Determine the length (in bytes) of an encoded value without advancing the input.
         * @param input input holding an encoded value
         * @return the length of the encoded value
        public static int getReadLength(TupleInput input) {
            return getReadLength(input.getBufferBytes(), input.getBufferOffset());
         * Determine the length (in bytes) of an encoded value.
         * @param buf buffer containing encoded value
         * @param off starting offset of encoded value
         * @return the length of the encoded value
         * @throws ArrayIndexOutOfBoundsException if {@code off} is not a valid offset in {@code buf}
        public static int getReadLength(byte[] buf, int off) {
            int prefix = buf[off] & 0xff;
            if (prefix < MIN_SINGLE_BYTE_ENCODED)
                return 1 + MIN_SINGLE_BYTE_ENCODED - prefix;
            if (prefix > MAX_SINGLE_BYTE_ENCODED)
                return 1 + prefix - MAX_SINGLE_BYTE_ENCODED;
            return 1;
         * Determine the length (in bytes) of the encoded value.
         * @return the length of the encoded value, a value between one and {@link #MAX_ENCODED_LENGTH}
        public static int getWriteLength(long value) {
            int index = Arrays.binarySearch(CUTOFF_VALUES, value);
            if (index < 0)
                index = ~index - 1;
            return index < 8 ? 8 - index : index - 6;
         * Encode the given value into a new buffer.
         * @param value value to encode
         * @return byte array containing the encoded value
        public static byte[] encode(long value) {
            byte[] buf = new byte[MAX_ENCODED_LENGTH];
            int len = encode(value, buf, 0);
            if (len != MAX_ENCODED_LENGTH) {
                byte[] newbuf = new byte[len];
                System.arraycopy(buf, 0, newbuf, 0, len);
                buf = newbuf;
            return buf;
         * Encode the given value and write the encoded bytes into the given buffer.
         * @param value value to encode
         * @param buf output buffer
         * @param off starting offset into output buffer
         * @return the number of encoded bytes written
         * @throws ArrayIndexOutOfBoundsException if {@code off} is negative or the encoded value exceeds the given buffer
        public static int encode(long value, byte[] buf, int off) {
            int len = 1;
            if (value < MIN_SINGLE_BYTE_VALUE) {
                value += NEGATIVE_ADJUST;
                long mask = 0x00ffffffffffffffL;
                for (int shift = 56; shift != 0; shift -= 8) {
                    if ((value | mask) != ~0L)
                        buf[off + len++] = (byte)(value >> shift);
                    mask >>= 8;
                buf[off] = (byte)(MIN_SINGLE_BYTE_ENCODED - len);
            } else if (value > MAX_SINGLE_BYTE_VALUE) {
                value += POSITIVE_ADJUST;
                long mask = 0xff00000000000000L;
                for (int shift = 56; shift != 0; shift -= 8) {
                    if ((value & mask) != 0L)
                        buf[off + len++] = (byte)(value >> shift);
                    mask >>= 8;
                buf[off] = (byte)(MAX_SINGLE_BYTE_ENCODED + len);
            } else {
                buf[off] = (byte)(value + ZERO_ADJUST);
                return 1;
            buf[off + len++] = (byte)value;
            return len;
         * Decode a value from the given buffer.
         * @param buf buffer containing an encoded value
         * @param off starting offset of the encoded value in {@code buf}
         * @return the decoded value
         * @throws ArrayIndexOutOfBoundsException if {@code off} is negative or the encoded value is truncated
         * @see #getReadLength
        public static long decode(byte[] buf, int off) {
            int first = buf[off++] & 0xff;
            if (first < MIN_SINGLE_BYTE_ENCODED) {
                long value = ~0L;
                while (first++ < MIN_SINGLE_BYTE_ENCODED)
                    value = (value << 8) | (buf[off++] & 0xff);
                return value - NEGATIVE_ADJUST;
            if (first > MAX_SINGLE_BYTE_ENCODED) {
                long value = 0L;
                while (first-- > MAX_SINGLE_BYTE_ENCODED)
                    value = (value << 8) | (buf[off++] & 0xff);
                return value - POSITIVE_ADJUST;
            return (byte)(first - ZERO_ADJUST);
    }
    and here is the unit test:
    package org.dellroad.sidekar.util;
    import com.sleepycat.bind.tuple.TupleInput;
    import com.sleepycat.bind.tuple.TupleOutput;
    import com.sleepycat.je.DatabaseEntry;
    import org.dellroad.sidekar.TestSupport;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;
    import static org.testng.Assert.assertEquals;
    public class PackedLongTest extends TestSupport {
        @Test(dataProvider = "encodings")
        public void testEncoding(long value, String string) {
            // Test direct encoding
            byte[] expected = ByteArrayEncoder.decode(string);
            byte[] actual = PackedLong.encode(value);
            assertEquals(actual, expected);
            // Test write()
            TupleOutput out = new TupleOutput();
            PackedLong.write(out, value);
            assertEquals(out.toByteArray(), expected);
            // Test getWriteLength()
            assertEquals(actual.length, PackedLong.getWriteLength(value));
            // Test decoding
            long value2 = PackedLong.decode(actual, 0);
            assertEquals(value2, value);
            // Test read()
            TupleInput input = JEUtil.toTupleInput(new DatabaseEntry(actual));
            assertEquals(actual.length, PackedLong.getReadLength(input));
            value2 = PackedLong.read(input);
            assertEquals(value2, value);
            // Test getReadLength()
            assertEquals(actual.length, PackedLong.getReadLength(actual, 0));
        @Test(dataProvider = "lengths")
        public void testEncodedLength(long value, int expected) {
            int actual = PackedLong.getWriteLength(value);
            assertEquals(actual, expected);
            byte[] buf = PackedLong.encode(value);
            assertEquals(buf.length, expected);
        @DataProvider(name = "encodings")
        public Object[][] genEncodings() {
            return new Object[][] {
                // Corner cases
                { 0x8000000000000000L, "008000000000000077" },
                { 0xfeffffffffffff88L, "00feffffffffffffff" },
                { 0xfeffffffffffff89L, "0100000000000000" },
                { 0xfeffffffffffffffL, "0100000000000076" },
                { 0xfffeffffffffff88L, "01feffffffffffff" },
                { 0xfffeffffffffff89L, "02000000000000" },
                { 0xfffeffffffffffffL, "02000000000076" },
                { 0xfffffeffffffff88L, "02feffffffffff" },
                { 0xfffffeffffffff89L, "030000000000" },
                { 0xfffffeffffffffffL, "030000000076" },
                { 0xfffffffeffffff88L, "03feffffffff" },
                { 0xfffffffeffffff89L, "0400000000" },
                { 0xfffffffeffffffffL, "0400000076" },
                { 0xfffffffffeffff88L, "04feffffff" },
                { 0xfffffffffeffff89L, "05000000" },
                { 0xfffffffffeffffffL, "05000076" },
                { 0xfffffffffffeff88L, "05feffff" },
                { 0xfffffffffffeff89L, "060000" },
                { 0xfffffffffffeffffL, "060076" },
                { 0xfffffffffffffe88L, "06feff" },
                { 0xfffffffffffffe89L, "0700" },
                { 0xfffffffffffffeffL, "0776" },
                { 0xffffffffffffff88L, "07ff" },
                { 0xffffffffffffff89L, "08" },
                { 0xffffffffffffffa9L, "28" },
                { 0xffffffffffffffc9L, "48" },
                { 0xffffffffffffffe9L, "68" },
                { 0xffffffffffffffffL, "7e" },
                { 0x0000000000000000L, "7f" },
                { 0x0000000000000001L, "80" },
                { 0x0000000000000071L, "f0" },
                { 0x0000000000000077L, "f6" },
                { 0x0000000000000078L, "f7" },
                { 0x0000000000000079L, "f800" },
                { 0x0000000000000178L, "f8ff" },
                { 0x0000000000000179L, "f90100" },
                { 0x0000000000010078L, "f9ffff" },
                { 0x0000000000010079L, "fa010000" },
                { 0x0000000001000078L, "faffffff" },
                { 0x0000000001000079L, "fb01000000" },
                { 0x0000000100000078L, "fbffffffff" },
                { 0x0000000100000079L, "fc0100000000" },
                { 0x0000010000000078L, "fcffffffffff" },
                { 0x0000010000000079L, "fd010000000000" },
                { 0x0001000000000078L, "fdffffffffffff" },
                { 0x0001000000000079L, "fe01000000000000" },
                { 0x0100000000000078L, "feffffffffffffff" },
                { 0x0100000000000079L, "ff0100000000000000" },
                { 0x7fffffffffffff79L, "ff7fffffffffffff00" },
                { 0x7fffffffffffffffL, "ff7fffffffffffff86" },
                // Other cases
                { 0xffffffff80000000L, "0480000077" },
                { 0xfffffffffffefe89L, "05feff00" },
                { 0xfffffffffffefe8aL, "05feff01" },
                { 0xfffffffffffeff86L, "05fefffd" },
                { 0xfffffffffffeff87L, "05fefffe" },
                { 0xfffffffffffeff88L, "05feffff" },
                { 0xfffffffffffeff89L, "060000" },
                { 0xfffffffffffeff8aL, "060001" },
                { 0xffffffffffff0086L, "0600fd" },
                { 0xffffffffffff0087L, "0600fe" },
                { 0xffffffffffff0088L, "0600ff" },
                { 0xffffffffffff0089L, "060100" },
                { 0xfffffffffffffe86L, "06fefd" },
                { 0xfffffffffffffe87L, "06fefe" },
                { 0xfffffffffffffe89L, "0700" },
                { 0xfffffffffffffe8aL, "0701" },
                { 0xffffffffffffff87L, "07fe" },
                { 0xffffffffffffff88L, "07ff" },
                { 0xffffffffffffff89L, "08" },
                { 0xffffffffffffffffL, "7e" },
                { 0x0000000000000000L, "7f" },
                { 0x0000000000000001L, "80" },
                { 0x0000000000000077L, "f6" },
                { 0x0000000000000078L, "f7" },
                { 0x0000000000000176L, "f8fd" },
                { 0x0000000000000177L, "f8fe" },
                { 0x0000000000000178L, "f8ff" },
                { 0x0000000000000277L, "f901fe" },
                { 0x000000000000ff77L, "f9fefe" },
                { 0x000000000000ff78L, "f9feff" },
                { 0x000000000000ff79L, "f9ff00" },
                { 0x000000000000ff7aL, "f9ff01" },
                { 0x0000000000010076L, "f9fffd" },
                { 0x0000000000010077L, "f9fffe" },
                { 0x0000000000010078L, "f9ffff" },
                { 0x0000000000010079L, "fa010000" },
                { 0x000000000001007aL, "fa010001" },
                { 0x0000000000010176L, "fa0100fd" },
                { 0x0000000000010177L, "fa0100fe" },
                { 0x000000007fffffffL, "fb7fffff86" },
                { 0x7fffffffffffffffL, "ff7fffffffffffff86" },
        @DataProvider(name = "lengths")
        public Object[][] genLengths() {
            return new Object[][] {
                // Check cutoff values
                {   PackedLong.CUTOFF_VALUES[ 0] - 1,      9   },
                {   PackedLong.CUTOFF_VALUES[ 0],          8   },
                {   PackedLong.CUTOFF_VALUES[ 1] - 1,      8   },
                {   PackedLong.CUTOFF_VALUES[ 1],          7   },
                {   PackedLong.CUTOFF_VALUES[ 2] - 1,      7   },
                {   PackedLong.CUTOFF_VALUES[ 2],          6   },
                {   PackedLong.CUTOFF_VALUES[ 3] - 1,      6   },
                {   PackedLong.CUTOFF_VALUES[ 3],          5   },
                {   PackedLong.CUTOFF_VALUES[ 4] - 1,      5   },
                {   PackedLong.CUTOFF_VALUES[ 4],          4   },
                {   PackedLong.CUTOFF_VALUES[ 5] - 1,      4   },
                {   PackedLong.CUTOFF_VALUES[ 5],          3   },
                {   PackedLong.CUTOFF_VALUES[ 6] - 1,      3   },
                {   PackedLong.CUTOFF_VALUES[ 6],          2   },
                {   PackedLong.CUTOFF_VALUES[ 7] - 1,      2   },
                {   PackedLong.CUTOFF_VALUES[ 7],          1   },
                {   PackedLong.CUTOFF_VALUES[ 8] - 1,      1   },
                {   PackedLong.CUTOFF_VALUES[ 8],          2   },
                {   PackedLong.CUTOFF_VALUES[ 9] - 1,      2   },
                {   PackedLong.CUTOFF_VALUES[ 9],          3   },
                {   PackedLong.CUTOFF_VALUES[10] - 1,      3   },
                {   PackedLong.CUTOFF_VALUES[10],          4   },
                {   PackedLong.CUTOFF_VALUES[11] - 1,      4   },
                {   PackedLong.CUTOFF_VALUES[11],          5   },
                {   PackedLong.CUTOFF_VALUES[12] - 1,      5   },
                {   PackedLong.CUTOFF_VALUES[12],          6   },
                {   PackedLong.CUTOFF_VALUES[13] - 1,      6   },
                {   PackedLong.CUTOFF_VALUES[13],          7   },
                {   PackedLong.CUTOFF_VALUES[14] - 1,      7   },
                {   PackedLong.CUTOFF_VALUES[14],          8   },
                {   PackedLong.CUTOFF_VALUES[15] - 1,      8   },
                {   PackedLong.CUTOFF_VALUES[15],          9   },
                // Check some other values
                { Long.MIN_VALUE,                               9   },
                { Long.MAX_VALUE,                               9   },
                { (long)Integer.MIN_VALUE,                      5   },
                { (long)Integer.MAX_VALUE,                      5   },
                { (long)Short.MIN_VALUE,                        3   },
                { (long)Short.MAX_VALUE,                        3   },
                { (long)Byte.MIN_VALUE,                         2   },
                { (long)Byte.MAX_VALUE,                         2   },
                { 0,                                            1   },
    }
    Edited by: archie172 on Mar 3, 2010 12:40 PM
    Removed copyright message.

  • @@IDENTITY

    This is the code I used to use with Dreamweaver to return the record ID when the record is created.
    Set commInsert = Server.CreateObject("ADODB.Connection")
    commInsert.Open MM_editConnection
    commInsert.Execute(MM_editQuery)
    Set rsNewID = commInsert.Execute("SELECT @@IDENTITY")
    Session("newAdded") = rsNewID(0)
    rsNewID.Close
    Set rsNewID = Nothing
    This doesn't work with the way Dreamweaver connects to databases now. How would I use "SELECT @@IDENTITY" or "SELECT SCOPE_IDENTITY()" to return the record ID? Below is an example of how Dreamweaver creates db connections now.
    Dim MM_editCmd
    Set MM_editCmd = Server.CreateObject ("ADODB.Command")
    MM_editCmd.ActiveConnection = MM_banList_STRING
    MM_editCmd.CommandText = "INSERT INTO banList (lastName, firstName, midInit, AKA, gender, race, address, city, [state], zip, ssNumber, dob, dateAdded, reason, ourBar, policeBar, addedBy) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"
    MM_editCmd.Prepared = true
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param1", 202, 1, 20, Request.Form("lastName")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param2", 202, 1, 20, Request.Form("firstName")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param3", 202, 1, 1, Request.Form("midInit")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param4", 202, 1, 20, Request.Form("AKA")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param5", 202, 1, 1, Request.Form("gender")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param6", 202, 1, 10, Request.Form("race")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param7", 202, 1, 30, Request.Form("address")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param8", 202, 1, 15, Request.Form("city")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param9", 202, 1, 2, Request.Form("state")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param10", 202, 1, 5, Request.Form("zip")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param11", 202, 1, 11, Request.Form("ssNumber")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param12", 202, 1, 10, Request.Form("dob")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param13", 202, 1, 11, Request.Form("dateAdded")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param14", 203, 1, 1073741823, Request.Form("reason")) ' adLongVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param15", 202, 1, 1, MM_IIF(Request.Form("ourBar"), "Y", "N")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param16", 202, 1, 1, MM_IIF(Request.Form("policeBar"), "Y", "N")) ' adVarWChar
    MM_editCmd.Parameters.Append MM_editCmd.CreateParameter("param17", 202, 1, 25, Request.Form("addedBy")) ' adVarWChar
    MM_editCmd.Execute
    MM_editCmd.ActiveConnection.Close

    Try:
    Set rsNewID = commInsert.Execute("set nocount on;" & MM_editQuery & ";SELECT @@IDENTITY")
    Jules
    http://www.charon.co.uk/products.aspx
    Charon Cart
    Ecommerce for ASP/ASP.NET
    "jdrinkert" <[email protected]> wrote in
    message
    news:[email protected]...

  • Re: Encode THAI Language into BASE64 Format

    Hi Gurus,
    We have a requirement to encode particular IDoc text fields, which are in the Thai language, to Base64 format. We are able to encode them for English but not for Thai, so can anyone help with the coding to create a custom UDF which can convert them?
    Regards
    Sathish Kumar K.C

    Hi ,
       Please find below my code...
    public class Base64a {

        // Mapping table from 6-bit nibbles to Base64 characters.
        private static char[] map1 = new char[64];
        static {
            int i = 0;
            for (char c = 'A'; c <= 'Z'; c++) map1[i++] = c;
            for (char c = 'a'; c <= 'z'; c++) map1[i++] = c;
            for (char c = '0'; c <= '9'; c++) map1[i++] = c;
            map1[i++] = '+'; map1[i++] = '/'; }

        // Mapping table from Base64 characters to 6-bit nibbles.
        private static byte[] map2 = new byte[128];
        static {
            for (int i = 0; i < map2.length; i++) map2[i] = -1;
            for (int i = 0; i < 64; i++) map2[map1[i]] = (byte) i; }

        /**
         * Encodes a string into Base64 format.
         * No blanks or line breaks are inserted.
         * @param s  a String to be encoded.
         * @return   A String with the Base64 encoded data.
         */
        public static String encode (String s) {
            return new String(encode(s.getBytes())); }

        /**
         * Encodes a byte array into Base64 format.
         * No blanks or line breaks are inserted.
         * @param in  an array containing the data bytes to be encoded.
         * @return    A character array with the Base64 encoded data.
         */
        public static char[] encode (byte[] in) {
            int iLen = in.length;
            int oDataLen = (iLen * 4 + 2) / 3;   // output length without padding
            int oLen = ((iLen + 2) / 3) * 4;     // output length including padding
            char[] out = new char[oLen];
            int ip = 0;
            int op = 0;
            while (ip < iLen) {
                int i0 = in[ip++] & 0xff;
                int i1 = ip < iLen ? in[ip++] & 0xff : 0;
                int i2 = ip < iLen ? in[ip++] & 0xff : 0;
                int o0 = i0 >>> 2;
                int o1 = ((i0 &   3) << 4) | (i1 >>> 4);
                int o2 = ((i1 & 0xf) << 2) | (i2 >>> 6);
                int o3 = i2 & 0x3F;
                out[op++] = map1[o0];
                out[op++] = map1[o1];
                out[op] = op < oDataLen ? map1[o2] : '='; op++;
                out[op] = op < oDataLen ? map1[o3] : '='; op++; }
            return out; }
    }
    Regards
    Sathish Kumar
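
    One common reason Base64 appears to work for English but not for Thai is the encode(String) overload above: s.getBytes() uses the JVM's default charset, which often cannot represent Thai characters, so the bytes are already corrupted before they are Base64-encoded. Below is a minimal sketch, assuming the Thai text arrives as a normal Java String, that UTF-8 is an acceptable byte encoding, and that the helper sits next to the Base64a class above (the class and method names here are illustrative, not from the original post):

    import java.io.UnsupportedEncodingException;

    public class ThaiBase64Helper {
        /**
         * Hypothetical helper: converts the string to UTF-8 bytes explicitly
         * before Base64-encoding, so Thai characters survive the conversion.
         * Base64a is the class posted above.
         */
        public static String encodeThai(String s) {
            try {
                // Explicit charset instead of the platform default used by getBytes().
                byte[] utf8 = s.getBytes("UTF-8");
                return new String(Base64a.encode(utf8));
            } catch (UnsupportedEncodingException e) {
                // UTF-8 is always available, so this should not happen.
                throw new RuntimeException(e);
            }
        }
    }

    On the receiving side the same charset has to be used again, i.e. decode the Base64 data to bytes and rebuild the text with new String(bytes, "UTF-8").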

  • Image Encoding

    Hi
    I have an XML file and I read my data from it, but my question is:
    how can I encode an image and then write it into this file? I want my application to read the encoded image from the XML file and then store it in the Record Store.
    Any help will be greatly appreciated :(
    Thanks in advance

    Your code didn't work for me, but many thanks for your help.
    I searched for other code and found the following, which works fine :)
    /* Copyright (c) 2002,2003, Stefan Haustein, Oberhausen, Rhld., Germany */
    package Files;

    import java.io.*;

    public class Base64 {

        static final char[] charTab =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
                .toCharArray();

        public static String encode(byte[] data) {
            return encode(data, 0, data.length, null).toString();
        }

        /** Encodes the part of the given byte array denoted by start and
            len to the Base64 format.  The encoded data is appended to the
            given StringBuffer. If no StringBuffer is given, a new one is
            created automatically. The StringBuffer is the return value of
            this method. */
        public static StringBuffer encode(
            byte[] data,
            int start,
            int len,
            StringBuffer buf) {

            if (buf == null)
                buf = new StringBuffer(data.length * 3 / 2);

            int end = len - 3;
            int i = start;
            int n = 0;

            while (i <= end) {
                int d =
                    ((((int) data[i]) & 0x0ff) << 16)
                        | ((((int) data[i + 1]) & 0x0ff) << 8)
                        | (((int) data[i + 2]) & 0x0ff);

                buf.append(charTab[(d >> 18) & 63]);
                buf.append(charTab[(d >> 12) & 63]);
                buf.append(charTab[(d >> 6) & 63]);
                buf.append(charTab[d & 63]);

                i += 3;

                // Insert a line break after every 15 output groups.
                if (n++ >= 14) {
                    n = 0;
                    buf.append("\r\n");
                }
            }

            // Handle the one or two remaining bytes and the '=' padding.
            if (i == start + len - 2) {
                int d =
                    ((((int) data[i]) & 0x0ff) << 16)
                        | ((((int) data[i + 1]) & 255) << 8);

                buf.append(charTab[(d >> 18) & 63]);
                buf.append(charTab[(d >> 12) & 63]);
                buf.append(charTab[(d >> 6) & 63]);
                buf.append("=");
            }
            else if (i == start + len - 1) {
                int d = (((int) data[i]) & 0x0ff) << 16;

                buf.append(charTab[(d >> 18) & 63]);
                buf.append(charTab[(d >> 12) & 63]);
                buf.append("==");
            }

            return buf;
        }

        static int decode(char c) {
            if (c >= 'A' && c <= 'Z')
                return ((int) c) - 65;
            else if (c >= 'a' && c <= 'z')
                return ((int) c) - 97 + 26;
            else if (c >= '0' && c <= '9')
                return ((int) c) - 48 + 26 + 26;
            else
                switch (c) {
                    case '+' :
                        return 62;
                    case '/' :
                        return 63;
                    case '=' :
                        return 0;
                    default :
                        throw new RuntimeException("unexpected code: " + c);
                }
        }

        /** Decodes the given Base64 encoded String to a new byte array.
            The byte array holding the decoded data is returned. */
        public static byte[] decode(String s) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try {
                decode(s, bos);
            }
            catch (IOException e) {
                throw new RuntimeException();
            }
            return bos.toByteArray();
        }

        public static void decode(String s, OutputStream os)
            throws IOException {
            int i = 0;
            int len = s.length();

            while (true) {
                // Skip whitespace (including the inserted line breaks).
                while (i < len && s.charAt(i) <= ' ')
                    i++;

                if (i == len)
                    break;

                int tri =
                    (decode(s.charAt(i)) << 18)
                        + (decode(s.charAt(i + 1)) << 12)
                        + (decode(s.charAt(i + 2)) << 6)
                        + (decode(s.charAt(i + 3)));

                os.write((tri >> 16) & 255);
                if (s.charAt(i + 2) == '=')
                    break;
                os.write((tri >> 8) & 255);
                if (s.charAt(i + 3) == '=')
                    break;
                os.write(tri & 255);

                i += 4;
            }
        }
    }
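
    As a usage sketch for the question above (not part of the original post): read the raw image bytes, Base64-encode them with the class just shown so the result can be written into the XML file as text, and decode them back to a byte[] before storing in the Record Store. The resource name, chunk size, and record-store handling here are illustrative assumptions, and the sketch is assumed to live alongside the posted Base64 class.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ImageBase64Sketch {

        /** Reads an image bundled with the MIDlet and returns it as a Base64 string. */
        public static String imageToBase64(String resourceName) throws IOException {
            InputStream in = ImageBase64Sketch.class.getResourceAsStream(resourceName);
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] chunk = new byte[512];
            int read;
            while ((read = in.read(chunk)) != -1) {
                bos.write(chunk, 0, read);
            }
            in.close();
            // Base64 is the class posted above; the returned string can go into the XML file.
            return Base64.encode(bos.toByteArray());
        }

        /** Turns the Base64 string read back from the XML file into raw image bytes. */
        public static byte[] base64ToImageBytes(String encoded) {
            // These bytes can be written to a RecordStore record and later passed
            // to Image.createImage(data, 0, data.length).
            return Base64.decode(encoded);
        }
    }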

  • Encoding program help

    Hello, I am trying to make a program to encode or decode an array. I have gotten it to work, but only by writing many if statements, and I don't want it to be that ugly. I am fairly new to programming and am having trouble with the method below.
    This is the method I am having trouble with. It is supposed to find thisChar in the encoder array and then convert it to the character two positions further down the array. I know it is not right, and I am getting an out-of-bounds error. If someone could help me solve this I would appreciate it.
          char charMapEnco(char thisChar)
          {
            for (int index = 0; index < encoder.length; index++)
            {
               if (thisChar == encoder[index])
                   thisChar = encoder[index + 2];
            }
            return thisChar;
          }
            Here is the whole code
    public class EncodeDecode
    {
        //Input list of numbers/letters
        private String[] originalList;
        //Resulting list of encoded numbers/letters
        private String[] encodeList;
        //Resulting list of decoded numbers/letters
        private String[] decodeList;

        private char[] encoder = {'A','B','C','D','E','F','G','H',
            'I','J','K','L','M','N','O','P','Q','R','S','T','U','V',
            'W','X','Y','Z','a','b','c','d','e','f','g','h',
            'i','j','k','l','m','n','o','p','q','r','s','t','u','v',
            'w','x','y','z','0','1','2','3','4','5','6','7','8','9'};

        public EncodeDecode(String[] origList)
        {
            //Assign parameter to instance variable
            originalList = origList;
            //Create another string array of the same size
            encodeList = new String[originalList.length];
            //Do the mapping
            for (int index = 0; index < originalList.length; index++)
                encodeList[index] = encodeConverter(originalList[index]);
            //Create another string array of the same size
            decodeList = new String[encodeList.length];
            //Do the mapping
            for (int index = 0; index < encodeList.length; index++)
                decodeList[index] = decodeConverter(encodeList[index]);
        }

        String encodeConverter(String origMix)
        {
            //Start with empty result string
            String result = "";
            //for each character in the origMix
            for (int index = 0; index < origMix.length(); index++)
                //map it to correct number/letter and concat it to the result
                result = result + charMapEnco(origMix.charAt(index));
            //return result string
            return result;
        }

        String decodeConverter(String encoMix)
        {
            //Start with empty result string
            String result = "";
            //for each character in the encoMix
            for (int index = 0; index < encoMix.length(); index++)
                //map it to correct number/letter and concat it to the result
                result = result + charMapDeco(encoMix.charAt(index));
            //return result string
            return result;
        }

        /*
         * This maps one character to one number/letter using the standard
         * mapping. Any chars other than numbers, uppercase or lowercase
         * letters, will be flagged as errors and left unchanged
         */
        char charMapEnco(char thisChar)
        {
            for (int index = 0; index < encoder.length; index++)
            {
                if (thisChar == encoder[index])
                    thisChar = encoder[index + 2];
            }
            return thisChar;
        }

        /*
         * This maps one character to one number/letter using the standard
         * mapping. Any chars other than numbers, uppercase or lowercase
         * letters, will be flagged as errors and left unchanged
         */
        char charMapDeco(char thisChar)
        {
            //TODO: reverse mapping (shift back two positions)
            return thisChar;
        }

        //Get all three lists:
        String[] getEncodeList()
        {
            return encodeList;
        }

        String[] getDecodeList()
        {
            return decodeList;
        }

        String[] getOriginalList()
        {
            return originalList;
        }
    }
    Edited by: ebuffh51 on Feb 5, 2008 8:27 PM
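
    A minimal sketch of one way to avoid the out-of-bounds access in charMapEnco above, assuming the intended behaviour is that the last two table entries wrap around to the start of the array. Returning as soon as a match is found also prevents the already-shifted character from being matched a second time later in the same loop:

          char charMapEnco(char thisChar)
          {
              for (int index = 0; index < encoder.length; index++)
              {
                  if (thisChar == encoder[index])
                  {
                      // Wrap with modulo so index + 2 never runs past encoder.length.
                      return encoder[(index + 2) % encoder.length];
                  }
              }
              // Characters not in the table are returned unchanged.
              return thisChar;
          }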

    Here's some code to play with:

    public class Test {

        private static char[] decoded = {'A','B','C','D','E','F','G','H',
            'I','J','K','L','M','N','O','P','Q','R','S','T','U','V',
            'W','X','Y','Z','a','b','c','d','e','f','g','h',
            'i','j','k','l','m','n','o','p','q','r','s','t','u','v',
            'w','x','y','z','0','1','2','3','4','5','6','7','8','9'};

        private char[] encoded;

        public Test(int offset) {
            // Build a rotated copy of the table: each character maps to the
            // character 'offset' positions further on, wrapping at the end.
            this.encoded = new char[decoded.length];
            System.arraycopy(Test.decoded, 0, this.encoded, offset, this.encoded.length - offset);
            System.arraycopy(Test.decoded, Test.decoded.length - offset, this.encoded, 0, offset);
        }

        public String encode(String toencode) {
            StringBuffer encoded = new StringBuffer();
            for (int i = 0; i < toencode.length(); i++) {
                encoded.append(encodeChar(toencode.charAt(i)));
            }
            return encoded.toString();
        }

        public String decode(String todecode) {
            StringBuffer decoded = new StringBuffer();
            return decoded.toString();
        }

        private char encodeChar(char toencode) {
            for (int i = 0; i < Test.decoded.length; i++) {
                if (toencode == Test.decoded[i]) {
                    return this.encoded[i];
                }
            }
            return toencode;
        }

        /* private char decodeChar(char todecode) { } */

        public static void main(String args[]) {
            Test t = new Test(7);
            String sample = "ABCDEFGHIJKLMNOPQRSTUVWXYZ!?abcdefghijklmnopqrstuvwxyz123456789";
            System.out.println("Sample before  " + sample);
            sample = t.encode(sample);
            System.out.println("Sample encoded " + sample);
            sample = t.decode(sample);
            System.out.println("Sample decoded " + sample);
        }
    }

Maybe you are looking for