New character encoding

I've invented a character encoding.
Its advantages: ANSI ASCII compatibility, a bitwise-operation-based design, self-synchronisation, and an abundance of codes.
The yields of this encoding, compared with those of UTF-8, are:
Number of bits | This encoding: codes | Cumulative | UTF-8: codes | Cumulative
             8 |                  128 |        128 |          128 |        128
            10 |                  192 |        320 |            0 |        128
            12 |                  512 |        832 |            0 |        128
            14 |                 1280 |       2112 |            0 |        128
            16 |                 3072 |       5184 |         2048 |       2176
            18 |                 7168 |      12352 |            0 |       2176
            20 |                16384 |      28736 |            0 |       2176
            22 |                36864 |      65600 |            0 |       2176
            24 |                81920 |     147520 |        65536 |      67712
            26 |               180224 |     327744 |            0 |      67712
            28 |               393216 |     720960 |            0 |      67712
            30 |               851968 |    1572928 |            0 |      67712
            32 |              1835008 |    3407936 |      2097152 |    2164864
This is a new character encoding scheme (CES) that maps Unicode code points to bit sequences.
Could you please suggest improvements?
Please bear with me if the table does not render well for you. In each row, the first value is the number of bits; the next pair (number of codes, cumulative total) is for this encoding, and the final pair is for UTF-8.
Sorry, I wish I could provide more details on this, but I'm restricted for some time. I hope that does not stop you from assisting me.
Regards
Anbu
This encoding maintains almost all the properties of UTF-8 in a more compact format. ANSI ASCII compatibility, a bitwise-operation-based design, self-synchronisation and abundance of codes are some of its properties. Further, this encoding encodes all characters in far fewer bits than UTF-8, as shown in the table. As I mentioned earlier, I will provide more details and proofs soon. This post is a request for suggestions. Please suggest the most suitable place for this post.
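For reference, the UTF-8 column of the table matches the raw payload capacity of each UTF-8 sequence length (2^7, 2^11, 2^16 and 2^21 codes; it does not subtract code points Unicode excludes, such as surrogates). A small Java sketch that reproduces those figures (the class name is mine):

public class Utf8Capacity {
    public static void main(String[] args) {
        long cumulative = 0;
        for (int len = 1; len <= 4; len++) {
            // a 1-byte sequence (0xxxxxxx) carries 7 payload bits;
            // an n-byte sequence carries (7 - n) bits in the lead byte
            // plus 6 bits in each of the (n - 1) continuation bytes
            int payloadBits = (len == 1) ? 7 : (7 - len) + 6 * (len - 1);
            long codes = 1L << payloadBits;
            cumulative += codes;
            System.out.println(8 * len + " bits: " + codes + " codes, "
                    + cumulative + " cumulative");
        }
    }
}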

Anbu, which Adobe software or service is your inquiry in relation to?

Similar Messages

  • Adding new character encoding to PBP

    Is there any way to add a new character encoding in PBP 1.1? I want it to support Japanese encoding.

    1) Derive MyDateFormat from SimpleDateFormat, only allowing the default constructor.
    2) Override the public void applyPattern(String pattern) method so that it detects the 'Q', replaces it with some easily identifiable pattern involving the month (say "MM"), and then calls the superclass applyPattern method.
    3) Override the public StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition fieldPosition) method so that it first calls the superclass method to get the formatted output and then corrects this output by replacing (using regular expressions) the "01", "02" etc. with the appropriate quarter.
    You might do better not to derive a new class from SimpleDateFormat at all but just to create a class which uses SimpleDateFormat, as in the sketch below.
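    A minimal sketch of that last suggestion, wrapping SimpleDateFormat instead of subclassing it (QuarterDateFormat is a hypothetical name, and the pattern handling is simplified to a single literal 'Q' placeholder):

    import java.text.SimpleDateFormat;
    import java.util.Calendar;
    import java.util.Date;

    public class QuarterDateFormat {
        private final SimpleDateFormat delegate;

        public QuarterDateFormat(String pattern) {
            // 'Q' is not a SimpleDateFormat pattern letter, so quote it
            // and let the delegate emit it as literal text
            this.delegate = new SimpleDateFormat(pattern.replace("Q", "'Q'"));
        }

        public String format(Date date) {
            Calendar cal = Calendar.getInstance();
            cal.setTime(date);
            int quarter = cal.get(Calendar.MONTH) / 3 + 1; // months 0-11 -> quarters 1-4
            // simplification: assumes 'Q' appears nowhere else in the output
            return delegate.format(date).replace("Q", Integer.toString(quarter));
        }

        public static void main(String[] args) {
            // prints something like "2012, quarter 3"
            System.out.println(new QuarterDateFormat("yyyy, 'quarter' Q").format(new Date()));
        }
    }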

  • How to add a new character set encoding?

    Hello,
    Can anybody please explain to me how to add a new character set encoding to Mac OS Tiger?
    I have two Mac laptops, a new one with Snow Leopard and an older one with Tiger, and on the old one I cannot use or enable the "Russian (DOS)" character set encoding anywhere, which I need in order to read some old text files.
    On Snow Leopard, this encoding is present in the list of available encodings in TextWrangler, but not in Tiger.
    If I have understood correctly, this is not a problem with TextWrangler, and the same encodings are available system-wide.
    So, the question is: how do I add new encodings to Tiger (or to Mac OS in general)?
    Thanks.

    I think possibly that's in the Get Info window of Finder?
    I don't think either that or the input menu has any effect on available encoding choices. Adding languages under System Preferences > International > Languages can do that, but once you have added Russian there, I don't know of any way to add an additional Russian encoding (there are quite a number of them).

  • XML parser not detecting character encoding

    Hi,
    I am using JDeveloper 9.0.5 preview, and the same problem is happening in our production AS 9.0.2 release.
    The character encoding of an XML document is not correctly detected by the Oracle v2 parser, even though the XML declaration correctly contains
    <?xml version="1.0" encoding="ISO-8859-1" ?>
    Instead it treats the document as UTF-8 encoded, which is fine until a document comes along with an extended character, which then causes a
    java.io.UTFDataFormatException: Invalid UTF8 encoding.
    at oracle.xml.parser.v2.XMLUTF8Reader.checkUTF8Byte(XMLUTF8Reader.java:160)
    at oracle.xml.parser.v2.XMLUTF8Reader.readUTF8Char(XMLUTF8Reader.java:187)
    at oracle.xml.parser.v2.XMLUTF8Reader.fillBuffer(XMLUTF8Reader.java:120)
    at oracle.xml.parser.v2.XMLByteReader.saveBuffer(XMLByteReader.java:448)
    at oracle.xml.parser.v2.XMLReader.fillBuffer(XMLReader.java:2023)
    at oracle.xml.parser.v2.XMLReader.tryRead(XMLReader.java:972)
    at oracle.xml.parser.v2.XMLReader.scanXMLDecl(XMLReader.java:2589)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:485)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:192)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:144)
    As you can see, it explicitly uses the XMLUTF8Reader to perform the read.
    I can get around this by hard-coding the XML input stream to be processed by a reader:
    XMLSource = new StreamSource(new InputStreamReader(XMLInStream, "ISO-8859-1"));
    However, the manual documents that the character encoding is picked up automatically from the XML file and that wrapping the stream in a reader is not necessary, so I should be able to write
    XMLSource = new StreamSource(XMLInStream);
    Does anyone else experience this same problem?
    Having to hardcode the encoding causes my software to lose flexibility.
    Jarrod Sharp.

    An XML document should be created with 'ISO-8859-1' encoding to be parsed as 'ISO-8859-1' encoding.
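    If the producer cannot be fixed, one standards-level workaround (a minimal JAXP/SAX sketch, not the Oracle v2 API from the stack trace) is to force the encoding on the InputSource so the parser's byte-sniffing never runs:

    import java.io.InputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class ForcedEncodingParse {
        // parses the stream as ISO-8859-1 regardless of what the parser
        // would have auto-detected from the first bytes
        public static Document parse(InputStream xmlIn) throws Exception {
            InputSource source = new InputSource(xmlIn);
            source.setEncoding("ISO-8859-1");
            return DocumentBuilderFactory.newInstance()
                                         .newDocumentBuilder()
                                         .parse(source);
        }
    }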

  • What's the difference in character encoding between 1.4.0 and 1.4.2 on Linux

    As far as I can tell, the character encoding for Chinese in JDK 1.4.2 is no longer the same as in JDK 1.4.0.
    In JDK 1.4.0, the character encoding used the "file.encoding" system property; we often set the
    property to "gb2312".
    But in JDK 1.4.2, I find that the default character encoding no longer uses the "file.encoding" system property.
    Who knows the reason?
    Test Program:
    public class B {
        public static void main(String[] args) throws Exception {
            byte[] bytes = new byte[]{(byte) 0xD6, (byte) 0xD0, (byte) 0xCE, (byte) 0xC4};
            String s1 = new String(bytes);
            String s2 = new String(bytes, System.getProperty("file.encoding"));
            System.out.println("s1=" + s1 + " , s2=" + s2);
            System.out.println("s1.length=" + s1.length() + " , s2.length=" + s2.length());
        }
    }
    Run it four times; the results:
    [root@app15 component]# /usr/local/j2sdk1.4.0/bin/java -Dfile.encoding=ISO-8859-1 -cp . B
    s1=中文 , s2=中文
    s1.length=4 , s2.length=4
    [root@app15 component]# /usr/local/j2sdk1.4.0/bin/java -Dfile.encoding=gb2312 -cp . B
    s1=中文 , s2=中文
    s1.length=2 , s2.length=2
    [root@app15 component]# /usr/local/j2sdk1.4.2/bin/java -Dfile.encoding=ISO-8859-1 -cp . B
    s1=中文 , s2=中文
    s1.length=4 , s2.length=4
    [root@app15 component]# /usr/local/j2sdk1.4.2/bin/java -Dfile.encoding=gb2312 -cp . B
    s1=中文 , s2=??
    s1.length=4 , s2.length=2
    [root@app15 component]#

    I don't know for sure, but:
    -- The API documentation for String says that "new String(byte[])" uses "the platform's default charset".
    -- The API documentation for Charset says "The default charset is determined during virtual-machine startup and typically depends upon the locale and charset being used by the underlying operating system."
    You'll notice that it doesn't say anything about using the file.encoding system value, so presumably (based on your experiments) it doesn't. I did a search for "java default charset" and didn't find anything specific, but this site says "As of Java 1.4.1, the default Charset varies from platform to platform" and suggests you explicitly hard-code your charset. I would agree with that.
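    To make the 1.4.0 and 1.4.2 runs agree, decode the bytes with an explicit charset instead of the platform default. A minimal sketch of that fix (the byte values are the ones from the test program; the class name is mine):

    public class ExplicitCharset {
        public static void main(String[] args) throws Exception {
            byte[] bytes = {(byte) 0xD6, (byte) 0xD0, (byte) 0xCE, (byte) 0xC4};
            // same bytes, two explicit decodings: no dependence on file.encoding
            System.out.println(new String(bytes, "GB2312"));     // the two Chinese characters
            System.out.println(new String(bytes, "ISO-8859-1")); // four Latin-1 characters
        }
    }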

  • How can I tell what character encoding is sent from the browser?

    Hi,
    I am developing a servlet which is supposed to send and receive messages
    in multiple character sets. However, I read in previous postings that each
    WebLogic Server can only support one input character encoding. Is that true?
    And do you have any suggestions on how I can do what I want? For example, I
    have an HTML form for people to post comments (they may post in any character set,
    like Shift_JIS, Big5, GB, etc.). I need to know what character encoding they are
    using before I can read it correctly in the servlet and save it in the database.

    From what I understand (I haven't used it yet) 6.1 supports the 2.3
    servlet spec. That should have a method to set the encoding.
    Otherwise, I don't think you can support multiple encodings in one
    instance of WebLogic.
    From what I know browsers don't give any indication at all about what
    encoding they're using. I've read some chatter about the HTTP spec
    being changed so it's always UTF-8, but that's a Some Day(TM) kind of
    thing, so you're stuck with all the stuff out there now which doesn't do
    everything in UTF-8.
    Sorry for the bad news, but if it makes you feel any better I've felt
    your pain. Oh, and trying to process multipart/form-data (file upload)
    forms is even worse and from what I've seen the API that people talk
    about on these newsgroups assumes everything is ISO-8859-1.
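    A minimal sketch of the Servlet 2.3 route mentioned above (the servlet and parameter names are made up; it assumes the form's page was served with a charset the browser actually honoured):

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;

    public class CommentServlet extends HttpServlet {
        String readComment(HttpServletRequest request) throws Exception {
            // must run before the first getParameter() call,
            // otherwise the container has already decoded the body
            request.setCharacterEncoding("UTF-8");
            return request.getParameter("comment");
        }
    }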

  • Where is the Character Encoding option in 29.0.1?

    After the new layout change, the developer menu doesn't have the Character Encoding options anymore. Where the hell is it? It's driving me mad...

    You can also access the Character Encoding menu via the View menu (Alt+V C).
    See also:
    *Options > Content > Fonts & Colors > Advanced > Character Encoding for Legacy Content

  • Character Encoding question

    I'm helping another group out on this, so I'm pretty new to this stuff; please go easy on me if I ask anything obvious.
    We have a J2EE web application that sits on a Red Hat Linux box and is served up by OAS 10.1.3. The application reads an XML file which contains the actual content of the page and then pulls in the navigation and metadata from other sources.
    Everything works as it should, but there is one issue that has been ongoing for a while and we would like to close it off. In my content source XML file, I have encoded special characters such as é as & amp;#233; - when I view the web page all is well (I see the literal value of é), but when I do a view source, I see & #233;
    If I put & #233; into the source XML file, the page still displays é, but when I do a view source on the web page, I see the literal é value in the source, which is not what we want. What is decoding the character reference? While inserting & amp;#233; into the XML source file works, we do not want to have to encode everything that way; we would prefer to have & #233;. Is it a setting of the OS, the application server or the application itself?
    When I previewed this post, I noticed that by typing & amp;#233; as one solid word, it gets decoded and is seen as é, so I had to put a space between the & and the amp to explain myself properly.
    Any help would be appreciated!
    Thanks,
    /HH

    There are a lot of notes on MetaLink about character encoding. I wrote Note 337945.1 a while ago, which explains this in more detail. I will quote some parts relevant to your situation:
    For the core components, there are three places to set NLS_LANG:
    - in the system environment (this is obvious)
    - in the file opmn.xml
    - in the file apachectl
    A. Changing opmn.xml
    - go to $ORACLE_HOME/opmn/conf and edit the file opmn.xml
    - Search for the OC4J container your application runs in.
    - Within the <process-type.... > </process-type> section, add an entry similar to:
    (1) OracleAS 10g (10.1.2, 10.1.3):
    <environment>
         <variable id="NLS_LANG" value="ENGLISH_UNITED KINGDOM.AL32UTF8"/>
    </environment>
    B. Changing apachectl (Unix only)
    - Go to $ORACLE_HOME/Apache/Apache/bin
    - Open the file 'apachectl'
    - search for NLS_LANG
    e.g.
    NLS_LANG=${NLS_LANG=""}; export NLS_LANG
    Verify if the variable is getting the correct value; this may depend on your environment and on the version of OracleAS. If necessary, change this line. In this example, the value from the environment is taken automatically.
    There is more on this topic in the mod_plsql area but since you do not mention pulling data from the database, this may be less relevant. Otherwise you need to ensure the same NLS_LANG and character set is used in the database to avoid conversions.

  • Character Encoding in XML

    Hello All,
    I am not clear about how to solve this problem.
    We have a Java application on NT that is supposed to communicate with the same application on an MVS mainframe through XML.
    We have a character encoding for the XML commands we send for communication.
    The problem is, on MVS the parser is not understanding the US-ASCII character encoding, and so we are getting the infamous "illegal character" error.
    The mainframe's file.encoding is CP1047 and
    NT's file.encoding is us-ascii.
    Is there any character encoding that is common to these two machines, mainframe and NT?
    If it is Unicode, what is the correct notation for it?
    Or is there any way of specifying to the parsers which character encoding should be used?
    thanks,
    Sridhar

    On the mainframe end, maybe something like:
    FileInputStream fris = new FileInputStream("C:\\whatever.xml");
    InputStreamReader is = new InputStreamReader(fris, "ASCII"); // or maybe "us-ascii" / "US-ASCII"
    BufferedReader brin = new BufferedReader(is);
    Or give the input stream / buffered reader to whatever application you are using to parse the XML. The InputStreamReader should allow you to set your encoding even if the system doesn't have it as its native encoding. It depends, though, on which/whose JVM you are using; JDK 1.2 at least supports the encodings listed on this page: http://as400bks.rochester.ibm.com/pubs/html/as400/v4r4/ic2924/info/java/rzaha/javaapi/intl/encoding.doc.html
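    Another option, if you control both ends, is to sidestep the platform encodings entirely: write the XML as UTF-8 and declare it, so the same bytes parse identically under the CP1047 and us-ascii defaults. A minimal sketch (the file name and content are made up):

    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class WriteXmlUtf8 {
        public static void main(String[] args) throws Exception {
            // the writer's charset and the XML declaration must agree
            Writer out = new OutputStreamWriter(new FileOutputStream("command.xml"), "UTF-8");
            out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<command>ping</command>\n");
            out.close();
        }
    }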

  • Detecting character encoding from BLOB stream... (PLSQL)

    I'm looking for a procedure/function which can return the character encoding of a text/XML/CSV/SLK file stored in a BLOB.
    For example:
    I have 4 files in different encodings (UTF-8, UTF-8 with BOM, ISO-8859-2, Windows-1252)...
    With Java I can simply detect the character encoding with juniversalchardet (http://code.google.com/p/juniversalchardet/)...
    Thank you

    Solved...
    On my local PC I have installed Java 1.5.0_00 (because the DB runs 1.5.0_10)...
    With JDeveloper I recompiled the source code from:
    http://juniversalchardet.googlecode.com/svn/trunk/src/org/mozilla/universalchardet
    http://code.google.com/p/juniversalchardet/
    After that I made a JAR file and uploaded it with loadjava to my database:
    C:\>loadjava -grant r_inis_prod -force -schema insurance2 -verbose -thin -user username/password@ip:port:sid chardet.jar
    After that I wrote a Java procedure and a PL/SQL wrapper, example below:
    public static String verifyEncoding(BLOB p_blob) {
        if (p_blob == null) return "-1";
        try {
            InputStream is = new BufferedInputStream(p_blob.getBinaryStream());
            UniversalDetector detector = new UniversalDetector(null);
            byte[] buf = new byte[p_blob.getChunkSize()];
            int nread;
            while ((nread = is.read(buf)) > 0 && !detector.isDone()) {
                detector.handleData(buf, 0, nread);
            }
            detector.dataEnd();
            is.close();
            return detector.getDetectedCharset();
        } catch (Exception ex) {
            return "-2";
        }
    }
    As you can see, I used -2 for an exception and -1 if the input BLOB is null.
    Then I made a PL/SQL wrapper:
    function f_preveri_encoding(p_blob in blob) return varchar2 is
    language java name 'Zip.Zip.verifyEncoding(oracle.sql.BLOB) return java.lang.String';
    After that I uploaded 2 different txt files into my BLOB field (the first one encoded with UTF-8, the second one with WINDOWS-1252).
    Example of how to call it:
    declare
       l_blob blob;
       l_encoding varchar2(100);
    begin
       select vsebina into l_blob from dok_vsebina_dokumenta_blob where id = 401587359;
       l_encoding := zip_util.f_preveri_encoding(l_blob);
       if l_encoding = 'UTF-8' then
          dbms_output.put_line('file is encoded with UTF-8');
       elsif l_encoding = 'WINDOWS-1252' then
          dbms_output.put_line('file is encoded with WINDOWS-1252');
       else
          dbms_output.put_line('other enc...');
       end if;
    end;
    Now I can get the encoding from the BLOB, convert it to the database encoding and store the data in a CLOB field.
    Here is the chardet.jar file if you need this functionality:
    https://docs.google.com/open?id=0B6Z9wNTXyUEeVEk3VGh2cDRYTzg

  • (Nokia N8) Character encoding - reduced support (n...

    I'm from Poland, and in my language I use special letters like ó, ż, ź, ą, ę. When I want to write a message and use one of the above letters, my message is 90 characters shorter. On older Nokia phones I would change the text message settings (character encoding: full support => reduced support). When I change the same setting on the Nokia N8, it doesn't work! I always see the shorter message limit. Strangest of all, when I choose "conversations" I don't have any problems and everything works fine; when I choose "new message" I can't use reduced support (the option is on, but not working).
    My software version: 011.012. Software version date: 2010-09-18. Release: PR1.0. Custom version: 011.012.00.01. Custom version date: 2010-09-18. Language set: 011.012.03.01. Product code: 0599823.

    Talking about product code changes is prohibited in this forum. It is unofficial and is grounds for Nokia to refuse to repair or service your phone in any way.

  • Force same character encoding in UNIX as in Windows

    My program reads a simple text file and outputs the same to the console. The file contains special characters like àéèéàöäül etc. The goal of this exercise is to make Java recognize special characters in the same manner both on the UNIX and on the Windows system.
    I wrote the program on a Windows machine and then transferred the class file to the UNIX system and ran it there. It gives two different outputs on the two systems.
    Here is the code
    public class LocaleFile {
        public static void main(String[] args) throws IOException {
            System.out.println("Deafult Locale :: " + Locale.getDefault());
            System.out.println("Deafult Characterset :: " + Charset.defaultCharset());
            System.out.println("Resetting defailt local to de_DE ... ");
            Locale newLocale = new Locale(Locale.GERMAN.toString(), Locale.GERMANY.toString());
            Locale.setDefault(newLocale);
            System.out.println("After resetting Locale :: " + Locale.getDefault());
            BufferedReader bufferedReader = new BufferedReader(
                    new InputStreamReader(new FileInputStream("helloworld.txt"), "ISO-8859-15"));
            String line = bufferedReader.readLine();
            do {
                System.out.println(line);
                line = bufferedReader.readLine();
            } while (line != null);
            bufferedReader.close();
        }
    }
    Windows Output
    Deafult Locale :: de_CH
    Deafult Characterset :: ISO-8859-1
    Resetting defailt local to de_DE ...
    After resetting Locale :: de_DE_DE
    Character set 1: Arijit
    Character set 2: è!éà£
    Character set 3: ü?öä$
    Character set 4: []{}
    End of Test
    UNIX Output
    bash-3.00$ java LocaleFile
    Deafult Locale :: en
    Deafult Characterset :: US-ASCII
    Resetting defailt local to de_DE ...
    After resetting Locale :: de_DE_DE
    Character set 1: Arijit
    Character set 2: ?!???
    Character set 3: ????$
    Character set 4: []{}
    End of Test
    bash-3.00$
    Structure of helloworld.txt:
    Character set 1: Arijit
    Character set 2: è!éà£
    Character set 3: ü?öä$
    Character set 4: []{}
    End of Test
    Hope to hear from you soon guys.
    Thanks
    Arijit

    gimbal2 wrote:
    Just a thought, but I'd say that it is the character encoding of the command/bash prompt that is misleading you here. It probably actually has nothing to do with the code or the data, just the way the OS is displaying it.
    edit: ninja'd by Baftos. :)
    @gimbal2
    Well, thank you. That did raise a doubt. I changed the program slightly to write to a file, say "recorded.txt", instead of printing it to the console.
    Code
    BufferedReader bufferedReader = new BufferedReader(
            new InputStreamReader(new FileInputStream("helloworld.txt"), "ISO-8859-15"));
    BufferedWriter bufferedWriter = new BufferedWriter(
            new OutputStreamWriter(new FileOutputStream("recorded.txt")));
    String line = bufferedReader.readLine();
    do {
        bufferedWriter.write(line);
        System.out.println(line);
        line = bufferedReader.readLine();
    } while (line != null);
    bufferedReader.close();
    bufferedWriter.close();
    However, the output remained the same.

  • Setting character encoding in a Writer

    Hi,
    Is this possible?
    I'm reading from an InputStream (a stream from a text file) using an InputStreamReader wrapped in a BufferedReader. I set the character encoding in the InputStreamReader.
    Then I read line by line, making some modifications to the text, and write (append)
    each line into a new file (using a FileWriter wrapped in a BufferedWriter).
    I can't seem to set the character encoding in any of the Writers.
    Firstly, do I need to specify the encoding to 'keep' the correct encoding?
    And if so, what writer should I use?

    Yes you need to specify the encoding. Use an OutputStreamWriter in much the same way as you used the InputStreamReader.
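    A minimal sketch of that advice applied to a read-modify-write loop like the one described in the question (the file names are made up; OutputStreamWriter takes the charset, which FileWriter never does):

    import java.io.*;

    public class CopyWithEncoding {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("helloworld.txt"), "ISO-8859-15"));
            // the writer is given the same explicit charset as the reader
            BufferedWriter out = new BufferedWriter(
                    new OutputStreamWriter(new FileOutputStream("recorded.txt"), "ISO-8859-15"));
            for (String line = in.readLine(); line != null; line = in.readLine()) {
                out.write(line);
                out.newLine();
            }
            in.close();
            out.close();
        }
    }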

  • Change character encoding?

    Edit > Preferences > New Document > Character
    encoding is set to UTF-8
    However, when I edit documents with non-standard extensions
    (I'm working on PHP files with a .ctp extension), the document still
    seems to save in ISO-8859-1 format. This problem doesn't seem to
    occur with files that have .php/.html extensions.
    Does anyone know of a solution to this problem?
    Thanks,
    Emil

    I'm not sure where you are getting %xx-encoded UTF-8... Is it because you have it in a GET-method form and that's what you are seeing in the browser's location bar? ...
    Let's assume you have a form on a page, the page's charset is set to UTF-8, and you want to generate a URL-encoded string (%xx format, although URLEncoder will not encode ASCII chars that way...).
    In the page processing the form, you need to do this:
    request.setCharacterEncoding("UTF-8"); // makes bytes read as UTF-8 strings (assumes the form page was properly set to the UTF-8 charset)
    String fieldValue = request.getParameter("fieldName"); // get value
    // the value is now a Unicode String in Java, generated by reading the bytes submitted from the form as UTF-8 encoded text
    String utf8EncString = URLEncoder.encode(fieldValue, "UTF-8");
    // now utf8EncString is a URL-encoded (%xx) string of UTF-8 values
    String euckrEncString = URLEncoder.encode(fieldValue, "EUC-KR");
    // now euckrEncString is a URL-encoded (%xx) string of EUC-KR values
    What is probably screwing things up for you mostly is this:
    euckrValue = new String(utf8Value.getBytes(), "EUC-KR");
    What this does is take the bytes of the string utf8Value (which is not really UTF-8... see below) in the local encoding (possibly Cp1252 (Windows) or ISO-8859-1 (Linux), or EUC-KR if it's Korean Windows), and then read them as if they were EUC-KR... which they aren't.
    The key here is that Strings in Java are not of any encoding. They are pure Unicode values. Encodings only matter when converting to or from bytes. The strings stored in a file or sent over the net have to be converted to bytes, since that's what is stored/sent: just bytes. The encoding defines how the characters are encoded into one or more bytes, and thus reconstructed.
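    A short sketch of that last point: a String round-trips losslessly through the charset it was encoded with, while re-reading its bytes in another charset just corrupts it (the Korean sample text and class name are made up):

    public class RoundTrip {
        public static void main(String[] args) throws Exception {
            String original = "\uD55C\uAE00";              // a pure Unicode String ("한글")
            byte[] utf8 = original.getBytes("UTF-8");      // encoding happens here, not before
            System.out.println(original.equals(new String(utf8, "UTF-8"))); // true: same charset
            System.out.println(new String(utf8, "EUC-KR"));                 // mojibake: wrong charset
        }
    }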

  • Problem using character encoding....

    Hi,
    I have written an app which constructs an email and sends it to subscribers. The data is read from the database (Oracle 9) and the app runs on UNIX.
    The subject title (read from the database), however, does not appear correctly in the email. I think the encoding character set is incorrect.
    When my subject title (a string) is:
    test � $
    I get something like this:
    test =?ANSI_X3.4-1968?Q?
    I checked, and the data is stored correctly in the database.
    So I changed my method which reads the subject title from the database to:
    for (int i = 0; i < o.length; i++) {
        strArray[i] = new String(((String) o[i]).getBytes(), "ISO-8859-1");
    }
    which basically re-decodes the text's bytes as ISO-8859-1. But now in the subject title I get
    test $��
    (the � shouldn't be there)
    Any ideas? Which character set should I use?

    You should use the character set that the database uses. Looks like it could be UTF-8.
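    If the mail is built with JavaMail, the cleanest fix is usually to hand the charset to setSubject rather than re-decoding bytes by hand; a sketch (assumes JavaMail's MimeMessage, with UTF-8 standing in for whatever the database charset turns out to be):

    import javax.mail.Session;
    import javax.mail.internet.MimeMessage;

    public class SubjectEncoding {
        public static MimeMessage withSubject(Session session, String subject) throws Exception {
            MimeMessage message = new MimeMessage(session);
            // produces an RFC 2047 encoded word, e.g. =?UTF-8?Q?...?=,
            // instead of falling back to the platform default (ANSI_X3.4-1968)
            message.setSubject(subject, "UTF-8");
            return message;
        }
    }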
