Binary encoding

Hi, I am writing an ASCII file; how can I binary-encode it?
Thanks.

Not sure if this is what you are looking for, but this is how I usually create real ASCII files. (J2SDK 1.4)
- Chris
import java.io.*;
import java.nio.*;
import java.nio.charset.*;

public class Main {
    public static void main(String[] args) {
        CharsetEncoder asciiEncoder = Charset.forName("US-ASCII").newEncoder();
        String messageToWrite = "Hello World\r\n";
        try {
            FileOutputStream os = new FileOutputStream("c:/test.txt");
            ByteBuffer encoded = asciiEncoder.encode(CharBuffer.wrap(messageToWrite));
            // Write only the bytes the encoder actually produced,
            // not the buffer's whole backing array.
            os.write(encoded.array(), encoded.position(), encoded.remaining());
            os.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
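
On newer JDKs the same result can be had with an OutputStreamWriter, which encodes characters to bytes as it writes; a minimal sketch, assuming the same file name and message:

import java.io.*;

public class AsciiWriter {
    public static void main(String[] args) throws IOException {
        // "US-ASCII" is a charset name every JVM is required to support.
        Writer w = new OutputStreamWriter(new FileOutputStream("c:/test.txt"), "US-ASCII");
        w.write("Hello World\r\n");
        w.close();
    }
}

One behavioral difference to note: the writer silently replaces characters that can't be expressed in ASCII with '?', whereas the CharsetEncoder above throws an exception for them.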

Similar Messages

  • Base64 binary encode a BLOB

    I need to base64 binary encode a BLOB of up to 60k and output it to a text file.
    I don't want to use UTL_ENCODE.BASE64_ENCODE(r raw): breaking my BLOB into RAW-size chunks, encoding each chunk, and putting the encoded data back together doesn't work, even if I strip the '=' character(s) off the end of all but the last chunk (padding appears wherever a chunk's byte count isn't a multiple of three).
    The resultant encoded data will often exceed 32,000 characters.
    Thanks in advance.
    John

    This may require a bit of tweaking for use as a Java stored procedure...
    import java.io.*;

    public class Base64Encoder {

        /** Constructor */
        public Base64Encoder() {
        }

        public static String base64Encode(String s) {
            ByteArrayOutputStream bOut = new ByteArrayOutputStream();
            Base64OutputStream out = new Base64OutputStream(bOut);
            try {
                out.write(s.getBytes());
                out.flush();
            } catch (IOException ioe) {
                // Something to do if ioe occurs
            }
            return bOut.toString();
        }
    }

    /* BASE64 encoding encodes 3 bytes into 4 characters.
       |11111122|22223333|33444444|
       Each set of 6 bits is encoded according to the toBase64 map.
       If the number of input bytes is not a multiple of 3, then the
       last group of 4 characters is padded with one or two = signs.
       Each output line is at most 76 characters. */
    class Base64OutputStream extends FilterOutputStream {

        public Base64OutputStream(OutputStream out) {
            super(out);
        }

        public void write(int c) throws IOException {
            inbuf[i] = c;
            i++;
            if (i == 3) {
                super.write(toBase64[(inbuf[0] & 0xFC) >> 2]);
                super.write(toBase64[((inbuf[0] & 0x03) << 4) | ((inbuf[1] & 0xF0) >> 4)]);
                super.write(toBase64[((inbuf[1] & 0x0F) << 2) | ((inbuf[2] & 0xC0) >> 6)]);
                super.write(toBase64[inbuf[2] & 0x3F]);
                col += 4;
                i = 0;
                if (col >= 76) {
                    super.write('\n');
                    col = 0;
                }
            }
        }

        public void flush() throws IOException {
            if (i == 1) {
                super.write(toBase64[(inbuf[0] & 0xFC) >> 2]);
                super.write(toBase64[(inbuf[0] & 0x03) << 4]);
                super.write('=');
                super.write('=');
            } else if (i == 2) {
                super.write(toBase64[(inbuf[0] & 0xFC) >> 2]);
                super.write(toBase64[((inbuf[0] & 0x03) << 4) | ((inbuf[1] & 0xF0) >> 4)]);
                super.write(toBase64[(inbuf[1] & 0x0F) << 2]);
                super.write('=');
            }
            i = 0; // reset so a second flush does not emit padding again
        }

        private static char[] toBase64 = {
            'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',
            'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',
            'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
            'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',
            'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',
            'o', 'p', 'q', 'r', 's', 't', 'u', 'v',
            'w', 'x', 'y', 'z', '0', '1', '2', '3',
            '4', '5', '6', '7', '8', '9', '+', '/'
        };

        private int col = 0;
        private int i = 0;
        private int[] inbuf = new int[3];
    }
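
    As a quick usage check of the class above: padding is only emitted when flush() runs, and base64Encode calls it for you, so a single call does the right thing:

    String encoded = Base64Encoder.base64Encode("Hello World");
    System.out.println(encoded); // prints "SGVsbG8gV29ybGQ=" for this input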

  • Base64 binary encode a .JPG

    I need to take a .JPG that is stored in a table as a <BLOB> and base64 binary encode it.
    The reason for doing this is so that I can include it as a single string of text in an XML file.
    And I have no idea how to do it!!!
    Help me please.
    Thanks in advance,
    John

    Thank you very much for your reply, but this function takes a Raw as its argument, whereas I have a BLOB.
    I'll look into breaking the BLOB down into sections, converting it to a Raw and encoding each section one at a time, but I'd prefer to encode the BLOB all in one go, if that is possible.
    Any ideas or other pointers?
    Cheers,
    John
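
    Picking up the padding problem from the thread above: if the BLOB is read and encoded in chunks whose length is a multiple of 3 bytes, every chunk encodes without '=' padding, so the encoded pieces concatenate into one valid base64 string. A Java-side sketch of the idea using JDBC and java.util.Base64 (Java 8+); the connection URL, table, and column names are made-up placeholders:

    import java.io.*;
    import java.sql.*;
    import java.util.Arrays;
    import java.util.Base64;

    public class BlobToBase64 {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger"); // placeholder URL/credentials
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT image FROM images WHERE id = ?")) {                // placeholder table/column
                ps.setInt(1, 1);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) return;
                    try (InputStream in = rs.getBlob(1).getBinaryStream();
                         Writer out = new FileWriter("image.b64")) {
                        byte[] buf = new byte[3 * 1024]; // multiple of 3: padding only on the final chunk
                        int n;
                        while ((n = fill(in, buf)) > 0) {
                            byte[] chunk = (n == buf.length) ? buf : Arrays.copyOf(buf, n);
                            out.write(Base64.getEncoder().encodeToString(chunk));
                        }
                    }
                }
            }
        }

        // Read until the buffer is full or the stream ends, so no short chunk
        // (and hence no stray '=' padding) appears in the middle of the output.
        static int fill(InputStream in, byte[] buf) throws IOException {
            int off = 0;
            while (off < buf.length) {
                int n = in.read(buf, off, buf.length - off);
                if (n == -1) break;
                off += n;
            }
            return off;
        }
    }

    On Java 8+, Base64.getEncoder().wrap(outputStream) gives the same streaming behavior without the manual chunking.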

  • Binary to Voltage Conversion of encoder data on cRio 9073 using FPGA

    I am using FPGA with a cRio 9073 to acquire torque and absolute quadrature encoder values. It says in the FPGA instructions that the documentation for the 9073 should include the binary to voltage conversion, but when I looked at the documentation, it wasn't there. Where can I find the conversion value or function to convert binary encoder data back to voltage? The encoder is hooked up to an analog converter and is acquired with a 9215 AI (+-10V differential). Thanks

    There are individual conversion formulas for each module or group of modules.
    LabVIEW examples path:
    LabVIEW 2010\examples\CompactRIO\Basic IO\Analog Raw Host Calibration\AI Raw Host Calibration
    LabVIEW help topic:
    Converting and Calibrating CompactRIO Analog Input Values (FPGA Interface)
    Best regards
    Christian
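
    For reference, the conversion those examples implement is a linear scaling: calibrated voltage = raw binary value * LSB weight - offset, with both constants read per module as the help topic above describes. A hedged Java sketch of the arithmetic only; the constants below are illustrative placeholders, not the 9215's actual calibration values:

    public class RawToVoltage {
        // Placeholder constants: read the real LSB weight and offset for your
        // module as described in the help topic above.
        static final double LSB_WEIGHT = 20.0 / (1 << 16); // e.g. a +/-10 V span over a 16-bit code
        static final double OFFSET = 0.0;

        static double toVoltage(int raw) {
            return raw * LSB_WEIGHT - OFFSET;
        }

        public static void main(String[] args) {
            System.out.println(toVoltage(16384)); // 5.0 with these placeholder constants
        }
    }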

  • How to read pdf files using java.io package classes

    Dear All,
    I have a requirement to read and write PDF files at runtime. Normal Java file I/O does not work for reading them. Can anyone suggest how to proceed, ideally with a sample code block?
    Thanks in advance.

    hi, I also have the problem of reading a PDF file using Java. Can anybody help me?
    Why is it so difficult to read the thread you posted in? They say: java.io is pointless, use iText. So why don't you?
    or also, I want to read binary encoded data into ASCII; can anybody give me a hint how to do it?
    Depends on what you mean by "binary encoding". ASCII is a binary encoding, too, basically.
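
    Since the advice in this thread is to use iText, here is a minimal text-extraction sketch against the iText 5 API (the input file name is made up, and this only recovers text from text-based PDFs, not scanned images):

    import com.itextpdf.text.pdf.PdfReader;
    import com.itextpdf.text.pdf.parser.PdfTextExtractor;

    public class PdfDump {
        public static void main(String[] args) throws Exception {
            PdfReader reader = new PdfReader("sample.pdf"); // hypothetical input file
            for (int page = 1; page <= reader.getNumberOfPages(); page++) {
                // Extract the text content of each page in reading order
                System.out.println(PdfTextExtractor.getTextFromPage(reader, page));
            }
            reader.close();
        }
    }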

  • ICloud disaster destroyed all my Pages work on the iPad!

    I have an IPad 2 with Pages that I have used for months to write business articles and research papers for publication. I have been using iCloud to synch it through a single user account.
    Last week, I was forced to upgrade my iMac to Lion because corrupted account permissions locked me out of all my home folders as well as my two different external drives and even my Time Capsule and Time Machine --- something supposedly impossible. But it all inexplicably happened. It took over a week for assorted sympathetic Apple care specialists and engineers to try to sort it out. The original problem was still never diagnosed or solved. Instead, my hard drive was erased, OS X 10.6 reinstalled, a backup from an earlier date restored, and then Lion loaded as new.
    As per Apple care instruction, iCloud in Lion was immediately activated on the iMac -- but no document files were ever shared.
    Four days later, after synching my iPad to iCloud, when I next opened Pages I watched in horror as in a mere second more than 30 documents vanished en masse right before my eyes.
    Somehow all these files never got backed up in my iTunes even though I had made several recent saves to the hard drive as well as an iCloud back up!
    And as for the files, or even the mobile documents library folder, being saved in my recent Time Machine backups: they never were. The backups contain no iOS Pages text nor any deleted folder files to restore.
    Despite all the Time Machine and iTunes back ups, and despite all the free space on the iPad where I actually did all the work without an internet connection, as soon as I synched they were mysteriously deleted by iCloud. 
    Pages on iPad was left empty and blank. Replacing the mobile documents folder on my iMac from a backup did at least manage to give Pages some file icons with names and dates, but they are ghost documents, mere skeleton files that can't be opened. All my actual text content is now apparently nowhere; lost in Apple's iCloud ether!
    This is an iPad which for months I used for Pages work without any internet connection, 3G or WiFi, and yet in an instant iCloud on its own disintegrated everything it contained! Apple essentially reached down into my device, into my personal property that I own, and without warning, notice, or permission simply erased all my work; all my personal labor and effort that belonged to me. They essentially committed sabotage and theft.
    I have nothing saved anywhere despite being told how the iCloud was a trustworthy safety net, that Time Machine provided worry free back up, and that saving through iTunes could restore iPad content. Garbage. I did everything right and still got screwed.
    Further…
    In just the past two weeks alone I have suffered: my second iPhone -- a month-old replacement for a previous defective iPhone, itself only a month old -- experienced a "software corruption" that caused it to continuously shut down if unplugged; my iMac and Time Capsule failing; and now this catastrophic iPad disaster.
    If this is not resolved with my files returned to me, I simply will never use iCloud and I will never trust Apple again.
    I am disgusted and furious beyond words.
    JC
    Atlanta

    Hi JC,
    My iPad is in a similar state, although I arrived in the bizarro state by a different path.  I am also waiting for the apple experts to get back with me.  I have an iPad 1 with ios 5.1.1 and Pages 1.7.1.
    In the meantime, I found a way to extract the TEXT from a large Pages document I did not want to lose; it was the result of many years of notes. This might save you some time instead of sifting through the raw index.db.
    The index.db file is a sqlite3 file. If you are on a Mac you can open it from a terminal with the command:
    sqlite3 index.db
    I suggest you move the file out of the iTunes directory before you start working with it; if something unexpected happens, you don't want to lose the only copy of the data you are trying to recover.
    1) Once the file is open via 'sqlite3 index.db' from a terminal command line, type ".tables" at the 'sqlite>' prompt and press enter. It should show five tables:
    cullingState
    dataStates
    document
    objects
    relationships
    2) The table with your data is named dataStates, but it has many records in it. I found my data by looking for the "largest" record. Let me explain. There are two fields in the dataStates table:
    identifier
    state
    Your data is stored in the 'state' field, and its data type is 'blob'. (I know that sounds stupid, but stick with me.)
    3) I found the largest record by typing  this query at the 'sqlite>' prompt:
    select identifier, length(state) from dataStates order by length(state) asc;
    My 'dataStates' table had 599 records, so many lines fly by, but the largest state field is at the bottom of the list with its identifier. Mine was:
    3|173212
    That identified a dataStates record with identifier '3' and 173,212 bytes of data in the 'state' field. Most of the bytes were printable ASCII. It WAS the text of my original Pages document.
    DISCLAIMER:  This method does not retrieve ANY formatting, pagination, tables bullets or numbering, images or other document rendering information.  It is just the TEXT.
    4)  Type this SQL command at the 'sqlite>' prompt to pull out the blob data:
    select hex(state) from dataStates where identifier=X;
    In my case the identifier was 3 so MY command was:
    select hex(state) from dataStates where identifier=3;
    5) Now is when the fun begins. The large block of gibberish is 'ascii encoded binary', brought to you courtesy of the hex function.
    To store it in a file, use the sqlite3 '.output' command and retype the preceding SQL select, like the following:
    .output binascii.txt
    select hex(state) from dataStates where identifier=3;
    .output stdout
    This takes the large block of 'ascii encoded binary' and writes it to a file named 'binascii.txt' rather than displaying it in the terminal window. After the select statement ends, the output is switched back to the terminal window (stdout). The 'binascii.txt' file is located in whatever directory was current when you launched the sqlite3 command.
    6) I moved my 'binascii.txt' file to a Windows computer at this point and wrote a simple conversion routine to decode the 'ascii coded binary' into something usable with a word processor. In 'ascii coded binary', each 8-bit byte of data is encoded as two ASCII characters; it enables sending binary data via the internet, etc., similar to uuencode, but NOT the same.
    7) Here is a small python program to convert the 'binascii.txt' file to the text from your Pages document on a Mac:
    import sys

    # Read the whole hex dump; strip whitespace so int() doesn't choke on a trailing newline.
    p = sys.stdin.read().strip()
    s = []
    for location in range(0, len(p), 2):
        value = int(p[location:location+2], 16)
        # Keep printable ASCII (32-126) plus newline (10); drop everything else.
        if 32 <= value <= 126 or value == 10:
            s.append(chr(value))
    sys.stdout.write(''.join(s))
    8) (NOTE: The last line has two single quotes (') not a double quote (")). I am assuming at this point that you know how to create a text file using a terminal window.   Add the python script to a text file, make sure you honor the indentations!  I named my python file 'convert.py'.  To use the python script, type the following command:
    cat binascii.txt | python convert.py > your.txt
    This method also assumes that the files 'binascii.txt', 'convert.py' and 'your.txt' are all in the same folder/directory.
    9) The resulting file, 'your.txt', can be opened by Word or Pages (you can rename 'your.txt' to whatever you want). In my case, I opened 'your.txt' with MS Word. It had some rudimentary formatting in the form of paragraph breaks, single lines, some headings etc. I did lose the data stored in tables, although I expect the text is still lurking somewhere in the index.db file.
    There were some ascii characters that were not part of my original document, but they were pretty easy to find and remove.  In my case the errant ascii characters were outside my document's large text block.   Also numbered and bulleted blocks run together.  Again the formatting information is still buried within the index.db file.
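
    If Python isn't handy, the same filter can be written in Java; a sketch under the same assumptions (it reads the 'binascii.txt' hex dump produced above and writes 'your.txt'):

    import java.nio.file.*;

    public class HexToText {
        public static void main(String[] args) throws Exception {
            // Read the hex dump and strip any whitespace between the digit pairs.
            String hex = new String(Files.readAllBytes(Paths.get("binascii.txt"))).replaceAll("\\s", "");
            StringBuilder out = new StringBuilder();
            for (int i = 0; i + 1 < hex.length(); i += 2) {
                int value = Integer.parseInt(hex.substring(i, i + 2), 16);
                // Keep printable ASCII (32-126) plus newline (10); drop everything else.
                if ((value >= 32 && value <= 126) || value == 10) {
                    out.append((char) value);
                }
            }
            Files.write(Paths.get("your.txt"), out.toString().getBytes("US-ASCII"));
        }
    }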
    The iOS Pages file format is more cryptic than the simple zip/XML format from the desktop OS X version of Pages.  
    Hope this helps a little, although you seem to be OK with the approach of sifting through the raw index.db file. The Pages document I wanted to recover was 63 pages long, and manually sifting through all the crap in index.db drove me on a quest to find the real text within the file, which I found.
    Sorry for the length of this post,
    Cheers

  • WCF-Custom performance problem with large response messages

    We're trying to debug a performance issue in BizTalk 2013 when using a WCF-Custom adapter to call an external WCF endpoint.
    From a BizTalk orchestration we're calling an external WCF service to retrieve an XML message that sometimes contains a lot of Base64 encoded data. The response message can be up to 10 MB in size. The send port is very slow in retrieving the response; a 10 MB message can take up to 3 minutes. For comparison, we've made a console program with the same service reference and binding as the BizTalk adapter uses, and we can retrieve the same message in about 3 seconds.
    The WCF service is using binary encoding over HTTP, and we've set maxMessageSize, maxBufferSize and maxBufferPoolSize to Int32.MaxValue. We realise that with BizTalk there will be overhead because the message is put into the message box, but we're unsure how to improve the performance of the send port.
    Any suggestions?

    Hello,
    There are certain optimizations you can do with your BizTalk installation.
    1) The first thing I would do is check that the BizTalk SQL Server jobs are running correctly
    (SQL Server Mgmt Studio -> SQL Server Agent -> Jobs). Look out for the jobs with the word "CleanUp" in them.
    2) Another check you could do is to verify the entries in the Spool table
    (Database[@Name='BizTalkMsgBoxDb']/Table[@Name='Spool']). Ideally, the number of entries should not be HUGE
    (as in not over 100-200 entries).
    3) Create a separate host and host handler for your send port.
    4) Set the MaxReceiveInterval in the adm_ServiceClass table (BizTalk Management DB) to 100 ms from the default of 500.
    5) Increase the throttling setting "Internal message queue size" to 1000 from 100 in the new application host.
    6) Set the maximum number of messaging engine threads per CPU to 40 (by default it is 20).
    7) Add max connections in your BTSNTSVC.exe.config file:
    <system.net>
        <connectionManagement>
            <add address="*" maxConnections="300" />
        </connectionManagement>
    </system.net>
    For other settings you can look at the BizTalk Optimization Guide:
    http://www.microsoft.com/en-ca/download/details.aspx?id=10855
    The above settings are for generic performance enhancement.
    Note: You can verify through the Orchestration Debugger or orchestration tracing how much time the send port and pipeline are taking to publish the message back to BizTalk.
    That will give you a better picture of which further settings you need to apply.
    Thanks
    Abhishek

  • How to set username and password before redirecting to a RESTful webservice

    I am a .Net developer who has developed a webservice used by my ColdFusion colleagues. They are using ColdFusion 9 but I'm not sure if they have incorporated any of the newer features of ColdFusion in their apps. Here is a snippet of how they have been invoking the webmethods:
    <cfscript>
                         ws = CreateObject("webservice", "#qTrim.webServiceName#");
                         ws.setUsername("#qTrim.trimAcct#");
                         ws.setPassword("#qTrim.trimpwd#");
                         wsString=ws.UploadFileCF("#qTrim.webserviceurl#","#objBinaryData#", "#qFiles.Filename#", "Document", "#MetaData#");
                </cfscript>
    As I understand things, the .setUsername and .setPassword correspond to the Windows credentials the CF Admin set when the URL of the .Net webservice was "registered" and given its "name" (for the CreateObject statement above). I have 4 webmethods that are all invoked in this manner and this SOAP protocol works adequately for us. Please note that this ColdFusion web app authenticates anonymous remote internet users by prompting for a username and password and compares them to an application database (i.e. Microsoft calls this "forms authentication"). Because only a few Windows domain accounts are authorized to call this .Net webservice, the above code always uses the same username/password constants and it all works.
    My question involves the newest webmethod added to the .Net webservice. It requires that callers must invoke it as a RESTful service which means it must be invoked by its URL. Here is a snippet of C# code that invokes it from an ASP.NET webclient:
                string r = txtRecordNumber.Text;
                string baseurl = "http://localhost/sdkTrimFileServiceASMX/FileService.asmx/DownloadFileCF?";
                StringBuilder url = new StringBuilder(baseurl);
                url.Append("trimURL="); url.Append(txtFakeURLParm.Text);
                url.Append("&");
                url.Append("TrimRecordNumber="); url.Append(txtRecordNumber.Text);
                Response.Redirect(url.ToString());
    I assume a ColdFusion script could easily build a full URL as above with appended querystring parameters and redirect. Is there some way for the CF code redirecting to a RESTful webservice (by way of its URL) to set the Username and Password to this Windows account mentioned above? When the DownloadFileCF webmethod is hit it must be with the credentials of this special Windows domain account. Can that be set by ColdFusion someway to mimic the result of the SOAP technique (the first snippet above).
    I hope my question is clear and someone can help me make suggestions to my ColdFusion colleagues. Thanks.

    Can you clarify what you mean by "establish a different Windows identity"?  Usually passing identity to a web site or service means adding something to the request's HTTP headers.  This could be a cookie in the case of .NET forms authentication or the "Authorization" header in the case of basic authentication.
    The SOAP web service invocation code you posted does use basic authentication, according to the CF docs "ColdFusion inserts the user name/password string in the authorization request header as a base64 binary encoded string, with a colon separating the user name and password. This method of passing the user name/password is compatible with the HTTP basic authentication mechanism used by web servers."
    http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec13a13-7fe0.html
    If you need to mimic the SOAP technique you should have basic authentication enabled for your REST service endpoints.
    If your authentication method is different then CF developers will need to add the appropriate HTTP headers to their service calls.  Note that calling a REST service from CF would probably be accomplished using the CFHTTP tag if the service is designed to be consumed by the CF server.
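
    The quoted mechanism is plain HTTP Basic authentication, so any client can reproduce it by hand. A minimal Java sketch of building the header (the account name and password below are placeholders):

    import java.util.Base64;

    public class BasicAuthHeader {
        public static void main(String[] args) {
            String user = "DOMAIN\\svcaccount"; // placeholder Windows account
            String password = "secret";         // placeholder
            // Basic auth: base64("user:password"), colon-separated, in the Authorization header.
            String header = "Basic " + Base64.getEncoder()
                    .encodeToString((user + ":" + password).getBytes());
            System.out.println("Authorization: " + header);
        }
    }

    In ColdFusion the equivalent would be attaching that header to the CFHTTP call, e.g. with a cfhttpparam of type "header".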

  • Emailed EPS files not visible

    A friend of mine has been emailing me EPS files from the Mail program on the Mac, but I cannot see them as attachments in the email in Outlook or in my webmail. They seem to be coming across as images inline with the text rather than as attachments. Other image types are also appearing inline with the text rather than as attachments, but I can see JPEGs and then copy them to my hard drive. The images only appear as normal attachments when another file type is attached as well, like a PDF. Is there any way to make images always attach normally?

    They seem to be coming across as images in line with the text rather than as attachments.
    If this person is using Photoshop for Windows, the problem may be how they're saving the EPS files. The default encoding for EPS (why, I don't know) is ASCII. In other words, as a text description of the data. Have her change the encoding to Binary. That choice comes up when you're either initially saving the file, or do a Save As. Once selected, it will stay on Binary for all future saves.
    But that won't fix files already saved with ASCII encoding, depending on what she does. If she makes some minor corrections and just presses Command+S, the file will be saved in the same format it was opened in. To change any existing files to Binary encoding, she'd have to open each file and do a Save As, overwriting the current file.

  • [JS CS4/CS5] ScriptUI Click Event Issue in JSXBIN

    Still working on a huge ScriptUI project, I discovered a weird issue which appears to *only* affect 'binary compiled' scripts (JSXBIN export), not the original script!
    When you repeatedly use addEventListener() with the same event type, you theoretically have the possibility to attach several handlers to the same component and event, which can be really useful in a complex framework:
    // Declare a 'click' manager on myWidget (at this point of the code)
    myWidget.addEventListener('click', eventHandler1);
    // Add another 'click' manager (for the same widget)
    myWidget.addEventListener('click', eventHandler2);
    When you do this, both eventHandler1 and eventHandler2 are registered, and when the user clicks on myWidget, each handler is called back.
    The following script shows that this perfectly works in ID CS4 and CS5:
    // Create a dialog UI
    var     u,
         w = new Window('dialog'),
         p = w.add('panel'),
         g = p.add('group'),
         e1 = p.add('statictext'),
         e2 = p.add('statictext');
    // Set some widget properties
    e1.characters = e2.characters = 30;
    g.minimumSize = [50,50];
    var gx = g.graphics;
    gx.backgroundColor = gx.newBrush(gx.BrushType.SOLID_COLOR, [.3, .6, .9, 1]);
    // g->click Listener #1
    g.addEventListener('click', function(ev)
    {
         e1.text = 'click handler 1';
    });
    // g->click Listener #2
    g.addEventListener('click', function(ev)
    {
         e2.text = 'click handler 2';
    });
    w.show();
    The result is that when you click on the group box, e1.text AND e2.text are updated.
    But now, if I export the above code as a JSXBIN from the ESTK, the 2nd event handler seems to be ignored! When I test the 'compiled' script and click on the group box, only e1.text is updated. (Tested in ID CS4 and CS5, Win32.)
    By studying the JSXBIN code as precisely as possible, I didn't find anything wrong in the encryption. Each addEventListener() statement is properly encoded, with its own function handler, nothing is ignored. Hence I don't believe that the JSXBIN code is defective by itself, so I suppose that the ExtendScript/ScriptUI engine behaves differently when interpreting a JSXBIN... Does anyone have an explanation?
    @+
    Marc

    John Hawkinson wrote:
    Not an explanation, but of course we know that in JSXBIN you can't use the .toSource() method on functions.
    So the implication here is that perhaps the .toSource() is somehow implicated in event handlers and there's some kind of static workaround for it?
    Perhaps you can get around this by eval() or doScript()-ing strings?
    Thanks a lot, John, I'm convinced you're on the right track. Dirk Becker suggested to me that this is an "engine scope" issue, and Martinho da Gloria emailed me another solution (which also works) that supports Dirk's assumption.
    Following is the result of the various tests I did from your various suggestions:
    // #1 - THE INITIAL PROBLEM
    // =====================================
    // EVALUATED BINARY CODE
    var handler1 = function(ev)
    {
         ev.target.parent.children[1].text = 'handler 1';
    };
    var handler2 = function(ev)
    {
         ev.target.parent.children[2].text = 'handler 2';
    };
    var     u,
         w = new Window('dialog'),
         p = w.add('panel'),
         g = p.add('group'),
         e1 = p.add('statictext'),
         e2 = p.add('statictext');
    e1.characters = e2.characters = 30;
    g.minimumSize = [50,50];
    var gx = g.graphics;
    gx.backgroundColor = gx.newBrush(gx.BrushType.SOLID_COLOR, [.3, .6, .9, 1]);
    g.addEventListener('click', handler1);
    g.addEventListener('click', handler2);
    w.show();
    eval("@JSXBIN@[email protected]@MyBbyBn0AKJAnASzIjIjBjOjEjMjFjShRByBNyBnA . . . . .");
    // RESULT
    // handler 1 only, that's the issue!
    Now to John's approach:
    // #2 - JOHN'S TRICK
    // =====================================
    var handler1 = function(ev)
    {
         ev.target.parent.children[1].text = 'handler 1';
    };
    var handler2 = function(ev)
    {
         ev.target.parent.children[2].text = 'handler 2';
    };
    // EVALUATED BINARY CODE
    var     u,
         w = new Window('dialog'),
         p = w.add('panel'),
         g = p.add('group'),
         e1 = p.add('statictext'),
         e2 = p.add('statictext');
    e1.characters = e2.characters = 30;
    g.minimumSize = [50,50];
    var gx = g.graphics;
    gx.backgroundColor = gx.newBrush(gx.BrushType.SOLID_COLOR, [.3, .6, .9, 1]);
    g.addEventListener('click', handler1);
    g.addEventListener('click', handler2);
    w.show();
    eval("@JSXBIN@[email protected]@MyBbyBn0AIbCn0AFJDnA . . . . .");
    // RESULT
    // handler1 + handler2 OK!
    This test shows that if handler1 and handler2's bodies are removed from the binary, the script works. Note that the handlers are declared vars that refer to (anonymous) function expressions. (BTW, no need to use a non-main #targetengine.) This is not a definitive workaround, of course, because I also want to hide the handlers' code...
    Meanwhile, Martinho suggested an interesting approach using 'regular' function declarations:
    // #3 - MARTINHO'S TRICK
    // =====================================
    // EVALUATED BINARY CODE
    function handler1(ev)
    {
         ev.target.parent.children[1].text = 'handler 1';
    }
    function handler2(ev)
    {
         ev.target.parent.children[2].text = 'handler 2';
    }
    var     u,
         w = new Window('dialog'),
         p = w.add('panel'),
         g = p.add('group'),
         e1 = p.add('statictext'),
         e2 = p.add('statictext');
    e1.characters = e2.characters = 30;
    g.minimumSize = [50,50];
    var gx = g.graphics;
    gx.backgroundColor = gx.newBrush(gx.BrushType.SOLID_COLOR, [.3, .6, .9, 1]);
    g.addEventListener('click', handler1);
    g.addEventListener('click', handler2);
    w.show();
    eval("@JSXBIN@[email protected]@MyBbyBnACMAbyBn0ABJCnA . . . . .");
    // RESULT
    // handler1 + handler2 OK!
    In the above test the entire code is binary-encoded, and the script works. What's the difference? It relies on function declarations rather than function expressions. As we know, function declarations and function expressions are not treated the same way by the interpreter. I suppose that function declarations are stored at a very persistent level... But I don't really understand what happens under the hood.
    (Note that I also tried to use function expressions in global variables, but this gave the original result...)
    Thanks to John, Dirk, and Martinho for the tracks you've cleared. As my library components can use neither the global scope nor direct function declarations, my problem is not solved, but you have revealed the root of my troubles.
    @+
    Marc

  • How to diagnose performance problems?

    Hi all,
    I'm trying to run some basic performance tests of our app with Coherence, and I'm getting some pretty miserable results. I'm obviously missing something very basic about the configuration, but I can't figure out what it is.
    For our test app, I'm using 3 machines, all Mac Pros running OSX 10.5. Each JVM is configured with 1G RAM and is running in server mode. We're connected on a gigabit LAN (I ran the datagram test, we get about 108Mb/sec but with high packet loss, around 10-12%). Two of the nodes are storage enabled, the other runs as a client performing an approximation of one of the operations we perform routinely.
    The test consists of running a single operation many times. We're developing a card processing system, so I create a bunch of cards (10,000 by default), each of which has about 10 related objects in different caches. The objects are serialised using POF. I write all these to the caches, then perform a single operation for each card. The operation consists of about 14 reads and 6 writes. I can do this about 6-6.5 times per second, for a total of ~120 Coherence ops/sec, which seems extremely low.
    During the test, the network tops out at about 1.2M/s, and the CPU at about 35% on the storage nodes and almost unnoticeable on the client. There's clearly something blocking somewhere, but I can't find what it is. The client uses a thread pool of 100 threads and has 100 threads assigned to its distributed service, and the storage nodes also have 100 threads assigned to their distributed services. I also tried giving more threads to the invocation service (since we use VersionedPut a lot), but that didn't seem to help either. I've also created the required indexes.
    Any help in diagnosing what the problem would be greatly appreciated. I've attached the config used for the two types of node below.
    Cheers,
    Colin
    Config for storage nodes:
    -Dtangosol.coherence.wka=10.1.1.2
    -Dtangosol.coherence.override=%HOME%/coherence.override.xml
    -Dtangosol.coherence.distributed.localstorage=true
    -Dtangosol.coherence.wkaport=8088
    -Dtangosol.coherence.distributed.threads=100
    <cache-config>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>*</cache-name>
    <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <distributed-scheme>
    <scheme-name>distributed-scheme</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
    <local-scheme>
    <scheme-ref>distributed-backing-map</scheme-ref>
    </local-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
    <serializer>
    <class-name>runtime.util.protobuf.ProtobufPofContext</class-name>
    </serializer>
    <backup-count>1</backup-count>
    </distributed-scheme>
    <local-scheme>
    <scheme-name>distributed-backing-map</scheme-name>
    <listener>
    <class-scheme>
    <class-name>service.data.manager.coherence.impl.utilfile.CoherenceBackingMapListener</class-name>
    <init-params>
    <init-param>
    <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
    <param-value>{manager-context}</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </listener>
    </local-scheme>
    <invocation-scheme>
    <service-name>InvocationService</service-name>
    <autostart>true</autostart>
    </invocation-scheme>
    </caching-schemes>
    </cache-config>
    Config for client node:
    -Dtangosol.coherence.wka=10.1.1.2
    -Dtangosol.coherence.wkaport=8088
    -Dtangosol.coherence.distributed.localstorage=false
    <cache-config>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>*</cache-name>
    <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <distributed-scheme>
    <scheme-name>distributed-scheme</scheme-name>
    <service-name>DistributedCache</service-name>
    <thread-count>100</thread-count>
    <backing-map-scheme>
    <local-scheme>
    <scheme-ref>distributed-backing-map</scheme-ref>
    </local-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
    <backup-count>1</backup-count>
    </distributed-scheme>
    <local-scheme>
    <scheme-name>distributed-backing-map</scheme-name>
    </local-scheme>
    <invocation-scheme>
    <service-name>InvocationService</service-name>
    <autostart>true</autostart>
    </invocation-scheme>
    </caching-schemes>
    </cache-config>

    Hi David,
    Thanks for the quick response. I'm currently using an executor with 100 threads on the client. I thought that might not be enough since the CPU on the client hardly moves at all, but bumping that up to 1000 didn't change things. If I run the same code with a single Coherence instance in my local machine I get around 110/sec, so I suspect that somewhere in the transport layer something doesn't have enough threads.
    The code is somewhat complicated because we have an abstraction layer on top of Coherence. I'll try to post the relevant parts below:
    The main loop is pretty straightforward. This is a Runnable that I post to the client executor:
    public void run()
    {
        Card card = read(dataManager, Card.class, "cardNumber", cardNumber);
        assert card != null;
        Group group = read(dataManager, Group.class, "id", card.getGroupId());
        assert group != null;
        Account account = read(dataManager, Account.class, "groupId", group.getId());
        assert account != null;
        User user = read(dataManager, User.class, "groupId", group.getId());
        assert user != null;
        ClientUser clientUser = read(dataManager, ClientUser.class, "userId", user.getId());
        HoldLog holdLog = createHoldLog(group, account);
        account.getCurrentHolds().add(holdLog);
        account.setUpdated(now());
        write(dataManager, HoldLog.class, holdLog);
        ClientUser clientUser2 = read(dataManager, ClientUser.class, "userId", user.getId());
        write(dataManager, Account.class, account);
        NetworkMessage networkMessage = createNetworkMessage(message, card, group, holdLog);
        write(dataManager, NetworkMessage.class, networkMessage);
    }
    read does a bit of juggling with our abstraction layer; basically it just gets a ValueExtractor for a named field and then creates an EqualsFilter with it:
    private static <V extends Identifiable> V read(DataManager dataManager,
                                                   Class<V> valueClass,
                                                   String keyField,
                                                   Object keyValue)
    {
        DataMap<Identifiable> dataMap = dataManager.getDataMap(valueClass.getSimpleName(), valueClass);
        FilterFactory filterFactory = dataManager.getFilterFactory();
        ValueExtractorFactory extractorFactory = dataManager.getValueExtractorFactory();
        Filter<Identifiable> filter = filterFactory.equals(extractorFactory.fieldExtractor(valueClass, keyField), keyValue);
        Set<Map.Entry<GUID, Identifiable>> entries = dataMap.entrySet(filter);
        if (entries.isEmpty())
            return null;
        assert entries.size() == 1 : "Expected single entry, found " + entries.size() + " for " + valueClass.getSimpleName();
        return valueClass.cast(entries.iterator().next().getValue());
    }
    write is trivial:
    private static <V extends Identifiable> void write(DataManager dataManager, Class<V> valueClass, V value)
    {
        DataMap<Identifiable> dataMap = dataManager.getDataMap(valueClass.getSimpleName(), valueClass);
        dataMap.put(value.getId(), value);
    }
    And entrySet directly wraps the Coherence call:
    public Set<Entry<GUID, V>> entrySet(Filter<V> filter)
    {
        validateFilter(filter);
        return wrapEntrySet(dataMap.entrySet(((CoherenceFilterAdapter<V>) filter).getCoherenceFilter()));
    }
    This just returns a Map.EntrySet implementation that decodes the binary encoded objects.
    I'm not really sure how much more code to post - what I have is really just a thin layer over Coherence, and I'm reasonably sure that the main problem isn't there since the CPU hardly moves on the client. I suspect thread starvation at some point, but I'm not sure where to look.
    Thanks,
    Colin

  • BODS SOAP Web Service Connection error RUN-248005

    Hi Experts,
    I need help connecting SOAP web service to BODS. We have configured the initial set up by importing the functions from the WSDL that was provided to us. The WSDL provided us with a request and reply schema as follows:
    We then created the data service job that would consume the function call. An overview of the dataflow is as follows:
    When we run the job with null values for the source table, the job returns no errors, but returns no data into the destination table either (which we think is to be expected because we have no parameters to search on). If we add a value to the source table that is a required field for the WSDL GET function (sys_ID), the job runs but produces the error shown below:
    We configured the data flow by using a reference document that can be found at this link: http://www.dwbiconcepts.com/etl/23-etl-bods/158-web-service-call-in-sap-data-services.html.
    Any help in regard to why we are getting a "No response to web service" error would be much appreciated! Also, if further detail is needed, please feel free to ask.

    Yes we did make progress. Follow the steps listed below.
    Pre-Configuration
    A WSDL for BODS can be provided from your vendor (Amazon or Twitter) or created in-house.  In my tutorial I will be using ServiceNow.  ServiceNow is a platform-as-a-service (PaaS) provider of IT service management (ITSM) software.
    The WSDL provided should look like this:
               https://<instance>.service-now.com/incident.do?WSDL
    To verify that the WSDL works you can open it in a browser to see the XML schema. Since the WSDL is served over HTTPS, the browser will prompt you to enter credentials in order to view it. The username and password will need to be provided to you by the vendor you are trying to connect to.
    BODS requires any HTTPS web service connection to have a certificate saved to its internal configuration on the job server where the application resides. The certificate is referenced when the call goes out to verify authentication.
    You can get the server certificate from the vendor who is providing the web service or you can also download the certificate from the browser and save it in base64 binary encoded format to a file and use that.  In my example I will be using Firefox to export.
    In Firefox, there is a lock icon to the left of the address bar: click it and view the certificate, select the Details tab, select the *.service-now.com entry, and click Export. In the dialog box, select the Save as type X.509 Certificate (PEM); you can save this with any name on your machine.
    Now go to the Job Server machine, go to the %LINK_DIR%\ext\ folder and open axis2.xml in Notepad.
    Since the service is HTTPS, uncomment the following transportReceiver tag:
    <!--transportReceiver name="https" class="axis2_http_receiver">
    <parameter name="port" locked="false">6060</parameter>
    <parameter name="exposeHeaders" locked="true">false</parameter>
    </transportReceiver-->
    It should look like this:
    <transportReceiver name="https" class="axis2_http_receiver">
    <parameter name="port" locked="false">6060</parameter>
    <parameter name="exposeHeaders" locked="true">false</parameter>
    </transportReceiver>
    Then uncomment the following transportSender tag and comment out the KEY_FILE and SSL_PASSPHRASE parameters; enter the complete location of the certificate that you saved from the browser in the SERVER_CERT parameter. You can also save the certificate in this folder.
    <!--transportSender name="https" class="axis2_http_sender">
    <parameter name="PROTOCOL" locked="false">HTTP/1.1</parameter>
    <parameter name="xml-declaration" insert="false"/>
    </transportSender>
    <parameter name="SERVER_CERT">/path/to/ca/certificate</parameter>
    <parameter name="KEY_FILE">/path/to/client/certificate/chain/file</parameter>
    <parameter name="SSL_PASSPHRASE">passphrase</parameter>
    -->
    This should look like:
    <transportSender name="https" class="axis2_http_sender">
    <parameter name="PROTOCOL" locked="false">HTTP/1.1</parameter>
    <parameter name="xml-declaration" insert="false"/>
    </transportSender>
    <parameter name="SERVER_CERT">enter the certificate path</parameter>
    <!--parameter name="KEY_FILE">/path/to/client/certificate/chain/file</parameter-->
    <!--parameter name="SSL_PASSPHRASE">passphrase</parameter-->
    Save this file, then open it in a browser to make sure that the XML is valid.
    Creating a Datastore
    Select Project -> New -> Datastore
              Datastore name: Applicable Name
              Datastore Type:  Web Service
              Web Service URL: https://<instance>.service-now.com/incident_list.do?WSDL
              Advance <<
              User name: *****
              Password: ******
              Axis2/c config file path: \Program Files (x86)\Business Objects\BusinessObjects Data           Services\ext\Service_now

  • Certificate based authentication

    I have a client application that requires certificate based authentication.
    I could not find any instructions on how to set this up in the 11g manuals. So I reverted to the 5.2 manual (http://docs.oracle.com/cd/E19850-01/816-6698-10/ssl.html#18500), and followed some instructions found online.
    I have completed the setup, and the client is able to authenticate using his certificate, and I have verified this in the logs.
    [22/Mar/2012:13:13:33 -0500] conn=34347 op=-1 msgId=-1 - SSL 128-bit RC4; client CN=userid,OU=company,L=city,ST=state,C=US; issuer CN=issuing,DC=corp,DC=company,DC=lan
    [22/Mar/2012:13:13:33 -0500] conn=34347 op=-1 msgId=-1 - SSL client bound as uid=userid,ou=employees,o=company
    [22/Mar/2012:13:13:33 -0500] conn=34347 op=0 msgId=1 - BIND dn="" method=sasl version=3 mech=EXTERNAL
    When adding the usercertificate attribute to the ID I used the following LDIF:
    version: 1
    dn: uid=userid,ou=employees,o=company
    changetype: modify
    replace: userCertificate
    usercertificate: < file:///home/user/Certs/usercert.bin
    The file was a binary encoded certificate file.
    Here is the part that I don't understand: when I do a search (or LDIF export) of the user object, the certificate attribute just returns a short base64 encoded string, and when I decode this string it is just the literal text "< file:///home/user/Certs/usercert.bin".
    So it appears that the certificate has not been stored on the user object in binary, and yet the certificate authentication still works. The file mentioned, does not exist on the LDAP server (the cert was loaded from another server), so there is no way that it is reading the cert from the file.
    Anyone have any idea what is going on here? And why does certificate auth work when there appears to be no cert stored in LDAP?
    If by chance this is how it is all supposed to work, then how do I go about backing up the usercertificate attribute when I do my LDAP data backups?
    Thanks
    Brian

    Cyril,
    Thanks for the reply.
    I believe I am doing both types of certificate authentication, you are describing. My issue is that when I perform the steps to store the PEM formatted cert into the directory server, rather than storing a binary value of the cert, it appears to be storing the path to the file I attempted to import. The odd part is that I can still authenticate even after this is done.
    I tried to post as much info as I could before without posting any sensitive data, I'll try and expand on that below.
    Here is my documentation of the steps taken to configure the server and setup a user, for what I believe to be certificate based authentication, where the user is authenticated solely on the certificate that they provide (no password is sent).
    1. Server must be running SSL, all connections for Certificate Auth are done over SSL (just a note)
    2. From the DSCC
    ----a. Directory Servers Tab -> Servers Tab -> Click Server Name
    ----b. Security Tab -> General Tab
    ----c. In "Client Authentication" section, select:
    --------i. LDAP Settings: "Allow Certificate-Based Client Authentication"
    --------ii. This should be the default setting.
    3. On the directory server setup the /ldap/dsInst/alias/certmap.conf file:
    ----a. certmap default default
    ----default:DNComps
    ----default:FilterComps uid,cn
    4. restart the directory server
    5. Do the following to setup the user who will be connecting. On their unix account (or similar)
    ----a. Create a directory to hold the certDB
    --------i. mkdir certdb
    ----b. Create a CertDB
    --------i. /ldap/dsee7/bin/certutil -N -d certdb
    ------------1) Enter a password when prompted
    ----c. Import the CA cert
    --------i. /ldap/dsee7/bin/certutil -A -n "OurRootCA" -t "C,," -a -I ~/OurRootCA.cer -d certdb
    ----d. Create a cert request
    --------i. /ldap/dsee7/bin/certutil -R -s "cn=userid,ou=company,l=city,st=state,c=US" -a -g 2048 -d certdb
    ----e. Send the cert request to the PKI Team to generate a user cert
    ----f. Take the text of the generated cert & save it to a file
    ----g. Import the new cert into your certdb
    --------i. /ldap/dsee7/bin/certutil -A -n "certname" -t "u,," -a -i certfile.cer -d certdb
    ----h. Create a binary version of cert
    --------i. /ldap/dsee7/bin/certutil -L -n "certname" -d certdb -r > userid.bin
    ----i. Add the binary cert to the user's LDAP entry (version: 1 must be included - I read this in a doc somewhere, but it doesn't seem to matter)
    --------i. ldapmodify
    ------------1) ldapmodify -h host -D "cn=directory manager" -w password -ac
    ------------2)
    ------------version: 1
    ------------dn: uid=userid,ou=employees,o=company
    ------------sn: Service Account
    ------------givenName: userid
    ------------uid: userid
    ------------description: Service Account for LDAP
    ------------objectClass: top
    ------------objectClass: person
    ------------objectClass: organizationalPerson
    ------------objectClass: inetorgperson
    ------------cn: Service Account
    ------------userpassword: password
    ------------usercertificate: < file:///home/userid/Certs/userid.bin
    ------------nsLookThroughLimit: -1
    ------------nsSizeLimit: -1
    ------------nsTimeLimit: 180
    After doing this setup I am able to perform a search using the certificate:
    ldapsearch -h host -p 1636 -b "o=company" -N "certname" -Z -W CERTDBPASSWORD -P certdb/cert8.db "(uid=anotherID)"
    This search is successful, and I can see it logged, as having been a certificate based authentication:
    [23/Mar/2012:13:25:20 -0500] conn=44605 op=-1 msgId=-1 - fd=136 slot=136 LDAPS connection from x.x.x.x:53574 to x.x.x.x
    [23/Mar/2012:13:25:20 -0500] conn=44605 op=-1 msgId=-1 - SSL 128-bit RC4; client CN=userid,OU=company,L=city,ST=state,C=US; issuer CN=issuer,DC=corp,DC=company,DC=lan
    [23/Mar/2012:13:25:20 -0500] conn=44605 op=-1 msgId=-1 - SSL client bound as uid=userid,ou=employees,o=company
    [23/Mar/2012:13:25:20 -0500] conn=44605 op=0 msgId=1 - BIND dn="" method=sasl version=3 mech=EXTERNAL
    If I understand correctly, that would be using part 2 of your explanation: using the binary encoded PEM to authenticate the user. If I am not understanding that correctly, please let me know.
    Now the part that I am really not getting is that the usercertificate that is stored on the ID is as below:
    dn: uid=userid,ou=employees,o=company
    usercertificate;binary:: PCBmaWxlOi8vL2hvbWUvdXNlcmlkL0NlcnRzL3VzZXJpZC5iaW4
    which decodes as: < file:///home/userid/Certs/userid.bin
    So I'm still unclear as to what is going on here, or what I've done wrong. Have I set this up incorrectly such that part 2 as you described is not what I have set up above? Or am I misunderstanding part 2 entirely?
    Thanks
    Brian
    Edited by: BrianS on Mar 23, 2012 12:14 PM
    Just adding ---- to keep my instruction steps indented.
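
    For what it's worth, the decode above is easy to confirm; a small Java check (Java 8+), using the exact base64 string from the entry:

    import java.util.Base64;

    public class DecodeCheck {
        public static void main(String[] args) {
            String b64 = "PCBmaWxlOi8vL2hvbWUvdXNlcmlkL0NlcnRzL3VzZXJpZC5iaW4";
            // The JDK's basic decoder accepts unpadded input like this LDIF dump.
            byte[] raw = Base64.getDecoder().decode(b64);
            System.out.println(new String(raw)); // prints: < file:///home/userid/Certs/userid.bin
        }
    }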

  • Send mail body + attachment

    Hi experts,
    I am using a script that uses rsync to synchronize one folder called "demo" and then sends the output of the script to my mail box (an Outlook mail box).
    It is all working perfectly.
    Now the additional requirement is to send the same output file as an attachment as well.
    That means the incoming mail should be: message body + attachment.
    Here is the script and its output.
    #!/bin/bash
    ADMIN="[email protected]"
    . .profile
    cd /u01/test1
    rsync -avzu --stats demo [email protected]:/u04/test2 > t.txt
    cat t.txt| mailx -s "XXXXXXXXXXxx" $ADMIN
    exit 0
    output:
    sending incremental file list
    rsyn/test1.lst
    Number of files: 4
    Number of files transferred: 4
    Total file size: 212 bytes
    Total transferred file size: 66 bytes
    Literal data: 2 bytes
    Matched data: 64 bytes
    File list size: 93
    File list generation time: 0.001 seconds
    File list transfer time: 0.000 seconds
    Total bytes sent: 150
    Total bytes received: 38
    sent 150 bytes  received 38 bytes  8.00 bytes/sec
    total size is 212  speedup is 1.13
    please suggest a way
    Thanks in ADV

    Does the following work?
    cat t.txt | uuencode -m t.txt > t.tmp
    unix2dos t.tmp
    echo >> t.tmp
    echo >> t.tmp
    cat t.txt >> t.tmp
    cat t.tmp | mailx -s "XXXXXXXXXXxx" $ADMIN
    rm t.tmp
    uuencode -m will do MIME base64 encoding, which is today's binary-encoding standard for email attachments. If you run Enterprise Linux you can install uuencode, etc. using "sudo yum install sharutils". Using unix2dos may be necessary.

  • Storing complex linked structures using XSU utility

    We have a requirement to store data with very complex linked structures in database. One utility we are analyzing is the XML SQL Utility (XSU).
    The original file is a binary encoded file and it has to be converted to XML for persisting. The files are really huge and complex.
    The main program is in C++, which decodes the binary file and this has to be interfaced with Java to use this utility.
    If anybody has used this tool for a similar requirement, please give your inputs regarding the performance issues when handling huge, complex data.
    Thanks and regards
    Manshi

    You can use the XDK C++ interface to write a program. To be able to deal with large documents, SAX will be a good parsing technique.
    There are no simple tools available as far as I know.
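
    On the Java side of that interface, the streaming approach looks like this with the JDK's built-in SAX parser; a minimal sketch (the file name is hypothetical):

    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class SaxCount {
        public static void main(String[] args) throws Exception {
            // SAX streams the document and fires callbacks, so memory use stays
            // flat no matter how large the XML file is.
            final int[] elements = { 0 };
            SAXParserFactory.newInstance().newSAXParser().parse(
                new java.io.File("huge.xml"), // hypothetical input
                new DefaultHandler() {
                    public void startElement(String uri, String local, String qName, Attributes atts) {
                        elements[0]++;
                    }
                });
            System.out.println("elements: " + elements[0]);
        }
    }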
