Parsing chars into Strings

I am trying to create a Hangman program. It uses a dialog box to get input and then displays the word in the GUI. I need to know how to turn a character from a string into a char. I couldn't find any method to help me in the API documentation. I'm sure this is a stupid question but any help would be nice. Thank you in advance.

String.valueOf(char);

That's not going to work. The OP wants to change "from a string into a char". This example does the opposite.
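For the direction the OP actually asked about (getting a char out of a String), String.charAt(int) is the standard method. A minimal sketch for the Hangman use case:

```java
public class CharFromString {
    public static void main(String[] args) {
        String word = "hangman";        // the hidden word in the game
        char first = word.charAt(0);    // charAt(i) returns the char at 0-based index i
        System.out.println(first);      // prints: h
    }
}
```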

Similar Messages

  • How to Parse XML into String in BPEL?

    Hi,
    Can anyone tell me, how can I parse XML into String?
    I am taking input from File Adapter, File adapter is reading that XML.
    Then in the assign activity I am using XPath expressions (built-in functions) such as XMLParser(), doTranslateToNative(), etc. I have tried many functions but the XML is not getting parsed into the String variable.
    Please help me asap.
    Thanks
    Shikha

    Thanks a lot Eric.
    I am trying this, oraext:get-content-as-string('receiveInput_Read_InputVariable','body','/ns3:orders')
    but getting this error
    <bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://schemas.oracle.com/bpel/extension"><part name="summary"><summary>XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is oraext:get-content-as-string('receiveInput_Read_InputVariable','body','/ns3:orders'). The XPath expression failed to execute; the reason was: internal xpath error. Check the detailed root cause described in the exception message text and verify that the XPath query is correct. </summary></part><part name="code"><code>XPathExecutionError</code></part></subLanguageExecutionFault></bpelFault>

  • Char into String

    How do I convert a char into a string?
    Thanks

    char c = 'f';
    Character character = new Character(c);
    String charAsString = character.toString();
    or better
    String charAsString = String.valueOf(c);
    Alan

  • Parsing char[] to String

    I am trying to parse a char[]. Every time I find a '\n' character I want to take the offset to that position and create a String from it. It doesn't seem to work as expected. The strings contain parts of the array that shouldn't be there. Here is some sample code that I am using. Let me know if anyone has any suggestions.
    Thanks
    public void parseArray( char[] buf ){
        int offset = 0;
        Vector v = new Vector();
        for( int i=0; i < buf.length; i++ ){
            if ( buf[ i ] == '\n' ){
                v.add( new String( buf, offset, i ) );
                offset = (i + 1);
            }
        }
    }

    Try:
         public Vector parseArray(char buf[]) {
              Vector v = new Vector();
              String s = new String(buf);
              int offset = 0, pos;
              do {
                   pos = s.indexOf('\n', offset);
                   if (pos != -1) {
                        String sub = s.substring(offset, pos);
                        v.add(sub);
                        offset = pos + 1;
                   }
              } while (pos != -1 && offset < buf.length);
              if (offset < buf.length)
                   v.add(s.substring(offset));
              return v;
         }
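The original loop fails because the third argument to the String(char[], int, int) constructor is a count, not an end index, so each call should pass i - offset rather than i. A corrected sketch (with the legacy Vector swapped for ArrayList, a modernization rather than what the poster used):

```java
import java.util.ArrayList;
import java.util.List;

public class LineSplitter {
    // Split buf into strings at each '\n'.
    public static List<String> parseArray(char[] buf) {
        List<String> lines = new ArrayList<>();
        int offset = 0;
        for (int i = 0; i < buf.length; i++) {
            if (buf[i] == '\n') {
                // The third argument is a COUNT, so it must be i - offset, not i.
                lines.add(new String(buf, offset, i - offset));
                offset = i + 1;
            }
        }
        if (offset < buf.length) {
            // Keep any trailing text after the last '\n'.
            lines.add(new String(buf, offset, buf.length - offset));
        }
        return lines;
    }
}
```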

  • Parsing an xml string into id-value pair format

    Hi,
    I am new in oracle BPEL.
    My requirement is that I need to parse an xml string containing tag name and corresponding value into an 'id-value' pair.
    For example-
    The input xml format is -
    <employee>
    <empid>12345</empid>
    <name>xyz</name>
    <city>London</city>
    </employee>
    The required xml format is-
    <employee>
    <item id="empid" value="12345"/>
    <item id="name" value="xyz"/>
    <item id="city" value="London"/>
    </employee>
    Please let me know if there is a work-around for this.
    Thanks

    Something like this (have not tested):
    <xsl:for-each select="//employee">
    <employee>
    <xsl:for-each select="./*">
    <item>
    <xsl:attribute name="id">
    <xsl:value-of select="name()"/>
    </xsl:attribute>
    <xsl:attribute name="value">
    <xsl:value-of select="text()"/>
    </xsl:attribute>
    </item>
    </xsl:for-each>
    </employee>
    </xsl:for-each>

  • How to parse a delimited string and insert into different columns?

    Hi Experts,
    I need to parse a delimited string ':level1_value:level2_value:level3_value:...' to 'level1_value', 'level2_value', etc., and insert them into different columns of one table as one row:
    Table_Level (Level1, Level2, Level3, ...)
    I know I can use substr and instr to get level values one by one and insert them into the table, but I'm wondering if there are better ways to do it?
    Thanks!

    user9954260 wrote:
    However, there is one tiny problem - the delimiter from the source system is a '|' When I replace your test query with | as delimiter instead of the : it fails. Interestingly, if I use ; it works. See below:
    with t as (
    select 'str1|str2|str3||str5|str6' x from dual union all
    select '|str2|str3|str4|str5|str6' from dual union all
    select 'str1|str2|str3|str4|str5|' from dual union all
    select 'str1|str2|||str5|str6' from dual)
    select x,
    regexp_replace(x,'^([^|]*).*$','\1') y1,
    regexp_replace(x,'^[^|]*|([^|]*).*$','\1') y2,
    regexp_replace(x,'^([^|]*|){2}([^|]*).*$','\2') y3,
    regexp_replace(x,'^([^|]*|){3}([^|]*).*$','\2') y4,
    regexp_replace(x,'^([^|]*|){4}([^|]*).*$','\2') y5,
    regexp_replace(x,'^([^|]*|){5}([^|]*).*$','\2') y6
    from t;
    The "bar" or "pipe" symbol is a special character, also called a metacharacter.
    If you want to use it as a literal in a regular expression, you will need to escape it with a backslash character (\).
    Here's the solution -
    test@ORA11G>
    test@ORA11G> --
    test@ORA11G> with t as (
      2    select 'str1|str2|str3||str5|str6' x from dual union all
      3    select '|str2|str3|str4|str5|str6' from dual union all
      4    select 'str1|str2|str3|str4|str5|' from dual union all
      5    select 'str1|str2|||str5|str6' from dual)
      6  --
      7  select x,
      8         regexp_replace(x,'^([^|]*).*$','\1') y1,
      9         regexp_replace(x,'^[^|]*\|([^|]*).*$','\1') y2,
    10         regexp_replace(x,'^([^|]*\|){2}([^|]*).*$','\2') y3,
    11         regexp_replace(x,'^([^|]*\|){3}([^|]*).*$','\2') y4,
    12         regexp_replace(x,'^([^|]*\|){4}([^|]*).*$','\2') y5,
    13         regexp_replace(x,'^([^|]*\|){5}([^|]*).*$','\2') y6
    14  from t;
    X                         Y1      Y2      Y3      Y4      Y5      Y6
    str1|str2|str3||str5|str6 str1    str2    str3            str5    str6
    |str2|str3|str4|str5|str6         str2    str3    str4    str5    str6
    str1|str2|str3|str4|str5| str1    str2    str3    str4    str5
    str1|str2|||str5|str6     str1    str2                    str5    str6
    4 rows selected.
    test@ORA11G>
    test@ORA11G>
    isotope
    PS - it works for semi-colon character ";" because it is not a metacharacter. So its literal value is considered by the regex engine for matching.
    Edited by: isotope on Feb 26, 2010 11:09 AM
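The same pitfall exists in Java, where String.split also takes a regular expression, so a literal pipe must be escaped there too. A minimal sketch:

```java
public class PipeSplit {
    public static void main(String[] args) {
        String row = "str1|str2|str3||str5|str6";
        // "|" alone means regex alternation; escape it to match a literal pipe.
        // The -1 limit keeps trailing empty fields instead of discarding them.
        String[] fields = row.split("\\|", -1);
        System.out.println(fields.length);      // 6
        System.out.println(fields[3].isEmpty()); // true: the empty field between str3 and str5
    }
}
```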

  • Combine two ints into a long without using chars or strings.

    Hi
    I have two ints and I need to combine (not add) them into a long value. The reason for this is that I need to make keys for a DB containing both values.
    eg.
    int1 = 3567453647
    int2= 6368535795
    long combination = combine(int1, int2);
    and the value of combination should be 35674536476368535795
    I know you can do it by turning the ints into strings, combining the strings and then using the combined string as input to a new Long(String), but I really need a faster way using pure maths (maybe bit shift?) as this combination function will be used billions of times.
    Thanks
    Edited by: user8908143 on 29-Sep-2010 19:52

    It's also a good way to get rid of boneheaded comments once one realizes how boneheaded they were. :-)

    A posteriori editing is another way :o)
    [OT: Forum features and moderators] A posteriori edition ( self- or mod- )
    Edited by: jduprez on Sep 30, 2010 6:21 PM
    That being said (and mocked), the binary shift left solution seems correct (although I'd definitely go for a custom class and double-column key instead).
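The shift-left idea can be sketched as below. Note the OP's sample values (3567453647 and 6368535795) do not actually fit in a Java int, and the decimal concatenation 35674536476368535795 exceeds Long.MAX_VALUE, so a decimal join is impossible; bit packing gives a different but equally unique key for genuine 32-bit ints:

```java
public class KeyCombine {
    // Pack two 32-bit ints into one 64-bit long: high word | low word.
    static long combine(int hi, int lo) {
        // The mask prevents sign extension of a negative low word.
        return ((long) hi << 32) | (lo & 0xFFFFFFFFL);
    }

    // Recover the original pair from a key.
    static int high(long key) { return (int) (key >>> 32); }
    static int low(long key)  { return (int) key; }

    public static void main(String[] args) {
        long key = combine(123, -456);
        System.out.println(high(key)); // 123
        System.out.println(low(key));  // -456
    }
}
```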

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am relative novice to Java, I have procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding performance of using char[i] arrays versus Strings and StringBuilder classes using charAt() methods.
    I read a file, one line/record at a time and then I process it. The regex is not an option (too slow and can not handle all cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder too) class using String.charAt(i) methods is several times (5 times+?) slower than referring to a char of an array with index.
    My question: is this correct observation re charAt() versus char[i] performance difference or am I doing something wrong in case of a String class?
    What is the best way (performance) to process character strings inside Java if I need to process them one character at a time ?
    Is there another approach that I should consider?
    Many thanks in advance

    >
    Once I took that String.length() method out of the 'for loop' and used integer length local variable, as you have in your code, the performance is very close between array of char and String charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean', record of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any line, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files take that into account. There have been some attempts (you can find them on the net) for using various 'escaping' techniques to escape those characters where they occur but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest version of Informatica and DataStage cannot export a simple one-column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code and complex Regex is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures so that could be done in the database. I would only recommend pursuing that approach if you were already needing to perform some substantial data validation or processing in the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details about String versus Array is way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues with the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in-between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later if, and ONLY if, that particular module becomes a bottleneck, replace that module with one that is more performant.
    Your initial code should ONLY use standard well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
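The physical-record idea described above (scan a byte buffer for a delimiter and slice records out) can be sketched in a few lines. This is an illustrative sketch with a single-byte delimiter, not the poster's framework, which also handles multi-byte delimiter sets, buffering, and logical-record reassembly:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class RecordScanner {
    // Slice buf into physical records at each occurrence of the delimiter byte.
    static List<String> records(byte[] buf, byte delim) {
        List<String> out = new ArrayList<>();
        int start = 0;
        for (int i = 0; i < buf.length; i++) {
            if (buf[i] == delim) {
                out.add(new String(buf, start, i - start, StandardCharsets.US_ASCII));
                start = i + 1;
            }
        }
        if (start < buf.length) {
            // Final record with no trailing delimiter.
            out.add(new String(buf, start, buf.length - start, StandardCharsets.US_ASCII));
        }
        return out;
    }
}
```

Logical-record reassembly (rejoining physical records whose field count is short because of an embedded delimiter) would sit on top of this.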

  • How to transform DOM into String

    Hi,
    Can any one provide an example of transforming DOM into String using TransformationFactory or any other API of JAXP?
    Regards...
    Shamit

    And for finer output:
        /**
         * Prints a textual representation of a DOM object into a text string.
         * @param document DOM object to parse.
         * @return String representation of <i>document</i>.
         */
        static public String toString(Document document) {
            String result = null;
            if (document != null) {
                StringWriter strWtr = new StringWriter();
                StreamResult strResult = new StreamResult(strWtr);
                TransformerFactory tfac = TransformerFactory.newInstance();
                try {
                    Transformer t = tfac.newTransformer();
                    t.setOutputProperty(OutputKeys.ENCODING, "iso-8859-1");
                    t.setOutputProperty(OutputKeys.INDENT, "yes");
                    t.setOutputProperty(OutputKeys.METHOD, "xml"); // xml, html, text
                    t.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "4");
                    t.transform(new DOMSource(document.getDocumentElement()), strResult);
                } catch (Exception e) {
                    System.err.println("XML.toString(Document): " + e);
                }
                result = strResult.getWriter().toString();
            }
            return result;
        } // toString()

  • Convert document into string with unicode

    I want to convert my document into a string with all <, >, & converted into &lt;, &gt;, and &amp;. When I am doing the transformation, I am getting the literal <, > etc.
    Can anybody suggest me how to do that.
    regards,
    Ranjan

    I don't know of any way to tell the parser to convert it for you; you'll have to replace the characters yourself after you get the string from the parser.
    Aviran
    http://www.aviransplace.com
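Doing that replacement by hand in Java is straightforward; the one subtlety is ordering, since & must be escaped first or the & in the other replacements gets double-escaped. A minimal sketch:

```java
public class XmlEscape {
    // Escape the three XML special characters; '&' must be replaced first.
    static String escape(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;");
    }

    public static void main(String[] args) {
        System.out.println(escape("a < b & c > d")); // a &lt; b &amp; c &gt; d
    }
}
```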

  • Good way to initialize clusters and parse files into them

    I'm writing a small application to open some sonar .xtf files and look at their data.  The format is published and I have done a crude job of building a cluster and initializing it...then reading the file header and parsing it into the cluster so I can use the data in the headers to read the data packets that are stored next.  Here is an example of how I have done one of the parsing jobs...very crudely,
    Read the header binary in (length defined)
    Build the cluster by creating variables from the input file...and then loading the file data in the appropriate cluster variables after casting and ordering the bytes properly...
    Surely this is not the best way to do this...perhaps loops controlled by the number of bytes in parameter to be parsed...???
    I'm looking for a smart strategy for looking at the definition document, which defines the name of the variable, where the variable appears in the header (byte offset), how many bytes it is made up of....and translating that information into the cluster definition efficiently...
    Ideas would be appreciated....
    Thanks in advance for your help.
    Attached are a couple of examples of parsing part of the header information.
    Hummer1
    Attachments:
    Channel Header Info Parse.vi ‏36 KB
    ClusterHeaderInfo.vi ‏46 KB

    There may be a way to do this using "Flatten to String" and "Unflatten from String".
    One issue is that it looks like your strings are fixed length, while LabVIEW uses variable length strings where the string length is prepended to the string bytes when the data is flattened.   But I think if you yourself insert the bytes 16 and 53 into the string data you read from the file at the appropriate places, you can get LabVIEW to read the data and convert it to your cluster.
    Read the data as a string.
    Insert the bytes listed above at the correct places.
    Pass that through the Unflatten from string to convert that string data to a cluster.  You define the cluster by feeding a constant that defines the datatype into the function.
    I think if your binary files have a not too complicated structure, and you define your data types for each element of the cluster precisely, you can get this to work.
    Play with the attached snippet.
    Attachments:
    Channel%20Header%20Info%20Parse[1].png ‏66 KB

  • Converting chars to strings

    I don't know how to get this thing to work:
    public class TestGrade {
        public static void main(String[] args) throws GradeException {
            int studentID = 123423;
            char grade;
            String aGrade = new String();
            try {
                System.out.println("Student #: " + studentID);
                System.out.println("Enter a letter grade for student: ");
                grade = (char) System.in.read();
                if (grade >= 'A' && grade <= 'F' || grade >= 'a' && grade <= 'f') {
                    System.out.println("Student ID#: " + studentID + " Grade: " + grade);
                    aGrade = grade;
                } else {
                    throw (new GradeException(aGrade));
                }
            } catch (GradeException error) {
                System.out.println("Test Error: " + error.getMessage());
            }
        }
    }

    it might help if I tell you where the error is...
       aGrade = grade; generates a rather obvious error. Incompatible types, I don't know how to convert this sucker into a string so I can feed a string into GradeException....
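The incompatible-types error goes away once the char is converted explicitly rather than assigned directly. A minimal sketch:

```java
public class GradeToString {
    public static void main(String[] args) {
        char grade = 'B';
        // A char cannot be assigned to a String; convert it explicitly.
        String aGrade = String.valueOf(grade); // or Character.toString(grade)
        System.out.println(aGrade);           // prints: B
    }
}
```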

  • How to convert ascii codes into Strings

    Is it possible to convert integers (ascii codes) into Strings. It cannot be done by casting like:
    int temp = (String)(111)
    Please help me out. I think there is a method for this.

    Something as simple as String.valueOf((char) 111) comes to mind...

  • HOW can I convert int value char TO String

    I am trying to study Java by books, but sometimes it is quite difficult to find answers even to simple questions...
    I am writing a program that analyzes a Chinese text in Unicode. What I get from the text is the char's int value (in decimal), and now I need to find it in another text file (with RegEx). But I can't find a way to convert an int value of a char into a String.
    Could you help me?
    Thank you in advance.

    You are confusing matters a bit. A char is a char is a char, no matter
    how you represent it. Have a look at this:

        char a = 'A';
        char b = 65;

    Both a and b have the same value. The representation of that value
    can be 'A' or 65 or even (char)('B'-1), because the decimal representation
    of 'B' is 66. Note the funny construct (char)(...). This construct casts
    the value ... back to the char type. Java performs all non-fraction
    arithmetic using four byte integers. The cast transforms it back to
    a two byte char type.
    Strings are just concatenations of zero or more chars. The charAt(i)
    method returns you the i-th char in a string. Have a look:

        String s = "AB";
        char a = 'A';
        char b = (char)(a + 1);
        if (a == s.charAt(0)) System.out.println("yep");
        if (b == s.charAt(1)) System.out.println("yep");

    This piece of code prints "yep" two times. If you want to compare two
    Strings instead, you have to do this:

        String s = "AB";
        char a = 'A';
        char b = 'B';
        String t = "" + a + b;
        if (s.equals(t)) System.out.println("yep");

    Note the 'equals' method instead of the '==' operator. Also note that
    string t was constructed using those two chars. The above was a
    shorthand of something like this:

        StringBuffer sb = new StringBuffer("");
        sb.append(a);
        sb.append(b);
        String t = sb.toString();

    This is all handled by the compiler; for now, you can simply use the '+'
    operator if you want to append a char to a string. Note that it is not
    very efficient, but it'll do for now.
    Does this get you started?
    kind regards,
    Jos

  • BufferedInputStream into string  ?

    While running the following program I am getting the output as ints. How do I convert them into a character string?
    int len = 0;
    BufferedInputStream in = new BufferedInputStream(theProcess.getInputStream());
    while ( len != -1 ){
         len = in.read();
         if ( len != -1 )
              System.out.print(len);
    }
    thanks in advance

    import java.io.*;

    public class CallHelloPgm {
        public static void main(String args[]) {
            Process theProcess = null;
            System.out.println("CallHelloPgm.main() invoked");
            File filepath = new File("/home/chgmgmt/");
            // call the ls command
            try {
                theProcess = Runtime.getRuntime().exec("ls");
            } catch (IOException e) {
                System.err.println("Error on exec() method");
                e.printStackTrace();
            }
            // read from the called program's standard output stream
            try {
                int len = 0;
                BufferedInputStream in = new BufferedInputStream(theProcess.getInputStream());
                // This loop consumes the whole stream, printing each byte as an int.
                while (len != -1) {
                    len = in.read();
                    if (len != -1) {
                        // StringBuffer buf = new StringBuffer();
                        // buf.append((char) len);
                        // System.out.print(buf.toString());
                        System.out.print(len);
                    }
                }
                // The stream is already exhausted here, so this second loop reads nothing.
                BufferedReader reader = new BufferedReader(new InputStreamReader(theProcess.getInputStream()));
                char[] buffer = new char[1024];
                StringBuffer stringBuffer = new StringBuffer();
                int count;
                while ((count = reader.read(buffer, 0, 1024)) > -1) {
                    stringBuffer.append(buffer, 0, count);
                    System.err.println("the count" + count);
                }
                String theString = stringBuffer.toString();
                System.out.println("the string is " + theString);
            } catch (IOException e) {
                System.err.println("Error on inStream.readLine()");
                e.printStackTrace();
            }
        } // end method
    } // end class
    java CallHelloPgm
    output on AS/400:
    CallHelloPgm.main() invoked
    71982271952402422402413719822723024024124024137201148151230133130109242240240
    24524024424224710922724224024024175133129153372151531501511331531631681091371
    49162163129147147193151151162751671481473721515315015113315316316810913714916
    21631291471471931511511621091311501491341371357516316716337227196150131162752
    the string is
    $
    I am not getting into the StringBuffer while loop.
    thanks for the suggestions
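The digits appear because InputStream.read() returns each byte as an int; casting it back to char (or wrapping the stream in an InputStreamReader) gives text. A minimal sketch, using a ByteArrayInputStream as a stand-in for theProcess.getInputStream():

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ReadAsChars {
    public static void main(String[] args) throws IOException {
        // Stand-in for the process's output stream; any InputStream behaves the same.
        InputStream in = new ByteArrayInputStream("ls\n".getBytes(StandardCharsets.US_ASCII));
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) {
            sb.append((char) b); // cast the int back to a character instead of printing digits
        }
        System.out.print(sb); // prints the text, not byte values
    }
}
```

For multi-byte encodings an InputStreamReader with the right charset is the safer route; the plain cast only works for single-byte data like ASCII.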
