Convert char[] to string

Hi,
How can I convert a char array to a String?

Never mind, I figured it out.
I just have to use new String(char[]).
Thanks :)
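
For anyone landing here, a minimal sketch of that constructor in use (the array name chars is just an example):

    char[] chars = {'H', 'e', 'l', 'l', 'o'};
    String s = new String(chars);        // copies the array contents into "Hello"
    String t = String.valueOf(chars);    // equivalent convenience method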

Similar Messages

  • Converting chars to strings

    I don't know how to get this thing to work:
    public class TestGrade {
        public static void main(String[] args) throws GradeException, java.io.IOException {
            int studentID = 123423;
            char grade;
            String aGrade = new String();
            try {
                System.out.println("Student #: " + studentID);
                System.out.println("Enter a letter grade for student: ");
                grade = (char) System.in.read();
                if ((grade >= 'A' && grade <= 'F') || (grade >= 'a' && grade <= 'f')) {
                    System.out.println("Student ID#: " + studentID + " Grade: " + grade);
                    aGrade = grade;   // <-- this is the line that will not compile
                } else {
                    throw new GradeException(aGrade);
                }
            } catch (GradeException error) {
                System.out.println("Test Error: " + error.getMessage());
            }
        }
    }

    it might help if I tell you where the error is...
       aGrade = grade; generates a rather obvious error. Incompatible types, I don't know how to convert this sucker into a string so I can feed a string into GradeException....
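
    A char cannot be assigned to a String directly, but it can be wrapped in one. A minimal sketch of the line in question (GradeException is the poster's own class):

        aGrade = String.valueOf(grade);      // or Character.toString(grade)
        throw new GradeException(aGrade);    // now a String is passed, as GradeException expects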

  • How to convert char array to string

    sirs,
    I have written a method in Java which will return a randomly generated string of a fixed length. I am creating one character at a time and inserting it into a char array.
    At the end I am converting it to a String, like this:
    String newpass = pass.toString(); // pass is a char array
    but the problem is that the string has a different value from what I have generated.
    If I do it like this:
    String newpass = new String(pass); // pass is a char array
    then newpass has the correct value, but there is still an error:
    when I print
    System.out.println(newpass + "some text");
    "some text" is not printed.
    Can you suggest a better way?

    /* this is my method */
    private String generateString(int len) {
        char pass[] = new char[10];
        int cnt = 0;
        int temp = 0;
        Random randomGenerator = new Random();   // java.util.Random
        for (int idx = 0; idx < 1000; ++idx) {
            temp = randomGenerator.nextInt(1000) % 128;
            if ((temp >= 65 && temp <= 90) || (temp >= 97 && temp <= 122) || (temp >= 48 && temp <= 57)) {
                pass[cnt] = (char) temp;
                cnt++;
            }
            if (cnt >= len) break;
        }
        String newpass  = pass.toString();
        String newpass1 = new String(pass);
        System.out.println("passed pass" + newpass + "as" + newpass1 + "sa");
        return newpass;
    }
    here newpass and newpass1 have different values.
    Why??
    Edited by: Shovan on Jun 4, 2008 2:21 AM
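
    The difference the poster sees is expected: calling toString() on a char[] uses the default Object.toString(), which returns something like "[C@1db9742" (a type tag plus a hash code), not the characters, while new String(pass) does copy the characters. The missing "some text" is most likely because pass has length 10 but only cnt slots were filled, so the resulting String ends in '\u0000' padding. A minimal sketch of the usual fix, reusing the poster's pass and cnt variables:

        String newpass  = new String(pass, 0, cnt);      // copy only the characters actually generated
        String alsoFine = String.valueOf(pass, 0, cnt);  // equivalent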

  • Convert Char to Date format - Evaluate

    Hi,
    Could anyone provide us with the Evaluate formula to convert a Char to Date format?
    2009-06-20 should be converted to 06/20/2009.
    Regards,
    Vinay

    Hi,
    Refer to the threads below:
    Re: How to convert string to date format?
    how to convert character string into date format????
    Regards,
    Chithra Saravanan
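
    The threads above cover the Evaluate/SQL side; purely for illustration, here is the same reformatting done in plain Java (a sketch, not the Evaluate syntax):

        // parse yyyy-MM-dd and re-format as MM/dd/yyyy
        static String reformat(String in) throws java.text.ParseException {
            java.util.Date d = new java.text.SimpleDateFormat("yyyy-MM-dd").parse(in);
            return new java.text.SimpleDateFormat("MM/dd/yyyy").format(d);   // "2009-06-20" -> "06/20/2009"
        }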

  • Convert char to ascii code and vice versa

    HI
    Is there any function module to convert char to ascii code and vice versa.
    Thanks in advance

    Hi,
    be careful if you have Unicode running in your system. URL_ASCII_CODE_GET is platform-dependent, so it will return the internal HEX representation of the character in your system - which is hopefully, and in most cases, ASCII.
    Under Unicode we use double-byte characters here. I tried this function and the result CHAR_CODE is '00' regardless of what character I specify for TRANS_CHAR. But the coding is so simple that I corrected it, resulting in this sample code:
    [P]
    * convert the input characters to their ASCII (internal) representation
      DATA:
        l_ofs   TYPE syfdpos,
        l_len   TYPE sy-linsz,
        l_ascii TYPE i.
      FIELD-SYMBOLS:
        <x> TYPE x.
      l_len = STRLEN( p_ascii ).
      DO l_len TIMES.
        l_ofs = sy-index - 1.
        ASSIGN p_ascii+l_ofs(1) TO <x> CASTING.
        l_ascii = <x>.
        WRITE: l_ascii.
      ENDDO.
    [/P]
    Here, for each character of string p_ascii, the internal (ASCII) representation is determined and written to the output list.
    Regards,
    Clemens
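
    (For comparison, in Java the same round trip is just a cast, since a char's code point in the Basic Latin range is its ASCII code; a minimal sketch:)

        char c = 'A';
        int code = (int) c;       // 65
        char back = (char) code;  // 'A'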

  • Convert char * to LPCTSTR

    How I can convert char * to LPCTSTR.
    I have been reading MSDN but I haven't found anything sensible to do.
    Thanks for any help.

    If your program is not using Unicode as the default, then "char *" is effectively the same thing as LPCTSTR: the headers define LPCTSTR as "const char *", so the compiler sees the same underlying character type.
    Your program, however, is probably using Unicode as the default. If so, then you need to convert the non-Unicode "char *" string to a Unicode string. There are many ways to do that, and the most convenient solution for you depends on what your program is already using. If your program is using MFC then there is an MFC solution. If your program is using the CLI (.NET) then there is a different solution using that. If your program uses the C++ standard classes (std namespace) then there is a solution using that. There is also a solution using the C runtime and another using the Windows SDK.

  • CIN: Converting a C String to a LabVIEW String

    Hi all,
    I have been developing CINs in Microsoft Visual C++ 6.0 for LabVIEW as project needs require. However, I am having a problem converting a C string to a LabVIEW string in a CIN.
    I tried two ways to make the conversion work, LStrPrintf and MoveBlock, as follows:
    1. LStrPrintf
    #include "extcode.h"
    #include "hosttype.h"
    #include "windows.h"
    struct teststrct {
        const char* test;
    };
    struct teststrct testinstance;
    typedef struct {
        LStrHandle test;
    } TD1;
    CIN MgErr CINRun(TD1 *testcluster, LVBoolean *Error) {
        char *tempCStr = NULL;
        strcpy(tempCStr, testinstance.test); // this would cause LabVIEW to crash!
        LStrPrintf(testcluster->test, (CStr) "%s", tempCStr);
        // but if I assigned tempCStr as tempCStr = "test", the string value
        // "test" could be passed to LabVIEW without any problems.
    }
    2. MoveBlock
    #include "extcode.h"
    #include "hosttype.h"
    #include "windows.h"
    struct teststrct {
        const char* test;
    };
    struct teststrct testinstance;
    typedef struct {
        LStrHandle test;
    } TD1;
    CIN MgErr CINRun(TD1 *testcluster, LVBoolean *Error) {
        char *tempCStr = NULL;
        int32 len;
        tempCStr = (char *)&testinstance.test; // since strcpy didn't work, I used this way
                                               // to try to copy the const char* to char*.
        len = StrLen(tempCStr);
        if (err = NumericArrayResize(uB, 1L, (UHandle*)&testcluster->test, len)) {
            *Error = LVFALSE;
            goto out;
        }
        MoveBlock(&tempCStr, LStrBuf(*testcluster->test), len); // the string was passed
                                                                // to LabVIEW, but it was unreadable.
    out:
        ;
    }
    Did I do anything wrong? Any thoughts or suggestions would be very much appreciated!

    Thank you so much for your response, Greg. However, I still have a problem after making the
    corresponding modification for the LStrPrintf approach:
    int32 len;
    char tempCStr[255] = "";
    strcpy(tempCStr, testinstance.test);
    len = StrLen(tempCStr);
    LStrPrintf(testcluster->test, (CStr) "%s", tempCStr);
    LStrLen(*testcluster->test) = len;
    LabVIEW crashes. Any ideas?
    Greg McKaskle wrote:
    > The LStrPrintf example works correctly with the "test" string literal because
    > tempCStr = "test"; assigns the pointer tempCStr to point to the buffer containing
    > "test". Calling strcpy to move any string to tempCStr will cause
    > problems because it is copying the string to an uninitialized pointer,
    > not to a string buffer. There isn't anything wrong with the LStrPrintf
    > call, the damage is already done.
    >
    > In the moveblock case, the code:
    > tempCStr = (char *)&testinstance.test; //since strcpy didn't work, I
    > used this way to try to copy the const char* to char*.
    >
    > doesn't copy the buffer, it just changes a pointer, tempCStr to point to
    > the testinstance string buffer. This is not completely necessary, but
    > does no harm and is very different from a call to strcpy.
    >
    > I believe the reason that LV cannot see the returned string is that the
    > string size hasn't been set.
    > Again, I'm not looking at any documentation, but I believe that you may
    > want to look at LStrLen(*testcluster->test). I think it will be the size of
    > the string passed into the CIN, and it should be set to len before returning.
    >
    > Greg McKaskle
    >
    > > struct teststrct{
    > > ...
    > > const char* test;
    > > ...
    > > };
    > >
    > > struct teststrct testinstance;
    > >
    > > typedef struct {
    > > ...
    > > LStrHandle test
    > > ...
    > > } TD1;
    > >
    > > CIN MgErr CINRun(TD1 *testcluster, LVBoolean *Error) {
    > >
    > > char *tempCStr = NULL;
    > >
    > > ...
    > >
    > > strcpy(tempCStr, testinstance.test); // this would cause LabVIEW crash!
    > > LStrPrintf(testcluster->test, (CStr) "%s", tempCStr);
    > > // but if I assigned tempCStr as tempCStr = "test", the string value
    > > "test" could be passed to LabVIEW without any problems.
    > >
    > > ...
    > > }
    > >
    > > 2. MoveBlock
    > >
    > > #include "extcode.h"
    > > #include "hosttype.h"
    > > #include "windows.h"
    > >
    > > struct teststrct{
    > > ...
    > > const char* test;
    > > ...
    > > };
    > >
    > > struct teststrct testinstance;
    > >
    > > typedef struct {
    > > ...
    > > LStrHandle test
    > > ...
    > > } TD1;
    > >
    > > CIN MgErr CINRun(TD1 *testcluster, LVBoolean *Error) {
    > >
    > > char *tempCStr = NULL;
    > > int32 len;
    > > ...
    > >
    > > tempCStr = (char *)&testinstance.test; //since strcpy didn't work, I
    > > used this way to try to copy the const char* to char*.
    > > len = StrLen(tempCStr);
    > >
    > > if (err = NumericArrayResize(uB, 1L, (UHandle*)&testcluster->test,
    > > len))
    > > {
    > > *Error = LVFALSE;
    > > goto out;
    > > }
    > >
    > > MoveBlock(&tempCStr, LStrBuf(*testcluster->test), len); // the string
    > > was able to passed to LabVIEE, but it was unreadable.
    > > ...
    > >
    > > out:
    > > ...
    > >
    > > }
    > >
    > > Did I do anything wrong? Any thougths or suggestions would be very
    > > appreciated!

  • Converting a given string to non-english language

    Hi, can anybody help me with how to convert an entered string in a text field to French or Spanish or any other non-English language?

    Hi,
    I don't think you will find a language translator package.
    What you can do is store the phrases and words in a database.
    -- SQL Code
    CREATE TABLE [Language_Data] (
      [ID]    INT NOT NULL IDENTITY PRIMARY KEY,
      [Lang]  VARCHAR(30) NOT NULL,                             -- language: English/French/...
      [Type]  CHAR(1) NOT NULL,                                 -- is it a phrase or a word
      [Words] VARCHAR(100) NOT NULL                             -- the phrase or word itself
    )
    GO
    CREATE TABLE [Translate] (
      [ID]       INT NOT NULL IDENTITY PRIMARY KEY,
      [FK_Orig]  INT NOT NULL REFERENCES [Language_Data]([ID]), -- ID of the original-language entry
      [FK_Trans] INT NOT NULL REFERENCES [Language_Data]([ID])  -- IDs of all known translations
    )
    GO
    Create stored procedures to add a new word/phrase to the [Language_Data] table and to add a translation to the [Translate] table.
    Please note that to add a translation you first insert into the [Language_Data] table, then
    insert the original's ID and the translation's ID into the [Translate] table. Also make provision for backwards translation.
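
    A minimal sketch of how a lookup against those two tables might be done from Java via JDBC (the translate method and its parameters are hypothetical; the table and column names are the ones defined above):

        import java.sql.*;

        // returns the stored translation of a word/phrase, or null if none is stored
        static String translate(Connection con, String source, String targetLang) throws SQLException {
            String sql =
                "SELECT t.[Words] FROM [Language_Data] s " +
                "JOIN [Translate] tr ON tr.[FK_Orig] = s.[ID] " +
                "JOIN [Language_Data] t ON t.[ID] = tr.[FK_Trans] " +
                "WHERE s.[Words] = ? AND t.[Lang] = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, source);
                ps.setString(2, targetLang);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }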

  • Converting byte to string

    I'm doing a CRC for a txt file.
    After the CRC calculation I have the 16-bit checksum.
    How should I add the 16 bits to the file?
    The file is generated based on various objects, so it would be very good if I could convert the 16 bits into a String and then add it to the file as part of the text.

    I'm supposed to add the 16 bits into the file as 2 bytes of ASCII.
    I'm doing this:
    int firstCRC = crc>>>8;
    int intSecCRC = 0x00ff & crc;
    String crc1 = ""+ (char)firstCRC;
    String crc2 = ""+ (char)intSecCRC;
    01000100 00111101 << CRC in binary
    firstCRC: D
    SecCRC : =
    So I will be appending 'D' and '=' to the file... is this correct?
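
    For the value shown, that split does work out. A minimal sketch (the crc value is taken from the binary above; note that whether the two chars land in the file as single ASCII bytes also depends on the character encoding used when writing):

        int crc = 0x443D;                       // 01000100 00111101
        char first  = (char) (crc >>> 8);       // 0x44 -> 'D'
        char second = (char) (crc & 0x00FF);    // 0x3D -> '='
        String toAppend = "" + first + second;  // "D=" appended to the file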

  • Converting from spreadshet string to array and then back to spreadsheet string

    My question is: why is the Spreadsheet String To Array function creating more data than the original string had when you change the array back into a spreadsheet string? I'm trying to analyze a comma-delimited file using array functions, since my column and row size is constant but my data varies; that's my reason for not using string parsing functions, which would get more involved and difficult. However, after I convert the comma-delimited file I read into a 2D array of data and then convert back to a string using Array To Spreadsheet String, I get added columns in the file, which prevents another program from receiving these files. Also, the data I am reading is not all contiguous; it has gaps in some places for empty data. Comparing the new file to the original after it has gone from string to array and back to string again, they look almost identical except that the file size got larger by 400 bytes, and where the original file has empty spaces the new file has a lot of commas added. Any idea?
    Charles

    The result you get is normal when the spreadsheet string contains rows of uneven length. Since the array rows must all have the same number of elements, nil values are added during the conversion. And of course, the back-to-string conversion keeps those added values in the string, with the associated commas.
    example : 3 x 3 array
    1,2,3
    4
    5,6,7
    is converted into
    1 2 3
    4 0 0
    5 6 7
    then back to
    1,2,3
    4,0,0
    5,6,7
    Chilly Charly    (aka CC)

  • Date Format Picture ends before converting entire input string..

    Hi all
    I am creating a report in Reports 10g in which I want to convert Emp_lump_sum (a NUMBER) to characters. I have created a formula column, written a function in it, and assigned that formula column to the text field where I want the result.
    When the report runs I get the following error:
    ORA-01830: date format picture ends before converting entire input string
    I have tried with an attribute EMP_NO (VARCHAR2(30)) and it shows the result, but when I add EMP_LUMP_SUM (NUMBER) it shows the above error. On the database side I get the same error.
    Database : 11g
    OS :     Windows server 2003
    reports   : 10g
    Any help will be appreciated.
    The function I wrote behind the formula column is:
    function CF_1Formula return CHAR is
    xy varchar2(100);
    begin
      select to_char(to_date(emp_lump_sum,'j'),'jsp') into xy from fms_111_form4_tr;
        return xy;
        --xy := f_19;
    end;

    Hello:
    Try to use a database function and call it from reports.
    Regards,

  • JNI: Converting char* to jstring

    Hello all, I was wondering if there is an alternative to
    env->NewStringUTF(ajstring); for converting char* to jstring, as NewStringUTF causes my app to crash.
    Thanks.

    Have you tried debugging the application to see why exactly that method is crashing?
    Finding alternatives is not a solution, there is a problem here that you need to figure out. Are you sure env is valid? Are you sure ajstring is a valid string (null-terminated for example)?

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am a relative novice to Java; I have a procedural C programming background.
    I am reading many very large (multi-GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading them into the database.
    I am using Java 7 (the latest) with NIO.2.
    The IO performance is fine.
    My question is regarding the performance of using char[] arrays versus the String and StringBuilder classes with their charAt() methods.
    I read a file one line/record at a time and then process it. Regex is not an option (too slow, and it cannot handle all the cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder) via charAt(i) is several times (5+ times?) slower than indexing into a char array.
    My question: is this a correct observation about charAt() versus char[i] performance, or am I doing something wrong in the String case?
    What is the best way (performance-wise) to process character strings in Java if I need to process them one character at a time?
    Is there another approach that I should consider?
    Many thanks in advance

    >
    Once I took that String.length() call out of the 'for' loop and used an integer length local variable, as you have in your code, the performance is very close between the char array and String charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean' records of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the records at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example, you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns anywhere, like ,,, or "","","", or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files takes that into account. There have been some attempts (you can find them on the net) at using various 'escaping' techniques to escape those characters where they occur, but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest versions of Informatica and DataStage cannot export a simple one-column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code, and complex regex is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character-based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures, so that could be done in the database. I would only recommend pursuing that approach if you already needed to perform some substantial data validation or processing in the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details like String versus array is premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues at the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later, if and ONLY if a particular module becomes a bottleneck, replace that module with one that is more performant.
    Your initial code should ONLY use standard, well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
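
    For reference, a minimal sketch of the two access patterns debated above (counting delimiters in one record is only an illustrative task, not the poster's actual parser; the cached-length point from the reply is included):

        // String.charAt() with the length cached outside the loop
        static int countCommasCharAt(String line) {
            int n = 0;
            int len = line.length();
            for (int i = 0; i < len; i++) {
                if (line.charAt(i) == ',') n++;
            }
            return n;
        }

        // one copy to char[], then plain array indexing
        static int countCommasCharArray(String line) {
            int n = 0;
            char[] chars = line.toCharArray();
            for (int i = 0; i < chars.length; i++) {
                if (chars[i] == ',') n++;
            }
            return n;
        }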

  • Converting char to upper

    How can I convert a char to upper case like you would a string?
    char code;
    String Stringcode;
    code = letter.getText().charAt(0);
    Stringcode = String.valueOf(code).toUpperCase();
    Can't I just convert the char to upper?

    public class Test {
        public static void main(String[] parameters) {
            for (char ch = 'a'; ch <= 'z'; ch++) {
                System.out.print(Character.toUpperCase(ch));
            }
        }
    }
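
    To answer the original question directly: yes, the char can be upper-cased without going through a String (letter and code are the poster's own variables):

        char code = Character.toUpperCase(letter.getText().charAt(0));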

  • Jni: converting char* to JNI type?

    In my native C++ code I have to call a function through another DLL. This function has an OUT parameter of type char *.
    I need to return this parameter to the calling Java method through the native C++ code using JNI. Is the corresponding JNI type that I should be returning jcharArray or jstring?
    Either way, how do I convert the char * to the corresponding jcharArray/jstring type?

    Things may not be that simple. Beware that this function does not expect just any kind of char*: it expects a UTF-8 encoded string.
    That means that if your C string is all basic (i.e. < 128) characters, it is fine,
    but if you have special characters (codes >= 128), they may not be interpreted the way you want.
    see the end of http://java.sun.com/j2se/1.3/docs/guide/jni/spec/types.doc.html
    for more details.
