Reading a string and removing chars

Hi,
I am new to Java and I am having a problem with converting a string to an int. I have data coming in to gbyte_in which is "1;1;1". I am then converting it to a string and splitting it up into three variables to hold each value. I am then converting each string to an int so I can use it. The problem comes when I try to convert the last value, as I get a
NumberFormatException. I think there must be some chars such as a carriage return or line feed at the end causing the problem. I have tried using gstr_string.replace('\n',' ') but I am still having the problem.
I have put a basic example of what I am doing below. If someone could help a beginner understand what I am doing wrong I would be very grateful.
Thanks
Joolz
** Start code**
byte[] gbyte_in;

private void CheckData_In() {
    int i;
    /* make a String from the bytes */
    String lstr_temp = new String(gbyte_in);
    /* take the String and make an int */
    i = Integer.parseInt(lstr_temp);
}
** End code **

Hi,
This is what comes in: 1;2;3
I can convert 1 and 2 to an int, but 3 throws the error. I am just doing a substring looking for the ";". So I think there must be some special chars at the end, but I cannot find a way of removing them before I convert them to an int.
Thanks
Joolz
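
A minimal sketch of one way around this (assuming the incoming bytes really are "1;2;3" followed by a carriage return / line feed): trim the string before parsing, or split on ";" and trim each piece.

byte[] gbyte_in = "1;2;3\r\n".getBytes();        // example input with a trailing CR/LF

String lstr_temp = new String(gbyte_in).trim();  // trim() strips the trailing CR/LF
String[] parts = lstr_temp.split(";");           // "1", "2", "3"

int first  = Integer.parseInt(parts[0].trim());
int second = Integer.parseInt(parts[1].trim());
int third  = Integer.parseInt(parts[2].trim());  // no longer throws NumberFormatException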

Similar Messages

  • Substring from certain location and removing chars.

    Hello!
    1.
    I have many Strings like these:
    "apple,trees, water, money, 150 m, 90 000 EEK, 3 421.79 EEK/m"
    "dogs, cats, beer, water, metal, roads, 35 000 000 EEK, 27 888.45 EEK/m"
    How do I get these substrings - "90 000" and "35 000 000"? I cannot determine the starting position of these substrings.
    2. How do you remove specific characters from a string?
    For example, I have the string "35 000 000 " where the separators are non-breaking spaces (U+00A0). I want to get a string like this:
    "35000000".
    Any help is greatly appreciated.

    Yes, the trailing EEK identifies the substring. I
    would be very grateful if somebody tells me how to
    remove these chars now.
    It is still not clear to me. Do you mean you want to end up with
    "apple,trees, water, money, 150 m"
    "dogs, cats, beer, water, metal, roads"
    or do you mean you want to extract the value, i.e.
    90 000 EEK and 3 421.79 EEK/m
    35 000 000 EEK and 27 888.45 EEK/m
    ?
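
    A minimal sketch of one way to do both in Java, assuming the wanted figure is the one directly followed by " EEK," (the class name and pattern here are illustrative, not part of the original posts):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class EekExtractor {
        public static void main(String[] args) {
            String line = "dogs, cats, beer, water, metal, roads, 35 000 000 EEK, 27 888.45 EEK/m";

            // digits and (possibly non-breaking) spaces ending just before "EEK,"
            Pattern p = Pattern.compile("([0-9][0-9\\u00A0 ]*)EEK,");
            Matcher m = p.matcher(line);
            if (m.find()) {
                String raw = m.group(1).trim();                // "35 000 000"
                String digitsOnly = raw.replace("\u00A0", "")  // strip non-breaking spaces
                                       .replace(" ", "");      // strip ordinary spaces
                System.out.println(raw + " -> " + digitsOnly); // 35 000 000 -> 35000000
            }
        }
    }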

  • How to read the String and break it into TUPLES

    Hi all,
    I have one String like the following:
    3122078,12/12/2005
    3122079,12/1/1988
    3122076,12/12/1999
    I want to break them into STR := 3122078 and STR1 := 12/12/2005.

    SQL> select substr('3122078,12/12/2005',1,instr('3122078,12/12/2005',',')-1) from dual;
    SUBSTR(
    3122078
    SQL> select substr('3122078,12/12/2005',instr('3122078,12/12/2005',',')+1) from dual;
    SUBSTR('31
    12/12/2005
    SQL>
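
    If the split is needed in Java rather than SQL, a minimal sketch (assuming each line contains exactly one comma):

    String line = "3122078,12/12/2005";
    int comma = line.indexOf(',');
    String str  = line.substring(0, comma);   // "3122078"
    String str1 = line.substring(comma + 1);  // "12/12/2005"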

  • GPO Deploy Reader 9.3 and remove earlier version fails

    Hi
    I am deploying Reader 9.3 with an mst from the Customization Wizard using a GPO. Basically this works perfectly, with the following exception:
    - I cannot select the option to remove older versions in the Customization Wizard; it's grayed out.
    How do I manage to correctly remove the older version?
    The 8.x version was installed by a GPO as well. Of course I can set it to remove the 8.x version and - if possible - install the new 9.3 version on the same boot. Does that work? What's the best practice to update all clients from 8 to 9?
    Thanks
    Franz

    Hi
    I am trying to deploy 9.3 with a GPO as well. I also cannot control the checkboxes to uninstall older versions of Acrobat and Reader, but I have found that it will uninstall the apps without these set. By default "remove older version of Acrobat" is checked and Reader is not, but I have still found it to remove both...

  • Create and remove on read-only beans

    While playing around with read-only beans I found that calling create()
    on read-only homes and remove() on read-only entities does work, which
    was new to me. 'Work' just means changing the database ;)
    Is there any danger in using this (maybe it is just an internal feature which
    does not work in 7.0, ...)?
    axel.

    You should definitely check out 7.0 and its support for optimistic caching. It
    does what you want.
    -- Rob
    Axel Großmann wrote:
    thanks for the quick response.
    Rob Woollen wrote:
    is there any danger in using this (maybe just an internal feature which
    does not work in 7.0, ...)?
    No, it'll work fine. Until 7.0, read-only beans always ran in an
    unspecified transaction. In 7.0, this has been relaxed to allow you to use
    all the tx attributes with read-only beans.
    sounds great; seems that we should give 7.0 beta a try.
    what comes to my mind is: if read-only beans would also write changes to
    their CMP fields to the database (and send an invalidation message) we
    would have some kind of write-through-caching bean, wouldn't we?
    this is actually what I'm searching for, since we have to port an
    application to WebLogic which was designed using one bean per business
    object - mapping this to the two-bean read-mostly pattern without
    changing the present logic seems impossible without many changes and
    losing the ability to run on other application servers too.
    so why not a 'read-mostly bean' instead of manually implementing it?
    axel.
    --
    AVAILABLE NOW!: Building J2EE Applications & BEA WebLogic Server
    by Michael Girdley, Rob Woollen, and Sandra Emerson
    http://learnWebLogic.com

  • How to read a C structure with string and int with a java server using sock

    I've made a C agent returning some information and I want to get it with my Java server. I've set up communication with a connected socket but I'm only able to read strings.
    I want to know how I can read strings and ints from the same stream, because they are sent at the same time:
    C pgm sent structure :
    char* chaine1;
    char* chaine2;
    int nb1;
    int nb2;
    I want to read this with my Java stream. I know the readLine method to get the first two strings, but after two readLine calls what should I do to get the two int values?
    Any idea would be a good help...
    thanks.
    Nicolas (France)

    Does the server send the ints in little-endian or big-endian format?
    The class java.io.DataInputStream (with the method readInt()) can be used to read ints as binary from the stream - see if you can use it.
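
    A minimal sketch of that DataInputStream approach, assuming the C side sends the two strings terminated by '\n' and then the two ints as 4-byte big-endian (network byte order) values - the host, port and wire format here are placeholders that have to match what the C program actually writes:

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.net.Socket;

    public class StructReader {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("localhost", 4000);
                 DataInputStream in = new DataInputStream(socket.getInputStream())) {

                String chaine1 = readLine(in);   // first string, assumed '\n'-terminated
                String chaine2 = readLine(in);   // second string
                int nb1 = in.readInt();          // 4 bytes, big-endian
                int nb2 = in.readInt();          // swap bytes here if the C side is little-endian

                System.out.println(chaine1 + " " + chaine2 + " " + nb1 + " " + nb2);
            }
        }

        // DataInputStream.readLine() is deprecated, so read bytes up to '\n' ourselves.
        private static String readLine(DataInputStream in) throws IOException {
            StringBuilder sb = new StringBuilder();
            int b;
            while ((b = in.read()) != -1 && b != '\n') {
                sb.append((char) b);
            }
            return sb.toString();
        }
    }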

  • Read two CSV files and remove the duplicate values within them.

    Hi,
    I want to read two CSV files (each with more than 100 rows and 100 columns), remove the duplicate values within those two files, merge all the unique values and display them as a single file.
    Can anyone help me out.
    Thanks in advance.

    kirthi wrote:
    Can you help me....
    Yeah, I've just finished... Here's a skeleton of my solution.
    The first thing I think you should do is write a line-parser which splits your input data up into fields, and test it.
    Then fill out the below parse method, and test it with that debugPrint method.
    Then go to work on the print method.
    I can help a bit along the way, but if you want to do this then you have to do it yourself. I'm not going to do it for you.
    Cheers. Keith.
    package forums.kirthi;
    import java.util.*;
    import java.io.PrintStream;
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import krc.utilz.io.ParseException;
    import krc.utilz.io.Filez.LineParser;
    import krc.utilz.io.Filez.CsvLineParser;
    public class DistinctColumnValuesFromCsvFiles
    {
      public static void main(String[] args) {
        if (args.length==0) args = new String[] {"input1.csv", "input2.csv"};
        try {
          // data is a Map of ColumnNames to Sets-Of-Values
          Map<String,Set<String>> data = new HashMap<String,Set<String>>();
          // add the contents of each file to the data
          for ( String filename : args ) {
            data.putAll(parse(filename));
          }
          // print the data to output.csv
          print(data);
        } catch (Exception e) {
          e.printStackTrace();
        }
      }

      private static Map<String,Set<String>> parse(String filename) throws IOException, ParseException {
        BufferedReader reader = null;
        try {
          reader = new BufferedReader(new FileReader(filename));
          CsvLineParser.squeeze = true; // field.trim().replaceAll("\\s+"," ")
          LineParser<String[]> parser = new CsvLineParser();
          int lineNumber = 1;
          // 1. read the column names (first line of file) into a List
          // 2. read the column values (subsequent lines of file) into a List of Set's of String's
          // 3. build a Map of columnName --> columnValues and return it
          return null; // TODO: return the Map built in step 3
        } finally {
          if (reader!=null) reader.close();
        }
      }

      private static void debugPrint(Map<String,Set<String>> data) {
        for ( Map.Entry<String,Set<String>> entry : data.entrySet() ) {
          System.out.println("DEBUG: "+entry.getKey()+" "+Arrays.toString(entry.getValue().toArray(new String[0])));
        }
      }

      private static void print(Map<String,Set<String>> data) {
        // 1. get the column names from the table.
        // 2. create a List of List's of String's called matrix; logically [COL][ROW]
        // 3. print the column names and add the List<String> for this col to the matrix
        // 4. print the matrix by iterating columns and then rows
      }
    }
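
    If the goal is simply to merge the two files and drop duplicate lines (a narrower reading of the question than the per-column skeleton above, and without the krc.utilz helper classes), a self-contained sketch:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.LinkedHashSet;
    import java.util.Set;

    public class MergeUniqueLines {
      public static void main(String[] args) throws IOException {
        // A LinkedHashSet keeps only the first occurrence of each line
        // while preserving the order in which lines were first seen.
        Set<String> unique = new LinkedHashSet<>();
        for (String filename : new String[] {"input1.csv", "input2.csv"}) {
          unique.addAll(Files.readAllLines(Paths.get(filename)));
        }
        Files.write(Paths.get("output.csv"), unique);
      }
    }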

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am a relative novice to Java; I have a procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding the performance of char[] arrays versus the String and StringBuilder classes with their charAt() methods.
    I read a file one line/record at a time and then I process it. Regex is not an option (too slow, and it cannot handle all the cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder) via the charAt(i) method is several times (5 times+?) slower than indexing into a char array.
    My question: is this a correct observation regarding the charAt() versus char[i] performance difference, or am I doing something wrong in the String case?
    What is the best way (performance) to process character strings inside Java if I need to process them one character at a time ?
    Is there another approach that I should consider?
    Many thanks in advance

    >
    Once I took that String.length() call out of the 'for' loop and used an integer length local variable, as you have in your code, the performance is very close between the char array and String.charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean', record of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any line, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files takes that into account. There have been some attempts (you can find them on the net) at using various 'escaping' techniques to escape those characters where they occur, but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest version of Informatica and DataStage cannot export a simple one-column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code, and complex RegEx is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character-based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures so that could be done in the database. I would only recommend pursuing that approach if you already needed to perform some substantial data validation or processing in the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details about String versus Array are way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues with the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in-between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later, if, and ONLY if, that particular module becomes a bottleneck, replace that module with one that is more performant.
    Your initial code should ONLY use standard, well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
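
    A minimal sketch of the two access patterns discussed above (the comma-counting loop is just an illustrative stand-in for whatever per-character work the real parser does):

    public class CharAccessPatterns {
      // Pattern 1: String.charAt() with length() hoisted out of the loop.
      static int countCommasCharAt(String line) {
        int count = 0;
        int len = line.length();            // hoisted: not re-evaluated every iteration
        for (int i = 0; i < len; i++) {
          if (line.charAt(i) == ',') count++;
        }
        return count;
      }

      // Pattern 2: copy the String once into a char[] and index it directly.
      static int countCommasArray(String line) {
        int count = 0;
        char[] chars = line.toCharArray();  // one copy per record
        for (int i = 0; i < chars.length; i++) {
          if (chars[i] == ',') count++;
        }
        return count;
      }

      public static void main(String[] args) {
        String record = "a,\"b,c\",d,,e";
        System.out.println(countCommasCharAt(record) + " " + countCommasArray(record));
      }
    }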

  • Remove char(160) and space in SSIS

    Hi everyone,
    I'm using an SSIS package to import from an Excel source to a SQL db. Before doing that, I want to remove all spaces and char(160), if any.
    But with this: REPLACE(REPLACE(APP_ID_TRANSACTION,"& chr(160)","")," ","") I can only remove the spaces; I can't remove char(160).
    Any ideas for this ?
    Many thanks,
    Hong

    Hi, with Excel I can use this: =SUBSTITUTE((SUBSTITUTE(C2,CHAR(160),""))," ","") to remove all the spaces in any field, but in SSIS I still can't remove char(160)...
    Please help me.

  • Anyone know how to add a string to a 1d array with file info, then be able to read back, display string, and sort data array.

    I need to store a data array and include text that describes what the data is (used for various configuration files). Anyway, I would like to be able to save the string as part of the txt file, somehow read it back, remove the (variable-length) string, and display it in an indicator - all without causing too much trouble for the original data array.
    Thanks in advance!!

    There are several ways to do what you require. A simple method would be to use an ASCII text file. When writing one of those, you basically just need to build a giant string starting with the description text you want. We like to call that a header. Once you've got the header, add some sort of delimiter like a bunch of "-" or two sets of ( EOL = End of Line = CRLF = \r\n ). After your delimiter, concatenate your array in string form, or flatten your array from its native form into a string, and tack it onto the file (append).
    See the (very quick) example attached.
    Dan Press
    www.primetest.com
    Attachments:
    fileheader.vi ‏41 KB

  • Need to Add and Remove Columns of ADF Read Only table from Backing bean

    I have a scenario where I am trying to Populate TransientVO which is shown has a ADF Read Only Table in page.
    I have couple of Check Boxes Based on their selection I am trying to render and hide certain Columns.
    But the issue which I am facing is that only the column header seems to change, whereas the rows and values don't,
    even if I apply the expression language rendering condition on the outputText inside those columns.
    So I am thinking of adding and removing VO attribute columns to the table from the backing bean.
    I need some sample code snippet or a better design to achieve this. It's kind of urgent too... having an aggressive deadline :(
    Please chip in, people.
    Thanks in Advance .
    TK

    Table Code..
    <af:table value="#{bindings.InventoryGridTrans.collectionModel}"
                                    var="row"
                                    rows="#{bindings.InventoryGridTrans.rangeSize}"
                                    emptyText="#{bindings.InventoryGridTrans.viewable ? 'No data to display.' : 'Access Denied.'}"
                                    fetchSize="#{bindings.InventoryGridTrans.rangeSize}"
                                    rowBandingInterval="0" id="t4"
                                    partialTriggers="::sbcSales ::sbcUsage ::cb1">
                            <af:column sortProperty="Period" sortable="false"
                                       headerText="#{bindings.InventoryGridTrans.hints.Period.label}"
                                       id="c38">
                              <af:outputText value="#{row.Period}" id="ot33"/>
                            </af:column>
                            <af:column sortProperty="Past12SalesCount"
                                       sortable="false"
                                       headerText="#{bindings.InventoryGridTrans.hints.Past12SalesCount.label}"
                                       id="c29"
                                       rendered="#{backingBeanScope.IndexPageBackingBean.onUsage != true and backingBeanScope.IndexPageBackingBean.onSales == true}">
                              <af:outputText value="#{row.Past12SalesCount}"
                                             id="ot40"
                                             rendered="#{backingBeanScope.IndexPageBackingBean.onUsage != true and backingBeanScope.IndexPageBackingBean.onSales == true}"
                                             visible="#{backingBeanScope.IndexPageBackingBean.onUsage != true and backingBeanScope.IndexPageBackingBean.onSales == true}">
                                <af:convertNumber groupingUsed="false"
                                                  pattern="#{bindings.InventoryGridTrans.hints.Past12SalesCount.format}"/>
                              </af:outputText>
                            </af:column>
                            <af:column sortProperty="Past12UsageCount"
                                       sortable="false"
                                       headerText="#{bindings.InventoryGridTrans.hints.Past12UsageCount.label}"
                                       id="c40"
                                       rendered="#{backingBeanScope.IndexPageBackingBean.onUsage == true and backingBeanScope.IndexPageBackingBean.onSales != true}"
                                       visible="#{backingBeanScope.IndexPageBackingBean.onUsage == true and backingBeanScope.IndexPageBackingBean.onSales != true}">
                              <af:outputText value="#{row.Past12UsageCount}"
                                             id="ot47"
                                             rendered="#{backingBeanScope.IndexPageBackingBean.onUsage == true and backingBeanScope.IndexPageBackingBean.onSales != true}"
                                             visible="#{backingBeanScope.IndexPageBackingBean.onUsage == true and backingBeanScope.IndexPageBackingBean.onSales != true}">
                                <af:convertNumber groupingUsed="false"
                                                  pattern="#{bindings.InventoryGridTrans.hints.Past12UsageCount.format}"/>
                              </af:outputText>
                            </af:column>
                    </af:table>

  • How to read a data file combining strings and data

    Hello,
    I have a data file combining strings and numeric data to read. I'm trying to read the filename, time, constants and comments into four separate string indicators (the number of comment lines varies between files), and read the data into a 2-D numeric array. How can I do this? Is there any function that can search for special characters in the spreadsheet file so I can locate exactly where I should start reading the specific data? The following is how the data file appears. Thank you very much.
    Best,
    Richard
    filename.dat
    14:59:00 12/31/2009
    Sample = 2451
    Frequency = 300, Wait time = 2500
    Temperature = 20
    some comments
    some comments
    some comments
    some comments
    some comments
    7.0000E+2    1.5810E-5
    7.0050E+2    1.5400E-5
    7.0100E+2    1.5500E-5
    7.0150E+2    1.5180E-5
    Message Edited by Richard1017 on 10-02-2009 03:10 PM
    Solved!
    Go to Solution.

    Hi,
    I'm fairly new to the NI forums too and I think you just have to wait longer. Your post was done right. I do a similar function to what you are talking about, except I read in numbers from a file. I create an ini file (just a notepad file of type *.ini) that is set up with sections inside brackets [] and keys with whatever name followed by an = sign. You may be able to use a *.dat file too, I just haven't. Then the VI attached goes to that file and reads the keys from those sections. You just repeat for the different sections and keys you want to read. You can use similar provided VIs to write to that same file or create it. Let me know how that works.
    Attachments:
    Help1.ini ‏1 KB
    Help1.vi ‏10 KB

  • I have an iMac. I noticed that I have both versions 8 and 9 of Adobe Reader. Can I remove version 8 ?

    I have an iMac. I noticed that I have both versions 8 and 9 of Adobe Reader. Can I remove version 8 ?

    Yes.
    If you double click on a PDF and your system uses 8 to open it, you may have to repair (Help>Repair... in Reader) the 9 installation or you could change your file associations to use 9 to open PDF files.
    If 9 opens them already, just remove 8 and you're good to go.

  • How to design and implement an application that reads a string from the ...

    How to design and implement an application that reads a string from the user and prints it one character per line???

    This is so trivial that it barely deserves the words "design" or "application".
    You can use java.util.Scanner to get user input from the command line.
    You can use String.getChars to convert a String into an array of characters.
    You can use a loop to get all the characters individually.
    You can use System.out.println to print data on its own line.
    Good luck on your homework.
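
    A minimal sketch along those lines (using toCharArray() here rather than getChars() for brevity):

    import java.util.Scanner;

    public class OneCharPerLine {
      public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter a string: ");
        String input = scanner.nextLine();
        for (char c : input.toCharArray()) {
          System.out.println(c);   // each character on its own line
        }
      }
    }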

  • This latest update of Firefox, 5.0.1, has a graphic in the toolbar space that looks like some kind of ad, had HP7 on it, Harry Potter ad maybe, and it obscures my tabs and toolbar and makes them very, very difficult to read, how do I remove it?

    This latest update of Firefox suddenly had this ad (that's what it looks like), for what I am guessing is that it is an ad for a Harry Potter movie, as it says "HP7", (are there really seven of them now??? Wow, talk about overselling something....), but I need to remove it, can't see or read my tabs and tool bar headings, not good! My operating system is Windows XP, tablet PC edition, SP3.

    Is it possible you accidentally installed a Persona? If it's a Persona or other kind of theme, you can revert to the default theme here:
    orange Firefox button ''or'' Tools menu > Add-ons > Appearance
    Right-click default and choose it (your menu choice could say "wear it" or it might say something less strange)
    Any luck?
