Manipulating chars

Hi everyone,
I'm still new to Java (and programming!). Can anyone advise me on how to manipulate chars, particularly how to convert a given lowercase char to uppercase? The only way I can do it right now is to convert to a String and use the toUpperCase() method, but this seems clumsy.
Thanks!

Just to clarify, pervel is referring to the java.lang.Character.toUpperCase() method. The Character class is a wrapper class - one exists for every primitive type (Integer, Float, etc.) - and they are mainly of use when you have a library API which demands to return, or accept as an argument, something of type java.lang.Object - you can't pass a char since it is not an object.
In this example you would take your char variable...
char c = 'c';
...place it in a wrapper...
Character myChar = new Character(c);
...then set the original variable to the return value of the method (note that toUpperCase() is actually a static method of Character, so it takes the char as an argument rather than being called on the wrapper instance)...
c = Character.toUpperCase(c);
...the variable c will now == 'C'.
Wrappers are good in this sense but if you are, for example, converting a large text file to upper-case, you will be creating a huge number of objects making your code slow and bloating out the memory - for speed use the arithmetic algorithm in the first posting / to show you know your way around the API, use the wrapper class...
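For reference, both approaches mentioned above can be sketched in one small program. The arithmetic trick assumes plain ASCII letters, where the lowercase alphabet sits exactly 32 above the uppercase one ('a' is 97, 'A' is 65):

```java
public class UpperDemo {
    public static void main(String[] args) {
        char c = 'c';

        // Library approach: Character.toUpperCase is a static method,
        // so no wrapper object needs to be created.
        char viaApi = Character.toUpperCase(c);

        // Arithmetic approach: only valid for ASCII letters a-z.
        char viaMath = (c >= 'a' && c <= 'z') ? (char) (c - 32) : c;

        System.out.println(viaApi);   // C
        System.out.println(viaMath);  // C
    }
}
```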

Similar Messages

  • How to deal with ASCII code in manipulating strings or chars!

    hi! My function should do the following:
    I pass a char value like 'a', 'b', 'c'... and it should return the next value to me (a ---> b, c ---> d).
    I know that I should use the ASCII code to resolve that, so can anyone grant me the source code for that solution and help me?

    What about 'z'?

    char after(char ch) {
        return (ch == 'z') ? 'a' : (char) (ch + 1);
    }

    Kind regards,
    Levi

  • Manipulating the "\" char on a string

    I need to change (or erase) the "\" in a string. It has been impossible: I have tried everything to locate it in the string but I can't detect it. Here's my code:
    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.io.*;

    public class xmlwriter extends HttpServlet {
        public void service(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            ServletOutputStream salida = res.getOutputStream();
            res.setContentType("text/HTML");
            String quitar = "$#092";
            String cadena = req.getParameter("cadena");
            String result = cadena.replaceAll("n<", "\r<");
            String result1 = result.replaceAll(quitar, "a");
            //System.out.println(result);
            File f1 = new File("c:/salida.xml");
            FileWriter out = new FileWriter(f1);
            f1.createNewFile();
            out.write(result1);
            out.close();
            salida.println("Finalizado");
        }
    }
    This code should change every "\" to an "a" but nothing happens. I'm passing the ASCII code of the "\" char. I have tried EVERYTHING but nothing happens. Can anyone help me?

    Ok, I have solved how to change the "\", now how can I remove it from the string??, is there any cadena.removeAll("\\\\")?
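    For the follow-up question: there is no removeAll() on String, but replacing the backslash with the empty string does the same job. A minimal sketch (note that replaceAll() takes a regex, so matching one literal backslash needs four backslashes in Java source; String.replace() works on literal text and is simpler):

    ```java
    public class BackslashDemo {
        public static void main(String[] args) {
            String cadena = "a\\b\\c"; // the string a\b\c

            // "\\\\" in source is the two-character regex \\ ,
            // which matches a single literal backslash.
            String viaRegex = cadena.replaceAll("\\\\", "");

            // String.replace treats its arguments as literal text.
            String viaLiteral = cadena.replace("\\", "");

            System.out.println(viaRegex);   // abc
            System.out.println(viaLiteral); // abc
        }
    }
    ```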

  • BEX Query - Run time char InfoObjects manipulation for query results

    Hello Experts,
    I am working on BEX queries to meet the following requirements:
    1)  Display only first 2 char of an InfoObject in the query results. Actual infoprovider has 24 char for this InfoObject. Example: Summary code FABCDXXXX.. But we want to show only 'FA' in query results
    2) We split a 120 char description into two 60 char InfoObjects, we need to concatenate the split char into one for query result display
    3) We need to show a specific hierarchy level, nothing before or after. For example: Eighth level for a cost element hierarchy.
    4) We need to show a constant in a column (which is not there in the table)
    We are planning to use both BEX analyzer and Crystal for the reports.
    I read in one of the threads that we can use formula variables / a customer exit for this, but I am not very clear on it. If you know how to do it, or if you have any step-by-step procedure, can you please pass it to me? Thanks in advance for your help.
    Regards,
    Raman

    Thanks for the quick response. We have users who need BEX Analyzer; is there any way to handle the above requirements for BEX Analyzer?

  • Characters(char,int,int) problem ???

    Hello, I'm using SAX to parse an XML file, but I run into a problem with the characters() method (well, that's what I think). I use it to get the contents of text nodes, but sometimes (actually, only once) it returns only half of the text. But, you will tell me, this should be normal since the parser is allowed to call it twice in a row.
    But I create a StringBuffer myStringBuffer, then in characters(char[] buffer, int offset, int length) I do myStringBuffer.append(buffer, offset, length), and in endElement I try to process myStringBuffer, but it still contains only half of the text. What makes this all the more strange is that my XML file is full of text nodes much longer than the troublesome one and I have no problem with those... and when I extract this node and put it into another XML file, it works fine.
    So I'm somehow confused.
    Thanks in advance.

    Your logic certainly looks like it should work, but obviously it doesn't. Here's what I do:
    1. In the startElement() method, create an empty StringBuffer.
    2. In the characters() method, append the characters to it. Incidentally you don't need to create a String like you do, there's an "append(char[] str, int offset, int len)" method in StringBuffer which is very convenient in this case.
    3. In the endElement() method, use the contents of the StringBuffer.
    As for why parsers split text nodes, don't try to second-guess that. It's not your business as a user of the SAX parser. Some parsers will read the XML in blocks, and if a block happens to end in the middle of a text node, it splits the text node there. Others will see the escape "&amp;" in a text node and split the node into three parts, the part before the escape, a single "&" character, and the part after it. Saves on string manipulation, you see. There could be other reasons, too.
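    The three steps above can be sketched as a minimal, self-contained handler (the class name, the static field for inspection, and the sample XML are made up for illustration; the &amp; entity is a typical spot where a parser splits its characters() calls):

    ```java
    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class TextCollector extends DefaultHandler {
        static String lastText;              // last completed element text, for inspection
        private StringBuffer buf;

        @Override
        public void startElement(String uri, String local, String qName, Attributes atts) {
            buf = new StringBuffer();        // 1. fresh, empty buffer for each element
        }

        @Override
        public void characters(char[] ch, int offset, int length) {
            buf.append(ch, offset, length);  // 2. may fire several times per text node
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            lastText = buf.toString();       // 3. only here is the text guaranteed complete
            System.out.println(qName + " = " + lastText);
        }

        public static void main(String[] args) throws Exception {
            String xml = "<root>Tom &amp; Jerry</root>";
            SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), new TextCollector());
        }
    }
    ```

    Even if the parser delivers "Tom ", "&", and " Jerry" in three separate characters() calls, the buffer holds the full text by the time endElement() runs.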

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am a relative novice to Java; I have a procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding the performance of using char[] arrays versus the String and StringBuilder classes with their charAt() methods.
    I read a file one line/record at a time and then I process it. Regex is not an option (too slow, and it cannot handle all the cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder) via charAt(i) is several times (5+ times?) slower than referring to an element of a char array by index.
    My question: is this a correct observation re the charAt() versus char[i] performance difference, or am I doing something wrong in the String case?
    What is the best way (performance) to process character strings inside Java if I need to process them one character at a time ?
    Is there another approach that I should consider?
    Many thanks in advance

    >
    Once I took that String.length() method out of the 'for loop' and used integer length local variable, as you have in your code, the performance is very close between array of char and String charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean', record of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example, you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any line, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files takes that into account. There have been some attempts (you can find them on the net) at using various 'escaping' techniques to escape those characters where they occur, but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest version of Informatica and DataStage cannot export a simple one-column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code, and complex RegEx is often 'write-only' code that a 9600bps modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character-based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures so that could be done in the database. I would only recommend pursuing that approach if you were already needing to perform some substantial data validation or processing the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details about String versus Array are way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues with the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in-between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later if, and ONLY if, that particular module becomes a bottleneck, replace that module with one that is more performant.
    Your initial code should ONLY use standard, well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
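    For what it's worth, the charAt() versus char[] comparison the original question asked about can be sketched like this, with length() hoisted out of the loop as discussed above (the comma-counting task is just an arbitrary stand-in for per-character CSV processing):

    ```java
    public class ScanDemo {
        // Scan using charAt(); note length() is hoisted out of the loop condition.
        static int countViaCharAt(String line) {
            int n = 0;
            int len = line.length();
            for (int i = 0; i < len; i++) {
                if (line.charAt(i) == ',') n++;
            }
            return n;
        }

        // Same scan over a char[]; toCharArray() copies the string once up front.
        static int countViaArray(String line) {
            int n = 0;
            char[] chars = line.toCharArray();
            for (int i = 0; i < chars.length; i++) {
                if (chars[i] == ',') n++;
            }
            return n;
        }

        public static void main(String[] args) {
            String line = "a,\"b,c\",,d";
            System.out.println(countViaCharAt(line)); // 4
            System.out.println(countViaArray(line));  // 4
        }
    }
    ```

    Either form is fine for a first implementation; as the reply argues, the I/O on either side will dominate long before this choice matters.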

  • Character to uni-code manipulation

    I'm trying to take a character to Unicode and perform some manipulations; however, I am having problems converting the Unicode value back into a string. I've found getNumericValue() but cannot find the corresponding method to convert it back.
    public static void main(String[] argC) {
        // Person aPerson = new Person(); // (unrelated to the question)
        Character aCharacter;
        int anInt = Character.getNumericValue('b');
        System.out.println(anInt);
        anInt++;
        char aTempCar = (char) anInt; // won't compile without the explicit cast
        aCharacter = new Character(aTempCar);
        System.out.println(Character.getNumericValue(aTempCar));
        System.out.println(aCharacter.toString());
    }

    You can convert a char to the unicode code point by casting to int. Most of the time the cast is implicit and doesn't need to be written:

    char character = 'b';
    int codepoint = character; // = 98

    Similarly a code point can be converted to a char by casting, but this time the cast is necessary:

    int codepoint = 98;
    char character = (char) codepoint;

    You should remember that not all code points can be represented with a single char, so simply casting does not give you the correct value for every code point. The [url http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Character.html#toChars(int)]Character.toChars(int) method can be used to convert a code point to characters (starting with version 1.5).
    The getNumericValue() method returns you the numeric value of a character, e.g. the numeric value of '5' is 5 and the numeric value of the roman numeral 'X' is 10. It does not return the unicode code point of a character.
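    Putting the casting rules and the getNumericValue() distinction together in one runnable sketch (0x1F600 is just an arbitrary example of a code point outside the Basic Multilingual Plane):

    ```java
    public class CodePointDemo {
        public static void main(String[] args) {
            // char -> code point: the widening cast is implicit.
            int codepoint = 'b';                      // 98
            System.out.println(codepoint);

            // code point -> char: explicit narrowing cast, fine for BMP characters.
            char next = (char) (codepoint + 1);       // 'c'
            System.out.println(next);

            // Supplementary code points need two chars (a surrogate pair);
            // Character.toChars handles both cases (since Java 1.5).
            char[] pair = Character.toChars(0x1F600); // an emoji outside the BMP
            System.out.println(pair.length);          // 2

            // getNumericValue is about numeric *meaning*, not code points:
            // the numeric value of the character '5' is 5.
            System.out.println(Character.getNumericValue('5')); // 5
        }
    }
    ```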

  • Return chars to right of first dot from right in string

    Anyone know how to return file extensions in BO? Ex. return "doc" where the file name = "document.doc" or where the file name = "document.asdf.lkjh.123.doc". I have some file names that contain 8 or more dots, so I need to find the first dot from the right and return all chars to the right of it.

    Jeff,
    Without predictability in the data, the ability to identify the placement of the final period (.) in a string and then display that final portion is impossible with the given functions provided in WebI. Two factors are against us here: 1) you cannot perform a loop (a do, while, end or similar construct), and 2) there is no "reverse" function. A reverse function reads a column (string) and places the last character first, the second-to-last character second, etc. If there were a reverse function, you could reverse the string, use the Pos function to find the first period, use the Left function on what is returned, and then reverse again to put it back in the original sequence. I'm working with MS SQL Server and there is a reverse function there, but I'm not sure who your database vendor is. If you have a reverse function in your DB engine, perhaps you can perform your string manipulation via the universe, dressing it up at the DB layer and using WebI to perform the reporting phase.
    Thanks,
    John
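    As an aside, in plain Java (outside WebI) the same "last dot" logic needs no reverse function, because String has lastIndexOf(). A minimal sketch (the class and method names are made up for illustration):

    ```java
    public class ExtensionDemo {
        // Return everything after the final dot, or "" if there is no dot.
        static String extension(String fileName) {
            int dot = fileName.lastIndexOf('.');
            return (dot < 0) ? "" : fileName.substring(dot + 1);
        }

        public static void main(String[] args) {
            System.out.println(extension("document.doc"));               // doc
            System.out.println(extension("document.asdf.lkjh.123.doc")); // doc
        }
    }
    ```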

  • File manipulation in JavaFX

    How to perform file manipulations in JavaFX ?

    For future reference, the Java docs can easily be found by doing a Google search for the Java API.
    Here is a link to 6:
    [http://java.sun.com/javase/6/docs/api/]
    Also, you need to learn some Java basics; these are all basic Java questions...
    Simple code in pure Java, not JavaFX:
    FileReader inReader = new FileReader("myfile.ext"); // note you could pass a File object or a String here
    BufferedReader bufReader = new BufferedReader(inReader); // wrapping the reader in a buffered reader
    char c = (char) bufReader.read(); // read the first character of the file (read() returns an int)
    String line = bufReader.readLine(); // read the rest of the first line, ended by \r or \n (carriage return or line feed); note it skips the first character because of the read() above
    char[] section = new char[100];
    bufReader.read(section, 10, 40); // put the next 40 characters into the char[] starting at index 10, i.e. positions 10-49 of the 100-element array now hold the next 40 characters of the stream
    There are also ways to move back within the stream; read the javadoc - there are plenty of other readers/writers and input/output streams. What is the difference between a reader/writer and an input/output stream? You shouldn't care yet. Above, when I said stream I meant reader.
    Note you need a writer or output stream to write to a file and a reader or input stream to read from a file. The char[] route mentioned above probably won't work in JavaFX since it doesn't support arrays (it uses sequences instead), so I'd avoid that method in a JavaFX script class.
    Good luck, and read some Java basics or take a basic Java tutorial - plenty of things out there cover this, and Google could easily have told you all of the above.

  • Manipulating VARRAYS

    Can anyone send me an example of varray manipulation (reading and writing) using C/OCI ?
    I'm a beginner.
    Thank's in advance.

    Here is a sample routine :-
    static CONST text *CONST_W_PTR selar = (text *)
    "SELECT * from customer_va";
    /* execute the "selar" statement, which selects an array column from a table */
    void selectar(envhp, svchp, stmthp, stmthp1, errhp, recurse)
    OCIEnv *envhp;
    OCISvcCtx *svchp;
    OCIStmt *stmthp;
    OCIStmt *stmthp1;
    OCIError *errhp;
    boolean recurse;
    {
    OCIType *names_ar_tdo;
    OCIDefine *defn1p, *defn2p;
    OCIColl *names_ar = (OCIColl *) 0;
    sword custno = 0;
    sword status;
    dvoid *elem;
    sb4 index;
    OCIString *names;
    OCIInd *names_null = (OCIInd *) 0;
    boolean exist, eoc, boc;
    OCIIter *itr = (OCIIter *) 0;
    OCIBind *bnd1p, *bnd2p;
    int amount;
    OCIInd ind = -1;
    OCIInd *indp = &ind;
    sb4 collsiz;
    DISCARD printf("\n******** Entering selectar *********\n");
    /* define the application request */
    checkerr(errhp, OCIStmtPrepare(stmthp, errhp, (text *) selar,
    (ub4) strlen((const char *) selar),
    (ub4) OCI_NTV_SYNTAX, (ub4) OCI_DEFAULT));
    /* bind the input variable */
    checkerr(errhp, OCIDefineByPos(stmthp, &defn1p, errhp, (ub4) 1, (dvoid *) &custno,
    (sb4) sizeof(sword), SQLT_INT, (dvoid *) 0, (ub2 *)0,
    (ub2 *)0, (ub4) OCI_DEFAULT));
    checkerr(errhp, OCIDefineByPos(stmthp, &defn2p, errhp, (ub4) 2, (dvoid *) 0,
    (sb4) 0, SQLT_NTY, (dvoid *) 0, (ub2 *)0,
    (ub2 *)0, (ub4) OCI_DEFAULT));
    checkerr(errhp, OCITypeByName(envhp, errhp, svchp, (const text *) "",
    (ub4) strlen((const char *) ""),
    (const text *) "CHAR_TAB",
    (ub4) strlen((const char *) "CHAR_TAB"),
    (CONST text *) 0, (ub4) 0,
    OCI_DURATION_SESSION, OCI_TYPEGET_HEADER,
    &names_ar_tdo));
    checkerr(errhp, OCIDefineObject(defn2p, errhp, names_ar_tdo, (dvoid **) &names_ar,
    (ub4 *) 0, (dvoid **) &indp, (ub4 *) 0));
    checkerr(errhp, OCIStmtExecute(svchp, stmthp, errhp, (ub4) 0, (ub4) 0,
    (OCISnapshot *) NULL, (OCISnapshot *) NULL,
    (ub4) OCI_DEFAULT));
    /* execute and fetch */
    DISCARD printf(" Execute and fetch.\n");
    while ((status = OCIStmtFetch(stmthp, errhp, (ub4) 1, (ub4) OCI_FETCH_NEXT,
    (ub4) OCI_DEFAULT)) == OCI_SUCCESS ||
    status == OCI_SUCCESS_WITH_INFO)
    /* print the customer number */
    DISCARD printf("\nThe customer number is : %d.\n", custno);
    if (*indp == -1)
    DISCARD printf("The varray is NULL.\n\n");
    continue;
    /* check how many elements in the typed table */
    checkerr(errhp, OCICollSize(envhp, errhp, (CONST OCIColl *) names_ar,
    &collsiz));
    DISCARD printf("---> There are %d elements in the varray.\n", collsiz);
    DISCARD printf("\n---> Dump the array from the top to the bottom.\n");
    checkerr(errhp, OCIIterCreate(envhp, errhp, names_ar, &itr));
    for(eoc = FALSE;!OCIIterNext(envhp, errhp, itr, (dvoid **) &elem,
    (dvoid **) &names_null, &eoc) && !eoc;)
    names = *(OCIString **) elem;
    DISCARD printf(" The names is %s\n", OCIStringPtr(envhp, names));
    DISCARD printf(" atomic null indicator is %d\n", *names_null);
    DISCARD printf("\n---> Dump the array from the bottom to the top.\n");
    checkerr(errhp, OCIIterGetCurrent(envhp, errhp, itr, (dvoid **) &elem,
    (dvoid **) &names_null));
    names = *(OCIString **) elem;
    DISCARD printf(" The names is %s\n", OCIStringPtr(envhp, names));
    DISCARD printf(" atomic null indicator is %d\n", *names_null);
    for(boc = FALSE;!OCIIterPrev(envhp, errhp, itr, (dvoid **) &elem,
    (dvoid **) &names_null, &boc) && !boc;)
    names = *(OCIString **) elem;
    DISCARD printf(" The names is %s\n", OCIStringPtr(envhp, names));
    DISCARD printf(" atomic null indicator is %d\n", *names_null);
    checkerr(errhp, OCIIterDelete(envhp, errhp, &itr));
    /* insert into the same table, then dump the info again */
    if (recurse)
    custno = custno * 10;
    checkerr(errhp, OCIStmtPrepare(stmthp1, errhp, (text *) insar,
    (ub4) strlen((const char *) insar),
    (ub4) OCI_NTV_SYNTAX, (ub4) OCI_DEFAULT));
    /* bind the input variable */
    checkerr(errhp, OCIBindByName(stmthp1, &bnd1p, errhp, (text *) ":custno",
    (sb4) -1, (dvoid *) &custno,
    (sb4) sizeof(sword), SQLT_INT,
    (dvoid *) 0, (ub2 *)0, (ub2 *)0, (ub4) 0, (ub4 *) 0,
    (ub4) OCI_DEFAULT));
    checkerr(errhp, OCIBindByName(stmthp1, &bnd2p, errhp, (text *) ":names",
    (sb4) -1, (dvoid *) 0,
    (sb4) 0, SQLT_NTY, (dvoid *) 0, (ub2 *)0, (ub2 *)0,
    (ub4) 0, (ub4 *) 0, (ub4) OCI_DEFAULT));
    /* if custno == 20, then insert null nested table */
    *indp = -1;
    checkerr(errhp, OCIBindObject(bnd2p, errhp, names_ar_tdo,
    (dvoid **) &names_ar,
    (ub4 *) 0, (custno == 20) ? (dvoid **) &indp :
    (dvoid **) 0, (ub4 *) 0));
    checkerr(errhp, OCIStmtExecute(svchp, stmthp1, errhp, (ub4) 1,
    (ub4) 0, (OCISnapshot *) NULL,
    (OCISnapshot *) NULL, (ub4) OCI_DEFAULT));
    checkerr(errhp, OCITransCommit(svchp, errhp, (ub4) 0));

  • Binary Bit Manipulation

    How can I convert a series of characters into the corresponding binary bit pattern so that I can perform some bit manipulation (e.g. swapping bit positions)?

    You can do that with plain chars; char is essentially an integer type and you can do all the bit manipulations with them as with other integer types.
    For example:
    char c = 'a';
    c = (char) (c | 0x80); // set the 8th bit on
    c ^= 5; // XOR with 5

  • Unsigned char vs. signed char

    When I attempt to assign my unsigned char array to a jbyteArray...the Set...Region method expects a signed char array??? Why, and what impact is there? My code does quite a bit of bit manipulation...

    > When I attempt to assign my unsigned char array to a jbyteArray... the Set...Region method expects a signed char array??? Why, and what impact is there?
    Because that is what it expects.
    Same impact as if you were using a signed char array.
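    To make the impact concrete on the Java side: jbyte maps to Java's signed byte, so any C value with the high bit set arrives negative, and bit-manipulation code usually needs to mask it back to the 0..255 range. A small sketch:

    ```java
    public class SignedByteDemo {
        public static void main(String[] args) {
            // Java's byte (and JNI's jbyte) is signed: the C value 0xFF
            // becomes -1 once it is stored in a byte.
            byte b = (byte) 0xFF;
            System.out.println(b);        // -1

            // Mask with 0xFF to recover the unsigned 0..255 value before
            // doing arithmetic or comparisons.
            int unsigned = b & 0xFF;
            System.out.println(unsigned); // 255
        }
    }
    ```

    The bit pattern itself is unchanged by the signed interpretation; only comparisons and arithmetic are affected, which is why the answer says the "impact" is the same either way.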

  • How can I select the last 2 chars of a string

    Hello all,
    I am trying to select the last 2 characters of a string to match them against another string. This is for a poem generator that contains 20 or so words: I have to pick 2 words at random, then look at the last 2 characters to see if they match in each string. Say "Plain" and "Rain" would match because they both have "in" at the end. The length of each word could vary. I don't want this done for me, just to know what method I should be looking at.
    Thanks.

    hi,
    try this:
    public class StringTest {
        public static void main(String[] args) {
            StringTest obj = new StringTest();
            obj.same();
        }

        void same() {
            String rain = "Rain";
            String plain = "Plain";
            int rainLength = rain.length();
            int plainLength = plain.length();
            if (rain.substring(rainLength - 2).equals(plain.substring(plainLength - 2))) {
                System.out.println("same");
            } else {
                System.out.println("different");
            }
        }
    }
    You can use the length() method to get the length of a string and then use the substring() method, passing it length - 2, to get the last 2 chars.

  • Convert Char to float

    Hi all,
    I want to convert a char value (the InfoObject is defined as NUMC) into float in order to perform a calculation in the FOX formula. It would not allow the char value to be stored in a float variable - it comes back with an error message: Types of operands F and <infoobject_numc> do not agree.
    Any way of doing this? I thought about using a function to perform the conversion from char to float format, but I don't know if such a function exists on the BI system?
    Thanks!

    Hi,
    I sometimes use a local variable of type STRING to pass on values to other data types:
    DATA local type STRING.
    local = A.
    B = local.
    But I'm not sure it will work for you. If not, you can do it with an exit function.
    D

  • I would like to know why my bill was manipulated to cause an overage fee.

    Halfway through my previous billing cycle, I reduced my data plan from 3GB shared between me and my girlfriend to 2GB shared, so they pro-rated the next bill, showing it to be around $30 cheaper than it would otherwise be. Fast forward to today, I go to check my bill and see that I've incurred a $15 usage fee. However, when I go look at the Usage breakdown, I see the following:
    It clearly shows that the billing cycle started on May 20th, and that I made the change to go down to 2GB on the 30th. They decided to set my usage allowance for that period to 1.06400GB even though I'd already used 1.37900GB, leaving a .31800GB overage. Now, of course it shows that from May 31st through June 19th, I was allowed 1.29000GB and used 1.21100GB, clearly showing that even if it were split properly, I'd still be over.....except for the fact that I have text alerts set to notify me when I'm at 50%, 75%, and 90% usage so I can decide when to stop and avoid an overage fee.
    They seemingly, intentionally, manipulated the numbers in such a way as to get $15 more out of me that they otherwise wouldn't get because they knew I'd stop using data before an overage occurred.
    I am fully aware that $15 isn't a big deal. However, the fact that they apparently manipulated the numbers in such a way as to force an overage fee upon me is just plain wrong and it makes me very angry. The best part about this is that when you add up the 1.06400GB allowance of the first portion and the 1.29000GB of the second portion, that only totals 2.354GB, not the 3GB that the plan originally was. I had chosen to have the changes apply at the beginning of the next billing cycle, so there is no reason for me to have been shorted the 0.646GB of data, much less to have incurred an overage fee.
    Is this sort of thing common? Is it a glitch in the system, or was someone manually making these changes with malicious intent? More importantly, can or will they do anything to fix this?Verizon Wireless Customer Support

    See, the problem is you're both ignoring two facts.
    #1 - of the two options when changing the plan, I chose to have the changes not apply until the beginning of the next billing cycle. There never should have been the possibility of an overage as there should have been no prorating when I chose to have no changes until next billing cycle. The plan I was on was a 3GB plan, I had only used a total of 2.59GB. Again, that's aside from the fact that there shouldn't have been a prorating to begin with as I chose to have the new plan not apply until the next billing cycle.
    #2 - The total allowance between the two pieces of prorated data allowances is only 2.354GB, not the 3GB of the plan I was currently on. Yet again, I don't know how many more times I have to say this for it to be understood, I chose to have the changes not apply until the next billing cycle. This means that the plan should have never been split to begin with, let alone shorted 0.646GB of usage allowance.
    I'm sorry if I'm being short with you, but this is a clear-cut case of an error in the process and I do not appreciate being told I am mistaken when you haven't even bothered to pay attention to all the details and check the math for yourself.
