VARCHAR(byte) and AL32UTF8

Hi,
I was wondering if anyone can tell me when we would see issues with VARCHAR2 columns with byte precision - say VARCHAR2(60 BYTE) - when using the character set AL32UTF8? We have had a few issues, but nothing we can specifically reproduce. I would have assumed that, because AL32UTF8 requires 4 bytes per character, we would see a quartering of the number of characters that can be stored in a field (e.g. VARCHAR2(60 BYTE) would allow only 15 characters), but this doesn't seem to be the case.
The varchar data does not yet contain multibyte characters, but this will be a requirement in the near future.
Are there any specific risks to the database in altering the columns to use character length semantics? Will the existing data convert automatically?
The database is 9i - 9.2.0.1.0
Regards,
Dave

AL32UTF8 is a variable-width character set, so a VARCHAR2(60 BYTE) column could contain 15 4-byte characters, 60 1-byte characters, 30 2-byte characters, or any combination thereof. When you're dealing with variable-width character sets, I find it far more logical to declare VARCHAR2 columns in terms of characters, not bytes, because it's too hard to explain to users that a column can sometimes hold 60 characters but sometimes a 20-character string is too long.
I'm not sure what "risks" you're worried about. Oracle may have to allocate more space for fixed-width (CHAR) columns declared in terms of characters rather than bytes. There is probably some performance overhead in verifying that a string fits within a given number of characters. I wouldn't expect either of those to be significant, though.
Justin
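A quick way to see the byte-versus-character distinction outside the database (my own illustration, not from the thread): the UTF-8 encoded length of a string can exceed its character count, and a BYTE-sized column limits the former while a CHAR-sized column limits the latter.

import java.nio.charset.StandardCharsets;

public class ByteVsCharLength {
    public static void main(String[] args) {
        String ascii = "abcdefghij";    // 10 characters, 10 bytes in UTF-8
        String umlauts = "äöüäöüäöüä";  // 10 characters, 20 bytes in UTF-8
        System.out.println(ascii.length() + " chars, "
                + ascii.getBytes(StandardCharsets.UTF_8).length + " bytes");
        System.out.println(umlauts.length() + " chars, "
                + umlauts.getBytes(StandardCharsets.UTF_8).length + " bytes");
        // A VARCHAR2(10 BYTE) column would accept the first string but reject the second,
        // while a VARCHAR2(10 CHAR) column would accept both (subject to the 4000-byte cap).
    }
}

So a 60-byte column keeps holding 60 ASCII characters, which is why the quartering the original poster expected never shows up until multibyte data actually arrives.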

Similar Messages

  • Trying to understand the details of converting an int to a byte[] and back

    I found the following code to convert ints into bytes and wanted to understand it better:
    public static final byte[] intToByteArray(int value) {
        return new byte[] {
            (byte)(value >>> 24), (byte)(value >> 16 & 0xff), (byte)(value >> 8 & 0xff), (byte)(value & 0xff) };
    }
    I understand that an int requires 4 bytes and that each byte allows for 2^8 values. But, sadly, I can't figure out how to directly translate the code above. Can someone recommend a site that explains the following:
    1. >>> and >>
    2. 0xff.
    3. Masks vs. casts.
    thanks so much.

    By the way, people often introduce this redundancy (as in your example above):
    (byte)(i >> 8 & 0xFF)
    when this suffices:
    (byte)(i >> 8)
    since, by [JLS 5.1.3|http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#25363]:
    "A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T."
    Casting to a byte is merely keeping only the lower-order 8 bits, and (anything & 0xFF) is simply clearing all but the lower-order 8 bits. So it's redundant. Or I could just show you the simpler proof: try absolutely every value of int as an input and see if they ever differ:
       public static void main(String[] args) {
          for ( int i = Integer.MIN_VALUE;; i++ ) {
             if ( i % 100000000 == 0 ) {
                // report progress
                System.out.println("i=" + i);
             }
             if ( (byte)(i >> 8 & 0xff) != (byte)(i >> 8) ) {
                System.out.println("ANOMALY: " + i);
             }
             if ( i == Integer.MAX_VALUE ) {
                break;
             }
          }
       }
    Needless to say, they don't ever differ. Where masking does matter is when you're widening (say from a byte to an int):
       public static void main(String[] args) {
          byte b = -1;
          int i = b;
          int j = b & 0xFF;
          System.out.println("i: " + i + " j: " + j);
       }
    That's because bytes are signed, and when you widen a signed negative integer to a wider integer, you get 1s in the higher bits, not 0s, due to sign extension. See [http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#5.1.2]
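    Since the original question was about converting back from byte[] to int as well, here is a minimal sketch of the reverse direction (mine, not from the thread) - and note that the & 0xFF masks here are NOT redundant, because each byte is being widened to int and must not sign-extend:
    public class ByteArrayToInt {
        // Reassemble the 4 bytes produced by intToByteArray (big-endian order).
        public static int byteArrayToInt(byte[] b) {
            return (b[0] & 0xFF) << 24
                 | (b[1] & 0xFF) << 16
                 | (b[2] & 0xFF) << 8
                 | (b[3] & 0xFF);
        }
        public static void main(String[] args) {
            byte[] bytes = { (byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE };
            System.out.printf("0x%08X%n", byteArrayToInt(bytes)); // prints 0xCAFEBABE
        }
    }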

  • I pull fiftyfour bytes of data from MicroProcessor's EEPROM using serial port. It works fine. I then send a request for 512 bytes and my "read" goes into loop condition, no bytes are delivered and system is lost


    Hello,
    You mention that you send a string to the microprocessor that tells it how many bytes to send. Instead of requesting 512 bytes, try reading 10 times and only requesting about 50 bytes at a time.
    If that doesn't help, try directly communicating with your microprocessor through HyperTerminal. If you are not on a Windows system, please let me know. Also, if you are using an NI serial board instead of your computer's serial port, let me know.
    In Windows XP, go to Start, Programs, Accessories, Communications, and select HyperTerminal.
    Enter a name for the connection and click OK.
    In the next pop-up dialog, choose the COM port you are using to communicate with your device and click OK.
    In the final pop-up dialog, set the communication settings for communicating with your device.
    Type the same commands you sent through LabVIEW and observe if you can receive the first 54 bytes you mention. Also observe if data is returned from your 512 byte request or if HyperTerminal just waits.
    If you do not receive the 512 byte request through HyperTerminal, your microprocessor is unable to communicate with your computer at a low level. LabVIEW uses the same Windows DLLs as HyperTerminal for serial communication. Double check the instrument user manual for any additional information that may be necessary to communicate.
    Please let me know the results from the above test in HyperTerminal. We can then proceed from there.
    Grant M.
    National Instruments

  • Convert.ToByte and logical masks, from VB to LabVIEW

    Hi guys,
    I have to write a VI communicating via RS232 with a microcontroller for a PWM application. A colleague has given me the GUI he has developed in VB and I would like to integrate it into my LabVIEW programme by converting the VB code into LabVIEW code. Here's the code:
    Private Sub LoadButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles loadButton.Click
            Dim bufLen As Integer = 12      // Buffer length
            Dim freq As Integer = 0      // frequency
            Dim pWidth As Integer = 0      // pulse width
            Dim dac As Integer = 0       // Value used in oscillator setting for generating pulse frequency
            Dim addr As Integer = 0      // Address of the pulse width in the pulse generator.
            Dim rTime As Integer = 0      // duration of machining/pulse train in ms.
            Dim returnValue As Byte = 0      // A variable for storing the value returned by the 
                                                           //microcontroller after it receives some data
            Dim buffer(bufLen) As Byte       // creates an array of bytes with 12 cells => buffer size = 8 x12 = 96 bits
    // Can you explain a bit please? I know you're converting the entered values into bytes and putting them one by one in a specific order. This order is how the
    //microcontroller expects them
                addr = (Floor((pWidth - Tinh) / Tinc)) // Formula from hardware, calculates address setting for pulse generator to set pulse width.
                buffer(0) = Convert.ToByte(Floor(3.322 * (Log10(freq / 1039)))) // Calculates OCT value for use in setting oscillator for pulse freq.
                dac = (Round(2048 - ((2078 * (2 ^ (10 + buffer(0)))) / freq)))  // Calculates DAC value for use in setting oscillator for pulse freq.
                buffer(1) = Convert.ToByte((dac And &HF00) >> 8)                         //
    // &H is the vb.net to tell it its hex code, F00 gives the top 4 bits from a 12 bit value.
                buffer(2) = Convert.ToByte(dac And &HFF) // For values that are larger than 256 (8bits) the value needs to be split across 2 bytes (16 bits) this gets the //bottom 8 bits.  &H is the vb.net to tell it its Hex.
                buffer(3) = Convert.ToByte((addr And &HFF0000) >> 16) // This value may be so large it requires 3 bytes (24bits). This gets the top 8 bits.
                buffer(4) = Convert.ToByte((addr And &HFF00) >> 8) // This gets the middle 8 bits.
                buffer(5) = Convert.ToByte(addr And &HFF)// This gets the bottom 8 bits.
                buffer(6) = Convert.ToByte((rTime And &HFF0000) >> 16) //This value may also be 3 bytes long.
                buffer(7) = Convert.ToByte((rTime And &HFF00) >> 8)
                buffer(8) = Convert.ToByte(rTime And &HFF)
                buffer(9) = Convert.ToByte(2.56 * ocpBox.Value)  // The ocp pulse period is formed of 256 steps or counts, so if a percentage is requested, this formula gives the number of steps/counts required to set the pulse width
                buffer(10) = Convert.ToByte(tempBox.Value)
                If (tempCheck.Checked = True) Then
                    buffer(11) = 1
                Else
                    buffer(11) = 0
                End If
                freq = ((2 ^ buffer(0)) * (2078 / (2 - (dac / 1024))))
                pWidth = Tinh + ((((Convert.ToInt32(buffer(3))) << 16) Or ((Convert.ToInt32(buffer(4))) << 8) Or (Convert.ToInt32(buffer(5)))) * Tinc)
                ' Connect to device
                serialPort1.Write(1) // Sends the number 1. This tells the microcontroller we are about to start sending data. It should respond with a zero if it is ready 
                                             //and the connection is good.
    The line "serialPort1.Write(buffer, 0, bufLen)" sends the buffered data where the variables are: buffer =  the buffered data; 0 = the position in the buffer to start from; bufLen = the position in the buffer to stop sending data.
    What's the best way to deal with the Convert.ToBytes and the logical masks And ??
    Thanks in advance for your time and consideration,
    regards
    Alex

    Try this

  • Object to byte [] and byte [] to Object

    Hi,
    I am quite new to java and need some help. My question is how can I convert an Object to byte [] and convert back from byte [] to Object.
    I was thinking of something like:
    From object to byte []
    byte [] object_byte=object.toString().getBytes();
    When I try to do:
    String obj_string=object_byte.toString();
    But obj_string is not even equal to object.toString().
    Any suggestions?
    Thanks

    // Object to byte[] (the object must implement Serializable; a plain new Object() does not)
    Object o = "some serializable object";
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    ObjectOutputStream objStream = new ObjectOutputStream(byteStream);
    objStream.writeObject(o);
    byte[] b = byteStream.toByteArray();
    // byte[] back to Object (declare or catch IOException and ClassNotFoundException)
    Object copy = (new ObjectInputStream(new ByteArrayInputStream(b))).readObject();

  • System.InvalidCastException: Operator '>=' is not defined for type 'Byte()' and type 'Integer'.

    I'm getting the following error
    System.InvalidCastException: Operator '>=' is not defined for type 'Byte()' and type 'Integer'.
       at Microsoft.VisualBasic.CompilerServices.Operators.InvokeObjectUserDefinedOperator(UserDefinedOperator Op, Object[] Arguments)
       at Microsoft.VisualBasic.CompilerServices.Operators.InvokeUserDefinedOperator(UserDefinedOperator Op, Object[] Arguments)
       at Microsoft.VisualBasic.CompilerServices.Operators.CompareObjectGreaterEqual(Object Left, Object Right, Boolean TextCompare)
       at Customer_Fee_Calculator.Form1.GetCost2(Object dimension) in C:\Users\Chris\Documents\Visual Studio 2013\Projects\Customer Fee Calculator\Customer Fee Calculator\Form1.vb:line 259
       at Customer_Fee_Calculator.Form1.CustomerSelect_SelectedIndexChanged(Object sender, EventArgs e) in C:\Users\Chris\Documents\Visual Studio 2013\Projects\Customer Fee Calculator\Customer Fee Calculator\Form1.vb:line 72
                Dim inotes = rs1("INOTES").Value
      (Line 72)          If inotes.Length > 1 Then
                    cost = GetCost2(rs1("INOTES").Value)(0)
                    desc = GetCost2(rs1("INOTES").Value)(1)
                Else
                    PkgType = CInt(rs1("PKGTYPE").Value)
                    cost = GetCost(PkgType)(0)
                    desc = GetCost(PkgType)(1)
                End If
     Function GetCost2(dimension)
            Dim tempSql1, rs1
            Dim MyResult As New ArrayList
            Select Case dimension
    (line 259)            Case 1 To 24
                    tempSql1 = "SELECT * FROM PRODUCT WHERE PRODUCTID = 13"
                Case 25 To 43
                    tempSql1 = "SELECT * FROM PRODUCT WHERE PRODUCTID = 14"
                Case 44 To 71
                    tempSql1 = "SELECT * FROM PRODUCT WHERE PRODUCTID = 16"
                Case 72 To 120
                    tempSql1 = "SELECT * FROM PRODUCT WHERE PRODUCTID = 17"
                Case 121 To 499
                    tempSql1 = "SELECT * FROM PRODUCT WHERE PRODUCTID = 18"
                Case 500
                    tempSql1 = "SELECT * FROM PRODUCT WHERE PRODUCTID = 19"
                Case Else
                    tempSql1 = "SELECT * FROM PRODUCT WHERE PRODUCTID = 14"
            End Select
            rs1 = dbconnect(tempSql1, 1)
            MyResult.Add(rs1("RETAIL1").Value)
            MyResult.Add(rs1("DESCRIPTION").Value)
            MyResult.Add(rs1("PRODUCTID").Value)
            rs1 = Nothing
            tempSql1 = Nothing
            Return MyResult
        End Function

    Chris,
    I'm not a database guy - so in many ways, I'm not really qualified to answer this, but I'll make a suggestion anyway:
    At the very top of your code, put the following:
    Option Strict On
    Option Explicit On
    Option Infer Off
    With strict on and infer off, it will force you to declare types - which I see missing in many places in the code.
    It's entirely possible that by then going through and fixing those issues, you'll have corrected the problem.
    I don't know that, but it's possible -- and it should be done this way anyway, so there's certainly no harm in it; in fact you'll speed the program up to a small degree because object types no longer have to be resolved at run time.
    ***** EDIT *****
    Also this: DON'T declare any of the types simply as "Object"; that's just trading off late binding for boxing, so you haven't really done much good. Declare them each as their correct actual type.

  • Bits, bytes, and all the rest

    I need clarification on what some stuff really represents.
    My project is to build a Huffman tree (no problem). However, all tutorials and examples that I can find on the net take their input from a text file with the format <letter> <frequency>, such as:
    a 12
    b 3
    c 4
    d 8
    Mine has to read any kind of file, such as a text file.
    For example, if myDoc.txt contains:
    This is my document.
    I have to have my program read the bytes from the infile, count the frequency of each byte from 0 through 255. Then the frequencies must be placed on a list of single node Huffman trees, and build the tree from this list.
    I think I am having trouble because I just cannot get straight what a byte "looks like". My project says ensure you are not counting the "letters" of the infile, but the "bytes".
    So what does a byte look like? When I read in the file as above, what is a "byte" representation of the letter T, i,h, etc?
    Any ideas?

    Ok, Roshelle....here is a little history lesson that you should have learned or should have been explained to you by your instructor before he/she gave you the assignment to construct a Huffman tree.
    A bit is a binary digit which is either 0 or 1 -- it is the only thing that a computer truly understands. Think about it this way, when you turn on the light switch, the light comes on (the computer sees it as a 1), when you turned the switch off, the light goes out (the computer sees it as a 0).
    There are 8 bits to a byte -- you can think of it as a mouthful. In a binary number system, 2 to the power of 8 gives you 256. So, computer scientists decided to use this as a basis for assigning a numerical value to each letter of the English alphabet, the digits 0 to 9, some special symbols, as well as invisible characters. For example, 0 is equivalent to what is known as null, 1 is CTRL A, 2 is CTRL B, etc. (this may vary depending on the computer manufacturer).
    Now, what was your question again (make sure that you count the byte and not the letters)?
    As you can see, when you read data from a file, there may be characters that you don't see such as a carriage return, line feed, tab, etc.....
    And like they said, the rest is up to the students!
    V.V.
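    For what it's worth, here is a minimal sketch (mine, not from the replies; the file name is just an example) of the byte-counting step the assignment describes - read the file as raw bytes and tally the frequencies 0 through 255:
    import java.io.FileInputStream;
    import java.io.IOException;
    public class ByteFrequencies {
        public static void main(String[] args) throws IOException {
            int[] freq = new int[256];                  // one counter per possible byte value
            try (FileInputStream in = new FileInputStream("myDoc.txt")) {
                int b;
                while ((b = in.read()) != -1) {         // read() returns the byte as 0..255, or -1 at EOF
                    freq[b]++;
                }
            }
            for (int i = 0; i < 256; i++) {
                if (freq[i] > 0) {
                    System.out.println(i + " : " + freq[i]);
                }
            }
        }
    }
    Those (value, frequency) pairs become the single-node trees the Huffman build starts from, and the counts include invisible bytes such as carriage returns and line feeds, exactly as the reply points out.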

  • High bytes and low bytes?

    What are high bytes and low bytes? Thanks for helping.

    This may not be enough information to answer your question. How and where are the terms High bytes and Low bytes used?
    Often, a data type is represented using multiple bytes. For example, short is 2 bytes. Sometimes the most significant byte is the high byte while the least significant byte is the low byte.
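    As a small illustration (mine, not from the reply): for a 16-bit value the high byte is the most significant 8 bits and the low byte the least significant 8 bits, and they can be pulled apart with a shift and a mask:
    public class HighLowByte {
        public static void main(String[] args) {
            short value = (short) 0xABCD;       // 16 bits = 2 bytes
            int high = (value >> 8) & 0xFF;     // most significant byte: 0xAB
            int low  = value & 0xFF;            // least significant byte: 0xCD
            System.out.printf("high=0x%02X low=0x%02X%n", high, low);
        }
    }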

  • Working set, virtual bytes and private bytes

    Hello everyone,
    I am using Windows Server 2003 Performance Counter tool to monitor the memory consumed by my process.
    The interested terms are working set, virtual bytes and private bytes. My questions are,
    1. If I want to watch the real physical memory consumed by current process, which one should I monitor?
    2. If I want to watch the physical memory + swap file consumed by current process, which one should I monitor?
    3. Is there any clearer description of what these terms mean? I read the help in the Performance Counter tool, but am still confused about
    which one(s) identify the physical memory actually used, which identify the physical memory + page/swap file used, and which identify the memory merely reserved (which may not yet be allocated in either physical memory or the swap file).
    If there are any related learning resource about the concepts, it is appreciated if you could recommend some. :-)
    thanks in advance,
    George

    And for further benefit:
    "Virtual bytes" is the total size of the not-free virtual address space of the process. It includes private committed, mapped, physically mapped (AWE), and even reserved address spaces. 
    "Private committed" is the portion of "virtual bytes" that is private to the process (as opposed to "mapped", which may be shared between processes; this is used by default for code). This normally includes program globals,
    the stacks (hence locals), heaps, thread-local storage, etc. 
    Each process's "private committed" memory is that process's contribution to the system-wide counter called "commit charge". There are also contributions to commit charge that aren't part of any one process, such as the paged pool. 
    The backing store for "private committed" v.m. is the page file, if you have one; if you have (foolishly in my opinion) deleted your page file, then all private committed v.m., along with things like the paged pool that would normally be backed
    by the page file, has to stay in RAM at all times... no matter how long ago it's been referenced (which is why I think running without a page file is foolish). 
    The "commit limit" is the size of RAM that the OS can use (this will exclude RAM used for things like video buffers) plus the total of the current size(s) of your pagefile(s). Windows will not allow allocations of private committed address space
    (e.g. VirtualAlloc calls with the COMMIT option) that would take the commit charge above the limit. If a program tries it, Windows will try to expand the page file (hence increasing the commit limit). If this doesn't succeed (maybe because page file expansion
    is disabled, maybe because there's no free disk space to do it) then the attempt to allocate committed address space fails (e.g. the VirtualAlloc call returns with a NULL result, indicating no memory was allocated). 
    The "Working set" is not quite "the subset of virtual pages that are resident in physical memory", unless by "resident" you mean "the valid bit is set in the page table entry", meaning that no page fault will occur
    when they're referenced. But this ignores the pages on the modified and standby page lists. Pages lost from a working set go to one of these lists; if the modified list, they are written to the page file and then moved to the standby list. From there they
    may be used for other things, such as to satisfy another process's need for pages, or (in Vista and later) for SuperFetch. But until that happens, they still contain the data from the process they came from and are still associated with that process
    and with that process's virtual pages. Referring to the virtual page for which the physical page is on the standby or modified list will result in a "soft" page fault that is resolved without disk I/O. Much slower than referring to a page that's
    in the working set - but much faster than having to go to disk for it. 
    So the most precise definition of "working set" is "the subset of virtual pages that can be referenced without incurring a page fault." 

  • Byte and char !!!

    Hi!
    What is the difference between a byte and a char in Java? Could I use char instead of byte, and the reverse?
    Thanks.

    TYPE   BYTES   BITS   SIGN       RANGE
    byte   1       8      SIGNED     -128 to 127
    char   2       16     UNSIGNED   \u0000 to \uFFFF
    Both data types can be interchanged (with an explicit cast).
    Have a look at this
    class UText {
         public static void main(String[] args) {
              byte c1;
              char c2 = 'a';
              c1 = (byte) c2;
              c2 = (char) c1;
              System.out.println(c1 + " " + c2);
         }
    }
    But it leads to confusion when interchanging them because byte is signed, and the range of byte is much smaller. So it's better to prefer char.

  • Use of UTF8 and AL32UTF8 for database character set

    I will be implementing Unicode on a 10g database, and am considering using AL32UTF8 as the database character set, as opposed to AL16UTF16 as the national character set, primarily to economize storage requirements for primarily English-based string data.
    Is anyone aware of any issues, or tradeoffs, for implementing AL32UTF8 as the database character set, as opposed to using the national character set for storing Unicode data? I am aware of the fact that UTF-8 may require 3 bytes where UTF-16 would only require 2, so my question is more specific to the use of the database character set vs. the national character set, as opposed to differences between the encoding itself. (I realize that I could use UTF8 as the national character set, but don't want to lose the ability to store supplementary characters, which UTF8 does not support, as this Oracle character set supports up to Unicode 3.0 only.)
    Thanks in advance for any counsel.

    I don't have a lot of experience with SQL Server, but my belief is that a fair number of tools that handle SQL Server NCHAR/NVARCHAR columns do not handle Oracle NCHAR/NVARCHAR2 columns. I'm not sure if that's because of differences in the provided drivers, because of architectural differences, or because I don't have enough data points on the SQL Server side.
    I've not run into any barriers, no. The two most common speedbumps I've seen are
    - I generally prefer in Unicode databases to set NLS_LENGTH_SEMANTICS to CHAR so that a VARCHAR2(100) holds 100 characters rather than 100 bytes (the default). You could also declare the fields as VARCHAR2(100 CHAR), but I'm generally lazy.
    - Making sure that the client NLS_LANG properly identifies the character set of the data going in to the database (and the character set of the data that the client wants to come out) so that Oracle's character set conversion libraries will work. If this is set incorrectly, all manner of grief can befall you. If your client NLS_LANG matches your database character set, for example, Oracle doesn't do a character set conversion, so if you have an application that is passing in Windows-1252 data, Oracle will store it using the same binary representation. If another application thinks that data is really UTF-8, the character set conversion will fail, causing it to display garbage, and then you get to go through the database to figure out which rows in which tables are affected and do a major cleanup. If you have multiple character sets inadvertently stored in the database (i.e. a few rows of Windows-1252, a few of Shift-JIS, and a few of UTF8), you'll have a gigantic mess to clean up. This is a concern whether you're using CHAR/ VARCHAR2 or NCHAR/ NVARCHAR2, and it's actually slightly harder with the N data types, but it's something to be very aware of.
    Justin

  • PreparedStatement, VARCHAR2(4000) and umlauts

    Can someone explain this to me, please?
    I have a varchar2 column of size 4000. Just big enough so I don't have to LOB it. I can insert a String of size 4000 no problem. But, if my String contains special characters, such as umlauts, I get an error that the String size is too big for the column. This surprises me, as my column uses char rather than byte length semantics. However, if I don't use prepared statements, the same String will insert OK.
    Further, if I reduce the column size to 1, I can insert 'ä' using prepared statements.
    Simple test code is below. Can someone explain to me what is going on? Thanks.
    create table test_table (
    name varchar2(4000 char)
    );
    public static void main(String[] args) {
        final String url = "jdbc:oracle:thin:@host:port:DB";
        final String username = "me";
        final String password = "password";
        String veryBigString = "";
        for (int i = 0; i < 4000; i++) veryBigString = veryBigString + "a";
        System.out.println("vbs length: " + veryBigString.length());
        Connection connection = null;
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            connection = DriverManager.getConnection(url, username, password);
            Statement s = connection.createStatement();
            int inserted = s.executeUpdate("insert into test_table values ('" + veryBigString + "')");
            System.out.println("1: inserted " + inserted);
            inserted = s.executeUpdate("insert into test_table values ('" + ("ä" + veryBigString.substring(1)) + "')");
            System.out.println("2: inserted " + inserted);
            PreparedStatement ps = connection.prepareStatement("insert into test_table values (?)");
            ps.setString(1, veryBigString);
            inserted = ps.executeUpdate();
            System.out.println("3: inserted " + inserted);
            ps.setString(1, "ä" + veryBigString.substring(1));
            inserted = ps.executeUpdate();
            System.out.println("4: inserted " + inserted);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (connection != null) try {
                connection.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }

    I am unable to reproduce your problem
    I took your code, I had to change the connect string but otherwise it is the same
    I was able to load all four rows with no problem
    My character set is US7ASCII for the database and for the client
    I suspect you might be having a character-set-conversion issue - you know, where the client character set does not match the db character set, so it has to do a translation; sometimes it is called 'lossy' if the client char set is not a complete subset of the db char set
    So - I changed the NLS_LANG env var on the client to WEISO... (the one you mentioned in your mail), but it still worked
    So, then I changed the driver - I was using classes12.jar so I changed it to ojdbc14.jar, it still worked
    I am using 10.2, is that what you guys are using?
    One thing you might want to try - do a
    SELECT DUMP(NAME) FROM TEST_TABLE
    It will give you the byte-by-byte ascii info that was loaded into the table, that might give you some insight
    Maybe load just 1000 characters and see what it is putting in the db for the As as opposed to the umlaut-As
    If you look at the data I loaded, it put in the umlaut-A as a single byte
    (but - that makes sense for the US7ASCII character set)
    (I trimmed the output)
    SQL> select dump(name) from test_table;
    DUMP(NAME)
    Typ=1 Len=4000: 228,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,...
    Typ=1 Len=4000: 97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,9...
    Typ=1 Len=4000: 97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,9...
    Typ=1 Len=4000: 228,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,97,...
    See below, char 228 is the umlaut-A
    SQL> select chr(228) from dual;
    C
    ä
    Here is the code, it is almost identical, I just put the user/pass in the connect string
    import java.sql.*;
    import java.math.*;
    import java.io.*;
    import oracle.jdbc.*;
    import java.util.*; // for the Properties
    class Varchar {
        public static void main(String[] args) {
            final String url = "jdbc:oracle:thin:user/pass@host:1521:db";
            String veryBigString = "";
            for (int i = 0; i < 4000; i++) veryBigString = veryBigString + "a";
            System.out.println("vbs length: " + veryBigString.length());
            Connection connection = null;
            try {
                Class.forName("oracle.jdbc.driver.OracleDriver");
                connection = DriverManager.getConnection(url);
                Statement s = connection.createStatement();
                int inserted = s.executeUpdate("insert into test_table values ('" + veryBigString + "')");
                System.out.println("1: inserted " + inserted);
                inserted = s.executeUpdate("insert into test_table values ('" + ("ä" + veryBigString.substring(1)) + "')");
                System.out.println("2: inserted " + inserted);
                PreparedStatement ps = connection.prepareStatement("insert into test_table values (?)");
                ps.setString(1, veryBigString);
                inserted = ps.executeUpdate();
                System.out.println("3: inserted " + inserted);
                ps.setString(1, "ä" + veryBigString.substring(1));
                inserted = ps.executeUpdate();
                System.out.println("4: inserted " + inserted);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                if (connection != null) try {
                    connection.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }
    }
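    A side note the thread never makes explicit (my own observation, not from the replies): in 9i/10g a VARCHAR2 column is capped at 4000 bytes even when it is declared with CHAR semantics, and in a multibyte database character set such as AL32UTF8 the 'ä' occupies 2 bytes. So a 4000-character string containing one umlaut is 4001 bytes and genuinely does not fit, which would explain the "too large" error whenever the driver performs a faithful (non-lossy) conversion. A quick way to check the byte length:
    import java.nio.charset.StandardCharsets;
    public class Varchar2ByteCap {
        public static void main(String[] args) {
            StringBuilder sb = new StringBuilder("ä");       // 2 bytes in UTF-8
            for (int i = 0; i < 3999; i++) sb.append('a');   // 3999 single-byte characters
            String s = sb.toString();
            System.out.println(s.length() + " characters");                            // 4000
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length + " bytes");  // 4001
        }
    }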

  • JMS paging store - what is the difference between bytes and messages thresholds?

    You can activate and configure both "bytes paging" and "message paging". What is the difference? Why use one or the other, or are both required?
    Thanks,
    John

    Hi Mr. Black,
    Cool name.
    Message paging is based on the number of messages. Bytes paging is based on the size of the message payload (properties, body, corr-id, and type), where the payload does not include the header size. One can set either or both -- if both are set, paging kicks in as soon as the first threshold is reached.
    As for which one to use, you may wish to set both. The former accounts for large numbers of small messages, and the latter handles large messages. (e.g. 1000 small 10-byte messages will charge 10,000 bytes toward quota but actually use up 128,000 bytes of JVM memory once the headers are thrown in...)
    Tom

  • Conversion of String to Bytes and Length Calculation

    hi,
    How can I convert
    String to Bytes
    Bytes to String
    int to Bytes
    Bytes to int
    & int to String
    And I have to calculate the length in bytes of a String and an int.
    Thanks

    double d = Double.parseDouble(new String(byteDouble)); Java doesn't seem to accept converting byteDouble to a String...
    Exception in thread "main" java.lang.NumberFormatException: For input string: "[B@1d9fd51"
      at java.lang.NumberFormatException.forInputString (NumberFormatException.java:48)
      at java.lang.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1213)
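    None of the replies actually show the conversions that were asked about, so here is a minimal sketch (mine; the values are just examples). Incidentally, the "[B@1d9fd51" in the error message above is what you get from calling toString() on a byte array (for instance via string concatenation), which hints that the array object itself, rather than its decoded contents, ended up in the string being parsed.
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    public class Conversions {
        public static void main(String[] args) {
            // String <-> bytes (always name the charset; the platform default varies)
            byte[] strBytes = "hello ä".getBytes(StandardCharsets.UTF_8);
            String backToString = new String(strBytes, StandardCharsets.UTF_8);
            // int <-> bytes (4 bytes, big-endian)
            byte[] intBytes = ByteBuffer.allocate(4).putInt(42).array();
            int backToInt = ByteBuffer.wrap(intBytes).getInt();
            // int -> String
            String intAsString = Integer.toString(42);
            // lengths in bytes
            System.out.println("string bytes: " + strBytes.length);  // depends on charset and content
            System.out.println("int bytes: " + Integer.SIZE / 8);    // always 4
            System.out.println(backToString + " " + backToInt + " " + intAsString);
        }
    }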

  • Increase UDP sending size over 64k bytes and get error -113,sending buffer not enough

    Dear all,
    I have a case where I must send more than 64k bytes over a socket with UDP. I got an error -113 which says "A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram was smaller than the datagram itself." I searched for this issue and the closest answer I found is below:
    http://digital.ni.com/public.nsf/allkb/D5AC7E8AE545322D8625730100604F2D?OpenDocument
    It said I have to change the buffer size with Wsock.dll. I used the same method to increase the send buffer to 131072 bytes by setting optionname to so_sndbuf (x1001) and giving it the value 131072, and it worked fine without error. However I still got an error 113 while sending data with "UDP Write.vi". It seems UDP Write.vi resets the buffer size? Are there any other things that cause the error?
    I attached example code. In UDP Sender.vi you can see I change the send buffer size to 131072 and send data that includes a 65536-byte payload. There is also a UDP Receiver.vi, and there are some missing VIs which you can get from the LINK, but they're not necessary.
    Attachments:
    UDP Sender.vi 14 KB
    UDP Receiver.vi 16 KB
    UDP_set_send_buffer.vi 16 KB

    The header for a UDP packet includes a 16-bit field that defines the size of the UDP message (HEADER AND DATA).
    16 bits limits you to a total size of 65535 bytes; subtract the header sizes - a minimum of 20 bytes to define an IP packet and 8 bytes for UDP - and you are left with an effective data payload of 65507 bytes.
    LabVIEW is not the issue...
    http://en.wikipedia.org/wiki/User_Datagram_Protocol#Packet_structure
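    The question is about LabVIEW, but just to illustrate that the ~65507-byte ceiling is a protocol limit rather than a LabVIEW one, here is a small sketch (mine, in Java, with a made-up destination) of the usual workaround: split the payload into datagram-sized chunks and send each one separately.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    public class UdpChunkedSend {
        // 65535 max datagram size minus 20-byte IP header and 8-byte UDP header
        private static final int MAX_PAYLOAD = 65507;
        public static void main(String[] args) throws Exception {
            byte[] data = new byte[131072];                          // larger than one datagram allows
            InetAddress host = InetAddress.getByName("127.0.0.1");   // example destination
            int port = 61557;                                        // example port
            try (DatagramSocket socket = new DatagramSocket()) {
                for (int offset = 0; offset < data.length; offset += MAX_PAYLOAD) {
                    int len = Math.min(MAX_PAYLOAD, data.length - offset);
                    socket.send(new DatagramPacket(data, offset, len, host, port));
                }
            }
            // The receiver has to reassemble the chunks (and tolerate loss and reordering) itself.
        }
    }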
