OPEN DATASET Big/Little Endian

Thread opened on behalf of a colleague:
SOURCE
I want to create a file on a system that is Big Endian with codepage 4102.
TARGET
I want to read this file in a system that is Little Endian with codepage
4103.
What is the exact 'open dataset' syntax on the SOURCE and TARGET?
The file on the little endian system is not being read correctly. It shortdumps with CONVT_CODEPAGE.
Thanks.

Hi Wang,
Check the following link:
http://www.s001.org/ABAP-Hlp/abapopen_dataset.htm
Hope this helps you.
Regards,
Chandra Sekhar

Similar Messages

  • How to find out whether a machine is big/little endian?

    Hi,
    Can anyone tell me how to find out whether the machine we are working on is big or little endian, using Java?
    One of my friends told me that in C programming we can find this by using the following code:
    #include <stdio.h>
    #include <arpa/inet.h>
    int main() {
        if (htons(1) == 1) puts("big endian");
        else puts("little endian");
        return 0;
    }
    Can anyone help me solve this mystery?
    Cheers,
    Ram

    java.nio.ByteOrder.nativeOrder();
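
    A minimal Java sketch expanding that one-liner (the class name is made up for illustration; ByteOrder and nativeOrder() are the standard java.nio API):
    import java.nio.ByteOrder;
    public class EndianCheck {
        public static void main(String[] args) {
            // nativeOrder() reports the byte order of the underlying hardware
            if (ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN) {
                System.out.println("big endian");
            } else {
                System.out.println("little endian");
            }
        }
    }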

  • Linear PCM - Big Little Endian

    Sorry - I'm at a loss.
    Assumptions:
    Linear PCM is the correct format for exporting audio for FCP - ie Lossless
    Ok...
    Big Endian seems to be a default with Little Endian wanting a checkbox.
    Is there a reason FCP wouldn't accept one or the other?
    CaptM

    Drew13 had this to say:
    Little endian and big endian describe how data is stored: with the most significant byte at one end or the other. (The terms are taken from Gulliver's Travels, where one of the conflicts is between the people who preferred cracking their soft-boiled eggs open from the little end and those who preferred the big end.) For the most part you do not need to concern yourself with this (unless you enjoy digging into such things); it is transparent in your workflow on the Mac using these apps (FCP, FCE, etc.).
    PPC Mac > Big Endian (default)
    Windows PC > Little Endian (default)
    • Endianness
    This is all well and good except when it comes to the Intel Macs and Dolby AC3 creation with Compressor.
    At the moment there's a byte-order compatibility problem mixing UB versions of AC3 with PPC versions of AC3 within the same DVD SP 4.1 project.
    Read this:
    http://discussions.apple.com/thread.jspa?messageID=3130608&#3130608
    MaxR says:
    Apple's developer site specifically cites byte-order as a concern for developers of Universal binaries, but seems to have let this slip out anyway. Shame, shame, Apple.
    G5 1.8 DP (PCI-X)   Mac OS X (10.4.8)   ATI X800 XT, 4GB RAM, 20" & 23" ACDs, M-Audio Revolution 5.1, Fostex D15 DAT

  • Java big/little endian file support

    Hi, hope you can help,
    Firstly, is Java big endian or little endian, or is this not manifest?
    I have a binary file that may exist in either big endian or little endian format.
    Is there support for reading differing byte orders? If not, how do I go about reading either format?
    Many thanks in advance,
    Aaron

    Yes, Java has support for both byte orders.
    http://developer.java.sun.com/developer/JDCTechTips/2002/tt0507.html is an issue of JDC Tech Tips that discusses the use of java.nio.channels for this and has example code for writing and reading both orders.
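
    A minimal sketch of the java.nio approach mentioned above, using a ByteBuffer whose byte order is chosen per file (the file name, the readFirstInt helper, and the assumption that the file begins with a 32-bit int are all made up for illustration):
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    public class ReadEitherOrder {
        // Reads the first 32-bit int from the file, interpreting it in the given byte order.
        static int readFirstInt(Path file, ByteOrder order) throws IOException {
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                ByteBuffer buf = ByteBuffer.allocate(4);
                ch.read(buf);
                buf.flip();
                buf.order(order);   // big endian or little endian, chosen per file
                return buf.getInt();
            }
        }
        public static void main(String[] args) throws IOException {
            Path file = Paths.get("data.bin");   // placeholder file name
            System.out.println(readFirstInt(file, ByteOrder.BIG_ENDIAN));
            System.out.println(readFirstInt(file, ByteOrder.LITTLE_ENDIAN));
        }
    }
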
    Many thanks.
    Sorry but there seems to be no way to award duke points a second time :(
    Aaron

  • Little endian & big endian format system

    Hi,
    I have code which works fine on a little endian system; can somebody please tell me how to get data from a big endian system?
    Or does SAP have a setting I can turn on so that I can then run my code on that system?
    Thanks,
    John.

    Hi,
    check this:
    OPEN DATASET dset IN LEGACY TEXT MODE [(BIG|LITTLE) ENDIAN] [CODE PAGE cp]
    Effect
    Data is read or written in a form compatible with TEXT MODE in Releases <= 4.6. This addition is primarily used to convert a file into the specified code page format as soon as it is opened. At runtime, the system works with the format of the system code page of the application server and then saves the file again in the specified code page. This procedure is important if data is exchanged between systems using different code pages. For more information, see READ DATASET and TRANSFER.
    Notes
    on BIG ENDIAN, LITTLE ENDIAN
    These additions specify the byte sequence in which to store numbers (ABAP types I, F, and INT2) in the file.
    These additions may only be used in combination with the additions IN LEGACY BINARY MODE and IN LEGACY TEXT MODE. If these are not specified, the system assumes that the byte sequence determined by the hardware of the application server is used in the file.
    If the byte sequence specified differs from that determined by the hardware of the application server, READ DATASET and TRANSFER make the corresponding conversions.
    These additions replace the language element TRANSLATE ... NUMBER FORMAT ... which must not be used in Unicode programs.
    on CODE PAGE cp
    This addition specifies the code page which is used to represent texts in the file.
    This addition may only be used in combination with the additions IN LEGACY BINARY MODE and IN LEGACY TEXT MODE. If this addition is not specified, the system uses the code page defined by the text environment current at the time a READ or TRANSFER command is executed (see SET LOCALE LANGUAGE).
    This addition replaces the language element TRANSLATE ... CODE PAGE ... which must not be used in Unicode programs.
    OPEN DATASET dset ... IN LEGACY BINARY MODE [(BIG|LITTLE) ENDIAN] [CODE PAGE cp]
    Effect
    Data is read or written in a form compatible with BINARY MODE in Releases <= 4.6. This addition is primarily used to convert a file into the specified code page format as soon as it is opened. At runtime, the system works with the format of the system code page of the application server and then saves the file again in the specified code page. This procedure is important if data is exchanged between systems using different code pages. For more information, see READ DATASET and TRANSFER.

  • Oracle guids little endian, big endian problem

    hello every one,
    I am having a strange issue. I am developing a .NET app against an Oracle DB. When I insert a GUID into a RAW column, Oracle changes the order of the byte array.
    In VB.NET I see (with all the representations of GUIDs, with or without braces, dashes, or whatever):
    ac6d5c4f-542e-4fc8-b8b6-f53821811be3
    In Oracle I see (tested with toad and sql tools):
    4F5C6DAC2E54C84FB8B6F53821811BE3
    The rightmost groups come through OK, but the leftmost ones are reversed.
    Any suggestion?

    I have to admit that I get the meaning of big/little endian confused.
    As JosAH says: big endian means the big end comes first, NOT that the big value is at the end.
    And I have to admit that I find the whole big/little-endian jargon quite confusing too. I mean, the most significant byte is stored at the lowest address and we call it big-endian ... and if the number is stored 'backwards' we call it little endian.
    In the old days when computers were made of wood and ran on steam, we
    used to call it 'low-byte-first' and 'high-byte-first'. Sigh ... those were the days ;-)
    kind regards,
    Jos
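
    As a small illustration of the reversal described in this thread, here is a hypothetical Java sketch that writes the first group of the GUID from the question in both byte orders (the class name and the hex helper are invented):
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    public class GuidFieldOrder {
        static String hex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) sb.append(String.format("%02X", b & 0xFF));
            return sb.toString();
        }
        public static void main(String[] args) {
            int firstField = 0xac6d5c4f;   // first group of the GUID from the question
            byte[] little = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(firstField).array();
            byte[] big    = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN).putInt(firstField).array();
            System.out.println("stored little endian: " + hex(little));   // 4F5C6DAC
            System.out.println("stored big endian:    " + hex(big));      // AC6D5C4F
        }
    }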

  • OPEN DATASET FOR INPUT IN TEXT MODE - linesize issue

    Hi,
    I faced a problem when opening an ANSI file with Cyrillic in an ECC 6.0 Unicode system - the system cuts the line to 250 characters. Below is a snippet of the code:
    REPORT  Z_TEST_01 LINE-SIZE 1023.
    DATA:
      file TYPE char40 VALUE 'ansi_file.txt',
      line TYPE char1024, len TYPE i.
    OPEN DATASET file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    WHILE sy-subrc = 0.
          READ DATASET file INTO line ACTUAL LENGTH len.
            WRITE: / len, line.
    ENDWHILE.
    CLOSE DATASET file.
    In this case, the variable LEN always gets the value 250. The file content is correctly converted from ANSI and all the Cyrillic is displayed on the screen. I changed the type of LINE - initially the type was STRING, actually.
    Further, tried to open it in BINARY - like this:
    DATA:
      file TYPE char40 VALUE 'ansi_file.txt',
      line TYPE char1024, len TYPE i.
    FIELD-SYMBOLS <hex_container> TYPE x.
    OPEN DATASET file FOR INPUT IN BINARY MODE.
    ASSIGN line TO <hex_container> CASTING.
    DO.
      READ DATASET file INTO <hex_container>.
      IF sy-subrc = 0.
        WRITE: / line.
      ELSE.
        EXIT.
      ENDIF.
    ENDDO.
    CLOSE DATASET file.
    WRITE: / line.
    In this case I got a bigger line size (obviously 1024), but faced conversion issues - the file contains some Cyrillic and it is messed up. I played for a few hours with conversions - using the additions IN LEGACY BINARY MODE... BIG/LITTLE ENDIAN, CODE PAGE... - without success. So I decided to ask...
    Well, I searched SDN for a similar issue but didn't find one, except this:
    Re: OPEN DATASET STRING Problem
    Could someone point out what I am doing wrong? How can I read my ANSI file with a line size of more than 250 characters? Actually, in my case the line size may vary up to 1800 characters. Further, after conversion and some validation, I should save it back to the AS.
    Many thanks in advance.
    Regards,
    Ivaylo Mutafchiev

    Sorry for the noise - it is not an issue anymore.

  • What is Little Endian / Big Endian in Sound settings for Apple Intermediate

    Hi.
    In Final Cut Express, I am trying to splice together multiple video clips, combining footage from:
    1) HDV camcorder imported into iMovie 08 as 960 x 540 (the "lower resolution" option from iMovie '08 import), 16-bit @ 48 KHz (Big Endian), 25 FPS
    2) AVI files (DiVX 512 x 384, MP3 at 44.1 KHz, 23.98 FPS)
    3) Canon Camera "movie" files (Apple OpenDML JPEG 640 x 480, 16-bit Little Endian, Mono @ 44.1 KHz, 30 FPS)
    Questions
    1) What is this little endian / big endian thing?
    2) What is the best codec for me to edit in? My targets are NTSC DVD and also a HD version served via iPod connected to HDTV.
    I am not sure what codec to convert everything to, so that I can edit without having to RENDER every time I do something. I tried to export using QuickTime Pro to convert to Apple Intermediate Codec but am not sure about the option for "Little Endian" (I am using an Intel Mac; I assume I do NOT use little endian? Can someone help clarify?)
    Many thanks!!!

    1. They're compression formats. Different codecs use different compression schemes.
    2. You should convert your material to QuickTime using the appropriate DV codec and frame rate.
    None of your material is HD. Some of it is low resolution, lower than even DV. There is no good way to make this material HD.

  • Any need for conversion from big endian to little endian?

    Hi,
    I am planning to migrate an Oracle 9i database on AIX 5.3 to Oracle 11g R2 on Windows 2008, and have planned to use transportable tablespaces. But prior to that task, is a conversion from big endian to little endian required using RMAN?
    Appreciate any suggestions, comments and hints
    Thanks

    Hi,
    Check V$TRANSPORTABLE_PLATFORM; it shows the endian format for each supported platform. Given the results on my 11g, I suspect that you'll have to convert the tablespaces...
    SYSTEM@oracle11 SQL>select *
      2   from V$TRANSPORTABLE_PLATFORM
      3  ;
    PLATFORM_ID  PLATFORM_NAME                    ENDIAN_FORMAT
              7  Microsoft Windows IA (32-bit)    Little
              6  AIX-Based Systems (64-bit)       Big
              8  Microsoft Windows IA (64-bit)    Little
             12  Microsoft Windows x86 64-bit     Little
    HtH
    Johan

  • TCP Programming / Why do we not need to worry about big and little endian?

    Please help, I do not understand this concept; please explain.
    The architecture of a CPU is either little-endian or big-endian; some modern CPUs allow a choice via software.
    The TCP/IP protocol standard specifies that all the bytes that make up an item must be sent in "network order", which happens to be big-endian. Intel Pentium CPUs are little-endian.
    This implies that on an Intel machine the TCP software will have to chop an int into bytes and then reverse the bytes before transmitting them.
    Why does the Java TCP software not need to perform the reversal?
    Thanks,
    Alex

    But why would I need to use the DataOutputStream?
    You don't have to.
    But that's what the Java API provides for sending Java primitives over a stream. You wouldn't have to use that. You could chop the int into bytes yourself and send the bytes, and your Java code still wouldn't have to worry about the endianness of it, because the VMs handle that.
    DataOutputStream just does the chopping and reassembling for you, so it's easier than doing it yourself.
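
    A minimal sketch of that DataOutputStream route (the Socket parameter and the send helper are placeholders; writeInt itself always emits the high-order byte first, i.e. network order):
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.Socket;
    public class SendInt {
        // Writes an int to a TCP connection; DataOutputStream always sends the
        // bytes in big-endian (network) order, whatever the local CPU is.
        static void send(Socket socket, int value) throws IOException {
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            out.writeInt(value);   // high-order byte first
            out.flush();
        }
    }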

  • Why we do not need to worry about big and little endian in Java

    Please help, I do not understand this concept; please explain.
    The architecture of a CPU is either little-endian or big-endian; some modern CPUs allow a choice via software.
    The TCP/IP protocol standard specifies that all the bytes that make up an item must be sent in "network order", which happens to be big-endian. Intel Pentium CPUs are little-endian.
    This implies that on an Intel machine the TCP software will have to chop an int into bytes and then reverse the bytes before transmitting them.
    Why does the Java TCP software not need to perform the reversal?
    Thanks,
    Alex

    Java doesn't give you direct access to the individual bytes of a larger data item such as an integer. For this reason you don't have the usual endian problems that occur in C. The actual handling of this is in DataOutputStream and DataInputStream, where the integer is converted to and from bytes using arithmetic, not by fiddling with the internal structure.
    Note that regardless of the machine architecture, the operation value%256 will return the low-order 8 bits. It's less efficient than assigning an int* to a char*, but it isn't fraught with endian problems or any of the other hardware baggage.
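
    A short sketch of that arithmetic chopping (class and method names invented for illustration); the resulting byte array is identical on any CPU:
    public class ChopInt {
        // Splits an int into four bytes using shifts and masks only,
        // so bytes[0] is always the high-order (network order) byte.
        static byte[] toNetworkOrder(int value) {
            byte[] bytes = new byte[4];
            bytes[0] = (byte) ((value >>> 24) & 0xFF);
            bytes[1] = (byte) ((value >>> 16) & 0xFF);
            bytes[2] = (byte) ((value >>> 8) & 0xFF);
            bytes[3] = (byte) (value & 0xFF);   // equals value % 256 for non-negative values
            return bytes;
        }
    }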

  • How do I swap 64-bit and 32-bit floats from little-endian to big-endian

    I am trying to read a file that could contain a list of 64-bit floats or 32-bit floats that were written on a PC so they are little-endian.
    I need to convert the float values to big-endian so that I can process the values. I know that straight swapping each byte with the adjacent byte doesn't work (especially since they're floating-point values). I've tried swapping them end for end (i.e., byte 15 from the file becomes byte 0 in my array) and that didn't work either.
    I know that if I were to read the little-endian float into the big-endian float type (float or double), the format is pretty much lost (from the little I understand about how floating-point values are stored in memory).
    So, what I need is a way to read in a series of little-endian floating point values (64-bit and 32-bit) into a big-endian array of floating point values.
    Anyone have any ideas on how to do this? Any help would be much appreciated.

    A 64-bit double is represented by the sign bit, an 11-bit (biased) exponent
    followed by a 52-bit mantissa. Both x86 and SPARC use the exact same
    representation for 64-bit double. The only difference is the endianness
    when stored to memory, as you observed.
    A 32-bit float is represented by the sign bit, an 8-bit (biased) exponent
    followed by a 23-bit mantissa. Again, both x86 and SPARC use the exact
    same representation for 32-bit float modulo endianness.
    As you can see, a 64-bit double is not merely a pair of 32-bit floats.
    You need to know exactly how the floating-point data was written:
    if a 32-bit float was written, you must endian-swap it as a 32-bit float;
    if a 64-bit double was written, you must endian-swap it as a 64-bit double.
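
    A minimal Java sketch of swapping at the declared width using java.nio (the file name and the assumption that the whole file holds 64-bit doubles are placeholders; a file of 32-bit floats would use getFloat() instead):
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    public class ReadLittleEndianDoubles {
        // Reads every 64-bit double from a little-endian file.
        static double[] readDoubles(Path file) throws IOException {
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                ByteBuffer buf = ByteBuffer.allocate((int) ch.size()).order(ByteOrder.LITTLE_ENDIAN);
                ch.read(buf);
                buf.flip();
                double[] values = new double[buf.remaining() / 8];
                for (int i = 0; i < values.length; i++) {
                    values[i] = buf.getDouble();   // the byte swap happens per 8-byte value
                }
                return values;
            }
        }
    }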

  • VDI 3.1 Big Endian vs Little Endian: VDI Cluster to Remote mysql server

    Hello,
    We are implementing a dual data center VDI environment and, with that, will be using VCS to cluster a remote MySQL instance. My question: if the VDI cluster is on x86 (little endian), can the remote MySQL DB be on SPARC (big endian)?
    Thanks,

    The endian matching is only a requirement for the management and data nodes of a MySQL cluster. If you are going to be using a remote MySQL DB, the VDI servers will be connecting to that remote MySQL DB as a client connection, and there is no requirement for clients to match the endian type.
    http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-limitations-exclusive-to-cluster.html

  • Big Endian and Little Endian formats.

    How can I read numbers from a file in both big and little endian formats?

    Not without reading them a byte at a time and doing bit-shifting arithmetic yourself.
    I was hoping I would not get that kind of answer :) Well, if that's the only way, can you tell me what kind of format is used in Java? Big or little endian?
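
    For what it's worth, here is a sketch of the byte-at-a-time alternative hinted at above (the class and method names are made up; DataInputStream.readUnsignedByte is the standard API):
    import java.io.DataInputStream;
    import java.io.IOException;
    public class ReadIntEitherEndian {
        // Composes an int from four bytes read off the stream, in the requested order.
        static int readInt(DataInputStream in, boolean littleEndian) throws IOException {
            int b0 = in.readUnsignedByte();
            int b1 = in.readUnsignedByte();
            int b2 = in.readUnsignedByte();
            int b3 = in.readUnsignedByte();
            return littleEndian
                ? (b3 << 24) | (b2 << 16) | (b1 << 8) | b0
                : (b0 << 24) | (b1 << 16) | (b2 << 8) | b3;
        }
    }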

  • Big-endian or little-endian

    Hello,
    Is JVM big-endian or little-endian? Or is it platform dependent?
    How does one check this?
    Regards.

    I also saw on the internet that Java is big-endian. But when you run this small piece of code,
    class Test {
         public static void main(String[] args) {
              short x = 10;
              byte high = (byte)(x >>> 8);
              byte low = (byte)x; /* cast implies & 0xff */
              System.out.println( "x=" + x + " high=" + high + " low=" + low );
         }
    }
    The output is:
    x=10 high=0 low=10
    I took this sample code from the mindprod.com site.
    Is storage different from display? Or am I missing something here?
    Regards.
