Returning records from a binary file

Hi,
I am having trouble returning all the records from a binary file. As my knowledge is limited, I haven't been able to devise a method for counting the records and then displaying them one by one.
Any help you can provide is appreciated. Here is the basis of my code:
FileInputStream in = new FileInputStream("stud.bin");
ObjectInputStream s = new ObjectInputStream(in);
//System.out.println(s.available());
String studentFile = (String)s.readObject();
String gpaFile = (String)s.readObject();
classRoll.addStudent (studentFile, gpaFile);
System.out.println(studentFile + " has a GPA of " + gpaFile);
System.out.flush();
classRoll.addStudent("andy", "4.5");
s.close();
System.out.println(classRoll);

Are these files in text format? If so, you should use the BufferedReader class, or some other file-oriented component of the java.io API, to retrieve the values. ObjectInputStreams are meant for exchanging serialized Object instances between resources; in other words, you take a specific class, generalize it into a byte stream, and later turn it back into the specific class. You should not use the object streams just to retrieve plain values from a file, because you are serializing and deserializing for no benefit. An ObjectInputStream only really makes sense when you need to send an existing Object to some resource (often a remote one) that does not otherwise have access to it.
Do something like:
FileInputStream f = new FileInputStream("file");
BufferedReader in = new BufferedReader(new InputStreamReader(f));
while (true) {
    String line = in.readLine();
    if (line == null)
        break;
    // process the line here, e.g. split it into fields
    System.out.println(line);
}
in.close();
The above code is not necessarily the best, but it should give you a sense of how to solve the problem. For this particular problem you should probably stick to the java.io API exclusively and avoid the object stream classes.
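That said, if stud.bin really was written with ObjectOutputStream (which the original code suggests), you can read back every record by calling readObject() in a loop until the stream throws EOFException. A minimal, self-contained sketch, assuming each record was written as two Strings (name first, then GPA); the classRoll object is left out so the example compiles on its own:
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

public class ReadAllStudents {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        int count = 0;
        try (ObjectInputStream s = new ObjectInputStream(new FileInputStream("stud.bin"))) {
            while (true) {
                String student;
                try {
                    student = (String) s.readObject();   // name was written first
                } catch (EOFException endOfFile) {
                    break;                               // no more records in the file
                }
                String gpa = (String) s.readObject();    // GPA was written second
                System.out.println(student + " has a GPA of " + gpa);
                count++;
            }
        }
        System.out.println(count + " records read");
    }
}
Counting the records is then just a matter of incrementing a counter inside the loop, as above.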

Similar Messages

  • "Read From Binary File" function Help ambiguity

    I must be getting tired, but for some reason a doubt crept in my mind as I was designing a new piece of code this morning:
    "is the "Read From Binary File" using the last file position or is it starting from the beginning of the file?"
    "That's a stupid question", I told myself.
    "I used this function a million times and have always assumed it is reusing the last file position. Moreover, there is no file offset input to that function, so WTH am I afraid of?"
    So, for kicks, I fired up the Help window and read the following description (*):
    Reads binary data from a file and returns it in data. How the data is read depends on the format of the specified file. This function does not work for files inside an LLB.
    (*) BTW, has anybody ever complained that you can't select and copy anything from the floating Help Window?
    Not much there. I particularly admire the phrasing of the second sentence... What about: "This function can do a lot of things, but it would be much too complex to describe in extensive detail, so if you are asking, you probably can't afford to use it"?
    Anyhow, I clicked on the "Detailed Help" and got this (among other things):
    Use the Set File Position function if you need to perform random access.
    WHAT? I am pretty darn sure I DO NOT USE the Set File Position when I read a file in successive and contiguous chunks. I just pass the file refnum into a shift register and back to the function and that's it.
    Now, the description of the "Refnum Out" output says: If file is a refnum or if you wire refnum out to another function, LabVIEW assumes that the file is still in use until you close it. Translated into plain English, is that supposed to mean that if the file is not closed it is open, or is it implying that it contains more info than just "the file is open and can be found here"?
    I started searching around and finally ended up with the entry for "refnums, file I/O". Down the bottom of the (long) article, I found this under the heading "References to Objects or Applications" (but nothing specific to files, BTW):
    ...LabVIEW creates a refnum associated with that file, device, or network connection...
    [...]  LabVIEW remembers information associated with each refnum, such as the current location for reading from or writing to the object and the degree of user access, so you can perform concurrent but independent operations on a single object. If a VI opens an object multiple times, each open operation returns a different refnum. LabVIEW automatically closes refnums for you when a VI finishes running, but it is a good programming practice to close refnums as soon as you are finished with them to most efficiently use memory and other resources.
    So it seems that my recollection was correct. I do not know what the "degree of user access" for a file is, but that's not the topic of today's post. 
    So, my point is: the Help File for this function is incomplete or ambiguous at best. Please correct it. And provide a link to the "refnum, file I/O" Help entry in its detailed Help. It would H E L P...
    Thanks for reading.

    Reading in successive chunks is *NOT* random access. An open file always has
    a current position, which is updated with each read or write operation.
    You only need to set the file position if you want to start elsewhere.
    LabVIEW Champion. Do more with less code and in less time.

  • "Read from Binary File" and efficiency

    For the first time I have tried using Read from Binary File on sizable data files, and I'm seeing some real performance problems. To prevent possible data loss, I write data as I receive them from DAQ, 10 times per second. What I write is a 2-D array of double, 1000 data points x 2-4 channels. When reading in the file, I wish I could read it as a 3-D array, all at once. That doesn't seem supported, so I repeatedly read 2-D arrays and use a shift register with Build Array to assemble the 3-D array that I really need. But it's incredibly slow! It seems that I can only read a few hundred of these 2-D items per second.
    It also occurred to me that the Build Array used in a shift register to keep adding on to the array could be a quadratic-time operation, depending on how it is implemented: it keeps allocating bigger and bigger blocks of memory and copying the growing array at every step.
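    (As a rough illustration of that growth pattern, here is a minimal sketch in Java, used purely because it is compact; it is not LabVIEW code from this thread. The usual cure is the same idea in either language: preallocate the full array once and replace subsets in place instead of building.)
    public class GrowthDemo {
        // Appending by copy: each call allocates a new, larger array and copies
        // everything already accumulated, so n appends cost roughly 1+2+...+n
        // chunk copies, i.e. O(n^2) work overall.
        static double[][] appendByCopy(double[][] accumulated, double[][] chunk) {
            double[][] bigger = new double[accumulated.length + chunk.length][];
            System.arraycopy(accumulated, 0, bigger, 0, accumulated.length);
            System.arraycopy(chunk, 0, bigger, accumulated.length, chunk.length);
            return bigger;
        }

        // Preallocating: the destination is sized once up front and each chunk is
        // copied exactly one time into its slot, i.e. O(n) work overall.
        static void placeChunk(double[][] preallocated, double[][] chunk, int offset) {
            System.arraycopy(chunk, 0, preallocated, offset, chunk.length);
        }
    }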
    So I'm looking for suggestions on how to efficiently store, efficiently read, and efficiently reassemble my data into the 3-D array that I need. Perhaps I could simplify life if I had "write raw data" and "read raw data" operations that write and read only the numbers and no metadata. Then I could write the file and read it back in any size chunks I please, and read it with other programs besides. But I don't see them in the menus.
    Suggestions?
    Ken
    Solved!
    Go to Solution.

    I quote the detailed help from Read from Binary File:
    data type sets the type of data the function uses to read from
    the binary file. The function interprets the data starting at the current file
    position to be count instances of data type.
    If the type is an array, string, or cluster containing an array or string, the
    function assumes that each instance of that data type contains size information.
    If an instance does not include size information, the function misinterprets the
    data. If LabVIEW determines that the data does not match the type, it sets data
    to the default for the specified type and returns an error.
    So I see how I could write data without any array metadata by turning off "prepend array or string size information", but I don't see any way to read it back in such bare form. If I did, I'd have to tell it how long an array to read, and I don't see where to do that. If I could overcome this, I could indeed read in much larger chunks.
    I'll try the auto-indexing tunnel anyway. I didn't tell you the whole truth: the 3-D array is actually further sliced up, based on metadata that I carry, and ends up as a 4-D array of "runs". But I can do that after the fact instead of trying to do it with shift registers as I'm reading.
    Thanks,
    Ken

  • FM FOR CREATING NEW CONDITION RECORDS FROM INPUT FILE

    Hello vikas,
    I need to develop an interface program for creating new condition records from an input file.
    Is there any function module to update or create condition records?
    If you have any example interface program that updates condition records, please send it to me.
    regards
    ram.

    This must be your compiler output.
    Basically, it is telling you about two syntax errors.
    1. On line number 11 of the file RationalCollection1.java, the compiler expects a type identifier, that is, an object or return type such as int, String, boolean, etc.
    The likely reason is that a previous statement was not ended properly, which is also what the "';' expected" message points to. Check your code and make sure that methods (braces) are closed correctly and that no statement is missing its semicolon at the end.
    2. On line number 33 of the file RationalCollection1.java, the compiler expected a closing brace, so there is a brace missing where the compiler wants one.
    It appears that you have skipped some lines of code. Those lines are the problem; post lines 30-36 and 9-15 so we can see what is happening around those errors.
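    In other words, every statement has to end with a semicolon and every opening brace needs a matching closing brace. For illustration, a correctly terminated and correctly closed skeleton looks like this (the class and members are invented, not taken from RationalCollection1.java):
    public class Example {
        private int count;                  // field declaration ends with ';'

        public int getCount() {             // every '{' opened here...
            return count;
        }                                   // ...is closed with a matching '}'

        public void add(int n) {
            count += n;
        }
    }                                       // the class itself gets the final '}'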

  • Siebel 8.0.0.12 Fix Pack; Unable to get the seed from binary file.

    Hello Folks,
    Can anyone throw some light into what action is required on my scenario.
    I have applied the Siebel 8.0.0.12 Fix Pack on top of 8.0.0.11 SBA. After it was applied, I am facing an issue that is documented in the Release Notes for the 8.0.0.12 Fix Pack.
    The issue is "UNABLE TO LAUNCH URL AFTER APPLYING SIEBEL 8.0.0.12". I tried the steps given in the MR document; however, I am still having this issue.
    I am also not sure what is expected at the step "Run the following command: seedgeneratorutil myseed.dat abcdef".
    It asks me for a seed value at the command prompt: "Enter the seed":
    What should I give here? As a guess I gave SADMIN and tried to launch, but it still shows the same error.
    Please assist.
    Steps Details from Release Notes:
    UNABLE TO LAUNCH URL AFTER APPLYING SIEBEL 8.0.0.12
    Component: Server Infrastructure
    Subcomponent: SWSE
    Product Version: Siebel 8.0.0.12
    Base Bug ID: 11938270
    **Users are unable to launch the URL after applying the Siebel 8.0.0.12 Fix Pack.
    **Use the following workaround to address this issue:
    Navigate to the eappweb/bin directory from the command line on the SWSE installation.
    Run the following command:
    seedgeneratorutil myseed.dat abcdef
    NOTE: In the example, myseed.dat is a filename. You can give any file name you wish.
    The myseed.dat file is generated in the eappweb/bin directory.
    Edit eapps.cfg to include the following parameters under the SWE section:
    seedfile = < complete path for myseed.dat >
    Bounce the web server.
    (For Linux only) Copy libmod_swe.so from the eappweb/bin folder to the web/ohs/modules folder
    Thanks
    Kumar

    Wilson,
    Thanks for your reply. I have repeated the steps and regenerated the error messages.
    Browser
    Message:
    An error occurred while trying to process your request. This error indicates a problem with the configuration of this server and should be reported to the webmaster (along with any errors listed below). We apologize for the inconvenience
    Initialization error:
    Unable to get the seed from binary file.
    Log
    2021 2011-09-20 23:23:01 0000-00-00 00:00:00 +0530 00000000 001 003f 0001 09 ss110920_7068 7068 7852 E:\sba80\SWEApp\log\ss110920_7068.log 8.0.0.12 [20444] ENU
    ProcessPluginState     ProcessPluginStateError     1     000000024e781b9c:0     2011-09-20 23:23:01     7852: [SWSE] Unable to get the seed from binary file.
    Eapps.cfg
    [swe]
    Language = enu
    Log = errors
    LogDirectory = $(SWSERoot)\log
    ClientRootDir = $(SWSERoot)
    SessionMonitor = False
    AllowStats = true
    LogSegmentSize = 0
    LogMaxSegments = 0
    DisableNagle = False
    seedfile = E:\sba80\SWEApp\BIN\80012seed.dat
    Thanks
    Kumar

  • Read an avi file using "Read from binary file" vi

    My question is how to read an AVI file using the "Read from binary file" VI.
    My objective is to create a series of small AVI files with IMAQ AVI Write Frame using the MPEG-4 codec, each 2 seconds long (so 40 frames per file at 20 fps), and then send them one by one so as to create a stream of video. The images are grabbed from a USB camera. If I read those frames back with IMAQ AVI Read Frame, the compression advantage would be lost, so I want to read the whole file itself.
    I read the AVI file using "Read from binary file" with the unsigned 8-bit data format, sent it to the remote end, saved it and displayed it; however, it didn't work. I later found that if I read an image file using "Read from binary file" with the unsigned 8-bit data format and save it on the local computer itself, the format is changed and the file becomes unrecognizable. Am I doing something wrong by reading the file as unsigned 8-bit integers, or should I have used another data type?
    I am using LabVIEW 8.5, the LabVIEW Vision Development Module and the Vision Acquisition Module 8.5.
    Your help would be highly appreciated.
    Thank you.
    Solved!
    Go to Solution.
    Attachments:
    read avi file in other data format.JPG ‏26 KB

    Hello,
    Check out the (full) help message for "write to binary file"
    The "prepend array or string size" input defaults to true, so in your example the data written to the file will have array size information added at the beginning and your output file will be (four bytes) longer than your input file. Wire a False constant to "prepend array or string size" to prevent this happening.
    Rod.
    Message Edited by Rod on 10-14-2008 02:43 PM

  • Read From Binary File doesn't work on MCB2400 in LV2009 Embedded ARM

    Hello,
    I am trying to read a binary file from the SD card on my MCB2400 board with LV2009 Embedded for ARM.
    But the output is always 0 if I run my VI on the MCB2400. If I use the same VI on the PC, it works fine with the same binary file.
    Access to the SD card on the MCB2400 works fine in other cases; if I try to read from a text file, it works without any problems.
    Are there any constraints for the "Read From Binary File" node in Embedded compared to the same node on the PC?
    I have noticed that there is also a problem with reading text files: if the size of the file is about 100 bytes, it doesn't work any more either. I can't understand it, because I always read just one byte at a time. Even if the implementation in LabVIEW were so bad that it always read the whole file into RAM, it should still work; the MCB2400 has 32 MB of RAM, so 100 bytes or even a few megabytes should be fine.
    But that doesn't seem to be the cause of the binary problem, because even a 50-byte binary file doesn't work.
    bye & thanks
    amin
    Solved!
    Go to Solution.
    Attachments:
    SD_Card_Read-test.vi ‏12 KB

    Hello,
    thank you for your Help.
    But I just want to read a binary file that is built by another program. It is coded with 8 bits per byte (like a normal binary file) and not just 7-bit ASCII, so the workaround doesn't work in my case.
    I posted the test VI in my first post (here once again as a picture). It works fine on the PC, but if I try it on my MCB2400 the "Read From Binary File" node doesn't work.
    It is also possible to open the bin file with the "Read Text File" node and see the cryptic content of the bin file, so the problem seems to be in the "Read From Binary File" node.
    bye & thanks again
    amin
    Message Edited by aminat on 09-30-2009 03:28 AM

  • A programme that will extract particular records from a file

    Let's assume that there is a text file containing around 50 records in the following format:
    ABCDEFGH:AB:12345:-09876:567:876
    The file to be read by the programme consists of a sequence of records, each of which is split into fields using separator characters, much as in Unix. The programme is controlled by command line arguments.
    The most important part of the programme is that the condition for extracting a particular record from that file will be that specified fields shall contain a particular string of characters.
    The command line arguments to be given are:
    -tx This flag specifies the field separator character "x"
    -<num> This flag introduces a match requirement for field "num". Fields are numbered starting at 1. This flag may be followed by further flags and then by the string to look for.
    -f Fold case. I.e. string matching is not case sensitive.
    -t Transposition match. A match is recorded even if the strings differ provided the difference consists of just 2 characters transposed so that, for example: "compiler and complier" will be recorded as a match.
    -d A differ match. A match is recorded even if the two strings differ by one letter. For example: "gest and rest" will be recorded as a match.
    The file to be processed will be specified in the final command line argument.
    If match requirements are specified for more than one field the implication is that all fields specified must separately satisfy their individual match requirements. If any fields referred to in the match requirements are missing from a particular record then the record will not satisfy the overall match requirements.
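    As an illustration of the -t transposition rule, a check along the following lines would treat "compiler" and "complier" as a match (a minimal sketch that assumes the transposed characters are adjacent, as in the example given; the class and method names are inventions, not part of the assignment):
    public class TranspositionCheck {

        // True if a and b are identical, or differ only by two adjacent characters
        // being swapped, e.g. "compiler" vs "complier".
        static boolean transpositionMatch(String a, String b) {
            if (a.equals(b)) return true;
            if (a.length() != b.length()) return false;
            int first = -1;
            for (int i = 0; i < a.length(); i++) {
                if (a.charAt(i) != b.charAt(i)) {
                    first = i;
                    break;
                }
            }
            if (first < 0 || first + 1 >= a.length()) return false;
            return a.charAt(first) == b.charAt(first + 1)
                && a.charAt(first + 1) == b.charAt(first)
                && a.substring(first + 2).equals(b.substring(first + 2));
        }

        public static void main(String[] args) {
            System.out.println(transpositionMatch("compiler", "complier")); // true
            System.out.println(transpositionMatch("gest", "rest"));         // false (that pair is a -d match)
        }
    }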

    And your question is??

  • Deleting records from a file

    How do I delete records from a file on my presentation server after a successful read operation?

    Hi,
    First store the records into an internal table using gui_upload.
    Then read the records, delete whichever ones you want based on some condition, and download the remaining records back to the presentation server using gui_download.
    Rgds,
    Prajith

  • Creating BOM using BDC :How to display no of records from flat file under

    Hi,
    How can I display a number of records from a flat file vertically under one alternative BOM?
    When I execute the program, the records keep replacing one another.
    Here is my code:
    report ZBOM1
           no standard page heading line-size 255.
    *include bdcrecx1.
    DATA: BEGIN OF bdc OCCURS 0,
           matnr(18),
           werks(4),
           stlan(1),
          END OF BDC.
    DATA: BEGIN OF BDC1 OCCURS 0,
           idnrk(18),
           MENGE(18),
           MEINS(3),
           postp(1),
          END OF bdc1.
    DATA: BEGIN OF BDCDATA OCCURS 0,
             matnr(18),
             werks(4),
             stlan(1),
             idnrk(18),
             MENGE(18),
             MEINS(3),
             postp(1),
             posnr(4),
          END OF BDCDATA.
    data: ibdcdata type  standard table of bdcdata WITH HEADER LINE.
    *start-of-selection.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        FILENAME                      = 'C:\Documents and Settings\dilipkumar.b\Desktop\soft.txt'
       FILETYPE                       = 'ASC'
       HAS_FIELD_SEPARATOR            = ','
      HEADER_LENGTH                 = 0
      READ_BY_LINE                  = 'X'
      DAT_MODE                      = ' '
      CODEPAGE                      = ' '
      IGNORE_CERR                   = ABAP_TRUE
      REPLACEMENT                   = '#'
      CHECK_BOM                     = ' '
    *  VIRUS_SCAN_PROFILE            =
      NO_AUTH_CHECK                 = ' '
    *IMPORTING
    *  FILELENGTH                     =
    *  HEADER                         =
      TABLES
        DATA_TAB                      = BDCDATA
    EXCEPTIONS
      FILE_OPEN_ERROR               = 1
      FILE_READ_ERROR               = 2
      NO_BATCH                      = 3
      GUI_REFUSE_FILETRANSFER       = 4
      INVALID_TYPE                  = 5
      NO_AUTHORITY                  = 6
      UNKNOWN_ERROR                 = 7
      BAD_DATA_FORMAT               = 8
      HEADER_NOT_ALLOWED            = 9
      SEPARATOR_NOT_ALLOWED         = 10
      HEADER_TOO_LONG               = 11
      UNKNOWN_DP_ERROR              = 12
      ACCESS_DENIED                 = 13
      DP_OUT_OF_MEMORY              = 14
      DISK_FULL                     = 15
      DP_TIMEOUT                    = 16
      OTHERS                        = 17.
    IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    *perform open_group.
    loop at bdcdata.
    perform bdc_dynpro      using 'SAPLCSDI' '0100'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29N-STLAN'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'RC29N-MATNR'
                                  'SOFTDRINKS'.
    perform bdc_field       using 'RC29N-WERKS'
                                  'WIND'.
    perform bdc_field       using 'RC29N-STLAN'
                                  '1'.
    perform bdc_field       using 'RC29N-DATUV'
                                  '16.09.2008'.
    perform bdc_dynpro      using 'SAPLCSDI' '0110'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'RC29K-BMENG'
                                  '1'.
    perform bdc_field       using 'RC29K-STLST'
                                  '1'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29K-EXSTL'.
    perform bdc_dynpro      using 'SAPLCSDI' '0111'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29K-LABOR'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_dynpro      using 'SAPLCSDI' '0140'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29P-POSTP(01)'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '=FCBU'.
    perform bdc_field       using 'RC29P-IDNRK(01)'
                                  BDCDATA-IDNRK.
    perform bdc_field       using 'RC29P-MENGE(01)'
                                  BDCDATA-MENGE.
    perform bdc_field       using 'RC29P-MEINS(01)'
                                  BDCDATA-MEINS.
    perform bdc_field       using 'RC29P-POSTP(01)'
                                  BDCDATA-POSTP.
    perform bdc_dynpro      using 'SAPLCSDI' '0130'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29P-POSNR'.
    perform bdc_field       using 'RC29P-POSNR'
                                   BDCDATA-POSNR.            "'0010'.
    perform bdc_field       using 'RC29P-IDNRK'
                                  BDCDATA-IDNRK.             "'15'.
    perform bdc_field       using 'RC29P-MENGE'
                                  BDCDATA-MENGE.             "'1'.
    perform bdc_field       using 'RC29P-MEINS'
                                  BDCDATA-MEINS.             "'ml'.
    perform bdc_dynpro      using 'SAPLCSDI' '0131'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29P-POTX1'.
    perform bdc_field       using 'RC29P-SANKA'
                                  'X'.
    *perform bdc_transaction using 'CS01'.
    *perform close_group.
    CALL TRANSACTION 'CS01' USING IBDCDATA MODE 'A' UPDATE 'S'.
    REFRESH IBDCDATA.
    endloop.
    *       Start new screen                                              *
    FORM BDC_DYNPRO USING PROGRAM DYNPRO.
      CLEAR iBDCDATA.
      iBDCDATA-PROGRAM  = PROGRAM.
      iBDCDATA-DYNPRO   = DYNPRO.
      iBDCDATA-DYNBEGIN = 'X'.
      APPEND ibDCDATA .
    ENDFORM.
    *       Insert field                                                  *
    FORM BDC_FIELD USING FNAM FVAL.
    IF FVAL <> NODATA.
        CLEAR iBDCDATA.
        iBDCDATA-FNAM = FNAM.
        iBDCDATA-FVAL = FVAL.
        APPEND iBDCDATA .
    ENDIF.
    ENDFORM.

    Hi,
    the BDCDATA structure must contain the fields PROGRAM, DYNPRO, DYNBEGIN, FNAM and FVAL.
    You have to declare it like that and pass it in your CALL TRANSACTION statement.
    Please use a different name for the internal table you declared as BDCDATA (and for IBDCDATA), so that it does not clash with the standard BDCDATA structure.

  • Deleting records from XML file--just a little problem

    Hi!
    My xml file has a simple form like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <AdressBook>
      <Record>
        <Name>Suzy</Name>
        <Lastname>Love</Lastname>
        <Adress>You street 22</Adress>
        <Phone>911</Phone>
        <Email>[email protected]</Email>
      </Record>
    <Record>
       <Name>Judy</Name>
       <Lastname>Goblin</Lastname>
       <Adress>Milkyway</Adress>
       <Phone>911</Phone>
      <Email>[email protected]</Email>
    </Record>
    </AdressBook>
    Now, whenever I delete a record from that file, it leaves behind an empty <Record /> tag at the location where the previous record was. How do I delete that remaining tag, or better yet, how do I delete the Record element as a whole? I am using JDOM.
    Here is the code I use to delete records:
    public void deleteRecordFromFile(Record r) {
         Element root = doc.getRootElement();
         List records = root.getChildren();
         Iterator listIt = records.iterator();
         while (listIt.hasNext()) {
              Element rec = (Element) listIt.next();
              Record p = new Record("", "", "", "", "");
              List recChildren = rec.getChildren();
              Element e = null;
              for (int i = 0; i < recChildren.size(); i++) {
                   e = (Element) recChildren.get(i);
                   switch (i) {
                        case 0: p.setName(e.getText()); break;
                        case 1: p.setLastName(e.getText()); break;
                        case 2: p.setAddress(e.getText()); break;
                        case 3: p.setPhone(e.getText()); break;
                        case 4: p.setEmail(e.getText()); break;
                        default: ;
                   }
              }
              if (p.equals(r)) {
                   rec.removeContent();
                   System.out.println("Record deleted!");
              }
         }
         writeToFile(filename);
    }
    As you can see above, I use the method removeContent(); should I use some other method? I need to get rid of those remaining tags in order to make editing the records stored there easier. I guess I could write a separate routine that cleans the file of those tags, but I think that is just too time consuming... or not?
    Please help me out here.:)
    Message was edited by:
    byteminister

    You cannot remove elements from a List while iterating over it (other than through the Iterator itself) without risking a ConcurrentModificationException. One safe way is to delete the records in two steps: collect them all first, and delete them afterwards.
    public void deleteRecordFromFile(Record r) {
         Element root = doc.getRootElement();
         List records = root.getChildren();
         Iterator listIt = records.iterator();
         // Create a container for storing references to the records that will be deleted.
         ArrayList<Element> deleteList = new ArrayList<Element>();
         while (listIt.hasNext()) {
              Element rec = (Element) listIt.next();
              Record p = new Record("", "", "", "", "");
              List recChildren = rec.getChildren();
              Element e = null;
              for (int i = 0; i < recChildren.size(); i++) {
                   e = (Element) recChildren.get(i);
                   switch (i) {
                        case 0: p.setName(e.getText()); break;
                        case 1: p.setLastName(e.getText()); break;
                        case 2: p.setAddress(e.getText()); break;
                        case 3: p.setPhone(e.getText()); break;
                        case 4: p.setEmail(e.getText()); break;
                        default: ;
                   }
              }
              if (p.equals(r)) {
                   // This record will be deleted.
                   deleteList.add(rec);
              }
         }
         // Delete all the records in deleteList: removing them from the live child
         // list detaches the whole <Record> elements from the document.
         records.removeAll(deleteList);
         writeToFile(filename);
    }
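    If you prefer a single pass, the live child list that JDOM's getChildren() returns can also be modified through its Iterator, which sidesteps the ConcurrentModificationException because the removal goes through the iterator itself. A minimal sketch, assuming JDOM 1.x (org.jdom) and matching only on the Name child for brevity:
    import java.util.Iterator;
    import java.util.List;
    import org.jdom.Document;
    import org.jdom.Element;

    public class RecordRemover {
        // Removes every <Record> whose <Name> child matches the given name.
        // The List returned by getChildren() is live, and removing through its
        // Iterator detaches the whole element from the parent.
        public static void deleteRecordsByName(Document doc, String name) {
            Element root = doc.getRootElement();
            List records = root.getChildren("Record");
            for (Iterator it = records.iterator(); it.hasNext();) {
                Element rec = (Element) it.next();
                if (name.equals(rec.getChildText("Name"))) {
                    it.remove();   // the <Record> disappears from <AdressBook>
                    System.out.println("Record deleted!");
                }
            }
        }
    }
    Either way, the important point is that the element gets detached from its parent rather than merely emptied with removeContent().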

  • How to parse and retrieve records from xml files into columns in Table

    Hi
    I have attached what I tried.
    Table to hold the XML COntent:
    create table xmlfile(xml_con sys.xmltype);
    Inserting Xml file content into the Above table:
    insert into xmlfile values(sys.xmltype.CreateXml('<Root><name>RAM</name><age>23</age></Root>'))
    SQL> select * from xmlfile;
    XML_CON
    <Root>
    <name>RAM</name>
    <age>23</age>
    </Root>
    SQL> select extractValue(xml_con, '/Root/name') content from xmlfile;
    CONTENT
    RAM
    This one works fine
    But if the file content is as below (it contains multiple Records):
    insert into xmlfile values(sys.xmltype.CreateXml('<Root><Record><name>RAM</name><age>23</age></Record><Record><name>SAM</name><age>23</age></Record></Root>'))
    SQL> select extractValue(xml_con, '/Root/Record/name') content from xmlfile;
    ERROR at line 1:
    ORA-19025: EXTRACTVALUE returns value of only one node
    Can anyone help me with this issue: how can I extract multiple records from the XML file in this manner (from PL/SQL, without using Java)?
    OR
    If there is any other way to do this, please tell me.

    SQL> SELECT EXTRACTVALUE (COLUMN_VALUE, '//name') NAME,
           EXTRACTVALUE (COLUMN_VALUE, '//age') age
      FROM TABLE
              (XMLSEQUENCE
                  (EXTRACT
                      (XMLTYPE
                          ('<Root>
                              <Record>
                                <name>RAM</name>
                                <age>23</age>
                              </Record>
                              <Record>
                                <name>SAM</name>
                                <age>23</age>
                              </Record>
                            </Root>'),
                       '/Root/Record')));
    NAME       AGE
    RAM        23
    SAM        23
    2 rows selected.

  • Return records from Stored Procedure to Callable Statement

    Hi All,
    I am creating a web application to display a student's score card.
    I have written a stored procedure in Oracle that accepts the student roll number as input and returns a set of records containing the student's scores back to the JSP page, where it has to be put into a table format.
    How do I register the output type of "records" from the stored function in Oracle in the "registerOutParameter" method of the CallableStatement in the JSP page?
    If not this way, is there any other method by which records returned from a stored function/procedure can be retrieved through a CallableStatement and used in the page? Please let me know any method other than writing the query in the JSP page itself.

    I have a question for you:
    If the stored procedure is doing nothing more than generating a set of results, why are you even using one?
    You could create a view or write a simple query like you mentioned.
    If you're intent on going the stored procedure route, then I have a suggestion. Part of the JDBC 2.0 spec allows you to basically return an object from a CallableStatement. It's a little involved but can be done. An article that I ran across a while back really helped me figure out how to do this. The URL is as follows:
    http://www.fawcette.com/archives/premier/mgznarch/javapro/2000/03mar00/bs0003/bs0003.asp
    Pay close attention to the last section of the article: Persistence of Structured Types.
    Here's some important snippets of code:
    String UDT_NAME = "SCHEMA_NAME.PRODUCT_TYPE_OBJ";
    cstmt.setLong(1, value1);
    cstmt.setLong(2, value2);
    cstmt.setLong(3, value3);
    // By updating the type map in the connection object
    // the Driver will be able to convert the array being returned
    // into an array of LikeProductsInfo[] objects.
    java.util.Map map = cstmt.getConnection().getTypeMap();
    map.put(UDT_NAME, ProductTypeObject.class);
    super.cstmt.registerOutParameter(4, java.sql.Types.STRUCT, UDT_NAME);
    /*
     * This is the class that is being mapped to the Oracle object.
     * There are two methods in the SQLData interface.
     */
    public class ProductTypeObject implements java.sql.SQLData, java.io.Serializable {
        /*
         * Implementation of a method declared in the SQLData interface.  This method
         * is called by the JDBC driver when mapping the UDT, SCHEMA_NAME.Product_Type_Obj,
         * to this class.
         * The object being returned contains a slew of attributes defined as tables;
         * these are retrieved as java.sql.Array objects.
         */
        public void readSQL(SQLInput stream, String typeName) throws SQLException {
            String[] value1 = (String[]) stream.readArray().getArray();
            String[] value2 = (String[]) stream.readArray().getArray();
        }

        public void writeSQL(SQLOutput stream) throws SQLException {
        }
    }
    You'll also need to create the Oracle object type. The specification for mine follows:
    TYPE Detail_Type IS TABLE OF VARCHAR2(1024);
    TYPE Product_Type_Obj AS OBJECT (
      value1  Detail_Type,
      value2 Detail_Type,
      value3 Detail_Type,
      value4 Detail_Type,
      value5 Detail_Type,
      value6 Detail_Type,
      value7 Detail_Type,
      value8 Detail_Type);
    Hope this helps,
    Zac
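    Another common approach, if you can change the stored procedure, is to return a SYS_REFCURSOR and register the out parameter as a cursor on the JDBC side; the results then come back as an ordinary ResultSet. A minimal sketch (the procedure name get_scores, its parameter order and the column names are assumptions, and oracle.jdbc.OracleTypes is specific to the Oracle driver):
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import oracle.jdbc.OracleTypes;

    public class ScoreCardDao {
        // Hypothetical procedure: PROCEDURE get_scores(p_roll_no IN VARCHAR2, p_result OUT SYS_REFCURSOR)
        public static void printScores(Connection con, String rollNumber) throws Exception {
            CallableStatement cstmt = con.prepareCall("{ call get_scores(?, ?) }");
            cstmt.setString(1, rollNumber);
            cstmt.registerOutParameter(2, OracleTypes.CURSOR);   // the "records" come back as a cursor
            cstmt.execute();

            ResultSet rs = (ResultSet) cstmt.getObject(2);       // iterate it like any other query result
            while (rs.next()) {
                System.out.println(rs.getString("SUBJECT") + ": " + rs.getString("SCORE"));
            }
            rs.close();
            cstmt.close();
        }
    }
    This avoids the custom SQLData mapping entirely when all you need is a tabular result in the JSP page.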

  • Del records from flat files

    Hi all,
    I have a text file abc.txt, and I want to delete records from a table, taking the input from the txt file, on Linux. Here are the details:
    My table structure is
    desc unix_logins
    Name Null? Type
    UNIX_LOGIN NOT NULL VARCHAR2(7)
    NAME_LAST VARCHAR2(20)
    NAME_FIRST VARCHAR2(20)
    NAME_MI VARCHAR2(1)
    LOGIN_CONTACT_TN VARCHAR2(12)
    and abc.txt
    UNIX_LOGIN
    abc
    bdf
    fyz
    rmt
    lik
    My Oracle is installed on the Linux platform. Can you please suggest how I can do it?

    >
    I have a text file abc.txt, i want to delete records from a table take the input from the txt file from linux operating.
    >
    It's not clear what you mean. Do you want to delete records from a table if those records exist in a txt file?
    You can create an EXTERNAL TABLE on the text file and then use that 'table' in a query that deletes records from your table.
    DELETE FROM myTable where LAST_NAME in (SELECT NAME_LAST from myExternalTable);
    See this Oracle-base example of creating an external table on a text file:
    http://www.oracle-base.com/articles/9i/external-tables-9i.php

  • How do I add multiple text block records from text file?

    The data manager documentation (page 151) for MDM 5.5 SP3 indicates that one or more new text blocks can be added to the Text Blocks object table from files. It is noted that the files must be plain text files.
    I use notepad and create a text file with two lines as follows:
    Test 1
    Test 2
    When I try to add the text blocks following documentation mentioned above, it only adds one record for the Data Group I have chosen and the record contains the entry "Test 1" from the first line in the text file.
    How can I add multiple records to the data group from a file?

    From my testing it appears that you need to have one text file per text block record in Data Manager.
    I wrote a VBA macro so that I could enter my text blocks into an Excel spreadsheet; the macro then takes the contents of each cell in a highlighted column and creates one text file per cell.
    Then using Data manager, I can select all of the text files at once and it will import them, creating one record per text file.
