In Oracle Forms 10g, what is the best approach to read an Excel file (CSV)?

Hi All!
Could someone please advise me on the best way for an Oracle form to read and extract data from an Excel spreadsheet (CSV) located in a local drive directory?
Someone has already advised me to use the TEXT_IO package or a JavaBean through the Implementation Class property.
I'd really appreciate it if someone could give me some insights.
Thanks! beeals
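
If you go the JavaBean route, the bean's core is plain Java file reading. Below is a minimal sketch of the CSV-reading logic such a bean might delegate to (the class name, path handling, and naive comma splitting are assumptions for illustration; wiring the bean into Forms via the Implementation Class property is a separate step):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical helper a Forms JavaBean could delegate to.
    public class CsvReader {
        // Reads every row of a CSV file into a list of column arrays.
        public static List<String[]> read(String path) throws IOException {
            List<String[]> rows = new ArrayList<String[]>();
            try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Naive split: does not handle quoted fields that contain commas.
                    rows.add(line.split(","));
                }
            }
            return rows;
        }
    }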

Another approach to load a CSV file is to use EXTERNAL TABLES; the SQL Reference manual has examples.
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2095331
http://www.astral-consultancy.co.uk/cgi-bin/hunbug/doco.cgi?11210 - a simple example of external tables loading data into Oracle
or here:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:68212348056
Regards
Ros

Similar Messages

  • What's the best approach to work with Excel, CSV files

    Hi gurus, I've got a question for you. Based on your experience, what's the best approach to working with Excel or CSV files that have to be uploaded through Data Services to your data warehouse?
    Let's say your end user, who is not a programmer, creates a group of 4 Excel files with different calculations on a monthly basis, so that a set of reports can be generated from the data warehouse once the files have been uploaded to tables in your DWH. The calculations vary from month to month. The user doesn't have a front end to upload the Excel files directly to Data Services. The end user also needs to keep track of which person uploaded the files for a given month.
    1. The end user should place their 4 Excel files in a shared directory that is visible to Data Services.
    2. Data Services will execute a scheduled job that reads the four files and uploads them to the data warehouse at a set time, let's say 9:00pm.
    It makes me wonder: what happens if the user needs to present their reports immediately and can't wait until 9:00pm? Is it possible for the end user to execute some kind of action (outside the Data Services environment) so Data Services "knows" it has to process those files right now, instead of waiting for the nightly schedule?
    Is there a way for DS to track who uploaded those files?
    Would it be better to build a front end so the end user can upload their four files directly to the data warehouse?
    Waiting for your comments to resolve this dilemma.
    Best Regards
    Erika

    Hi,
    There are functions in DS that capture input files automatically; you could use file_exists() or wait_for_file() for that. Schedule the job to run every few minutes and run the load only when the file exists. This can be done by using a file name containing a date/timestamp, or by moving the old files to an archive after each run so DS waits for new files to show up.
    Check this - Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
    Hope this helps.
    Arun
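
    Outside of DS script, the wait-for-file idea is just a polling loop. A generic Java sketch of the same pattern (the directory, file extension, and poll interval are invented for illustration; inside DS you would use wait_for_file() itself):

        import java.io.File;

        // Polls a drop directory until an input file appears, like wait_for_file().
        public class WaitForFile {
            public static void main(String[] args) throws InterruptedException {
                File drop = new File("/shared/dropzone");   // assumed shared directory
                while (true) {
                    File[] hits = drop.listFiles((dir, name) -> name.endsWith(".xls"));
                    if (hits != null && hits.length > 0) {
                        System.out.println("Found " + hits[0] + ", start the load job");
                        break;
                    }
                    Thread.sleep(60_000);                    // check once a minute
                }
            }
        }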

  • What's the best way for reading this binary file?

    I've written a program that acquires data from a DAQmx card and writes it to a binary file (attached file and picture). The data I'm acquiring comes from 8 channels at 2.5 MS/s for at least 5 seconds. What's the best way of reading this binary file, knowing that:
    - I'll also need it for graphs (only after acquiring)
    - I also need to view these values and use them later in MATLAB.
    I've tried "Array to Spreadsheet String", but LabVIEW runs out of memory (even if I use only 1 of the 8 channels).
    LabVIEW 8.6
    Attachments:
    AcquireWrite02.vi (15 KB)
    myvi.jpg (55 KB)

    But my real problem, at least for now, is how I can divide the information to get not just one graph but eight.
    I can read the file, but I get this (with only two channels):
    So what I tried was, using a for loop, saving 250 elements into different arrays and then writing them to the .txt file. But it doesn't come out right... I used 250 because that's what I got from the graph: every 250 points it plots the other channel.
    Am I missing something here? How should I treat the information coming from the binary file, if not the way I'm doing it?
    (Attached are the .vi files I'm using to save in the .txt format.)
    (EDIT: I just saw that I was dividing my graph's data by 4 just before plotting it... so it isn't 250 but 1000 elements per channel. Still, the problem is not solved.)
    Attachments:
    mygraph.jpg (280 KB)
    Read Binary File and Save as txt - 2 channels - with SetFilePosition.vi (14 KB)
    Read Binary File and Save as txt - with SetFilePosition_b_save2files_with_array.vi (14 KB)
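
    The underlying task here is deinterleaving: if the writer stores a block of consecutive samples per channel before switching channels, the reader has to regroup them. A rough Java sketch of that regrouping, to make the idea concrete (the 8 channels and 1000-sample blocks are taken from the post; the file name and big-endian double encoding are assumptions, though big-endian is LabVIEW's default byte order):

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class Deinterleave {
            public static void main(String[] args) throws IOException {
                final int channels = 8;    // from the post
                final int block = 1000;    // samples written per channel per chunk
                List<List<Double>> perChannel = new ArrayList<List<Double>>();
                for (int c = 0; c < channels; c++) perChannel.add(new ArrayList<Double>());
                // DataInputStream reads big-endian, matching LabVIEW's default.
                try (DataInputStream in = new DataInputStream(
                        new FileInputStream("acquired.bin"))) {   // hypothetical file name
                    int chunk = 0;
                    while (in.available() >= Double.BYTES) {
                        List<Double> ch = perChannel.get(chunk % channels);
                        for (int i = 0; i < block && in.available() >= Double.BYTES; i++) {
                            ch.add(in.readDouble());
                        }
                        chunk++;
                    }
                }
                // perChannel.get(k) now holds channel k's samples, ready for 8 separate plots.
            }
        }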

  • What is the best way to read in a file?

    I'm currently doing it like this:

        import java.io.BufferedReader;
        import java.io.File;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.Vector;

        Vector<String> inVector = new Vector<String>();
        try {
            File file = new File("C:/1jkiesel/testAgent/Customers.txt");
            BufferedReader in = new BufferedReader(new FileReader(file));
            String str;
            // collect one line per Vector element
            while ((str = in.readLine()) != null) {
                inVector.add(str);
            }
            in.close();
        } catch (IOException e) {
            e.printStackTrace();
        }

    Is there a better way? Especially the full file path thing...

    I did a search in the forums for
    "best practice read file"
    but there were too many results, so it was not very helpful.
    I think the best practice is to look at the "read" methods of each stream superclass and decide which one is best for your particular application.
    I'll ask a follow-up question:
    Is there a web page where we can find comparisons of the various methods? Comparisons like speed, size, exception handling?
    Hope this helps.
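
    For what it's worth, on Java 8 and later the whole loop in the question collapses to a single library call (the path is the original poster's):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        public class ReadAll {
            public static void main(String[] args) throws IOException {
                // Reads the whole file into memory as a list of lines.
                List<String> lines =
                        Files.readAllLines(Paths.get("C:/1jkiesel/testAgent/Customers.txt"));
                System.out.println(lines.size() + " lines read");
            }
        }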

  • What is the best way to read an excel document ?

    I just want to be able to read Excel spreadsheets. Do I need to purchase AppleWorks (with the other older stuff on it) or Office for Mac? Is there a better way? Thanks!
    MacBook Pro 17"   Mac OS X (10.4.6)  

    Hi Rick - Welcome to Apple Discussions!
    Do I understand correctly that you only want/need to view Excel spreadsheets, not work with them, etc.?
    If so, this might work for you:
    http://www.versiontracker.com/dyn/moreinfo/macosx/14982
    An alternative is NeoOffice, an open-source office suite that can read (and work with) Excel documents, among other things:
    http://www.neooffice.org/
    Cheers

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections to Oracle 9i/10g from a Java application?
    1. Using separate schemas for the various user types. (We can store only the relevant data in each user schema, so the number of records per table is reduced by replicating tables, but we then have to maintain all the data in another schema as well. That means updating two schemas in a given session - one per-user schema plus one schema holding all the data - which may cause update problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be far more records per table than in the previous case.
    Which is the best case?
    Please share your valuable ideas.

    It is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • What is the best approach to portalize a Struts application to Oracle Portal?

    Hey guys,
    What is the best approach to portalize a Struts application to Oracle Portal? Would it be by creating PDK portlets or JSR 168 portlets? And is there any direction on how to do so? Can someone please help this girl in need...

    HELLO!!! WELCOME BACK!! I THINK YOU SHOULD USE
    deploy.... Hey, thanks.
    Is there any network congestion OR any other problem that I can anticipate before I use the "deploy" utility? I have heard of some problems (I can't remember them now... because, honestly, I couldn't understand them at all when a BEA consultant told me about them...).
    So, any problem that may arise... that I need to think about before deploying ~10 applications to ~70-80 clusters... all at one time?
    Thanks again for your advice. I am learning to see the big picture of application deployment.
    -sangita

  • What is the best way to read and manipulate large data in Excel files and show it in SharePoint?

    Hi,
    I have a large Excel file with 700,000 records in it. The Excel file has a few columns that change every day.
    What is the best way to read the data from the Excel file in the fastest and most efficient way?
    2nd problem:
    I have one Excel file with many rows; each row contains data with certain keywords.
    What I want is to segregate the rows into respective sheets (tabs) in the workbook.
    For example, the rows hold the following data:
    1. alfa
    2. beta
    3. gamma
    4. beta
    5. gamma
    6. gamma
    7. alfa
    I want there to be 3 tabs, one for each of the keywords alfa, beta, and gamma.

    Hi,
    I don't really see any better options for SharePoint. SharePoint uses another product called 'Office Web Apps' to allow users to view/edit Microsoft Office documents (Word, Excel, etc.). But the web version of Excel doesn't support that many records, and there are size limitations (the default max size is probably 10 MB).
    Regarding the second problem, I think you need a custom solution (like a SharePoint timer job or web part) to read and present the data.
    However, if you can reduce the Excel file to somewhere near 16k records (the supported row count in the web version of Excel), then you can use SharePoint Excel Services to refresh the data in the Excel file in SharePoint automatically from external sources.
    Thanks,
    Sohel Rana
    http://ranaictiu-technicalblog.blogspot.com
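
    For the row-segregation part, the custom solution can be a small standalone program. A rough sketch using Apache POI (the file names are invented, and it assumes the keyword sits in the first cell of each row; a real version would copy the remaining cells as well):

        import java.io.File;
        import java.io.FileOutputStream;
        import org.apache.poi.ss.usermodel.*;

        public class SplitByKeyword {
            public static void main(String[] args) throws Exception {
                try (Workbook wb = WorkbookFactory.create(new File("input.xlsx"))) {
                    DataFormatter fmt = new DataFormatter();
                    Sheet src = wb.getSheetAt(0);
                    for (Row row : src) {
                        Cell key = row.getCell(0);
                        if (key == null) continue;
                        // Use the keyword as the tab name, creating the tab on first sight.
                        String tab = fmt.formatCellValue(key).toLowerCase();
                        Sheet dest = wb.getSheet(tab) != null ? wb.getSheet(tab)
                                                              : wb.createSheet(tab);
                        // Append the keyword to the matching tab.
                        Row out = dest.createRow(dest.getLastRowNum() + 1);
                        out.createCell(0).setCellValue(fmt.formatCellValue(key));
                    }
                    try (FileOutputStream fos = new FileOutputStream("output.xlsx")) {
                        wb.write(fos);
                    }
                }
            }
        }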

  • What are the best approaches for mapping restart in OWB?

    What are the best approaches for mapping restart in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10g (10.2.0.3.0). OWB is installed on Linux.
    We have a number of mappings, and we have built process flows for them as well.
    I'd like to know the best approaches for incorporating restart options in our process, i.e. handling the failure of a mapping in a process flow.
    How do we recycle failed rows?
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us build a restart process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How have our forum members handled these situations?
    Any ideas?
    Thanks in advance.
    RI

    Hi RI,
    Q: How many mappings (range) do you have in a process flow?
    A: Several hundred (100-300 mappings).
    Q: If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?
    A: Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the execution of mapping m2 from the Workflow Monitor: open the diagram with the process flow, select mapping m2, click the Expedite button, and choose the Repeat option.
    Q: On restart, will it run m1 again, then m2 and so on, or will it restart at row 1 of m2?
    A: You can specify the restart point. "At row 1 of m2" - I don't understand what you mean. All mappings run in set-based mode, so in case of error all table updates are rolled back; but there are several exceptions - for example, multiple target tables in a mapping without correlated commit, or an error in a post-mapping - so you must carefully analyze the results of the error.
    Q: What will happen if m3 fails?
    A: The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you reduce recycled failed rows to zero (0). These settings guarantee only two possible mapping results - SUCCESS or ERROR.
    Q: What is the impact if we have a large volume of data?
    A: In my opinion, for large volumes set-based mode is the preferred data processing mode. With this mode you get the full range of Oracle database enterprise features - parallel query, parallel DML, nologging, etc.
    Oleg

  • What is the best approach to generate control numbers from BPEL?

    1. If we want to control ISA/GS/ST control numbers from BPEL, what is the best approach to do that?
    2. How do we generate these control numbers, and where do we store them to get a sequence out of them?
    Thanks,
    Kathar

    Internally, Oracle B2B uses a DB sequence for generating the control numbers. It is the best approach, but at the same time it is not very straightforward, especially in the case of a clustered database. So you may carefully implement the same with BPEL.
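
    The sequence-based approach itself is simple. A minimal JDBC sketch of fetching the next control number (the connection details, sequence name, and ISA usage are hypothetical illustrations, not B2B internals):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class NextControlNumber {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "b2b", "secret"); // assumed
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT isa_ctrl_seq.NEXTVAL FROM dual")) {  // assumed sequence name
                    rs.next();
                    // A sequence hands out unique values atomically, so concurrent
                    // callers (or RAC nodes) never receive the same control number.
                    System.out.println("Next ISA control number: " + rs.getLong(1));
                }
            }
        }
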
    Hi Anuj,
    If we let B2B generate the control numbers in a clustered environment, are there any settings we have to make?
    "So you may carefully implement the same with BPEL. BTW, what is the use case behind this?"
    We were thinking about using this to send out duplicate messages to two TPs, but we decided to go with a Java callout, as you suggested in another thread.
    Thanks!
    Kathar

  • What is the best way to keep my personal files stored in iCloud separate from my work-related files?

    What is the best way to keep my personal files stored in iCloud separate from my work-related files? It seems that I'm not allowed to link my iPad mini and my work iMac using two different accounts, so I'm wondering how to make my single personal account work for both while keeping personal and work files reasonably separated.
    Thanks for any suggestions.

    Is it possible for you to upgrade your account to iCloud Drive? That would mean upgrading all Macs to Yosemite and all mobile devices to iOS 8.
    See: iCloud Drive FAQ and iCloud: About using iWork for iOS and iCloud
    In iCloud Drive you could create two separate custom folders, one for work documents and one for private documents, and organize your documents there. Don't use the app-specific folders (Keynote, Pages, Numbers, ...), because you can only create one level of folders inside them.

  • What is the best practice for providing a text file to a Java class in an OSGi bundle in CQ?

    This is probably a very basic question, so please bear with me.
    What is the best way to provide a .txt file to be read by a Java class in an OSGi bundle in CQ 5.5?
    I have been able to read a file called "test.txt" that I put in a structure like /src/resources/<any-sub-folder>/test.txt from my Java class at /src/main/java/com/test/mytest/Test.java using the bundle's getResource and getEntry calls, but I was not able to use context.getDataFile. How is this getDataFile method call meant to be used?
    And what if I want to read a file located in another bundle - is that possible? Or can I add the file to some repository and then access it? I am not clear on how to do this.
    I would also like to know the best practice if I need to provide a large data set in a flat file to be read by a Java class in CQ5.
    Please provide detailed steps or point me to a how-to guide or other helpful resources, as I am a novice.
    Thank you in advance for your time and help.
    VS

    As you can read in the OSGi Core specification (section 4.5.2), the getDataFile() method is for reading/writing a file in the bundle's private persistent area. It cannot be used to read files contained in the bundle. The issue Sham mentions refers to a version of Felix which is not used in CQ.
    The methods you mentioned (getResource and getEntry) are appropriate for reading files contained in a bundle.
    Reading a file from the repository is done using the JCR API. You can see a blueprint for how to do this by looking at the readFile method in http://svn.apache.org/repos/asf/jackrabbit/tags/2.4.0/jackrabbit-jcr-commons/src/main/java/org/apache/jackrabbit/commons/JcrUtils.java. Unfortunately, this method is not currently usable, as it was declared incorrectly (it should be a static method, but is an instance method).
    Regards,
    Justin
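
    To make the getResource route concrete, here is a minimal sketch from a bundle activator (the resource path mirrors the poster's layout; the activator class itself is hypothetical):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import org.osgi.framework.BundleActivator;
        import org.osgi.framework.BundleContext;

        public class Activator implements BundleActivator {
            public void start(BundleContext context) throws Exception {
                // Resolves a file packaged inside this bundle's jar.
                URL url = context.getBundle().getResource("any-sub-folder/test.txt");
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream(), "UTF-8"))) {
                    System.out.println("first line: " + in.readLine());
                }
            }

            public void stop(BundleContext context) {
                // nothing to clean up in this sketch
            }
        }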

  • What is the best practice in securing deployed source files

    Hi guys,
    Just yesterday I developed a simple image cropper using Ajax and Flash. After compiling the package, I noticed the package/installer delivers the exact same source files as developed to the install folder.
    This didn't concern me much at first, but come to think of it, this question keeps popping into my head:
    "What is the best practice for securing deployed source files?"
    How do we secure an application's installed source files from being tampered with, especially after they have been installed? E.g. modifying the spraydata.js file can be done easily with an editor.

    Hi,
    You could compute a SHA or MD5 hash of your source files on first run and save these hashes to the EncryptedLocalStore.
    On startup, recompute and verify. (This, of course, fails to address the case where the main app's swf/swc/html itself is decompiled.)
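
    EncryptedLocalStore is AIR-specific, but the hash-and-verify idea is generic. A Java sketch of the same check (the file name comes from the question; storing the trusted hash securely is exactly the part EncryptedLocalStore handles in AIR):

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.security.MessageDigest;

        public class VerifyFile {
            // Returns the SHA-256 digest of a file as lowercase hex.
            static String sha256(String path) throws Exception {
                byte[] digest = MessageDigest.getInstance("SHA-256")
                                             .digest(Files.readAllBytes(Paths.get(path)));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                return hex.toString();
            }

            public static void main(String[] args) throws Exception {
                String trusted = sha256("spraydata.js"); // computed and stored on first run
                // ... later, on startup, recompute and compare:
                boolean tampered = !sha256("spraydata.js").equals(trusted);
                System.out.println(tampered ? "file modified!" : "file intact");
            }
        }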

  • What is the best approach to converting LV7.1 tags to LV2012 shared variables in multiple VIs?

    What is the best approach to upgrading from LV7.1/DSC tags to LV2012/DSC shared variables in multiple VIs running on multiple platforms? Our system is composed of about 5 PCs running Windows 2000/LV7.1 Runtime, plus a PLC, and a main controller running XP/SP3/LV2012. About 3 of the PCs publish sensor information via tags across the LAN to the main controller. Only the main controller is currently being upgraded. Rudimentary questions:
    1. Will the other PCs running the 7.1 RTE (with tags) be able to communicate with the main controller running 2012 (shared variables)?
    2. Is it necessary to convert from tags to shared variables, or will the deprecated legacy tag VIs from LV7.1 work in LV2012?
    3. Will all the main controller VIs need to be incorporated into a project in order to use shared variables?
    4. Is the only way to do this to find all tag items and replace them with shared variable items?
    Thanks in advance for any information and advice!
    lb

    Hi lb,
    We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite. 
    The RTE needs to match the version of DSC you're using. Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012. Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables. The change from tags to shared variables took place in the transition to LabVIEW 8. The KnowledgeBase article Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables.
    Hope this gets you headed in the right direction.  Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • What's the best way to read JSON data?

    Hi all;
    What is the best way to read in JSON data? And once it's read in, is the best way to use it to turn it into XML and apply XPath?
    thanks - dave

    jtahlborn wrote:
    "Without a better understanding of what your definition of "use it" is, this question is essentially unanswerable. Jackson is a fairly popular library for translating JSON to/from Java objects. The json.org website provides a very basic library for parsing to/from XML. Which one is "best" depends on what you want to do with it."
    Good point. We have a reporting product (http://www.windward.net) and we've had a number of people ask us for JSON support. But how complex the data is and what they want to pull varies all over the place. The one thing that's common is that they generally want to pull down the JSON data and then put specific items from it in the report.
    XML/XPath struck me as a good way to do this for a couple of reasons. First, it seems to map well to the JSON data layout. Second, it provides a known query language. Third, we have a really good XPath wizard and we could then use it for JSON as well.
    ??? - thanks - dave
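
    For reference, the Jackson tree model mentioned above looks like this (the sample JSON and field names are invented):

        import com.fasterxml.jackson.databind.JsonNode;
        import com.fasterxml.jackson.databind.ObjectMapper;

        public class JsonPeek {
            public static void main(String[] args) throws Exception {
                String json = "{\"report\":{\"title\":\"Q3\",\"rows\":[{\"amount\":42}]}}";
                // readTree parses the whole document into a navigable tree.
                JsonNode root = new ObjectMapper().readTree(json);
                // Stepping through the tree is much like walking XPath steps.
                System.out.println(root.path("report").path("title").asText());
                System.out.println(root.path("report").path("rows").get(0).path("amount").asInt());
            }
        }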
