Best way to do an Excel data file load

Hi
I need to load data from an Excel file into an Oracle table on a frequent basis. While loading it, I need to validate each column's contents (using PL/SQL code). Are there any packages/procedures/APIs provided by Oracle for this kind of activity? What would be the best way to do an Excel file load within Oracle? FYI, I currently have Visual Basic code that reads the data from the Excel file and loads it into a temporary Oracle table, and I then validate the data in that temporary table with PL/SQL code in a stored procedure. I am trying to avoid the VB "front end" part of this effort and do the whole thing within Oracle itself. Please let me know if you have any ideas.
Your help is greatly appreciated!!
Thanks in advance,
Ram

If you are running on Windows, you could try COM Automation, which would mean moving your VB process into a stored procedure. I've never tried this myself, having been quite satisfied with Heterogeneous Connectivity.
Tak
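
If the workbook can be saved (or converted) to CSV first, an external table keeps the whole load inside Oracle: the database parses the file whenever it is queried, and the PL/SQL validation becomes a plain INSERT ... SELECT into the real table. A minimal sketch; the directory path, table, and column names are made up for illustration:

-- One-time setup: a directory object pointing at the folder where the CSV lands
CREATE OR REPLACE DIRECTORY xl_load_dir AS '/data/loads';

-- External table: Oracle parses the CSV each time it is queried
CREATE TABLE emp_ext (
  emp_id    VARCHAR2(20),
  emp_name  VARCHAR2(100),
  hire_date VARCHAR2(20)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY xl_load_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1                                   -- skip the header row
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('emp.csv')
)
REJECT LIMIT UNLIMITED;

-- Validate and load in one statement; the WHERE clause (or a PL/SQL loop
-- over the same SELECT) carries the per-column checks
INSERT INTO emp_target (emp_id, emp_name, hire_date)
SELECT TO_NUMBER(emp_id),
       emp_name,
       TO_DATE(hire_date, 'YYYY-MM-DD')      -- assumes ISO dates in the file
FROM   emp_ext
WHERE  emp_id IS NOT NULL;

The catch is getting from .xls to CSV without the VB front end; a scheduled save-as-CSV step covers that, or Tak's Heterogeneous Connectivity route can present the sheet as a remote table directly.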

Similar Messages

  • Best way to view and edit OpenOffice files on my new iPad Mini (originally Word and Excel files). Thanks!

    Can someone tell me the best way to view and edit OpenOffice files on my new iPad Mini (originally Word and Excel files)? Thanks!

    WMF is an MS Windows video file format; you will need a tool to convert the .wmf files to another file format OS X will read. Simply Google to find a converter you are interested in and use it.

  • What is the best way to export a .csv/.txt file to a MS Windows shared folder via a scheduled job?

    What is the best way to export a .csv/.txt file to a MS Windows shared folder, when the export process is a scheduled job?

    You can't do this directly, as scheduled jobs can only export to server-side data stores, which can only write to the landing area. You'll need an external script (e.g. a batch file) to move the file from the landing area after the export; you can use an external task in the scheduled job to invoke the script.
    regards,
    Nick

  • The best way to move 2TB of data from Win to Mac

    Hello,
    what is the best way for me to move 2TB (multiple files) of data from Windows to Mac?
    I think that over the LAN, even using a crossover cable connecting 1Gb NICs, it would be painfully slow. (The Windows disk is quite fragmented.)
    If I put the Windows (NTFS-partitioned) disk into one of the Mac Pro bays, will the Mac allow me to access the data on it?
    If I take the disk out of the Windows PC and put it into an eSATA hard drive docking solution (http://eshop.macsales.com/item/Newer%20Technology/FWU2ES2HDK), will the Mac be able to mount it and let me copy the files from the Windows disk?
    Any other ways of moving 2TB of data from Windows to Mac?

    Being able to read a drive also means being able to mount its volumes.
    Windows 7 64-bit (the Pro version if you have dual processors, which at this time really isn't needed that much) would work fine too.
    There are programs, most with limitations of their own, that allow Windows to read and write Mac HFS+ volumes, or the reverse, allowing Mac OS to read/write NTFS. None are what I call perfect.
    Because of limitations in older file systems and partitioning, Apple and Microsoft have implemented support for volumes larger than 2TB with GPT, replacing the aging Master Boot Record (Windows 7 can't even run off MBR volumes larger than 1.9TB). The Mac Pro uses GPT and can boot from very large volumes.
    As for eSATA docking stations: a lot depends on the interface and controller card, if any (you would not ever want to use USB, though maybe a USB3 PCIe card if you can find one), so I would use the internal drive bays instead. Drive temperature is also more of a concern in a dock; they can get very warm in my experience.
    You may or may not actually need to move the data at all; it might be enough to have it accessible read-only.
    Putting the drive in a FW800 case would probably make the most sense, both now and for later use for booting (backup clones are handy) or for backup with Time Machine.
    But I would install Windows 7 on your Mac Pro. I use GPT for data drives in Windows and read them from the Mac OS environment.

  • What is the best way to kill/stop a data load?

    Hi.
    What is the best way to kill/stop a data load?
    I have a data load from my QA R/3 system that is extracting 115.000.000+ records. The problem is that the selection in the function module used in the data source does not work, and the problem was not detected because of the nature of the data on the development system.
    I could kill the processes owned by my background user (on both R/3 and BW), but I risk killing other loads, and sometimes the job seems to restart if I just kill the processes. If I remove the transactional RFCs in SM58, the load does not terminate; I only skip one or more data packages. I have also tried changing the QM status in the monitor to red, but that does not stop the load either...
    So isn't there a nice, fool-proof way of stopping a data load?
    Best regards,
    Christian Frier

    Hi,
    There are two ways to kill the job.
    One is via transaction RSMO: locate the job and display the Status tab, then double-click the yellow light shown on the total line. A pop-up to 'set overall status' is displayed; select the desired status, i.e. red, and save it. Then return to the monitor page, select the Header tab, double-click the data target, right-click and go to 'Manage'. The faulty request should be sitting there, probably with a yellow light; highlight that line, click the delete button, then click the refresh button.
    The second is to go to SM37, click on the active selection, enter the job name and click execute. The particular job should appear; highlight the job name, then click the stop icon on the toolbar (third from the left).
    Hope it is clear.
    Regards-
    Siddhu

  • Best way to back up your data

    Which is the fastest and best way to back up your data in case of any problem? Is it still to transfer to an external HD?
    Thanks

    Hi Ferro;
    Best and simplest way to back up is Time Machine to an external drive.
    I think that any backup plan should always be to an external drive. If you back up to an internal drive and the Mac fails, what good is your backup then?
    Allan

  • Best way to transfer internal HD data in Mavericks to a Mac with Yosemite?

    Hello,
    I'm using a 2007 iMac with Mavericks and will be getting a new one, which will presumably come with Yosemite installed. What's the best way to transfer all the data from the internal HD on the old system to the new one?
    I use SuperDuper to make backups to external HDs, so if I make a bootable copy of the Mac HD to an external HD using SuperDuper, will everything function fine despite the different OSes?
    Thanks,
    Matrose.

    You can make a bootable copy of your system now, but you won't be able to boot the new computer from it; new Macs are usually not backwards compatible with an older OS. But when you first boot into the new system, use Setup Assistant to migrate the data from the cloned copy. That will work just fine.

  • What's the best way to master an MPEG-2 file for DVD Studio Pro?

    In version 4.5 of FCP I was able to use QuickTime Conversion to encode to MPEG-2. Now that option is not there. I hate using Compressor 2 to create MPEG-2 files because when I do, they always seem to stutter and I don't know why. I shot my project in HDV. It is about 64 minutes long. Any suggestions on the BEST way to master an MPEG-2 file for DVD Studio Pro? Thanks in advance!
    Peter
    PS - I'm using FCP 5

    Some people have reported a field reversal problem with HDV to SD DVD encodes. There's an Apple technical document on this issue somewhere, but the problem was supposed to be addressed by a Pro Applications update a couple of years ago... This site might help you get to the bottom of the problem.
    http://www3.telus.net/bonsai/Step-by-Step.html

  • Best way to stream lots of data to file and post process it

    Hello,
    I am trying to do something that seems like it should be quite simple but am having some difficulty figuring out how to do it. I am running a test that has over 100 channels of mixed sensor data. The test will run for several days or longer at a time, and I need to log/stream data at about 4Hz while the test is running. The data I need to log is a mixture of different data types, including a time stamp, several integer values (both 32- and 64-bit), and a lot of floating point values. I would like to write the data to file in a very compressed format because the test is scheduled to run for over a year (stopping every few days) and the data files can get quite large. I currently have a solution that simply bundles all the data into a cluster and then writes/streams the cluster to a binary file as the test runs. This approach works fine but involves some post processing to convert the data into a format, typically a text file, that can be worked with in programs like Excel or DIAdem. After the files are converted into text they are, no surprise, a lot larger (about 3 times) than the original binary file size.
    I am considering several options to improve my current process. The first option is writing the data directly to a TDMS file, which would allow me to quickly import the data into DIAdem (or Excel with a plugin) for processing/visualization. The challenge I am having (note, this is my first experience working with TDMS files and I have a lot to learn) is that I cannot find a simple way to write/stream all the different data types into one TDMS file and keep each scan of data (containing different data types) tied to one time stamp. Each time I write data to file, I would like the write to contain a time stamp in column 1, integer values in columns 2 through 5, and floating point values in the remaining columns (about 90 of them). Yes, I know there are no columns in binary files, but this is how I would like the data to appear when I import it into DIAdem or Excel.
    The other option I am considering is just writing a custom DataPlugin for DIAdem that would allow me to import the binary files I am currently creating directly into DIAdem. If someone could suggest which of these options would be best, I would appreciate it. Or, if there is a better option that I have not mentioned, feel free to recommend it. Thanks in advance for your help.

    Hello,
    Here is a simple example; of course, here I only create one value per iteration of the while loop, for simplicity. You can also set properties of the file, which can be useful, and set up different channels.
    Besides, you can use multiple groups to have more flexibility in data storage. You can think of channels as columns and groups as sheets in Excel, so that is how you will see your data when you import the TDMS file into Excel.
    I hope it helps. Of course there are much more advanced features of TDMS files; read the help docs!

  • What is the best way to copy and paste data from one book to another

    I have 18 sheets in 5 different books from which I want to extract data from specific cells. What is the best way to do this? Example: one sheet is called Numbers, E-O1, with data in 13:WXYZ. The data updates and moves up one row every time I enter a new number. So let's say I enter the number 12. Through a lot of calculations the output goes into 13:WXYZ, to what I call a counter, which is a 4-digit number. Anyway, how can I send that 4-digit number to a totally different sheet? To bullet out what I'm talking about:
    data in cells row 13:WXYZ in the book called Numbers, sheet E-O1
    send data to the book called "Vortex Numbers", sheet E-O, row 2001:CDEF
    What formula or Macro can I use to make this work?
    thank you!

    Hello Larbec,
    Syntax:
    '[BookName]SheetName'!Range
    Enter in cell 2001:CDEF:
    ='[Numbers]E-O1'!13:WXYZ
    This assumes that the file is open in Excel. Otherwise you need to add the path:
    'ThePath[BookName]SheetName'!Range
    Best regards George

  • What's the best approach to work with Excel/csv files

    Hi gurus, I have a question for you. In your experience, what's the best approach to working with Excel or csv files that have to be uploaded through DataServices to your data warehouse?
    Let's say your end user, who is not a programmer, creates a group of 4 Excel files with different calculations on a monthly basis, so they can generate a set of reports from their data warehouse once the files have been uploaded to tables in your DWH. The calculations vary from month to month. The user doesn't have a front end to upload the Excel files directly to Data Services. The end user also needs to keep track of which person uploaded the files for a given month.
    1. The end user should place their 4 excel files in a shared directory that will be seen by DataServices.
    2. DataServices will execute certain scheduled job that will read the four files and upload them to the Datawarehouse at a determined time, lets say at 9:00pm.
    It makes me wonder... what happens if the user needs to present their reports immediately and can't wait until 9:00pm? Is it possible for the end user to execute some kind of action (outside the DataServices environment) so DataServices "could know" that it has to process those files right now, instead of waiting for the nightly schedule?
    Is there a way that DS will track who was the person who uploaded those files?
    Would it be better to build a front end for the end user so they can upload their four files directly to the data warehouse?
    Waiting for your comments to resolve this dilemma
    Best Regards
    Erika

    Hi,
    There are functions in DS that capture input files automatically. You could use the file_exists() or wait_for_file() function to do that: schedule the job to run every few minutes and, if the file exists, run the load. This can be done by using a file name convention with date and timestamp etc., or by moving the old files to an archive after each run so that DS waits for new files to show up.
    Check this - Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
    Hope this helps.
    Arun

  • What is the best way to stack DAQ aquired data in labview?

    I'm developing an application to work with an M-series DAQ card and LabVIEW 8.5 to output a signal and then record on 8 differential inputs for a short period of time (~10 ms). I need to stack my data, however, because the incoming signal will be very, very small, even after amplification. So basically I'm running a slightly modified version of the Multi-Function Synch AI-AO.vi (included with the DAQmx install). What is the best way for me to rerun this VI a set number of times and add the new data directly to the old data (not concatenating or anything; like |sample 1 of run 1| + |sample 1 of run 2| = stacked sample 1)?
    A slightly modified version of the mutlifunction synch AI-AO.vi is attached.
    Attachments:
    des_v2_Multi-Function-Synch AI-AO.vi ‏143 KB

    Hi LSU,
    see the attachment on how to "stack" several measurements. I simply add the waveforms and use a shift register to keep the last iteration's value.
    Writing to files in each iteration is extremely CPU consuming - especially with Express VIs. Using for loops for just one iteration is "senseless"; you could enable the conditional terminal of the for loop to realize your stop feature.
    For your message 4:
    Have you ever tried all the things you asked for? Sometimes it's easiest to just use trial & error.
    And for the "n=n+x" question: it really helps to take the free online courses offered by NI!
    Message Edited by GerdW on 11-11-2009 06:27 PM
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
    Attachments:
    des_v2_Multi-Function-Synch AI-AO.vi ‏128 KB

  • Best strategy to upload a big data file and then insert its content in the db

    Hi,
    Here's our requirement. We have a web business application developed on JSF 1.2 and Java EE 6, with WebLogic as the application server and Oracle as the data back-end tier. We need to upload big data files (80 to 100 MB) from a web page and persist the content in database tables.
    What's the best way to implement this use case in terms of performance? Once the file is uploaded to the server, a command button on the web page triggers a JSF controller action to save the data in the database.
    Currently we plan to keep the content of the HTTP request in memory and call an insert for each line of the file, but I think that is bad and not scalable.
    Is it better to write the file to the server's disk and then use multiple threads to send the lines to the database? How would one use multithreading in a JSF managed bean?
    Thanks

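    One way to avoid the per-line inserts, assuming the uploaded file can be written to the server's disk first: read it with UTL_FILE and batch the inserts with FORALL, so the database sees one bulk bind per batch instead of one round trip per line. A rough sketch; UPLOAD_DIR, upload.csv, and staging_table are made-up names:

    DECLARE
      TYPE t_lines IS TABLE OF VARCHAR2(4000);
      l_lines t_lines := t_lines();
      l_file  UTL_FILE.FILE_TYPE;
      l_line  VARCHAR2(4000);

      PROCEDURE flush_batch IS
      BEGIN
        FORALL i IN 1 .. l_lines.COUNT
          INSERT INTO staging_table (raw_line) VALUES (l_lines(i));
        COMMIT;
        l_lines := t_lines();                -- reset the batch
      END;
    BEGIN
      l_file := UTL_FILE.FOPEN('UPLOAD_DIR', 'upload.csv', 'r', 4000);
      BEGIN
        LOOP
          UTL_FILE.GET_LINE(l_file, l_line); -- raises NO_DATA_FOUND at end of file
          l_lines.EXTEND;
          l_lines(l_lines.COUNT) := l_line;
          IF l_lines.COUNT = 10000 THEN      -- one bulk insert per 10,000 lines
            flush_batch;
          END IF;
        END LOOP;
      EXCEPTION
        WHEN NO_DATA_FOUND THEN NULL;        -- end of file reached
      END;
      flush_batch;                           -- insert any remaining lines
      UTL_FILE.FCLOSE(l_file);
    END;
    /

    Once the work is batched on the database side, multithreading in the JSF managed bean is usually unnecessary.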

  • What's the best way to get a big text file into a CLOB variable?

    Hi,
    I have a very primitive type of question.
    I have a big text file, say 30 MB, in a Unix directory (Solaris). I want to get the whole text of that file into a CLOB variable.
    I saw the procedure dbms_lob.loadfromfile/loadclobfromfile. In both cases, according to the documentation, the target that collects the data will be a BLOB and not a CLOB, so I would have to convert that BLOB into a CLOB.
    If I want to avoid all this conversion, what's the best way to get the text from a file into a CLOB variable?
    Please suggest.
    Regards

    In addition, LoadFromFile is overloaded to handle both BLOB and CLOB:
    PROCEDURE LOADFROMFILE
    Argument Name                  Type                    In/Out Default?
    DEST_LOB                       BLOB                    IN/OUT
    SRC_LOB                        BINARY FILE LOB         IN
    AMOUNT                         NUMBER(38)              IN
    DEST_OFFSET                    NUMBER(38)              IN     DEFAULT
    SRC_OFFSET                     NUMBER(38)              IN     DEFAULT

    PROCEDURE LOADFROMFILE
    Argument Name                  Type                    In/Out Default?
    DEST_LOB                       CLOB                    IN/OUT
    SRC_LOB                        BINARY FILE LOB         IN
    AMOUNT                         NUMBER(38)              IN
    DEST_OFFSET                    NUMBER(38)              IN     DEFAULT
    SRC_OFFSET                     NUMBER(38)              IN     DEFAULT
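
    On 10g and later there is also DBMS_LOB.LOADCLOBFROMFILE, which reads straight into a CLOB and does the character-set conversion for you, so no BLOB-to-CLOB step is needed at all. A minimal sketch; the directory object DATA_DIR and the file name are placeholders:

    DECLARE
      l_clob     CLOB;
      l_bfile    BFILE   := BFILENAME('DATA_DIR', 'bigfile.txt');
      l_dest_off INTEGER := 1;
      l_src_off  INTEGER := 1;
      l_lang     INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
      l_warn     INTEGER;
    BEGIN
      DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);
      DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);

      -- Load the whole file; the CLOB variant converts from the file's
      -- character set (DEFAULT_CSID = the database character set) as it copies
      DBMS_LOB.LOADCLOBFROMFILE(
        dest_lob     => l_clob,
        src_bfile    => l_bfile,
        amount       => DBMS_LOB.LOBMAXSIZE,
        dest_offset  => l_dest_off,
        src_offset   => l_src_off,
        bfile_csid   => DBMS_LOB.DEFAULT_CSID,
        lang_context => l_lang,
        warning      => l_warn);

      DBMS_LOB.FILECLOSE(l_bfile);
      DBMS_OUTPUT.PUT_LINE('Loaded ' || DBMS_LOB.GETLENGTH(l_clob) || ' characters');
    END;
    /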

  • Best way to manage global application data

    Hello,
    I'm looking for the best way to manage my global application data. I have a program containing about 70 application settings which are loaded from an INI file and organised as clusters and loose variables. I want to be able to use these settings across multiple VIs. The application settings can be modified during execution; some VIs can change settings, and these changes need to be visible across all VIs.
    What is the best way to manage this? Ideally, I would have one cluster or class containing all the application settings, but since the sub-VIs run independently, I can't wire it through.
    From what I've read so far, global variables aren't really well suited for this due to race conditions. An FGV might be a solution, but it is not clear to me how to implement this for many variables. (One FGV or multiple FGVs, etc.? Does anyone have a good example of this?)
    Are there any other good solutions? Any advice is welcome!
    Best regards,
    Wouter

    You can also create a singleton LVOOP object (maybe a few, if you want to logically organize your settings). When you initialize the system you create the object, and it uses a DVR internally to store your data. Since this is a singleton object, you do not need to wire it through your code. Your subVIs simply call the appropriate methods, and these interact with the class data. An FGV is basically the same thing, except that you only have a single VI, so you are more limited in your inputs and outputs. A singleton object lets you refine the connector panes based on the methods and what they are doing.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
