Export BLOB to file system folder in Oracle 8i

Hi Folks,
I want to export the DOC and ZIP files stored in a BLOB column in Oracle to a folder on my desktop or on the network. Can anyone please suggest the easiest way?
Thanks

For 8i, I think you will have to rely on Java.
http://www.oracle-base.com/articles/8i/ExportBlob.php
For 9i and above, you can use the built-in UTL_FILE package.
http://www.oracle-base.com/articles/9i/ExportBlob9i.php
Cheers
Sarma.
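
To illustrate the 9i-and-above route, here is a minimal sketch of a UTL_FILE export, assuming a directory object BLOB_DIR (9iR2 and later; or a path listed in UTL_FILE_DIR) and a hypothetical docs table; see the linked articles for the full treatment:

declare
  l_blob   blob;
  l_file   utl_file.file_type;
  l_buffer raw(32767);
  l_amount integer := 32767;
  l_pos    integer := 1;
  l_len    integer;
begin
  -- docs/doc_blob/doc_name are placeholders; point this at your own table
  select doc_blob into l_blob from docs where doc_name = 'example.zip';
  l_len := dbms_lob.getlength(l_blob);
  -- 'wb' (binary write mode) keeps DOC/ZIP content byte-exact
  l_file := utl_file.fopen('BLOB_DIR', 'example.zip', 'wb', 32767);
  while l_pos <= l_len loop
    dbms_lob.read(l_blob, l_amount, l_pos, l_buffer);
    utl_file.put_raw(l_file, l_buffer, true);
    l_pos := l_pos + l_amount;
  end loop;
  utl_file.fclose(l_file);
end;
/

Keep in mind that UTL_FILE writes on the database server's file system, so the desktop or network folder has to be reachable from the server (for example, a share mounted on the server); UTL_FILE cannot write to a client machine directly.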

Similar Messages

  • Export a NAS File System within Sun Cluster

    Could someone please tell me how I can export a NAS file system within a Sun Cluster?

    Thanks, I got it working successfully.
    The final procedure:
    1. Create the disks in Veritas (not the volumes).
    2. If you add a new disk, recreate the DID devices:
    scdidadm -L
    scdidadm -r (must be executed on all nodes)
    scgdevs (on all nodes)
    3. Disable the resources using scswitch -n -j <resource>. In my case we disabled every resource but NOT the resource group (executed on one node only). Then make the new file system with newfs (on one node). :-D
    4. Create the new mount point on all nodes.
    5. Insert the new line in vfstab (remember to set "mount at boot" to no).
    scrgadm -pvv | grep "FilesystemMountPoints="
    6. You must put the "FilesystemMountPoints=" entries in the same order as in vfstab. I used SunPlex Manager (easier).
    Re-enable your resources online.
    Test switching over to another node.
    The new file system should be mounted automatically.
    Bye, and thanks for the help.

  • Export a 500GB database to a 100GB file system space in Oracle 10g

    Hi All,
    Please let me know the procedure to export a 500GB database into 100GB of file system space.

    user533548 wrote:
    Hi Linda,
    The database version is 10g and the OS is Linux. Can we use the FILESIZE parameter for the export? Please advise on this.

    FILESIZE will limit the size of each file in case you specify multiple dump files. You could also specify multiple dump directories (on different file systems) when given multiple dump files.
    For instance:
    dumpfile=dump_dir1:file1,dump_dir2:file2,dump_dir3:file3...
    Nicolas.
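
    The same multi-directory idea can also be driven from PL/SQL with DBMS_DATAPUMP; here is a hedged sketch, where DUMP_DIR1 and DUMP_DIR2 are hypothetical directory objects pointing at different file systems:

    declare
      l_job number;
    begin
      l_job := dbms_datapump.open(operation => 'EXPORT', job_mode => 'FULL');
      -- cap each piece with filesize and spread the pieces across file systems
      dbms_datapump.add_file(l_job, 'file1.dmp', 'DUMP_DIR1', filesize => '50G');
      dbms_datapump.add_file(l_job, 'file2.dmp', 'DUMP_DIR2', filesize => '50G');
      dbms_datapump.start_job(l_job);
      dbms_datapump.detach(l_job);
    end;
    /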

  • SC 3.0 file system failover for Oracle 8i/9i

    I'm an Oracle DBA for our company, and we have been using shared NFS mounts successfully for the archivelog space on our production 8i two-node OPS databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
    Our SA team is now wanting to change this to a file system failover configuration instead. And I do not find any information from Oracle about it.
    The SA request states:
    "The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
    My question is, does anyone have experience with this kind of configuration with 8iOPS or 9iRAC? Are there any issues with the auto-moving of the archivelog space from the failed node over to the remaining node, in particular when the failure occurs during a transaction?
    Thanks for your help ...
    -j

    The problem with your setup of NFS cross-mounting a filesystem (which might have been a recommended solution in SC 2.x, whereas in SC 3.x you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
    Before this goes up in flames, let me speak from real world experience.
    Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space, or HA archive log space. If you use NFS to cross mount it (either hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or if the machine goes down unexpectedly due to a panic, etc). At that point, we had only two options : bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case any attempt at failover will fail because you're trying to mount an actual physical filesystem on a stale NFS mount on the surviving node.
    We tried to work this out using many different NFS options, we tried to use automount, we tried to use local_mountpoints then automount to the correct home (e.g. /filesystem_local would be the phys, /filesystem would be the NFS mount where the activity occurred) and anytime the node hosting the NFS share went down unexpectedly, you'd have a temporary hang due to the conditions listed above.
    If you're implementing SC 3.x, use hasp and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for archive logs or is there a sequence numbering issue if you run private archive logs on both sides - or is sequencing just an issue with redo logs? In either case, if you're using rman, you'd have to back up the redologs and archive log files on both nodes, if memory serves me correctly...

  • Decision on File system management in Oracle+SAP

    Hi All,
    In my production system we have /oracle/SID/sapdata1 and /oracle/SID/sapdata2. Initially many datafiles were assigned to the tablespace PSAPSR3, a few with AUTOEXTEND ON and a few with AUTOEXTEND OFF. As per my understanding, DB02 shows the information tablespace-wise only: it reports AUTOEXTEND ON as soon as at least one of the datafiles has AUTOEXTEND ON (a dictionary query after this post shows the per-file picture). In PSAPSR3 all the datafiles with AUTOEXTEND ON are from sapdata1, which has only 50 GB left. All the files with AUTOEXTEND OFF are from sapdata2, which has 900 GB of space left.
    Now the question is :
    1. Do I need to request additional space for sapdata1, as some of the datafiles are at the edge of their autoextend limits and that much space is not left in the file system (sapdata1)? If so, how will they extend? DB growth is 100GB per month.
    2. We usually were adding 10 GB datafiles to the tablespace with 30GB as the autoextend limit.
    Can we add another datafile, this time from sapdata2 with AUTOEXTEND ON, so the rest will be taken care of automatically? (A statement sketch follows the reply below.)
    Please suggest.
    Regards,
    VIcky
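
    A read-only dictionary query can show the per-datafile autoextend picture that DB02 aggregates (PSAPSR3 as named in this thread):

    select file_name,
           autoextensible,
           round(bytes/1024/1024/1024)    size_gb,
           round(maxbytes/1024/1024/1024) max_gb
    from   dba_data_files
    where  tablespace_name = 'PSAPSR3'
    order  by file_name;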

    Hi Vicky,
    As you have 100GB/month growth, the suggestions here would be:
    1) Add 2 more mount points, sapdata3 and sapdata4, with around 1 TB of space.
       This distributes data across 4 data partitions for better performance.
    2) As sapdata1 has datafiles with autoextend ON, you need to extend the file system to at least 500 GB so that whenever data is written to datafiles under sapdata1, they have space to grow using the autoextend feature. Without sufficient disk space this may lead to space problems, and transactions may end in a dump.
    3) No need to change anything on sapdata2 as you already have 900GB free space
    Hope this helps.
    Regards,
    Deepak Kori
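
    As a sketch of the second question above, adding a datafile on sapdata2 with autoextend on could look like the following statement; the file path and numbering are assumptions, and on an SAP system this would normally be done through BR*Tools (brspace), which issues an equivalent statement:

    alter tablespace psapsr3
      add datafile '/oracle/SID/sapdata2/sr3_99/sr3.data99' size 10g
      autoextend on next 100m maxsize 30g;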

  • Export BLOB into Files

    Hi, I want to export BLOB data into normal files in the OS. The database is not on my local server; I can only connect through TNS. Would you please give me some suggestions? Thanks a lot!

    create or replace procedure blob_to_file (
      p_blob in blob,
      p_dir  in varchar2, -- Oracle directory object on the database server
      p_file in varchar2
    ) is
      l_output utl_file.file_type;
      l_amt    number default 32000;
      l_offset number default 1;
      l_length number default nvl(dbms_lob.getlength(p_blob),0); -- l_length = size of blob
    begin
      -- open in binary mode ('wb'); text mode plus cast_to_varchar2 would
      -- corrupt binary content such as DOC or ZIP files
      l_output := utl_file.fopen(p_dir, p_file, 'wb', 32760);
      -- note '<=' so the last partial chunk is not dropped
      while ( l_offset <= l_length ) loop
        -- dbms_lob.substr on a BLOB returns raw, which put_raw writes unaltered
        utl_file.put_raw(l_output, dbms_lob.substr(p_blob, l_amt, l_offset), true);
        l_offset := l_offset + l_amt;
      end loop;
      utl_file.fclose(l_output);
    end;
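
    A minimal call of the procedure above, assuming a directory object EXPORT_DIR and a hypothetical docs table; note that UTL_FILE writes on the database server, which is why it cannot, by itself, deliver files to a client that only connects over TNS:

    declare
      l_blob blob;
    begin
      select doc_blob into l_blob from docs where id = 1;
      blob_to_file(l_blob, 'EXPORT_DIR', 'doc1.doc');
    end;
    /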

  • How to export repository to file system?

    Hello,
    Within my bachelor thesis I am parsing ABAP Objects to calculate different code metrics. The analysis tool and many of the known parser APIs require source files on the file system layer.
    As SAP stores everything in a DB, I wonder if there is a way to export the repository to the file system. And if this is not possible, is there another practicable way to access the source code?
    Another point: does anyone know a source for a complete grammar (like EBNF or so) for ABAP Objects? That could make my life quite a lot easier...
    mfg
    - clemens heppner

    Thanks for your advice! I've written this report:
    START-OF-SELECTION.
      DATA: source     TYPE REF TO cl_oo_source,
            source_tab TYPE seop_source_string,
            classname  TYPE seoclskey VALUE 'CL_OO_SOURCE'.
      CREATE OBJECT source EXPORTING clskey = classname.
    * Copy source->source because gui_download( ) needs a changeable
    * argument and source->source is read-only.
      source_tab = source->source.
      cl_gui_frontend_services=>gui_download(
        EXPORTING filename = 'c:\source.ao'
        CHANGING  data_tab = source_tab ).
    But that approach has some serious drawbacks:
    I have to save each class one by one (is there a way to automate that, e.g. find all classes in one package?).
    I cannot access the code of reports and the like, because cl_oo_source requires a class name.
    Is there another way or an extension to this one, to solve those problems?
    Thanks,
    - clemens

  • Extract a BLOB to the file system

    Hello,
    I've saved some files (PDFs, DOCs) in a BLOB field with the Initialize_Container built-in.
    How can I extract these files back to the file system (the inverse operation)?
    Thanks. Regards.

    It depends on what OLE automation interfaces Acrobat Reader supports - you may need the full version of acrobat to do any decent manipulation.

  • Can an EFS (Encrypting File System) folder work when cloning a Windows 7 PC?

    Dear All,
    I am using EFS (Encrypting File System) on a Windows 7 notebook to encrypt a folder.
    I would like to clone this notebook to several other notebooks.
    Will EFS still work on the cloned PCs?

    Hi,
    As far as I know, after cloning the system the SID may change, so the EFS folder may no longer open.
    But if you have the EFS certificate, you can still use the folder with no issue.
    In theory EFS can work on the cloned PC; please back up the certificate first.
    Back up Encrypting File System (EFS) certificate:
    http://windows.microsoft.com/en-in/windows/back-up-efs-certificate#1TC=windows-7
    Hope it helps.
    Regards,
    Blair Deng
    TechNet Community Support

  • Need help with file system creation for Oracle DB installation

    Hello,
    I am new to the Solaris/Unix system landscape. I have a Sun Enterprise 450 with an 18GB hard drive. It has Solaris 9 on it and no other software at this time. I am planning on adding 2 more hard drives, 18GB and 36GB, to accommodate an Oracle DB.
    Recently I went through the Solaris intermediate sysadmin training; I know the basic stuff but am not fully confident to carry out the task on my own.
    I would appreciate it if someone could help me with the sequence of steps that I need to perform to:
    1. recognize the new hard drives in the system,
    2. format them,
    3. partition them. What is the normal strategy for partitioning? My current thinking is to have the 36+18GB drives as data drives. This is where I am a little bit lost. Can I make an entire 36GB drive one slice for data? I am not quite sure how this is done in real life; I need your help.
    4. create the file system to store the database files.
    Any help would be appreciated.

    Hello,
    Here is a rough idea for HA from my experience.
    The important thing is that the binaries required to run SAP
    are accessible before and after switchover.
    In this respect the file system type doesn't matter,
    but SAP may recommend certain filesystems on Linux;
    please refer to the SAP installation guide.
    I always use reiserfs or ext3.
    For soft links I recommend you refer to the SAP installation guide.
    In your configuration the files related to SCS and the DB are the key.
    Again, those files must be accessible both from hostA and from hostB.
    The easiest way is to share these files via NFS or another shared file system
    so that both nodes can access them,
    and let the clustering software mount and unmount those directories.
    DB binaries, data and logs should be placed on the shared storage subsystem
    (ex. /oracle/*).
    SAP binaries, profiles and so on should be placed on shared storage as well
    (ex. /sapmnt/*).
    You may want to place the binaries on local disk to make sure they are
    always accessible at the OS level, even if the connection to the storage
    subsystem is lost.
    In that case you have to sync the binaries on both nodes manually;
    the easiest way is just to put them on shared storage and mount them!
    Furthermore you can use the sapcpe function to sync the necessary binaries
    from /sapmnt to /usr/sap/<SID>.
    For your last question: /sapmnt should be located on the storage subsystem,
    and don't let the storage go down!

  • Writing/Exporting BLOB to the user's local folder

    Hi,
    Apex 3.2 on Linux 4.7.
    I have a requirement to export/write all the BLOB content in a table to the user's local folder using APEX. The BLOB contents were uploaded using APEX as well. The page contains only a button that, when clicked, will export/write the BLOB contents to files. The table contains around 500 BLOB records and we want to write/export each record to a file in the local folder (one record to one file) at once.
    I tried the wpg_docload.download_file function, but users complain due to the number of files they need to open/save.
    I tried using the SYS.UTL_FILE package, but it fails since the database directory it takes as a parameter must be on the database server and not on the user's local machine.
    I also tried using the Java BLOB handler from oracle-base.com but I can't make it work.
    Any suggestion is highly appreciated.
    Regards
    Aries

    Did you bother searching the forum before posting?
    See +{thread:id=1115748}+ for an implementation of Jari's suggestion.
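
    For reference, the per-file wpg_docload pattern mentioned above usually looks like the sketch below (table and column names are assumptions); since it serves one file per request, a one-click export of 500 files needs a different approach, such as the one in the linked thread:

    create or replace procedure download_doc (p_id in number) is
      l_blob blob;
    begin
      select doc_blob into l_blob from docs where id = p_id;
      owa_util.mime_header('application/octet-stream', false);
      htp.p('Content-Disposition: attachment; filename="doc_'||p_id||'.bin"');
      owa_util.http_header_close;
      wpg_docload.download_file(l_blob);
    end;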

  • Create a file system directory (not an Oracle directory object)

    I want to create a directory on the file system, for example /test/directory1.
    I want to create a directory tree, for example:
    /test/directory1
    /test/dir2
    /test/dir2/folder
    as if by a Linux mkdir.
    Is it possible with SQL?
    Thanks
    Greetings

    I need Java (JServer) installed in Oracle, no?
    To load this into the database:
    declare
      f  file_type;
      fz file_type;
      r  number;
    begin
      -- get a handle for the "tmp" directory
      f := file_pkg.get_file('/tmp');
      -- create a new temporary directory where the zip archive is being
      -- extracted into ... make the directory name unique using a timestamp
      fz := f.create_dir(
        'zipdir_temp_'||user||'_'||to_char(systimestamp, 'YYYYMMDD_HH24MISS.SSSS')
      );
      -- extract the zipfile; the -qq switch is very important here - otherwise
      -- the OS process will not come back
      r := os_command.exec('unzip -o -qq [PATH/TO/ZIPFILE] -d '||fz.file_path);
      -- if the result is 0 (= success) load the contents of the temporary directory
      -- (recursively) with ONE (!) SQL INSERT command
      if r = 0 then
        insert into document_table
        select
          seq_documents.nextval id,
          e.file_path,
          e.file_name,
          file_pkg.get_file(e.file_path).get_content_as_clob('iso-8859-1') content
        from table(file_pkg.get_recursive_file_list(fz)) e;
      end if;
      -- finally delete the temporary directory and its contents
      fz := fz.delete_recursive();
    end;
    /
    sho err
    And how do I install JServer?
    Edited by: rafa298 on 05-may-2011 2:22
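
    As a minimal sketch of the underlying idea: once the Java option (JServer / Oracle JVM) has been installed by the DBA (historically via $ORACLE_HOME/javavm/install/initjvm.sql), a directory tree can be created from SQL with a small Java stored procedure. The names below are illustrative, and the schema also needs a java.io.FilePermission granted via DBMS_JAVA.GRANT_PERMISSION:

    create or replace and compile java source named "DirUtil" as
    import java.io.File;
    public class DirUtil {
        // create the directory and any missing parents, like "mkdir -p"
        public static int mkdirs(String path) {
            return new File(path).mkdirs() ? 1 : 0;
        }
    }
    /
    create or replace function mkdirs (p_path in varchar2) return number
    as language java name 'DirUtil.mkdirs(java.lang.String) return int';
    /
    -- returns 1 on success:
    select mkdirs('/test/dir2/folder') from dual;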

  • Is it possible to change the file system owner in Oracle EBS R12.1.3?

    We are on R12.1.3 running on Linux 5. The OS owners of the EBS are orcrp2ebs and apcrp2ebs; because of the 9-character length, ps -ef shows the UID instead of the complete username. So we are planning to change the owner names to 8 characters.
    What post steps do we need to take care of on the EBS database and application side after changing the owners at the Unix level?

    953216 wrote:
    We are on R12.1.3 running on Linux 5. The OS owners of the EBS are orcrp2ebs and apcrp2ebs; because of the 9-character length, ps -ef shows the UID instead of the complete username. So we are planning to change the owner names to 8 characters. What post steps do we need to take care of on the EBS database and application side after changing the owners at the Unix level?

    - Stop the application services
    - Change the ownership from the OS
    - Change the value of the "s_dbuser" and "s_dbgroup" context variables in the database context file
    - Run AutoConfig on the database tier node
    - Change the value of "s_appsuser" and "s_appsgroup" in the application context file
    - Run AutoConfig on the application tier node
    - Start the application services
    Thanks,
    Hussein

  • Oracle eSSO - File System Synchronization

    I'm having a lot of trouble just setting up a simple file system synchronization for Oracle Enterprise Single Sign-On v10.1.0.2 or v10.1.0.4 (I've tried both). I've tried it many ways, but the agent never seems to be visible to the admin console. Here is the simplest example:
    On a single test machine, install the Admin Console. Use the Admin Console to create a new set of Global Agent Settings. Leave everything at default except, under Synchronization, click Manage Synchronizers. Create a new synchronizer with sync type 'File System'. Go to the Required subkey and enter a network share under 'Server'. In this example, I created a share on the same local machine; the share gives 'Full Control' to the 'Everyone' and 'Domain Users' groups. Save these settings in the Admin Console. Then go to Tools > Generate Customized MSI in the Admin Console, exporting the newly created Global Agent Settings to the customized agent installer MSI. Then, on the same test machine, install the agent using the customized MSI. During the agent installation, choose 'Custom' and enable installation of Extensions > Synchronization Manager and the File System synchronization type.
    So, that's what I've done. When I then go back to the Admin Console, connect to the repository (the share set up on the local machine) and select 'Configure SSO Support', the result is that it creates a folder called 'People' and two override objects: ENTLIST and ADMINOVERRIDE. That's it. No matter how many agents I install, on this computer or elsewhere on the network: 1) the admin console does not seem to see them, and 2) the ENTLIST of customized applications never gets pushed to the agents. In fact, the ENTLIST shown in the Repository within the Admin Console does not seem to update when the apps list in the admin console is updated (even if I press 'Refresh' in the repository).
    Can anyone help? Is it something silly like the order in which I've done things, or did I miss a step somewhere?

    Hi,
    I have the same problem, but I use Active Directory in the backend :o(
    I'm able to configure the admin console (connection to AD, create a global agent for my AD server). When I install the ESSO agent on the same machine (where the administrative console is already installed) it's quite easy, because the following option is available: "Export Apps to Agent"!!! Using this, there is no problem.
    On the other hand, when I install the ESSO agent on the user's workstation it's really another story :o( For the moment, I'm not able to get the credentials (pre-configured applications) from Active Directory. I think that the (Global Agent) settings have been successfully pushed into AD, but I'm not able to get them from the user's workstation.
    What are the steps to realize this operation (get the pre-configured applications from AD)?
    I always get a connection window which requires the following three parameters: username, password and user path.
    Because AD is my backend server, I thought that the following values were correct, but that's not the case, and I'm sure that I'm not connected to AD:
    username: cn=toto toto,cn=users,dc=name,dc=com
    password: mypwd
    user path: dc=name,dc=com
    I hope that you have resolved your problem, and that someone could help me :o)
    Thanks a lot.
    c u

  • SAP/ORACLE File Systems Disappeared in Local Zone

    I created some 20-30 file systems as metadevices for a SAP/Oracle installation on my 6320 SAN for a local zone.
    I did a newfs on all of them and then mounted them in the global zone with their appropriate mount points.
    I then did a zonecfg with the following input.
    zonepath: /export/zones/zsap21
    autoboot: true
    pool:
    inherit-pkg-dir:
    dir: /lib
    inherit-pkg-dir:
    dir: /platform
    inherit-pkg-dir:
    dir: /sbin
    inherit-pkg-dir:
    dir: /opt/sfw
    inherit-pkg-dir:
    dir: /usr
    fs:
    dir: /oracle
    special: /oracle_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/stage/920_64
    special: /oracle/stage/920_64_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /temp
    special: /temp_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/local
    special: /usr/local_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap
    special: /usr/sap_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap/install
    special: /usr/sap/install_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap/trans
    special: /usr/sap/trans_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /export/home/zsap21
    special: /export/home_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1
    special: /oracle/FS1
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/920_64
    special: /oracle/FS1/920_64
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/mirrlogA
    special: /oracle/FS1/mirrlogA
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/oraarch
    special: /oracle/FS1/oraarch
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/origlogA
    special: /oracle/FS1/origlogA
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/origlogB
    special: /oracle/FS1/origlogB
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/saparch
    special: /oracle/FS1/saparch
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/sapdata1
    special: /oracle/FS1/sapdata1
    raw not specified
    type: lofs
    options: []
    ***********more available but I truncated it here**********************
    I successfully installed and configured the zone.
    I then logged into the local zone and installed a new version of Oracle and a working SAP instance. I know that it worked because I was able to log into it.
    Upon rebooting the server and the local zone, I have lost access to all of my file systems in the Oracle/SAP zone.
    It's almost as if the global zone has not mounted its file systems over the local zone's mount points. In the local zone I can see no files in the directories where I would expect Oracle or SAP files.
    I either misconfigured the zonecfg or missed a step somewhere.
    I suspect that my file system contents are still around somewhere, waiting to be hooked up with a zone's mount point.
    In the local zone a df -k shows all of the file systems (as if the local zone knows which file systems belong to it), but they all have the same size and free space (probably from the root zone).
    Any thoughts appreciated.
    Atis

    Do you have a mount point within the zone path for these oradb mounts?
    You have to add a directory entry for /oradb in the zone's root directory, i.e. <zone_path>/root/oradb.
