Small file to big file

Hi All,
Can anyone tell me how to convert a smallfile tablespace to a bigfile tablespace? Below is my database version:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
PL/SQL Release 10.2.0.2.0 - Production
CORE 10.2.0.2.0 Production
TNS for Solaris: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production
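
As far as I know, 10g has no in-place conversion of an existing smallfile tablespace to a bigfile tablespace; the usual route is to create a new bigfile tablespace and move the segments into it (or use Data Pump / DBMS_REDEFINITION). A minimal SQL sketch, with hypothetical names (DATA_SMALL is the existing tablespace, DATA_BIG its replacement, and the datafile path is made up):

-- create the bigfile replacement (a bigfile tablespace has exactly one datafile)
CREATE BIGFILE TABLESPACE data_big
  DATAFILE '/u01/oradata/ORCL/data_big01.dbf' SIZE 10G AUTOEXTEND ON;

-- relocate each segment, then rebuild any indexes left unusable by the move
ALTER TABLE scott.emp MOVE TABLESPACE data_big;
ALTER INDEX scott.pk_emp REBUILD TABLESPACE data_big;

-- once the old tablespace is empty, drop it
DROP TABLESPACE data_small INCLUDING CONTENTS AND DATAFILES;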


Similar Messages

  • XI Tunneling - File to File scenario - BIG Files

    We have a big file that we are going to drop using XI.
    I have read that we can do tunnelling in XI to improve the performance since we do not have any mappings in XI.
    How does this work? Please suggest....
    John

    I cannot split the file into multiple files. Our files can go up to 150 MB.
    Do I have any other option without splitting the file? How exactly can tunneling help in this case?
    What other option do I have, given that the file is created in SAP and needs to go to another UNIX server to be processed?
    I initially thought I could use XI to transfer it, but now it causes a big overhead because the File Adapter is very slow.

  • Big File vs Small file Tablespace

    Hi All,
    I have a doubt and just want to confirm which is better: using one bigfile tablespace instead of many small datafiles, or a few big datafiles, for a tablespace. I think it is better to use a bigfile tablespace.
    Kindly help me out on whether I am right or wrong, and why.

    GirishSharma wrote:
    Aman.... wrote:
    Vikas Kohli wrote:
    With respect to performance i guess Big file tablespace is a better option
    Why ?
    If you will allow me, I would like to paste the text below from the documentation link in my first reply:
    "Performance of database opens, checkpoints, and DBWR processes should improve if data is stored in bigfile tablespaces instead of traditional tablespaces. However, increasing the datafile size might increase time to restore a corrupted file or create a new datafile."
    Regards
    Girish Sharma
    Girish,
    I find it interesting that I've never found any evidence to support the performance claims - although I can think of reasons why there might be some truth to them and could design a few tests to check. Even if there is some truth in the claims, how significant or relevant might they be in the context of a database that is so huge that it NEEDS bigfile tablespaces ?
    Database opening:  how often do we do this - does it matter if it takes a little longer - will it actually take noticeably longer if the database isn't subject to crash recovery ?  We can imagine that a database with 10,000 files would take longer to open than a database with 500 files if Oracle had to read the header blocks of every file as part of the database open process - but there's been a "delayed open" feature around for years, so maybe that wouldn't apply in most cases where the database is very large.
    Checkpoints: critical in the days when a full instance checkpoint took place on the log file switch - but (a) that hasn't been true for years, (b) incremental checkpointing made a big difference to the I/O peak when an instance checkpoint became necessary, and (c) we have had a checkpoint process for years (if not decades) which updates every file header when necessary rather than requiring DBWR to do it.
    DBWR processes: why would DBWn handle writes more quickly - the only idea I can come up with is that there could be some code path that has to associate a file id with an operating system file handle of some sort and that this code does more work if the list of files is very long: very disappointing if that's true.
    On the other hand, I recall many years ago (8i time) crashing a session while creating roughly 21,000 tablespaces for a database, because some internal structure relating to file information reached the 64MB hard limit for a memory segment in the SGA. It would be interesting to hear if anyone has recently created a database near the 65K+ file limit - and whether it makes any difference whether that's 66 tablespaces with about 1,000 files each, or 1,000 tablespaces with about 66 files each.
    Regards
    Jonathan Lewis
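    (A quick way to see which tablespaces in a given database are bigfile, and how many datafiles each one carries, is to query the data dictionary; the sketch below is only illustrative and assumes access to the DBA_* views.)
    SELECT t.tablespace_name,
           t.bigfile,
           COUNT(f.file_id) AS datafiles
    FROM   dba_tablespaces t
           LEFT JOIN dba_data_files f
                  ON f.tablespace_name = t.tablespace_name
    GROUP  BY t.tablespace_name, t.bigfile
    ORDER  BY t.tablespace_name;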

  • Photoshop CC slow in performance on big files

    Hello there!
    I've been using PS CS4 since release and upgraded to CS6 Master Collection last year.
    Since my OS broke down some weeks ago (the RAM failed), I gave Photoshop CC a try. At the same time I moved into new rooms and couldn't get my hands on the DVD of my CS6, which is resting somewhere at home...
    So I tried CC.
    Right now I'm using it with some big files. File size is between 2 GB and 7.5 GB max (all PSB).
    Photoshop seemed to run fast in the very beginning, but for a few days now it has been so unbelievably slow that I can't work properly.
    I wonder if it is caused by the growing files or by some other issue with my machine.
    The files contain a large number of layers and masks, nearly 280 layers in the biggest file (mostly with masks).
    The images are 50 x 70 cm at 300 dpi.
    When I try to make some brush strokes on a layer mask in the biggest file, it takes 5-20 seconds for the brush to draw... I couldn't figure out why.
    And it doesn't depend on the brush size as much as you might expect... even very small brushes (2-10 px) show this issue from time to time.
    Also, switching masks on and off (gradient maps, selective color or levels) takes ages to be displayed, sometimes more than 3 or 4 seconds.
    The same goes for panning around in the picture, zooming in and out, or moving layers.
    It's nearly impossible to work on these files in a reasonable time.
    I've never seen this in CS6.
    Now I wonder if there's something wrong with PS or the OS. But I've never worked with files this big before.
    In March I worked on some 5 GB files with 150-200 layers in CS6, and it worked like a charm.
    SystemSpecs:
    i7 3930K (3.8 GHz)
    Asus P9X79 Deluxe
    64GB DDR3 1600Mhz Kingston HyperX
    GTX 570
    2x Corsair Force GT3 SSD
    Wacom Intuos 5 M Touch (I have some issues with the touch from time to time)
    WIN 7 Ultimate 64
    all system updates
    newest drivers
    PS CC
    System and PS are running on the first SSD, scratch is on the second. Both are set to be used by PS.
    79% of the RAM is allocated to PS, cache levels are set to 5 or 6, and history states ("protocol objects") are set to 70. I also tried different cache tile sizes from 128K to 1024K, but it didn't help a lot.
    When I open the largest file, PS takes 20-23 GB of RAM.
    Any suggestions?
    best,
    moslye

    Is it just slow drawing, or is actual computation (image size, rotate, GBlur, etc.) also slow?
    If the slowdown is drawing, then the most likely culprit would be the video card driver. Update your driver from the GPU maker's website.
    If the computation slows down, then something is interfering with Photoshop. We've seen some third party plugins, and some antivirus software cause slowdowns over time.

  • I was loading a big file of photos from iPhoto into Adobe Photoshop CS3 and it keeps crashing, yet each time I reopen Photoshop it loads the photos again and crashes again. Is there a way to stop this cycle?

    I was loading a big file of photos from iPhoto into Adobe Photoshop CS3 and more than midway through it keeps crashing, yet each time I re-open Photoshop to load a small number of photos, it loads the previous photos again, and again it crashes. Is there a way to stop this cycle and start anew?

    http://helpx.adobe.com/photoshop.html

  • Rt2860 wifi network hangs on downloading big files.

    After upgrading to the 3.2 kernel, my rt2860 PCI card is not working properly. Wifi connects fine, and I can browse the internet and download small files. But if I try to download a big file (> 1 GB) over the LAN, it starts and then hangs after downloading 8-10 MB. I have to disconnect from the network and connect again.
    I fixed it by installing the rt2860 package from the AUR: https://aur.archlinux.org/packages.php?ID=14557 and blacklisting rt2800pci.
    I would be happy with this solution, but now every time the kernel is updated I lose rt2860 after restart, and I have to manually recompile and install the AUR rt2860 package again.
    Are there any tweaks or config changes to fix the rt2800pci hanging problem? Or how can I keep from losing the AUR package after every kernel upgrade?

    I do not use rt2800pci on either Arch or Ubuntu. For me, it just doesn't work. You are doing better than I, because I can't get a connection at all with it.
    I put up with the effort of recompiling rt2860 after every kernel update, because it works. Besides, it helps me keep up my chops on compiling and installing kernel modules.
    Tim

  • USB Harddrive hangs on access from Windows Vista during copy of big Files

    We are using an Airport Extreme in a Windows Vista environment. We are not able to copy big files (around 40 GB) from a Vista MacBook Pro to the Airport USB drive. Vista times out with the message "The specified network name is no longer available".
    If we copy smaller files (tested with chunks of 2 GB) we are able to sustain file copying for hours with the system.
    Any ideas?

    I guess the file size limit is 4 GB if your USB HDD is formatted as FAT32.
    If your HDD is HFS+ formatted, I'm not sure, but it may be a bug in the AEBSn.

  • TS3276 Server says file too big to send - file says 33 MB, server says 44 MB

    I cannot send a short video: the file is listed as 33.1 MB, but the server won't process it, claiming the file is too big (44 MB) and that it can handle only 41 MB.
    I've tried to compress it, but it still reads as "too big to send".

    You seem focused on the difference between 33 megabytes of binary and a 44 megabyte transfer — the messages have to be encoded, as SMTP mail was never intended as a binary transfer mechanism — mail servers transfer printable text messages — and the text-encoded attached binary files are inherently going to be larger than the original files.
    You're going to have to raise the mail server limits (which requires administrative access to the mail server), or find an alternate means of transferring the file — mail servers aren't very efficient at large transfers, large files tend to blow out mail server storage quotas, and blow out mail files.   Or chunk up the file into smaller files, send the chunks independently, and then concatenate the file remotely.  (There are tools around to do this — at the command line, have a look at the zipsplit tool — but I'd generally discourage this approach as it adds effort and complexity to the process.)
    The usual approach is a file transfer service.   Dropbox or SpiderOak or some other file hosting or file transfer service are the most common approaches here, and are also more efficient than the overhead involved with large attachments sent via a mail server.   Many of these file-hosting services offer more than enough storage here, and often as part of a free service tier.  
    Some folks can and do run their own file-hosting services, but that generally involves configuring and managing a server system and the associated network connections.

  • Split a big file into 3 files??

    Hello,
    Is there any software I can use to split a very big file into 3 or 4 small files, please?
    Thanks.

    Use one of these applications to split the file.

  • What Is A big file?

    Hi All. I was making a smart folder for big files when the question hit me: when does a file become a big file? 100 MB? 500 MB? 1 GB? I would love to hear anyone's opinions.

    It's all relative. It's fair to define the relative size of a file in terms of entropy and utility, both of which are measurable (entropy empirically, utility through observation).
    Entropy is a measure of the randomness of the data. If the data is regular, it should be compressible. High entropy (1 bit per bit) means a good use of space; low entropy (approaching 0 bits per bit) means a waste.
    Utility is simple: what fraction of the bits in a file are ever used for anything. Even if a file has high entropy, if you never refer to a part of it, that part's existence is useless.
    You can express the qualitative size of a file as the product of its entropy and its utility. Zero entropy means the file contains just about no information -- so, regardless of how big it is, it's larger than it needs to be. If you never access even one bit of a file, it's also too big, regardless of how many bits are in it. If you use 100% of a file but its entropy is 0.5 bits per bit, then the file is twice as big as it needs to be to represent the information contained in it.
    An uncompressed bitmap is large, whereas a run-length encoded bitmap that represents the exact same data is small.
    An MP3 file is based on the idea that you can throw away information in a sound sample to make it smaller, yet still generate something that sounds very similar to the original. Two files may represent the same sound, but the MP3 is smaller because you sacrifice some of the bits by lowering the precision with which you represent the original (taking advantage of the fact that human perception of the differences is limited).
    So, 100M would seem like a ridiculous size for a forum post, but it sounds small for a data file from a super-high resolution mass spectrometer.
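    To put made-up numbers on that product: if a 100 MB file compresses to about 50 MB, its entropy is roughly 0.5 bits per bit, and if your application only ever reads half of it, its utility is 0.5; the file then carries about 100 MB x 0.5 x 0.5 = 25 MB of information that is actually used, so it is roughly four times larger than the useful content it holds.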

  • HP C3183 Doesn't Print Big Files

    I've bought a new computer with Windows 7 and installed the HP software and drivers from the official web site.  The problem is the device doesn't print big files, only very small files, approx. 5 KB. When I try to print a file larger than 5 KB, it simply doesn't do anything with the files collected in the print queue.  I've tried to reinstall the drivers several times, but with the same result.  Did anybody have this kind of problem? Maybe there is a problem with the printer's internal memory?

    Hello and thank you for your updated information.
    I understand that when you turn off your computer, you are unable to print. You shouldn't have to restart the Print Spooler every time in order to print. You may have a problem with the Print Spooler on your computer. Please follow these steps:
    First, I would recommend bypassing the Print Spooler to verify whether you are able to print:
    1) Go into Devices and Printers
    2) Right-click on the Photosmart C3183 printer
    3) Click on Printer Properties, then click on the Advanced tab (at the very top)
    4) Click on "Print directly to the printer", then click Apply
    5) Try to print a document
    If this doesn't work, then follow this entire document on "Print Spooler Keeps Stopping Automatically". I know you are not receiving that error, but it has valid troubleshooting steps to perform. Make sure the "Startup type" is set to Automatic.
    Also, try this tool from Microsoft here, which will diagnose and fix printing problems.
    Please post your results, as I will be looking forward to hearing from you.
    I worked on behalf of HP.

  • Big file size

    Hi. I notice some of my files are 8 times bigger than others.
    The average file size in some books is 250 KB. Similar files in other books are 2,000 KB plus.
    I import referenced graphics.
    Graphics sizes are tiny (16 kb, 8 kb, etc)
    Only 1 or 2 graphics per file.
    Number of pages similar (5-10)
    Any idea why some are so big and others aren't?
    Any idea how to downsize the big ones?
    Thanks

    Mike,
    The Save FrameImage option forces FM to always embed an uncompressed copy of any referenced image in FrameImage format. This is a surefire way to bloat your file.
    Note, if you have any OLE objects, then they will also be stored
    internally in an uncompressed manner.
    The surest way to check is to save one of the small files and one of the big files as MIF, and then use either a text editor or Graham Wideman's MifBrowser tool (see http://www.grahamwideman.com/gw/tech/framemaker/mifbrowse.htm) to inspect the imported graphics and see whether each is just a link to a file name or whether image or vector facets are actually embedded.

  • Problem reading big file. No, bigger than that. Bigger.

    I am trying to read a file roughly 340 GB in size. Yes, that's "Three hundred forty". Yes, gigabytes. (I've been doing searches on "big file java reading" and I keep finding things like "I have this huge file, it's 600 megabytes!". )
    "Why don't you split it, you moron?" you ask. Well, I'm trying to.
    Specifically, I need a slice "x" rows in. It's nicely delimited, so, in theory:
    (pseudocode)
    BufferedReader fr = new BufferedReader(new FileReader(new File(myhugefile)));
    int startLine = 70000000;
    String line;
    int linesRead = 0;
    while (((line = fr.readLine()) != null) && (linesRead < startLine)) {
        linesRead++; // we don't care about these lines
    }
    // ok, we're where we want to be, start caring
    int linesWeWant = 100;
    linesRead = 0;
    while (((line = fr.readLine()) != null) && (linesRead < linesWeWant)) {
        doSomethingWith(line);
        linesRead++;
    }
    (Please assume the real code is better written and has been proven to work with hundreds of "small" files (under a gigabyte or two). I'm happy with my file read/file slice logic, overall.)
    Here's the problem. No matter how I try reading the file, whether I start at a specific line or not, whether I am saving each line out to a string or not, it always dies with an OOM at around row 793,000,000. The OOM is thrown from BufferedReader.readLine. Please note I'm not trying to read the whole file into a buffer, just one line at a time. Further, it dies at the same point no matter how high or low (within reason) I set my heap size, and watching the memory allocation shows it's not coming close to filling memory. I suspect the problem occurs once I've read more than an int's worth of bytes from the file.
    Now -- the problem is that it's not just this one file -- the program needs to handle a general class of comma- or tab- delimited files which may have any number of characters per row and any number of rows, and it needs to do so in a moderately sane timeframe. So this isn't a one-off where we can hand-tweak an algorithm because we know the file structure. I am trying it now using RandomAccessFile.readLine(), since that's not buffered (I think...), but, my god, is it slow... my old code read 79 million lines and crashed in under about three minutes, the RandomAccessFile() code has taken about 45 minutes and has only read 2 million lines.
    Likewise, we might start at line 1 and want a million lines, or start at line 50 million and want 2 lines. Nothing can be assumed about where we start caring about data or how much we care about, the only assumption is that it's a delimited (tab or comma, might be any other delimiter, actually) file with one record per line.
    And if I'm missing something brain-dead obvious...well, fine, I'm a moron. I'm a moron who needs to get files of this size read and sliced on a regular basis, so I'm happy to be told I'm a moron if I'm also told the answer. Thank you.

    LizardSF wrote:
    FWIW, here's the exact error message. I tried this one with RandomAccessFile instead of BufferedReader because, hey, maybe the problem was the buffering. So it took about 14 hours and crashed at the same point anyway.
    Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
         at java.util.Arrays.copyOf(Unknown Source)
         at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source)
         at java.lang.AbstractStringBuilder.append(Unknown Source)
         at java.lang.StringBuffer.append(Unknown Source)
         at java.io.RandomAccessFile.readLine(Unknown Source)
         at utility.FileSlicer.slice(FileSlicer.java:65)
    Still haven't tried the other suggestions, wanted to let this run.
    Rule 1: When you're testing, especially when you don't know what the problem is, change ONE thing at a time.
    Now that you've introduced RandomAccessFile into the equation, you still have no idea what's causing the problem, and neither do we (unless there's someone here who's been through this before).
    Unless you can see any better posts (and there may well be; some of these guys are Gods to me too), try what I suggested with your original class (or at least a modified copy). If it fails, chances are that there IS some absolute limit that you can't cross; in which case, try Kayaman's suggestion of a FileChannel.
    But at least give yourself the chance of KNOWING what or where the problem is happening.
    Winston

  • How can I send a big file as a parameter of any method in a Web Service?

    Hello,
    I have a problem: I want to send a 2 MB file as a parameter of a web service method.
    When I send this file as a vector of bytes I get an out-of-memory error...
    If the file is 200 KB or smaller it works fine...
    How can I send a big file as a parameter of any method in a web service?
    Thanks in advance, and excuse me for my bad English.

    You can think about streams.
    In our case, what we did is place the file on a common FTP server and return the URL to the client.
    Regards,
    mukunt

  • Gimp: big files [solution inside]

    I am trying to glue images together into a bigger one (each of them is about 3000x2000 px and there are 5 of them).
    What I tried:
    - open all 5 files in GIMP
    - create a new image in GIMP with the size 8000x8000 px
    This worked, but when I copy one of the smaller images, go to the bigger one and paste it, the system becomes horribly slow and the HDD is working a lot --- GIMP then redraws windows extremely slowly.
    I have 768 MB of RAM and only about 300 is occupied while trying this.
    I think GIMP is using only a part of the memory and then has trouble working.
    Why is GIMP not using more memory?
    Also, if you know another method for gluing pics together, I would be glad to hear it, as I'm a newbie in this area.
    What I did as a workaround: I scaled all the pics down to a smaller size and then glued them together. You can see the product here:
    http://daperi.home.solnet.ch/uni/bio4/p … logie.html
    (click on the first image)
    The original resolution is much higher, and the goal is to use the full resolution to glue them into a much better image.
    Any suggestions welcome.
    Thanks in advance.

    Dusty wrote: Is File-->Preferences-->Environment-->Tile cache size what you're looking at?
    It was set to 64 MB! I changed it to 400 and now GIMP works normally again, even with big files. Thanks a lot.
    Dusty wrote: Another option is to edit smaller files. :-D
    That is not an option but only a workaround for me, because I need to glue together images taken under the microscope; they must keep their resolution and size (to keep the details). With this method I can reconstruct on the computer, in one picture, the whole specimen I looked at under the microscope, which gives great possibilities for archiving it --- but the trouble is that each pic is about 6*10^6 px, and if you glue 10 such pics together, you obviously need more than 64 MB. :-)
    Thanks for helping.
