Allocation units based on file size

Hi Experts,
I have an 11gR2 RAC environment with ASM, which I installed recently.
When I create a datafile of size 12 GB, it takes 2 MB extra.
What information is this extra 2 MB used to store? As far as I understand, 1 MB is for maintaining the block/extent mapping (inode-like) information; what about the remaining 1 MB?
2. Does the number of allocation units vary if I create a 32 GB datafile, e.g. will Oracle ASM take an additional 4 MB? Is there any specific formula?
Your help will be much appreciated.
Thanks,
Shine

Similar Messages

  • How to pick the multiples based on file size

    Hi All,
    My sender file adapter needs to pick up 5 files based on file size.
    For example, the 1st file is 500 KB, the 2nd 300 KB, the 3rd 400 KB, the 4th 100 KB and the 5th 600 KB.
    My requirement is that the file adapter picks them up in the following order: 5th file, 1st file, 3rd file, 2nd file and 4th file.
    In other words, the files need to be picked up based on file size (largest first).
    Could you please give any inputs on this requirement?
    Thanks & Regards,
    AVR

    Hi AVR,
    for case 2:
    1. At a specific time each day ("23:58:00" hours), count the number of files in the directory, say "c:\apps\acm".
    2. Sort the files on the basis of their size.
    3. Place the files one by one, after a definite time interval (longer than the polling time of the sender communication channel), in the target directory, say "c:\apps\acm1", from where the PI server picks up the files for further processing. The file with the smallest size is placed last.
    4. In this case you need to ensure all files are present in the directory "c:\apps\acm" before "23:58:00" hours.
    For case 2, the Java code (cleaned up so it compiles):
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.util.Calendar;
    import java.util.GregorianCalendar;
    public class SortFilesOnSpecificTime {
        public static void main(String[] args) {
            try {
                // In Unix/Linux, use paths such as "/usr/apps/test" instead.
                int pollingInterval = 10;              // seconds between placing two files
                String dir1 = "c:\\apps\\acm";         // source directory
                String dir2 = "c:\\apps\\acm1";        // target directory polled by PI
                File fread = new File(dir1);
                File fwrite = new File(dir2);
                if (!fread.canRead()) {
                    System.out.println("error: " + dir1 + " does not have read permission. Program terminates.");
                    return;
                }
                if (!fwrite.canWrite()) {
                    System.out.println("error: " + dir2 + " does not have write permission. Program terminates.");
                    return;
                }
                String[] fileNames, fileNamesOut;
                long[] fileSize;
                int i, j;
                byte[] b;
                int t = 4;
                Calendar cal;
                int hour24, min, fileCopyHour = 23, fileCopyMin = 58;
                long waitSeconds, currentTime;
                while (t > 0) {                             // t is never decremented: runs indefinitely
                    cal = new GregorianCalendar();
                    hour24 = cal.get(Calendar.HOUR_OF_DAY); // 0..23
                    min = cal.get(Calendar.MINUTE);         // 0..59
                    System.out.println("current time=" + hour24 + ":" + min);
                    // Wait until the predetermined time, e.g. fileCopyHour=23, fileCopyMin=58 (23:58 hours).
                    // Sleeping instead of looping continuously avoids wasting CPU cycles.
                    currentTime = (hour24 * 60 + min) * 60;
                    waitSeconds = (fileCopyHour * 60 + fileCopyMin) * 60 - currentTime;
                    if (waitSeconds > 0) {
                        Thread.sleep(waitSeconds * 1000);
                    }
                    // Read the list of files.
                    fileNames = fread.list();
                    if (fileNames == null || fileNames.length == 0) {
                        // Time is up but there are no files in dir1 to copy;
                        // go around again and wait for the next day's copy time.
                        continue;
                    }
                    fileSize = new long[fileNames.length];
                    fileNamesOut = new String[fileNames.length];
                    // Read the file sizes.
                    for (i = 0; i < fileNames.length; ++i) {
                        fileNamesOut[i] = fileNames[i];
                        fileNames[i] = dir1 + System.getProperty("file.separator") + fileNames[i];
                        fileSize[i] = new File(fileNames[i]).length();
                        System.out.println(fileNames[i] + " size=" + fileSize[i]);
                    }
                    // Insertion sort by file size, descending (largest first, smallest placed last).
                    // The output names are shifted along with the paths so they stay aligned.
                    long value;
                    String temp, tempOut;
                    for (i = 1; i < fileSize.length; ++i) {
                        value = fileSize[i];
                        temp = fileNames[i];
                        tempOut = fileNamesOut[i];
                        for (j = i - 1; j >= 0 && fileSize[j] < value; --j) {
                            fileSize[j + 1] = fileSize[j];
                            fileNames[j + 1] = fileNames[j];
                            fileNamesOut[j + 1] = fileNamesOut[j];
                        }
                        fileSize[j + 1] = value;
                        fileNames[j + 1] = temp;
                        fileNamesOut[j + 1] = tempOut;
                    }
                    // Now copy the files to dir2, one by one.
                    b = new byte[512];
                    for (i = 0; i < fileNames.length; ++i) {
                        System.out.println(fileNames[i] + " size=" + fileSize[i]);
                    }
                    for (i = 0; i < fileNames.length; ++i) {
                        File f = new File(fileNames[i]);
                        FileInputStream in = new FileInputStream(f);
                        FileOutputStream out = new FileOutputStream(dir2 + System.getProperty("file.separator") + fileNamesOut[i]);
                        int len;
                        while ((len = in.read(b)) != -1) {
                            out.write(b, 0, len);
                        }
                        in.close();
                        // Delete the source file from dir1 after copying.
                        f.delete();
                        out.close();
                        // Place the next file only after the polling interval is over.
                        Thread.sleep(pollingInterval * 1000);
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    The code runs in an infinite loop. You need to run it from the command line in a DOS environment, as you indicated that your OS is Win XP. I have a few print statements which I kept for debugging; you can safely remove them and run the code. The code is independent of the operating system you are using. The only changes required are the values of "dir1" and "dir2", the timings and the file count, which I think you can take care of easily.
    Hope this solves your problem.
    regards
    Anupam

  • Batch Resizing Based On File Size

    I have a lot of very large (400-600 MB) tiff files (from a scanner) that all have different ratios (some square, some 4:3, some extreme panoramic). Is there a way I can batch resize them all to a certain file size (e.g. 200 MB)?
    I've been looking everywhere but can't find anyone on any forums trying to do the same thing. Strange, because I thought it might be quite a common problem.
    Thanks for any help/advice,
    Lewis

    Resizing is very much dependent on image compression algorithms (or whatever they are called)… My guess is that this is beyond the 'average' scripter. If you want to deal with the open file's data size, then this can be done. If you want to deal with the saved file's size on disk, you may well be out of luck…
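    As a rough illustration of the "open file data size" approach mentioned above, here is a minimal Photoshop ExtendScript sketch. It assumes a flattened 8-bit RGB image (roughly 3 bytes per pixel uncompressed) and a hypothetical 200 MB target; the saved TIFF size will still vary with compression, so treat it only as a starting point:
    // Downsample the active document so its uncompressed pixel data is
    // roughly targetMB. Assumption: flattened 8-bit RGB (3 bytes/pixel).
    #target photoshop
    app.preferences.rulerUnits = Units.PIXELS;   // work in pixels
    var doc = app.activeDocument;
    var targetMB = 200;                          // hypothetical target size
    var bytesPerPixel = 3;
    var currentBytes = doc.width.value * doc.height.value * bytesPerPixel;
    var targetBytes = targetMB * 1024 * 1024;
    if (currentBytes > targetBytes) {
        // Scale both dimensions by the square root of the size ratio.
        var scale = Math.sqrt(targetBytes / currentBytes);
        doc.resizeImage(UnitValue(doc.width.value * scale, "px"),
                        UnitValue(doc.height.value * scale, "px"),
                        null, ResampleMethod.BICUBIC);
    }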

  • Filtering by file size

    Hi,
    I was *so* hoping that we'd finally be able to filter (or create a Smart Collection) based upon file size.
    There are many reasons why this would be useful, but the thing that trips me up every once in a while is that I'll forget to flatten my layers in Photoshop and the resulting tiff could be over 500 MB! Every once in a while I'll manually find the large files, flatten the layers, then re-import them into LR - in the specific case I mentioned, the file was reduced to 17 MB.
    This seems like an obvious, useful, and easy to implement feature. Not sure why we don't have it yet...
    Thanks!
    RJ

    In the Any Filter implementation, the incremental cost of an additional search criterion such as File Size is very small -- just a few lines of code.  I'd guess that the same would hold true for the LR implementation.
    But there's a fairly long list of additional search criteria that are important to some number of LR users, including:  nested smart collections, virtual copies, stack size and position, file size, (cropped) megapixels, (cropped) width and height, numeric aspect ratio, explicit keywords, additional IPTC and IPTC extension fields,  GPS lat/long/altitude/distance, subject distance, all the develop settings.   I know that these are plausibly important from reading the forums and requests received from Any Filter users.  Even excluding the develop settings, Any Filter roughly doubles the number of criteria.
    I think Adobe has tried to identify a small subset of search criteria that satisfies the great bulk of workflow needs.  Overall, I think they've been pretty successful at that.  But if they decided to add one or two of the criteria from the list above, they'd probably find themselves on a slippery slope, not being able to distinguish the priority of one criterion versus another (e.g. file size, numeric aspect ratio, megapixels, GPS distance).  
    But if they add more than a couple of criteria, they'd face some non-trivial design and engineering issues:
    - Should they index every additional criterion in the catalog database, to make the smart-collection searches instantaneous (as they do with the current criteria)?   Or would increasing the number of indexed criteria by 2x significantly slow down importing and incremental metadata editing?  Alternatively, should they document that some of the less-used criteria won't execute in smart collections "instantly", and then make that distinction clear to users somehow in the user interface? 
    - With 2x the number of criteria, the current user-interface design becomes unwieldy.  In Any Filter, I was forced to introduce another level of drop-down menus, making it slower to access the most common criterion.   Adobe might think twice about that, not wanting to complicate the most common simple cases for the sake of lesser-used criteria.  Perhaps there's a better, much different interface design that can keep the more common cases simpler and fast, while still providing access to a larger set of criteria. 

  • Blob cache - dedicated drive - allocation unit size

    Hi
    I'm setting up dedicated drives on the WFEs for BLOB cache and am formatting the new simple volumes. I was wondering if Default is the best option for the Allocation unit size.
    Is there any best practice or guidance here or should I just stick with default?
    Thanks
    J

    Default is ideal since you'll be mostly dealing with small files.
    That said, the advantage for BC comes from the max-age value, which allows clients to not even ask the SharePoint server for the file and instead use local browser cache.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Hard Drive Setup - Allocation Unit Size

    I'm adding on two 1 TB HDDs.
    When formatting, I'm wondering what the performance gain and space loss would be from using 8192 bytes instead of the default (4096) allocation unit size.
    I've got three 1 TB HDDs and 4 TB of external storage now, but I use a lot of space editing 4-10 hours of multi-camera footage at a time and I'm terrible at cleaning up after myself, hah, so I'm not sure if I have enough overall space to dedicate each one for media, page file, etc... At least for now they will all be general purpose use.
    Any thoughts on the effects of using 8192 over 4096?
    thanks

    By your own admission, cleaning up is not one of your fortes. This probably means that there are a lot of preview files left on your disk as well, and these tend to be rather small. Increasing the allocation unit, as you call it, will create more slack space and result in wasted space, eating up your disk space very quickly. I do not think you will gain anything; on the contrary, you will probably lose space and performance. If you look at your preview files, you will see a lot of XMP and MPGINDEX files that are all 4 KB max, and by changing the allocation unit to 8 K most of these files will have between 50 and 75% slack space (reserved but not used for data), so you will run out of disk space much quicker. Personally I would not change the default NTFS allocation.
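    To make the slack-space arithmetic concrete, here is a tiny JavaScript sketch (the 3 KB file size is a hypothetical example; NTFS rounds every file up to a whole number of clusters):
    // Slack = cluster rounding overhead: each file occupies whole clusters,
    // so the unused tail of the last cluster is wasted space.
    function slackBytes(fileSize, clusterSize) {
        return Math.ceil(fileSize / clusterSize) * clusterSize - fileSize;
    }
    // Hypothetical 3 KB preview/XMP file:
    console.log(slackBytes(3 * 1024, 4096)); // 1024 bytes wasted with 4 KB clusters
    console.log(slackBytes(3 * 1024, 8192)); // 5120 bytes wasted with 8 KB clusters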

  • How can I auto export a PDF File using the "Smallest File Size" preset and set the Exported File Name based on information from an Imported PDF?

    Greetings all,
    I am trying to create a script to automate a PDF export process for my company in InDesign. I’m fairly new to InDesign itself and have no previous experience with javascript, although I did take C++ in high school and have found it helpful in putting this code together.
    We have an InDesign template file and then use the Multi-page PDF importer script to import PDF files. We then have to export two versions of each file that we import, then delete the imported file and all of the pages to reset the template. This has to be done for nearly 1000 pdf files each month and is quite tedious. I’m working on automating the process as much as possible. I’ve managed to piece together code that will clean up the file much quicker and am now trying to automate the PDF exports themselves.
    The files are sent to us as “TRUGLY#####_Client” and need to be exported as “POP#####_Client_Date-Range_North/South.pdf”
    For example, TRUGLY12345_Client needs to be exported as POP12345_Client_Mar01-Mar31_North and POP12345_Client_Mar01-Mar31_South.
    There are two templates built into the template file for the north and south file that are toggled easily via layer visibility switches. I need to get a code that can ideally read the #s from the imported Trugly file as well as the Client and input those into variables to use when exporting. The date range is found in the same place in the top right of each pdf file. I am not sure if this can be read somehow or if it will have to be input manually. I can put North or South into the file name based on which template layer is visible.
    I am not sure how to go about doing this. I did find the following code for exporting to PDF with a preset, but it requires me to select a preset and then type the full file name. How can I set it to automatically use the “Smallest File Size” preset without prompting me to choose, and then fill in some or preferably all of the file name automatically? (If the entire filename is possible then I don’t even want a prompt to appear so it will be fully automated!)
    PDF Export Code (Originally from here: Simple PDF Export with Preset selection | IndiSnip [InDesign® Snippets]):
    var myPresets = app.pdfExportPresets.everyItem().name;
    myPresets.unshift("- Select Preset -");
    var myWin = new Window('dialog', 'PDF Export Presets');
    myWin.orientation = 'row';
    with(myWin){
        myWin.sText = add('statictext', undefined, 'Select PDF Export preset:');
        myWin.myPDFExport = add('dropdownlist',undefined,undefined,{items:myPresets});
        myWin.myPDFExport.selection = 0;
        myWin.btnOK = add('button', undefined, 'OK');
    }
    myWin.center();
    var myWindow = myWin.show();
    if(myWindow == true && myWin.myPDFExport.selection.index != 0){
        var myPreset = app.pdfExportPresets.item(String(myWin.myPDFExport.selection));
        myFile = File(File.saveDialog("Save file with preset: " + myPreset.name,"PDF files: *.pdf"));
        if(myFile != null){
            app.activeDocument.exportFile(ExportFormat.PDF_TYPE, myFile, false, myPreset);
        }else{
            alert("No File selected");
        }
    }else{
        alert("No PDF Preset selected");
    }
    So far my code does the following:
    1) Runs the Multi-Page PDF Import Script
    2) Runs PDF Export Script Above
    3) Toggles the Template
    4) Runs #2 Again
    5) Deletes the imported PDF and all pages and toggles template again.
    It’s close and much better than the original process, which was almost 100% manual, but I’d like to remove the Preset prompt from the PDF script and have it automatically select the “Smallest File Size” preset, and then, if there’s a way, have it auto-fill the file name so no user input is required at all other than selecting each file to import. (If there’s a way to set up a batch action for the multi-import script that would be even better!)
    Thanks in advance and if there’s anything else I can provide that would help please let me know! Even a nudge in the right direction will be a big help!

    If you hold down the option key, it will typically show the location. Or you can often hit option-return on the file and it will reveal the file in the Finder, instead of opening it.
    Final option is to open it, and just option-click the filename in the toolbar of Preview and it should show you the location.
    It's probably an attachment to an email you've received. If you have Mail set to cache emails and their attachments it'll be stashed in a subdirectory of ~/Library/Mail. Which is fine.
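    Returning to the original question about skipping the preset prompt: a minimal ExtendScript sketch that selects the built-in "[Smallest File Size]" preset directly and builds the output name could look like the following. The TRUGLY number, client, date range, region and output folder are shown as hypothetical hard-coded values; in practice they would be read from the imported file or entered once:
    var myPreset = app.pdfExportPresets.itemByName("[Smallest File Size]");
    if (!myPreset.isValid) {
        alert("Preset '[Smallest File Size]' not found");
    } else {
        var importedName = "TRUGLY12345_Client";  // hypothetical: take this from the imported PDF's file name
        var dateRange = "Mar01-Mar31";            // hypothetical: entered once or parsed from the PDF
        var region = "North";                     // or "South", depending on the visible template layer
        var outName = importedName.replace(/^TRUGLY/, "POP") + "_" + dateRange + "_" + region + ".pdf";
        var outFile = File(Folder.desktop + "/" + outName); // hypothetical output folder
        app.activeDocument.exportFile(ExportFormat.PDF_TYPE, outFile, false, myPreset);
    }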

  • "Best" Allocation Unit Size (AU_SIZE) for ASM diskgroups when using NetApp

    We're building a new non-RAC 11.2.0.3 system on x86-64 RHEL 5.7 with ASM diskgroups stored on a NetApp device (don't know the model # since we are not storage admins but can get it if that would be helpful). The system is not a data warehouse--more of a hybrid than pure OLTP or OLAP.
    In Oracle® Database Storage Administrator's Guide 11g Release 2 (11.2) E10500-02, Oracle recommends that the allocation unit (AU) size for a disk group be set to 4 MB (vs. the default of 1 MB) to enhance performance. However, to take advantage of the au_size benefits, it also says the operating system (OS) I/O size should be set "to the largest possible size."
    http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmdiskgrps.htm
    Since we're using NetApp as the underlying storage, what should we ask our storage and sysadmins (we don't manage the physical storage or the OS) to do:
    * What do they need to confirm and/or set regarding I/O on the Linux side
    * What do they need to confirm and/or set regarding I/O on the NetApp side?
    On some other 11.2.0.2 systems that use ASM diskgroups, I checked v$asm_diskgroup and see we're currently using a 1MB Allocation Unit Size. The diskgroups are on an HP EVA SAN. I don't recall, when creating the diskgroups via asmca, if we were even given an option to change the AU size. We're inclined to go with Oracle's recommendation of 4MB. But we're concerned there may be a mismatch on the OS side (either Redhat or the NetApp device's OS). Would rather "first do no harm" and stick with the default of 1MB before going with 4MB and not knowing the consequences. Also, when we create diskgroups we set Redundancy to External--because we'd like the NetApp device to handle this. Don't know if that matters regarding AU Size.
    Hope this makes sense. Please let me know if there is any other info I can provide.

    Thanks Dan. I suspected as much due to the absence of info out there on this particular topic. I hear you on the comparison with deviating from a tried-and-true standard 8K Oracle block size. Probably not worth the hassle. I don't know of any particular justification with this system to bump up the AU size--especially if this is an esoteric and little-used technique. The only justification is official Oracle documentation suggesting the value change. Since it seems you can't change an ASM diskgroup's AU size once you create it, and since we won't have time to benchmark using different AU sizes, I would prefer to err on the side of caution--e.g. first do no harm.
    Does anyone out there use something larger than a 1MB AU size? If so, why? And did you benchmark between the standard size and the size you chose? What performance results did you observe?

  • Sum of file sizes; based on file extension, sorted by size

    Hello,
    Here is what I need, hope someone can help me:
    I want to get a sum of all file sizes in a current directory, based on the file extensions, e.g.
    250K *.txt
    800K *.php
    2M    *.jpg
    10M  *.mp4
    I was trying something with "du -sch" and "type -f" but couldn't get a good result...
    Thanks!

    Still not quite there, but here is what I came up with using bits from here and there:
    #!/bin/bash
    ftypes=$(find . -type f | grep -E ".*\.[a-zA-Z0-9]*$" | sed -e 's/.*\(\.[a-zA-Z0-9]*\)$/\1/' | sort | uniq)
    for ft in $ftypes
    do
    echo -n "$ft "
    find . -name "*${ft}" -exec du -shc {} + | tail -1 | awk '{print $1}'
    done
    The output is:
    .abc 3M
    .bgh 150K
    .cig 100M
    .de 80K
    but I can't get the output formatted and sorted by size as described in the OP...

  • SQL Server NTFS allocation unit size for SSD disk

    Hi,
    I have read that the recommended NTFS allocation unit size for SQL Server generally is 64 kb since data pages are 8 kb and SQL Server usually reads pages in extents which are 8 pages = 64 kb.
    Wanted to check if this is true also for SSD disks or if it only applies to spinning disks?
    Also would it make more sense to use an 8 kb size if wanting to optimize the writes rather than reads?
    Please provide some additional info or reference instead of just a yes or no :)
    Thanks!

    Ok thanks for clarifying that.
    I did a test using SQLIO, comparing 4 KB with 64 KB allocation unit sizes when doing 8 KB vs 64 KB writes/reads.
    In my scenario it seems it doesn't matter whether I use 4 KB or 64 KB.
    Here are my results, expressed as how much higher the values were for 64 KB vs 4 KB.
    Access type             IOps     MB/sec   Min latency (ms)   Avg latency (ms)   Max latency (ms)
    8 KB random write       -2,61%   -2,46%   0,00%              0,00%              60,00%
    64 KB random write      -2,52%   -2,49%   0,00%              0,00%              -2,94%
    8 KB random read        0,30%    0,67%    0,00%              0,00%              -57,14%
    64 KB random read       0,06%    0,23%    0,00%              0,00%              44,00%
    8 KB sequential write   -0,15%   -0,36%   0,00%              0,00%              15,38%
    64 KB sequential write  0,41%    0,57%    0,00%              0,00%              6,25%
    8 KB sequential read    0,17%    0,33%    0,00%              0,00%              0,00%
    64 KB sequential read   0,26%    0,23%    0,00%              0,00%              -15,79%
    For anyone interested this test was done on Intel S3700 200gb on PERC H310 controller and each test was run for 6 minutes.

  • Windows Allocation Unit Size and db_block_size

    We are using Oracle 9.2 on Windows 2000 Advanced Server. What is the largest db_block_size we can use? I am going to reformat the drive to the largest cluster size (or allocation unit) Oracle can use.
    Thanks.

    First, you shouldn't use db_block_size in 9.2 but db_cache_size :-)
    With the 32-bit version, your buffer cache will be limited to 2.5 GB (if Windows 2000 knows how to manage that quantity). On 64-bit, I don't know, but probably enough for all of us :-)
    I think you should check the Oracle 9.2 for Windows 2000 release notes on technet.oracle.com
    Fred

  • Trying to understand different file sizes based on publishing options

    Hi,
    I have a project that when published was coming out with the .swf at about 15.9MB.  This is in Captivate 7.0.1.237 with the following settings (defaults I believe):
    Compress Full motion  recording swf file - unchecked
    Retain Slide Quality Settings - checked
    Jpeg: 80%
    Advanced Project Compression - checked
    Compress SWF file: checked
    It seemed strange that all the things in my library came out to about 5 MB and previous projects (with fewer images, fewer slides and less than 1 MB of things in the library) had come out at only 1-2MB.  Such a big jump in .swf size didn't seem to make sense, especially if I wasn't adding that much in new assets.  All of these projects have a lot of video, but those are external. 
    So I tried unchecking Retain Slide Quality and set the Bmp to 8-bit.  5.7 MB output which seemed to make sense for what the project is, but yuck - aliased images and some weird color fringing on single line edges.  Then I tried High(24 bit) and my project is still only 5.7 MB, so I guess there are very few things in my project that it was actually adjusting down (probably only .png files, not the pictures which are .jpg).  I see that previously my individual slides are set to Optimize, which I would expect would have been giving me a smaller sized project all along when it was using Retain Slide Quality Settings. Hmmm...
    Then I tried unchecking Advanced Project Compression to see if it would give me a bigger size... nope, still at 5.7 MB.  It would seem that leaving this unchecked would be a good thing, if it doesn't do anything.  Any reason to check this if it isn't reducing file size?
    What are best practices here?  It seems to me that unchecking Retain Slide Quality Settings, unchecking Advanced Project Compression and putting the Bmp to High (24 bit) is the best option for my project.  But why isn't Optimized more optimized?  And what am I losing in those 10 MB of .swf?  Am I losing quality or stability somewhere that I haven't noticed yet?
    Thanks for any insight.  I'm still scratching my head.

    Advanced Project Compression tries to optimise the compression settings on a slide-by-slide basis for better overall file size reduction.  It's usually NOT a problem.  However, in some cases we advise developers to turn it off as it is often responsible for 'artifacts' such as 'ghost lines' appearing on slides.
    I usually find that using High 24bit compression actually gives lower file sizes than using the Optimized setting.  Go figure...

  • When using crop tool based on canvas size dimensions, the file size becomes very large

    Can someone explain how to best figure out crop tool settings to maintain a comparable file size?  I tried to use the image / canvas size information to set the crop tool's dimensions, but the file saved is 10 times larger than the original.
    Thanks!

    What exact version of Photoshop?  There were changes to the crop tool as of CS6, although there's an option to use Classic (the old crop tool behaviour).
    It wouldn't hurt to specify your platform (Mac or Win)…
    If you give complete and detailed information about your setup and the issue at hand, such as your platform (Mac or Win), exact versions of your OS, of Photoshop and of Bridge, machine specs, what troubleshooting steps you have taken so far, what error message(s) you receive, if having issues opening raw files also the exact camera make and model that generated them, etc., someone may be able to help you better and faster.
    Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
    http://forums.adobe.com/thread/419981?tstart=0
    Thanks!

  • Applet for figuring bitrate based on target file size?

    Occasionally when I'm exporting quicktimes for emailing to people with known restrictions on incoming file sizes, I'd love to be able to specify the target file size as opposed to the bitrate.
    Does anyone know of any applets that can generate a rough estimate of what you should set the bitrate to in order to achieve a target file size given the length of the clip?
    Sounds like a widget in the making to me...

    And a widget:
    http://www.apple.com/downloads/dashboard/calculate_convert/videospace.html
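    For a rough manual estimate (this is just the arithmetic, not the widget's actual code): total bitrate ≈ target size ÷ duration, and the audio bitrate is subtracted to get the video bitrate. A small JavaScript sketch with hypothetical numbers:
    // Rough bitrate needed to hit a target file size; ignores container overhead.
    function videoBitrateKbps(targetMB, durationSeconds, audioKbps) {
        var totalKbps = (targetMB * 8 * 1024) / durationSeconds; // megabytes -> kilobits
        return totalKbps - audioKbps;
    }
    // Example: 10 MB attachment limit, 90-second clip, 128 kbps audio
    console.log(videoBitrateKbps(10, 90, 128)); // ≈ 782 kbps left for video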

  • Strange File Size Problem (FLA)

    Hi
    We just encountered quite a problem here and I wonder how to solve it. Our FLAs are getting so huge that they are mostly "un-workable".
    Basically, what we do is create a character animation, let's say a walk cycle. For that part we use standard animation: multiple layers for every body part, and tweening them. Then, for some reason, we need to break it apart into frames with shapes, so we break the whole animation into frames until every frame only has a shape on it.
    But this takes some HUGE space. I wonder if there's some hidden meta-data on the shape that remembers it was once part of an animation, or a clip, or anything else alike.
    Every shape takes about 160k; a simple square takes 160k. If I take that shape and paste it into a new FLA and save it, it'll take 160k on my disk. If I create a new FLA and create an identical shape, it'll take about 16k. OK, this might seem like no big deal, but our FLA is now over 170 megs instead of something around 16, and it creates quite a few problems from there working with those files, sharing them, and exporting the units inside of them.
    Has anybody encountered this issue and knows how to solve it?

    Anyone?
    I attached 2 files that are very similar to me: the big one is 157k after a save and compact, and the SmallOne, where I tried to reproduce the shape, is 21k after a save and compact. Where do those 136k come from? It's not per-file overhead, because if I put 2 shapes in there, the file size gets multiplied =/
    Edit: Oh well, looks like I can't post FLA attachments... weird.
