Pixel format on Canvas

One of my users is getting the following message in the Sun Java Console:
java.lang.RuntimeException:
Unable to set Pixel format on Canvas
     at sun.awt.windows.WToolkit.eventLoop(Native Method)
     at sun.awt.windows.WToolkit.run(Unknown Source)
     at java.lang.Thread.run(Unknown Source)
Searching the forums doesn't really give me a good idea on how to proceed in fixing it.
Any suggestions?

Try updating the graphics drivers.
db

Similar Messages

  • Problem using byte indexed pixel format in setPixels method of PixelWriter

    I try to construct a byte array and set it to a WritableImage using PixelWriter's setPixels method.
    If I use an RGB pixel format, it works. If I use a byte-indexed pixel format, I get an NPE.
    The stride etc should be fine if I'm not mistaken.
    java.lang.NullPointerException
         at com.sun.javafx.image.impl.BaseByteToByteConverter.<init>(BaseByteToByteConverter.java:45)
         at com.sun.javafx.image.impl.General$ByteToByteGeneralConverter.<init>(General.java:69)
         at com.sun.javafx.image.impl.General.create(General.java:44)
         at com.sun.javafx.image.PixelUtils.getB2BConverter(PixelUtils.java:223)
         at com.sun.prism.Image$ByteAccess.setPixels(Image.java:770)
         at com.sun.prism.Image.setPixels(Image.java:606)
         at javafx.scene.image.WritableImage$2.setPixels(WritableImage.java:199)
    Short, self-contained example here:
    import java.nio.ByteBuffer;
    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.image.ImageView;
    import javafx.scene.image.PixelFormat;
    import javafx.scene.image.WritableImage;
    import javafx.scene.layout.BorderPane;
    import javafx.stage.Stage;
    public class IndexedColorTestApp extends Application {
        public static void main(String[] args) {
            launch(args);
        }

        @Override
        public void start(Stage primaryStage) {
            BorderPane borderPane = new BorderPane();
            Scene scene = new Scene(borderPane, 600, 1100);
            primaryStage.setScene(scene);
            ImageView imageView = new ImageView();
            borderPane.setCenter(imageView);
            primaryStage.show();
            int imageWidth = 200;
            int imageHeight = 200;
            WritableImage writableImage = new WritableImage(imageWidth, imageHeight);
            // this works
            byte[] rgbBytePixels = new byte[imageWidth * imageHeight * 3];
            PixelFormat<ByteBuffer> byteRgbFormat = PixelFormat.getByteRgbInstance();
            writableImage.getPixelWriter().setPixels(0, 0, imageWidth, imageHeight,
                                                     byteRgbFormat, rgbBytePixels, 0, imageWidth * 3);
            imageView.setImage(writableImage);
            // this throws an NPE in setPixels()
            byte[] indexedBytePixels = new byte[imageWidth * imageHeight];
            int[] colorPalette = new int[256];
            PixelFormat<ByteBuffer> byteIndexedFormat = PixelFormat.createByteIndexedInstance(colorPalette);
            writableImage.getPixelWriter().setPixels(0, 0, imageWidth, imageHeight,
                                                     byteIndexedFormat, indexedBytePixels, 0, imageWidth);
            imageView.setImage(writableImage);
        }
    }
    If there's no solution, maybe someone knows a workaround? We chose the indexed format for data size / performance reasons.
    Edited by: Andipa on 01.03.2013 10:52

    You have found a platform bug, file it against the Runtime project at => http://javafx-jira.kenai.com with your sample code and a link back to this forum question.
    Byte-indexed pixel formats look to me like a feature that was never completely implemented (perhaps hardly at all).
    The PixelFormat type your failed case is using is (PixelFormat.Type.BYTE_INDEXED):
    PixelFormat<ByteBuffer> byteIndexedFormat = PixelFormat.createByteIndexedInstance(colorPalette);
    System.out.println(byteIndexedFormat.getType());
    These are the valid PixelFormat types =>
    http://docs.oracle.com/javafx/2/api/javafx/scene/image/PixelFormat.Type.html
    BYTE_BGRA
    The pixels are stored in adjacent bytes with the non-premultiplied components stored in order of increasing index: blue, green, red, alpha.
    BYTE_BGRA_PRE
    The pixels are stored in adjacent bytes with the premultiplied components stored in order of increasing index: blue, green, red, alpha.
    BYTE_INDEXED
    The pixel colors are referenced by byte indices stored in the pixel array, with the byte interpreted as an unsigned index into a list of colors provided by the PixelFormat object.
    BYTE_RGB
    The opaque pixels are stored in adjacent bytes with the color components stored in order of increasing index: red, green, blue.
    INT_ARGB
    The pixels are stored in 32-bit integers with the non-premultiplied components stored in order, from MSb to LSb: alpha, red, green, blue.
    INT_ARGB_PRE
    The pixels are stored in 32-bit integers with the premultiplied components stored in order, from MSb to LSb: alpha, red, green, blue.
    As the native pixel format for a WritableImage is not the same as the pixel format you are using, the JavaFX platform needs to do a conversion by reading the pixels in one format and writing them in another format. To do this it must be able to determine a PixelGetter for your PixelFormat (the PixelGetter is an internal thing, not public API).
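    The practical difference between the plain and premultiplied variants in that list is just whether each color component has already been scaled by the alpha. As a quick illustration (plain Java, not code from the JavaFX runtime; the class name is made up), premultiplying one packed INT_ARGB pixel looks like this:

    ```java
    public class Premultiply {
        // Convert one packed INT_ARGB pixel to INT_ARGB_PRE by scaling
        // each color component by alpha/255. Illustration only.
        public static int argbToPre(int argb) {
            int a = (argb >>> 24) & 0xFF;
            int r = ((argb >>> 16) & 0xFF) * a / 255;
            int g = ((argb >>> 8) & 0xFF) * a / 255;
            int b = (argb & 0xFF) * a / 255;
            return (a << 24) | (r << 16) | (g << 8) | b;
        }
    }
    ```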
    And here is the source that determines the PixelGetter for a given PixelFormat type:
    http://hg.openjdk.java.net/openjfx/8/master/rt/file/06afa65a1aa3/javafx-ui-common/src/com/sun/javafx/image/PixelUtils.java
    public static <T extends Buffer> PixelGetter<T> getGetter(PixelFormat<T> pf) {
        switch (pf.getType()) {
            case BYTE_BGRA:
                return (PixelGetter<T>) ByteBgra.getter;
            case BYTE_BGRA_PRE:
                return (PixelGetter<T>) ByteBgraPre.getter;
            case INT_ARGB:
                return (PixelGetter<T>) IntArgb.getter;
            case INT_ARGB_PRE:
                return (PixelGetter<T>) IntArgbPre.getter;
            case BYTE_RGB:
                return (PixelGetter<T>) ByteRgb.getter;
        }
        return null;
    }
    As you can see, the BYTE_INDEXED format is not handled and null is returned instead; this is the source of your NullPointerException.

  • Using Y8/Y16 pixel format in LabVIEW through DirectShow

    I am trying to use Y8 or Y16 pixel format (also called Mono8 or Mono16) on a FireWire camera through DirectShow. However, these formats always showed up as unknown in NI-MAX's "Video Mode" drop-down menu. Interestingly, "Snap" worked properly and images were acquired correctly using these two formats in NI-MAX.
    I am a bit puzzled why they showed up as unknown. Could this be an issue with the camera's own DirectShow filter?
    Has anyone else used Y8 or Y16 pixel formats successfully in NI-MAX on a FireWire camera? How were they listed in the "Video Mode" menu?
    Thanks in advance.

    Hi,
    The list of DirectShow pixel formats that IMAQdx knows about comes from the list Microsoft defines here:
    http://msdn.microsoft.com/en-us/library/windows/desktop/dd407353(v=vs.85).aspx
    Look at the uncompressed RGB list (which includes monochrome as well) and note that there are no 16-bit monochrome formats in it. The only true "grayscale" formats (those where DirectShow does not give us an RGB color image) are RGB1, RGB4, and RGB8. Any other formats will be translated by DirectShow to an RGB image.
    Essentially if your camera is returning an image format that is not listed in Microsoft's defined list of GUIDs, it will show up as "Unknown" and IMAQdx will expect it to be an RGB image. DirectShow will convert the image using whatever filters you have installed to that format.
    Eric

  • How to Set Pixel Format for AEGP Plugin???

    Hi,
    Does Anyone know how to set pixel format (ARGB or BGRA etc) for After Effect (AEGP) Plugin (Source Generator).

    It requires the Canon Hacker's Development Kit (CHDK). It runs off of your SD card and does not harm your camera. Download a program called Stick:
    http://zenoshrdlu.com/stick/stick.html
    Follow the directions and you will be able to shoot in RAW along with some other features not available on your camera previously.

  • Internal pixel format w/ 8800GT & 2600XT

    I'm trying to start cc'ing an hour long documentary and figuring out hardware needs.
    My MP's 7300GT gives only 8-bit or floating point "internal pixel format"s.
    Footage is mainly 1080i50 hdv and possibly going to dci.
    MP is a 2007 model so it's really hard to choose what to do:
    1) Is "floating point" too GPU intensive for the 7300GT?
    2) Would 2600XT enable 10-bit or 12-bit and would it even work with MP2007 & color?
    3) Would 8800GT enable 10/12/16-bit and how many weeks/months does it still take for nvidia to make MP2007 efi for it?

    Ok,
    but still, Apple has sold Color for almost a year, so they should know how it works.
    When trying to build a reliable system for critical work, Apple has had an advantage since it offers both hardware and software (including os).
    So it is unacceptable that in this case right hand does not know what left hand does.
    Why didn't they test new graphics cards, which they sell by themselves, with software, that they also sell?
    If they did, why they don't publish their findings?
    And why would they offer 8800GT as their higher end option for new MacPro, but don't tell that it limits the usage of their FinalCutStudio2 package?
    Am I understanding correctly that Alexis Van Hurkman has said that Nvidia's lack of color depth options also applies to the 8800GT, and that since ATI has supported all those options, the 2600XT still supports all of them?
    Will this choice of card (8800GT vs. 2600XT) really be a difficult trade-off between color depth in Color and playback speed in Motion?

  • Unable to Edit - "Invalid Pixel Format"

    When I click on JPEG thumbnails in iPhoto 9 nothing happens. If I then click the edit button manually, the thumbnails appear at the top arrayed horizontally but, when I click on one, it doesn't open up in the edit window. Each time I do this the following error message appears in the console:
    12/13/09 11:44:22 AM iPhoto[687] invalid pixel format
    12/13/09 11:44:22 AM iPhoto[687] invalid context
    Same issue and error message with Preview.
    I only have this issue with my MacMini (early '09) under SL 10.6.2. No problem under 10.5.8 on MacMini or on my MBP under 10.6.2.

    I'm joining this party.
    Brand new Mac mini 2.53 GHz with SL 10.6.2 that replaced a 2 GHz mini.
    The iPhoto library was already on an external disk (Drobo) so file copy issues are not the cause of this.
    And when I connect to the Drobo from another Mac mini (exactly the same model, I just bought 2 of them!), I can open the files without any problems.
    So it has to be this particular mini.
    The only difference between the two minis is that the one with troubles runs without a monitor.
    The one without troubles has a 23" ACD attached to it.

  • Rotating image moves pixels out of canvas.

    I was under the impression that using Image => Image Rotation => 90° CW would rotate everything ninety degrees clockwise, but I guess I don't understand exactly how it works.
    When making a very simple 2-pixel pattern, I filled the right pixel with black and left the other pixel empty. I then wanted to see how the pattern would look horizontally, so I rotated the image 90 degrees clockwise and found that my black pixel was moved off of the canvas.
    Next I tried some testing to see if I could better understand what was happening. I found that if I rotate the image CCW then the pixel would be in the correct place, but if I fill the left pixel with black instead of the right I would have to rotate CW to keep the fill from going off canvas. I thought that maybe I was rotating the canvas only, but when testing a larger image I found that wasn't the case.
    I could go on, but basically I can't figure out how to rotate an image with 2 pixels ninety degrees. Kind of silly, what am I doing wrong?

    Doesn't happen on my end. Maybe a video driver issue?

  • Flickering pixelated patterns across canvas in CS6 on Mac OS X

    I recently re-installed CS6 on my iMac and now I am having a problem in Photoshop. Random pixelated patterns in grey, black and other colours flicker all over the canvas whenever I try to do anything, such as using any tool or even zooming in/out.
    The "patterns" are made up of little rectangles and appear all over the canvas:
    Additionally, when I have anything on the canvas selected, the flickering extends over the grey background area as well:
    This only seems to happen when I make a selection, not when using tools, zooming etc. It's hard to capture in a screenshot how annoying this is, but it is flickering very rapidly across the whole screen, sometimes large chunks as shown above, sometimes very small ones.
    I have had Photoshop installed on this computer for a year with no issues. Any ideas what may have caused this or how to fix it? I have googled, but all I can find is an issue relating to a flickering background in Windows 8. Any advice would be much appreciated; I am a professional web designer and this is driving me crazy! It may be worth mentioning that I am not having any issues in Illustrator.
    Thanks in advance.

    Here are a few things to look at. Try each step until your Epson printer works
    Be sure you are updated to 13.0.6  Help > Updates
    Another is to uninstall/reinstall the drivers from an administrator account to be sure proper permissions are set.
    Reset prefs: Quit Photoshop, Hold down cmd-opt-shift, and start Photoshop. You will see a "delete settings" dialog. Agree and see if that fixes it.
    Others have had success by selecting another printer and then going back to the Epson.
    Gene

  • Is anyone else having trouble using pixels for the canvas size instead of in's in the trial version?

    I go to Image and then Image Size, and every time I go to change the canvas size to inches it will not work. Is it because it's the trial version?

    Oh OK, I found the resample option, but it still doesn't allow me to adjust the canvas size in pixels. The option is still in inches, and it is impossible to crop quickly with it in inches.

  • Single pixel formation in Intensity graph

    I need to capture an image using a CCD camera. Though it has 16-bit resolution, I need to focus on ONE PIXEL.
    The process goes like this: save the image information in a spreadsheet array and then find the max value of the array. Up to this step, I am OK.
    But how do I get to display this MAX value using an intensity graph?
    An intensity graph is a 2D array which needs X and Y values.
    So I need to get the X and Y value of the max value of the array. I tried using the max index value of the 'Max and Min' array function. But it is of no use, since the value comes as one dimension while the intensity graph needs 2D information in the form of X and Y values.
    Please help me on this. Thanks in advance.

    Asking the same question in multiple threads does not help or get you an answer more quickly.  Please be patient.  Many of us who participate in the Forum do so as volunteers.
    Now to your question.  As you noted, you can use the Array Max & Min function to locate a pixel with the maximum value.  If more than one pixel has that value, this function only reports the first occurrence of that value.  The 1D output of the Array Max & Min function contains the indexes of the pixel.  So the first element would be the row index (X value) and the second element would be the column index (Y value).   What I do not understand is why you want to display just one value on an intensity graph.  That seems rather meaningless.  Please explain what you want the output to look like.
    Lynn
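    For anyone translating Lynn's description out of G code, the row/column bookkeeping is where this usually goes wrong. A sketch in Java (illustration only, not LabVIEW code) of locating the first maximum in a 2-D array:

    ```java
    public class MaxPixel {
        // Return {row, col} of the first occurrence of the maximum value,
        // mirroring what LabVIEW's Array Max & Min reports for a 2-D array.
        public static int[] argmax(int[][] image) {
            int bestRow = 0, bestCol = 0;
            for (int r = 0; r < image.length; r++) {
                for (int c = 0; c < image[r].length; c++) {
                    if (image[r][c] > image[bestRow][bestCol]) {
                        bestRow = r;
                        bestCol = c;
                    }
                }
            }
            return new int[]{bestRow, bestCol};
        }
    }
    ```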

  • Console is showing: firefox bins 448 &793, unknown error code: invalid pixel format. MAC OS X 10.6.8, Firefox 5.1

    I started up in safe mode to try to figure out what was raising hell on my computer. the bin 793 error is new. I have been experiencing occasional flashing from space to space; I'm set up with 6 spaces with multiple tabs active in each.
    Generally, I end up having to force quit Firefox to resolve the issue. Right now I'm logging the same error (793) several times a second.

    Hmmm, not sure how “&#160” got into the title of my Question.
    I realize the title is very general but didn’t know specifically what the trouble was.
    More information may lead to some replies.
    Today I turned off Time Machine (TM) automatic backups.  So far no problems all day & when I checked the Console logs I saw no “LaunchServices: bad alias ...” entries. Many instances of these yesterday at the time of the problems.
    Do I need to:
    Rebuild the Launch Services database?
    Reset Launch Services?
    Trash the Launch Services plist file under preferences?
    Have 2:
    com.apple.LaunchServices.plist (4KB)
    com.apple.LaunchServices.QuarantineEvents (303KB)
    I cleared the Cache & History in both Safari & Firefox just in case.
    I checked Console Crash Reports & see both a Dock & a Finder crash report from one of the times I had the problems. That can’t be good.
    I  installed AppleJack & may use it to clear all my Caches depending on what replies I get here.
    Hopefully, this is now enough information for some suggestions on how to proceed.
    I’m going to restart & then w/o opening any other programs, re-enable TM automatic backup & see if the trouble reoccurs.
    Thanks for any help.

  • DVCPRO HD 720P Capture File  Pixel Dimensions, etc.

    I'm crossing over from Final Cut 7 where I've been using AJA LHi card to capture DVCPRO HD 720P 59.94 tape from the Panasonic 1400 deck successfully for years. In Premiere I've adjusted all the capture settings and sequence settings to conform to DVCPRO HD 720P 59.94:
    -Pixel format: MC DVCPRO HD 720P 60
    -Capture format: AJA Mov Capture
    -Video format:  720P 59.94
    BUT....
    -My Premiere capture tests look not only slightly horizontally stretched, but  cropped about 30 pixels only on the right of the full raster frame in both Premiere and in playback in Final Cut 7.  This is true for the clip as it plays on the timeline as well.
    -When I launch the clip in QT player it has the same cropping problems but is playing as 960x720, not defaulting to 1280x720 as normal 16:9. Final Cut captured clips play in QT as default 1280x720. So the file as captured in Premiere is missing a tag that tells QT how to resize it. I'm in QT 10.4 by the way.
    -I've imported captures from Final Cut into Premiere and they look fine.
    -The Premiere captured clip in the bin shows as 960x720, which is correct for DVCPRO HD, but in parentheses shows (1.3333), which I've never seen anywhere before.
    -The folks at AJA think this is a Premiere issue not a capture card issue, and I tend to agree.
    Any suggestions?
    Thank you
    Steve
    Mac Pro Tower Quad Core
    OS 10.10.2

    I'm wondering if what the original camera may have embedded into the file is jumbled in with the 1400 and "crashing into" what both FCP and PrPro expect to 'see'. From the above grab, PrPro is thinking that this is just a P2 format Panny camera, shooting "standard" P2 stuff. Which this clearly is not.
    At this point ... wondering if, say, Kevin-Monahan could jump in here and provide some assistance? At least, he'd know who to talk with about this. So might shooternz ... he's been around this sort of thing for many years.
    Neil
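    One note on the (1.3333) in the bin: that is almost certainly the pixel aspect ratio. DVCPRO HD 720p stores 960x720 non-square pixels that should display at 1280x720, and the arithmetic is just stored width times PAR (a sketch, not Premiere's actual code path):

    ```java
    public class ParMath {
        // Display width = stored width * pixel aspect ratio.
        // E.g. DVCPRO HD 720p: 960 stored pixels at PAR 4/3 display as 1280.
        public static int displayWidth(int storedWidth, double pixelAspectRatio) {
            return (int) Math.round(storedWidth * pixelAspectRatio);
        }
    }
    ```

    So the bin entry itself looks right; the cropping and stretching would then point at how the captured file tags (or fails to tag) that PAR.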

  • Delivering anamorphic (2.39:1) from video 16:9 (1.78) format?

    When I wish to deliver the look of the anamorphic format, but have source video shot in the 16:9 (1.78) format, what is the best method to convert it?
    So far, I have read that one can simply add letterbox bars at the top and the bottom to simulate the visual part being in the 2.39:1 format. I now realise that even though visually it is the 2.39:1 format, the actual footage is still the original 1.78 format, just with black at the top and the bottom of the frame.
    If that is okay, should one then notify film festivals and the like that the footage is in the 1.78 format for them to project it correctly? Or should one render only the visual part at 2.39 and then notify them that the format is 2.39?
    Interested in your opinions.
    Thanks

    Most often you will either deliver a 1920 x 1080 standard source file or you will render to a specific codec using specific settings that are called for in a set of delivery specifications. Make up your own delivery specifications, or use someone else's, and you are just guessing. Standard HD is always safe. On occasion you can deliver 4K, but here again, you have to ask the folks that will be projecting the image about the format and follow their recommendations exactly.
    The most important part of the production process is making sure that you do not distort the original footage. If it was shot full HD in square pixels then masking is the way to achieve your Cinemascope look. If it was shot using a camera that uses a non-square pixel format (DVCPro HD for example) then you would still work with a square pixel comp but, and this is the important part, you would let AE automatically fit your non square pixel footage in a square pixel comp. Manually changing the pixel aspect ratio of the source footage will foul things up and distort the image.
    So, before we can give you a specific production workflow, we need to know exactly what kind of footage you are using in your project and exactly what kind of system will be projecting the final image.
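    To put numbers on the letterboxing approach for a common case: masking a 1920x1080 frame to 2.39:1 leaves roughly an 803-pixel-high active picture with 138-pixel bars top and bottom. The arithmetic (a sketch; rounding conventions vary between delivery specs):

    ```java
    public class Letterbox {
        // Height of the active picture when a frame is masked to a wider
        // aspect ratio, and the size of each top/bottom bar.
        public static int pictureHeight(int frameWidth, double targetAspect) {
            return (int) Math.round(frameWidth / targetAspect);
        }
        public static int barHeight(int frameWidth, int frameHeight, double targetAspect) {
            return (frameHeight - pictureHeight(frameWidth, targetAspect)) / 2;
        }
    }
    ```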

  • Bayer rg 8-bit - how many bits per pixel am I looking at?

    I'm streaming video with IMAQdx from a GigE Vision camera that does Bayer 8-bit RG, among other formats. I appear to be receiving 24 bpp from the video stream, though. Does LabVIEW do any conversions on this data that would ramp the size up without me knowing? Or is my camera doing it?

    IMAQdx will generally do some conversion on the various pixel formats sent by the camera to get them into a proper format for a Vision image datatype. For some pixel formats, this may mean swapping endianness, unpacking bits, or shifting the data around.
    For a BayerRG8 format, the camera is sending an 8-bit mono image and the driver is going to (by default) automatically do the Bayer conversion to a color image. Other cameras might expose a pixel format directly as an RGB or YUV format, in which case it is the camera itself doing the Bayer conversion before sending it to IMAQdx.
    In this case, you could disable IMAQdx's Bayer decoding (turn the bayer decoding attribute to "Off" rather than "Auto") and get the raw 8-bit mono image if you want. Usually the monochrome image is fairly useless since it has the bayer pattern applied. Some cameras might have a monochrome mode available where they will internally debayer the image, then convert it back to monochrome so that the bayer filter is effectively removed.
    What is your objective you are trying to do here? 
    Eric 
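    On the size question specifically: a debayered color image carries three output channels per input sample, which is exactly the 8 bpp to 24 bpp jump you're seeing. A toy "superpixel"-style sketch for one RGGB 2x2 cell (illustration only, not IMAQdx's actual algorithm, which keeps full resolution by interpolating neighbours):

    ```java
    public class Debayer {
        // Collapse one RGGB 2x2 cell of 8-bit samples into a single RGB
        // pixel, averaging the two green samples. One byte per sample in,
        // three color channels out: the data size triples.
        public static int[] cellToRgb(int r, int g1, int g2, int b) {
            return new int[]{r, (g1 + g2) / 2, b};
        }
    }
    ```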

  • Format of file Exported by SDK Exporter plugin for premiere pro cs4

    Hi
    I am trying to build a plugin which can use a custom codec to export an imported video to H.264 format.
    The codec which I have to use converts YUV420 raw video to H.264 format. As given in the Premiere SDK Guide,
    the SDK Exporter supports uncompressed 8-bit RGB with or without alpha, and packed
    10-bit YUV (v410). The initial rendering is performed by the RenderVideoFrame() function call, which is called in RenderAndWriteVideoFrame().
    This is then converted using various calls like ConvertFrom8uTo32f(), ConvertFromBGRA32fToVUYA32f(), and ConvertFrom32fToV410().
    The problem I'm facing is that I'm not able to verify whether RenderVideoFrame() is working correctly, or the format in which it writes.
    Is there any way to check using a media player that can play the exported video? Unless I know the exact (correct) exported format,
    I won't be able to convert it to the required YUV420 format that I need.
    Also, if I don't use the ConvertFrom functions and use only the video stream written by RenderVideoFrame, can you specify clearly the format,
    and any media player which can play it, so I can check that it works?
    Thanks
    Agam

    Zac do you have any charts on what the pixel formats each filter supports?
    If you use quirky filters (Adobe or 3rd party), you know they were implemented with the mandated pixel format (BGRA_4444_8u) but probably no others. The result is that no matter what importer/workflow/compiler combo you think you're working in, you are forced to use BGRA_4444_8u from that point on in the workflow; so much for "native" DV, HDV, MPEG2, MPEG4, CineForm, RED, etc.
    Can you provide a list of what PixelFormats the Adobe filters support?
    It would be great to see it for CS5|4|3 & Prem Pro 2.0*
    Eg:     FILTER      PIXELFORMATS
         MOTION     BGRA_4444_8u   YUVA_4444_8u   V410   DV  etc etc
         BLUR          BGRA_4444_8u   YUVA_4444_8u   V410   DV  etc etc
         XXXXXX     BGRA_4444_8u   YUVA_4444_8u   V410   DV  etc etc
         YYYYYY     BGRA_4444_8u   YUVA_4444_8u   V410   DV  etc etc
    That would be a fantastic table for the user to have. That way you know when you're forcing the workflow from the native import pixel format type (and colorspace, for that matter) back to 8-bit RGBA, potentially losing 10-bit quality and screwing up the colorspace of Rec. 601 and Rec. 709.
    I mention Premiere Pro 2.0 because it was the last version that processed the Timeline single-threaded and thus the last version that single-threaded filters can be used on. Fortunately, plugins intended for later versions of Premiere will work with it because Prem Pro 2.0 understands the version 8 API of CS3 and CS4.
    For this reason (legacy plugins) I'm in the process of pulling some plugins from CS3 into the Prem Pro directory to see if they work. I'm willing to do that even though the filter I want to use is BGRA_4444_8u only; the filter is THAT important to me.
    Sidenote: this is why I'm SO disappointed with CS5. Forcing plugins to be 64-bit has killed the use of hundreds of 3rd party plug-ins created over the last 20 years, many of which are End Of Life and thus will only ever be 32-bit. I would have been a lot happier to have CS5 implement proxy editing (i.e. offline editing) and then, on Export, click a button to use the online (i.e. full-res) material instead. That feature would have allowed very fast timeline rendering (with or without CUDA) and much lower requirements on CPU/GPU and storage. In the few instances that you need to edit in native 2K or 4K then sure, do it, but I bet 90% of the time you can work in 1080p or lower to get your movie produced.
