Do IMAQdx and NI hardware support GigE Vision 2.0?

Hello,
From my brief research, GigE Vision 2.0 has some very nice features I'm after that aren't supported by the earlier GigE Vision 1.x standards:
Camera-side compression
IEEE 1588 (PTP) synchronization and timestamping
Does NI currently support these? If not, are there any plans to add support in the near future?
Thanks!

Hi Eric,
Thank you for your informative reply.
Perhaps I've been trying to solve an XY problem, so I'll explain my context.
I have a customer who wanted 1280 x 1024 images, saved to disk at 30 fps, from 4 cameras simultaneously. Each image is to be associated with a GPS time and position, so that the image can be correlated with other data sources (the system also performs other live measurements). He doesn't need raw uncompressed data, so we acquired 8-bit colour images and then saved them as JPEG files.
The customer chose a GigE camera, so we built him a system with 2x PXIe-8234 interfaces. We noticed that the PXIe-8820 controller's CPU was barely keeping up with the compression, so we upgraded to a PXIe-8135 controller, which handled the load comfortably.
For the timestamps, I simply logged the GPS time at the instant I called the IMAQdx Grab VI. There's a bit of uncertainty involved, but it's acceptable. I guess IEEE 1588 level precision isn't necessary after all.
Now, the same customer wants a new system that captures 1900 x 1200 images from 6 cameras simultaneously. I'm not sure the CPU can handle the load (and unfortunately we don't currently have a camera to run tests with), so I was looking at ways to reduce the CPU load. Camera-side compression came to mind, and I assumed that I could write the pre-compressed image directly to disk as a JPEG file. (I haven't yet checked whether the disk can handle the required write speeds, but that's a separate issue.)
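For a rough sense of the load increase, here is a back-of-envelope data-rate sketch. It assumes roughly 1 byte per pixel on the wire (8-bit data, as in the original system); the exact figure depends on the pixel format, and colour-decoded RGB would be about 3x this.

```python
# Back-of-envelope data rates for the two systems described above.
# Assumption (not from the thread): ~1 byte/pixel on the wire.

def total_rate_mb_s(width, height, fps, cameras, bytes_per_pixel=1):
    """Combined raw stream rate for all cameras, in MB/s."""
    return width * height * bytes_per_pixel * fps * cameras / 1e6

old_sys = total_rate_mb_s(1280, 1024, 30, 4)  # current 4-camera system
new_sys = total_rate_mb_s(1900, 1200, 30, 6)  # proposed 6-camera system

print(f"current: {old_sys:.0f} MB/s, proposed: {new_sys:.0f} MB/s, "
      f"ratio: {new_sys / old_sys:.2f}x")
```

So the new system moves roughly 2.6x the pixel data of the old one, which is also a reasonable first guess at how the JPEG-compression CPU load will scale.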
We are open to other protocols though. Would you recommend something other than GigE Vision? (No hardware has been selected yet, so we'll probably select based on the chosen protocol)
Thanks again!

Similar Messages

  • Does LabVIEW Vision support GigE Vision cameras using H.264 compression?

    I'd like to use a GigE Vision compliant camera that encodes its video data using H.264 compression. Can IMAQdx read this stream, decode the H.264-compressed frames, and write them into an IMAQ image?
    global variables make robots angry

    I have no info whatsoever on your question, but welcome back from your hiatus!
    Spoiler: Actually I know the answer but it is simply more fun sitting back and watching you figure it out yourself.

  • Do LabVIEW and NI hardware support Multifunction Vehicle Bus (MVB)?

    Hello,
    Most of the train systems today use MVB (Multifunction Vehicle Bus) communication. Does LabVIEW support it, and is there NI hardware compatible with it? Any information on this matter will be very helpful.
    Thanks in advance.
    Regards
    Prav

    Dear Christian,
    we requested technical and commercial information from duagon regarding the MVB cPCI D215 board, and they told us (and wrote in the offer) that no LabVIEW driver is available for that board.
    We can only use the API, writing at least a wrapper in LabVIEW.
    Do you have different information about that?
    Where can we find the driver you mention?
    Regards
    Luigi Magni (System Engineer - CTA)

  • GigE Vision camera IP and port setting in multicast mode (IMAQdx)

    Hello, Everybody
    I have NI-IMAQdx 3.5.0 and a Basler camera (scA640-74gc) with a GigE Vision interface.
    I run that camera on my computer as the controller (multicast mode) with IP 239.192.0.1.
    I detected that camera on another computer (to run it as a listener) using LabVIEW.
    My problem is:
    I run the Pylon Viewer from Basler (monitor mode) after detecting that camera.
    It runs successfully (looking at the camera details, it has the same IP, 239.192.0.1, but the port changes every time I stop and restart the acquisition on the controller computer).
    Where can I set a fixed port in multicast mode?
    When I use the Pylon Viewer on the controller computer and set the mode to multicast, I can set the IP and port; then on the listener computer I run the Pylon Viewer (monitor mode) and see the same IP and port that I set on the controller.
    I need a way in LabVIEW to set the port number in multicast mode. How can I do that?
    Best Regard
    Alzhrani

    Hi Alzhrani,
    Unfortunately it is a bit more complicated because you would have to use the GVCP protocol and do a register read of a specific register on the camera that stores the multicast UDP port (that IMAQdx programs from the controller). However, this likely requires access to the GigE Vision specification in order to be able to format the messages correctly.
    Eric
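For illustration, the GVCP request Eric mentions is a small UDP datagram. A rough sketch of building a READREG command, based on the publicly documented GVCP framing, might look like the following. The register offset here is a placeholder, not the real multicast-port register; the actual offset must be taken from the GigE Vision specification.

```python
import struct

GVCP_KEY = 0x42           # magic key byte for GVCP commands
FLAG_ACK_REQUIRED = 0x01  # ask the device to acknowledge
READREG_CMD = 0x0080      # READREG command id

def build_readreg(register_address, req_id=1):
    """Build a GVCP READREG request for one 32-bit register.

    Header (8 bytes, big-endian): key, flags, command,
    payload length, request id; payload: the register address."""
    payload = struct.pack(">I", register_address)
    header = struct.pack(">BBHHH", GVCP_KEY, FLAG_ACK_REQUIRED,
                         READREG_CMD, len(payload), req_id)
    return header + payload

# Hypothetical register offset -- look up the real one in the spec.
pkt = build_readreg(0x00000A00)
print(pkt.hex())
```

The packet would then be sent to the camera's UDP port 3956 (the well-known GVCP port) and the READREG acknowledge parsed for the register value.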

  • IMAQdx with JAI and a hardware trigger

    Hello,
    We are working with two 'JAI AD-080' cameras and IMAQdx, and have two problems regarding the triggering and frame grabbing:
    1)  We are unable to change the trigger source through the IMAQdx property node or the Vision Acquisition Express block.
    2)  When we manually set the trigger source property to its correct value using NI Measurement and Automation Explorer (MAX), we can't get all four CCDs running at once without bad packets, e.g. horizontal black lines across the images.
    Our goal is to obtain the images from the 4 CCDs at a rate of 5 Hz using our hardware trigger. We can already connect and obtain all four images at full speed, but in that case the 5 Hz trigger is not being used by the cameras.
    Details of the setup:
    NI 2011, Windows 7 
    Two (2) JAI AD-080 cameras (with 2 CCD's each), GigE cameras connected over Ethernet
    Hardware triggering at 5 Hz, on pin:  'Line 7 - TTL In 1'
    Details of the problem:
    (1)  Setting the trigger source not possible in Vision Express or IMAQdx property node
    In order to use our hardware trigger, we have to set the camera property 'CameraAttributes::AcquisitionControl::TriggerSource' to a specific pin (Line 7 - TTL In 1).  This property is available in MAX, but is not usable in the Vision Express block.  The property is present, but the values are invalid.  Here is what I think is happening: the list of properties is read from the camera, but LabVIEW does not know the valid values for that property and populates the value drop-down menu with whatever was loaded last.  This can be seen in figures 1 and 2, where the values in the drop-down menu change.
    Similarly, this 'Trigger Source' property cannot be changed programmatically using the IMAQdx property node shown here: http://digital.ni.com/public.nsf/allkb/E50864BB41B54D1E8625730100535E88
    I have tried all numeric values from 0 to 255, and most give me a value out of range error, but the ones that do work result in no change to the camera.
    (2)  Lost packets in image during triggering
    If I set the 'Trigger Source' property in MAX to the correct pin, save the configuration, and then use the Vision Acquisition Express block to connect to the camera, the triggering works properly (the hardware trigger is used and the images are received by LabVIEW at 5 Hz).  However, this only works for one CCD: if I use the same code for all four CCDs at the same time, I get black bars on the images, and at least one of the CCDs results in an error in 'IMAQdx Get Image.vi' (codes -1074360308, -1074360316).
    I tested this by using the configuration attributes created by the Vision Express block (the string used in 'IMAQdx Read Attributes From String.vi') in the code we have been developing, as well as a very simplified version which I have attached.  Those configuration attributes are saved in the text files JAI_Config_TrigON.txt and JAI_Config_TrigOFF.txt, for triggering on and off respectively.
    So my final questions are:
    Is there a problem with the IMAQdx because it doesn't recognize the trigger source value?
    Do you have any suggestions for why there are bad packets and trouble connecting to the cameras when I load them with the trigger on attributes?
    Thank you for your time - 
    Attachments:
    Fig1_VisionAcq.png ‏387 KB
    Fig2_VisionAcq.png ‏442 KB
    Fig3_BadPackets.png ‏501 KB

    Hello,
    Thank you for your response, especially the speed with which you responded and the level of detail.
    I have not fully solved the problem in LabVIEW yet, but I was able to remove the black lines and apparitions from the images using different camera parameters.
    Since this was a significant victory I wanted to update:
    1)  Version of IMAQdx?
    I have IMAQdx 4.0, but the problem persists.
    2)  Setting configuration files
    Your suggestion to pay attention to the order in which the properties are set as well as using the MAX settings is very helpful.  I have not explored this feature fully, but I was able to successfully use MAX to set the default settings and then open the cameras programmatically without loading a new configuration file.  
    3)  Bandwidth limitations
    I limited the CCDs to 250 Mbit/s, but the lost packets (missing lines / apparitions) were still present.
    4)  JAI AD-080GE Specifics
    I am using the JAI AD-080GE, and there are two settings for this camera that I want to mention:
    JAI Acquisition Control>> Exposure Mode (JAI)>>Edge pre-select
    JAI Acquisition Control>> Exposure Mode (JAI)>>Delayed readout EPS trigger
    The "Edge pre-select" mode uses an external trigger to initiate the capture, and then the video signal is read out when the image is done being exposed.
    The "Delayed readout EPS trigger" mode can delay the transmission of a captured image relative to the frame start. JAI recommends it to prevent network congestion when several cameras are triggered simultaneously on the same GigE interface. The frame is started when 'trigger 0' is pulsed, stored on the camera, and then transmitted on 'trigger 1'.
    The default selection is "Delayed readout EPS trigger"; however, I do not know how to set 'trigger 1' properly yet, and I only have one connection available on the embedded board that is handling the triggering right now (I don't know whether 'trigger 1' needs to be on a separate line or not). Incidentally, the system does not work in this mode and gives me the black lines (aka lost packets / apparitions).
    I was able to remove the black lines and apparitions using the "Edge pre-select" option on all 4 images with a 5 Hz simultaneous trigger. I confirmed this using the "JAI Control Tool" that ships with the cameras. I am unable to make this happen in MAX, though, as the trigger mode is automatically switched to 'off' if I use the mode JAI Acquisition Control>>Exposure Mode (JAI)>>Edge pre-select.
    That is, when manually switching the trigger mode to 'on' in MAX, the "JAI Acquisition Control>>Exposure Mode (JAI)>>Delayed readout EPS trigger" option is forced by MAX. The reverse is also forced, so that if EPS mode is chosen, "Trigger Mode Off" is forced.
    Additionally, there is a setting called:
    Image Format Control>>Sync Mode>>Sync     &     Image Format Control>>Sync Mode>>Async
    When the "Sync" option is chosen the NIR CCD uses the trigger of the VIS CCD.  In addition to using the "Edge pre-select" option, the "Sync" option improves the triggering results significantly.  
    5)  Future troubleshooting
    Since I cannot set the camera parameters manually in MAX (because MAX forces different combinations of parameters, as described in 4), I am going to explore manually editing the configuration file and loading those parameters at startup. This can be tricky, since a bad combination will stall the camera, but I can verify the settings in the JAI Control Tool first. There is also an SDK shipped with the cameras, so I may be able to use those commands. I haven't learned C/C++ yet, but I have teammates who have.

  • OS X support for GigE Vision cameras?

    Is there any support in OS X for the huge line of GigE Vision cameras now available? There are only a few USB and FireWire cameras that work well with OS X. If I could use GigE cameras, it would open up a multitude of options for me, but I can't find any mention of support or SDKs for the GigE Vision protocol in Mountain Lion.

    Transini wrote:
    Is there any support in OS X for the huge line of GigE Vision cameras now available? ...
    I do not use Gige, so I cannot test this with OS X, but it seems not.
    This GEViCAM page http://www.gevicam.com/products/peripherals.html states that a PC is "... required for GEViCAM cameras..."
    This GEViCAM page http://www.gevicam.com/downloads/sdk.html states that "... GEViCAM's software is officially supported under Microsoft Windows... Please contact us on SDK for other OS."
    I suggest you contact GEViCAM directly http://www.gevicam.com/contactus/gevicam.html and ask them if their product can work with your system.
    Message was edited by: EZ Jim
    Mac OSX 10.8.2

  • HT201343 I have a late 2008 Mac Pro Quad Core 2x2.8 Intel Xeon with Lion 10.7.5- do I need Maverick and will this hardware support the Airplay Mirroring?

    I have a late 2008 Mac Pro Quad Core 2x2.8 Intel Xeon with Lion 10.7.5… do I need Mavericks, and will this hardware support AirPlay Mirroring?

    It doesn't mention anything about a Mac Pro. So no.
    http://support.apple.com/kb/HT5404
    If your computer did support it, you would still need 10.8 or 10.9 for Airplay to work.

  • Running average or median of images from a GigE Vision camera

    Hi all,
    I have a GigE Vision camera (Basler's). I want to continuously grab the images (video) and output the average or median of the frames (at least 10 frames). My camera is set to Mono 8, and the width and height are both 1000 pixels. I tried to compute the average by simple addition and division, but was unsuccessful in getting the final images. Can anyone check where I might be going wrong? Attached is my VI.
    thanks
    Attachments:
    average image.vi ‏53 KB

    MoviJOHN wrote:
    That's easy.  I don't even need to see your code to tell you what you're doing wrong.  You are probably storing your data as 8-Bit unsigned, taking ten images and adding them together.  The problem is that the largest value U8 can hold is 255, so you run out of space for the numbers to increase.
    Grab the U8 image, cast it to a SGL, and convert image to array.  Then add the previous array to the current array in a for loop, and divide by the total images.  You can either display the image as a SGL, or cast it to some other type for saving to disk, since SGL is not supported in any common image file format.
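MoviJOHN's accumulate-in-a-wider-type approach can be sketched in NumPy terms (NumPy arrays standing in for IMAQ images here; in LabVIEW the equivalent is IMAQ Cast plus array arithmetic):

```python
import numpy as np

def average_frames(frames):
    """Average U8 frames without overflow: widen to float32 before
    summing, then divide by the frame count."""
    acc = np.zeros(frames[0].shape, dtype=np.float32)
    for frame in frames:
        acc += frame.astype(np.float32)  # cast each U8 frame up first
    return acc / len(frames)

# Ten synthetic 1000x1000 Mono8 frames, as in the original question
rng = np.random.default_rng(seed=0)
frames = [rng.integers(0, 256, size=(1000, 1000), dtype=np.uint8)
          for _ in range(10)]

avg = average_frames(frames)
print(avg.dtype, avg.shape)  # float32 (1000, 1000)
```

Summing directly in U8 wraps around at 255 (ten frames of value 250 sum to 196, not 2500), which is exactly the failure MoviJOHN describes.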
    Hi MoviJohn,
    I tried putting IMAQ Cast after IMAQdx Get Image and feeding the cast image output to Image To Array, but it didn't work; I don't see the images at all.
    Attachments:
    average image.vi ‏54 KB

  • Poll:  How many users on your apps and what hardware are you using?

    I am curious about the collective experience and performance of others' APEX applications and how much "horsepower" your hardware has.
    I have a need to support 20-30 users and am using a Windows XP machine with 1.99 GB of RAM and a 2.8 GHz Pentium 4 CPU.
    Any thoughts on this?

    For my personal use, I'm renting a Celeron 2.4 GHz with 1 GB RAM running CentOS, Oracle XE and APEX 3.2 at a data center 1200 miles away. It seems to work fine. I find the biggest issue is network bandwidth: I originally had a 10 Mbit link to the data center's backbone and now have 100 Mbit. The difference was night and day.
    For work (speaking only of our Apex servers), we've got:
    One Dell 2-way P4 with 4 GB RAM and 10,000 RPM drives.
    It runs two 10g instances and several APEX apps, and it also runs Enterprise Manager Grid Control for 40+ databases. No issues.
    Multiple prod/dev/test servers.
    All are virtual, running on VMware ESX 3.5. Each database VM has 2 GB of RAM and 2 "CPUs" running 11g/APEX 3.2 on Oracle Linux. They all work fine.
    I find that a poorly written app can crush just about any hardware you can throw at it. Luckily we don't have any of those ;).
    I suspect that your hardware would be fine, particularly if running Linux. I cannot speak to Oracle/Apex on XP. I suspect you might be disappointed, mostly because it is tuned for interactive use rather than background applications (services). Not trying to be a Windows vs Linux troll here, just saying that if you are running a workstation with Oracle/Apex and you on it and expect to support 20-30 other people, you are all likely to be disappointed. Apex/Oracle should be on dedicated server hardware with a server OS.
    You also aren't giving us much info on your hardware; 7200RPM IDE/SATA drives aren't going to be as fast as 15KRPM SCSI or SAS, etc.
    It wouldn't surprise me if your well written application was able to support 20-30 users with the hardware you are describing (though I won't vouch for the OS).
    Google for "apex.oracle.com" and "poweredge 1950"; that will take you to this link:
    http://joelkallman.blogspot.com/2009/06/who-says-application-express-cant-scale.html
    The whole thing apparently serves up 6 million page hits a week with a 1.5 GB SGA. That's a lot of work on a pretty small box.
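As a quick sanity check on that figure (assuming uniform traffic, which real loads never are):

```python
# "6 million page hits a week" as an average request rate
hits_per_week = 6_000_000
seconds_per_week = 7 * 24 * 3600  # 604,800 s
print(f"{hits_per_week / seconds_per_week:.1f} hits/s on average")
```

That is roughly 10 hits per second sustained, with peaks presumably well above that.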

  • XML file for GigE Vision camera

    Hello,
    I am working on a design of a GigE Vision camera. It should be a very simple line-scan camera. I have implemented the whole required GigE Vision register set and can communicate with the camera in MAX. The problem is that I need to create the XML file for the camera. There is this XML file here:
    http://www.emva.org/cms/upload/Standards/GenICam_Downloads/SFNC_Reference_Version_2_0_0_Schema_1_1.x...
    What should I do with it? My first version of the camera will have 2 functions: turn it on and turn it off. I am a bit confused, because the XML file template has lots of features that I don't really need. Should I delete them? Or leave them in with default values? What does a minimal configuration XML file need?
    Regards,
    Linus
    Solved!
    Go to Solution.

    linru wrote:
    Thank you very much! Perhaps I was too concentrated on digging through various documents and missed the important info.
    And one more question: how does NI software build the *.icd file? By reading XML file? Or by reading registers in my camera? 
    Both. The XML is processed by the GenApi software component, which translates it into register operations. What features are visible is controlled both by the XML file and by the camera registers (features in the XML may be conditionally available based on the <pIsImplemented> tag). The IMAQdx ICD file then contains the subset of features in the XML file that are available and are tagged as <Streamable>, meaning they can be saved to a settings file.
    Eric
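As a toy illustration of the Streamable filtering Eric describes, the selection step might look like the sketch below. The XML here is a made-up minimal fragment, not a schema-valid GenICam file; real files follow the GenICam schema.

```python
import xml.etree.ElementTree as ET

# Made-up minimal fragment in the spirit of a GenICam feature
# description -- not a real GenICam file.
XML = """
<RegisterDescription>
  <Integer Name="Gain">
    <Streamable>Yes</Streamable>
  </Integer>
  <Integer Name="DeviceTemperature">
    <Streamable>No</Streamable>
  </Integer>
  <Enumeration Name="PixelFormat">
    <Streamable>Yes</Streamable>
  </Enumeration>
</RegisterDescription>
"""

def streamable_features(xml_text):
    """Return names of features tagged <Streamable>Yes</Streamable>,
    i.e. those that could go into a settings (ICD-style) file."""
    root = ET.fromstring(xml_text)
    return [node.get("Name") for node in root
            if node.findtext("Streamable") == "Yes"]

print(streamable_features(XML))  # ['Gain', 'PixelFormat']
```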

  • GigE vision on NI cRIO-9068

    Hello:
    We are planning a project in which we are intending to use GigE vision cameras and embedded vision processing. Our client is interested in the cRIO-9068 as the embedded platform.
    The question is: Is there support for GigE Vision on the cRIO-9068?
    I cannot find a document that says so, so I guess no official support is provided. If this is the case, I would like to know why. I believe that VxWorks targets do not support GigE Vision, but this cRIO runs Linux, so from the outside it seems it would be possible to provide GigE Vision support there. The only devices that seem to support it run Phar Lap ETS.
    Thanks in advance for your kind reply.
    Robst - CLD
    Using LabVIEW since version 7.0
    Solved!
    Go to Solution.

    Hello Bruce:
    The application will be in a harsh environment, in an industrial setting; that is why we want a rugged controller. We are planning to use the FPGA as a coprocessing unit for the RT processor, so we may have to pass the image to the FPGA and get it back afterwards, as you say. The GigE requirement is important, because the cameras will be spread out, and the maximum cable length of other buses is an issue.
    Best regards.
    Robst - CLD
    Using LabVIEW since version 7.0

  • GigE Vision how can I read the camera attributes

    Hi,
    we are using CVI 2012 with NI-IMAQdx, and we want to read out the camera attributes from a GigE Vision camera.
    We tried the IMAQdxEnumerateAttributes2 function, but it didn't work; we got no information back.
    Does anyone have an idea how we can read out the camera attributes?
    thanks in advance
    Oliver

    Hello Topper,
    yes, we tried this, but we didn't succeed.
    Meanwhile, however, we know why; here is our solution:
    // Open a session to the selected camera
    // (initializes the G4GigE line-scan sensor)
    rw = IMAQdxOpenCamera(G4GigE.CamName, IMAQdxCameraControlModeController, &G4GigE.session);
    IMAQdxEnumerateAttributes2(G4GigE.session, attributeInformationArray, &uiCount, "CameraAttributes::CustomFeatures::Illumination", IMAQdxAttributeVisibilityAdvanced);
    strcpy(String, "CameraAttributes::CustomFeatures::Illumination:lot1:lot_1_RED");
    IMAQdxSetAttribute(G4GigE.session, String, IMAQdxValueTypeU32, 10);
    It is necessary to type in the correct path name to address the attribute correctly; the path segments are separated by "::". If the path is not correct, there will be no result.
    To get the number of attributes, it seems that the highest-level path name has to be addressed.
    Since we found this out, we have been able to run our application.
    greetings
    Oliver

  • My WiFi keeps disconnecting, and even after reporting it and following support's suggestions the phone hasn't been replaced or repaired under warranty

    I think I have had enough. It has been a long time going back and forth between me and you; I am not getting any solution, and you keep offering me the same things again and again.
    I want to take a step back and list a few things which have happened so far:
    I started chatting with Apple support on this issue in September; please refer back to your chats on this serial number.
    I have factory reset my phone about 20 times since then, as each time they asked me to follow a few steps, which made me lose my data because your iCloud doesn't work properly; as stated by your own customer care support people, sometimes the backup doesn't happen, so please also take a backup on your laptop.
    Send phone diagnostics by clicking on a link.
    Reset your WiFi settings.
    Call Apple support after 24 hours of observation.
    The next day, when the same issue occurs, an extreme apology.
    Take a backup on both the computer and iCloud, and contact Apple support once that's done.
    Call Apple support again.
    Factory reset via the "erase all contents and settings" option. Once done, call back again.
    Wait and analyse for a whole day after setting up the Apple ID and data.
    The WiFi issue still exists.
    Call Apple support again.
    Between all this, I had to explain the issue each time to a new representative. I have gone through the above process at least 7-8 times. Please go and refer to your call notes.
    So finally, after escalating it several times, one of your supervisors told me that the issue is a hardware issue and the phone can only be replaced, as they have tried everything.
    The supervisor assured me that he had recommended replacement in the notes and asked me to visit a center.
    I called the local distributor and set up an appointment, but unfortunately couldn't make it. So I called to reschedule; as I am a working woman, I don't have all the time in life to keep interacting with associate after associate, so I thought of walking into a nearby Apple store to lodge a complaint. The supervisor told me that I have warranty left and the phone would be replaced without any cost, and also that if I report an issue under warranty and Apple is not able to resolve it, the warranty stands until it is completely resolved to the customer's satisfaction.
    I went in to give my phone to tech support; they kept it for analysis and came back saying the issue doesn't exist. I was getting a disconnection in my WiFi every now and then; my data plan was always being used, and I fortunately have proof of that.
    The customer rep told me to talk to Apple support: if they say so, they will replace it. One more place the customer has to go.
    Now, back to square one: they say the phone is damaged and can't be replaced. First of all, the phone is not damaged, and you are just finding reasons not to replace a faulty phone.
    It randomly disconnects, and it's listed as a known issue on your website too. Had this not been listed, I would have never thought it was the phone. Then I started to observe that it's the phone that drops the connection, not the WiFi networks, as it happened everywhere: home, office, and at a friend's place.
    The best I can do is call you when this happens, if you promise that an Apple tech employee can come to whatever location I suggest at that point; and do not be surprised if the phone behaves normally when the person arrives, as it's random.
    Now you want pictures of my phone and you want to analyse further.

    After making an appointment and waiting over a week, I was a few minutes late for my appointment and was told by Apple I would have to make a new appointment. The shop was Chatswood in Sydney, and the manager, Matthew, was very pathetic. I travelled 1 hour each way and was treated very poorly. I must comment and say this was my first Apple device, and I will never understand how people keep buying these devices. The apps seem really good, if only the hardware and the support could match. I wish Matthew all the best in his next position somewhere else; he obviously should not be in the position of managing an Apple store.

  • I need overall hardware support for a Pavilion DV6-3043TX

    I need overall hardware support for my Pavilion DV6-3043TX. Display: flashing. HDD: SMART error 301. Keyboard: unserviceable. USB port: disconnects continuously. Thermal shutdown: rapid shutdowns due to rising temperature, even though I periodically clean the vents and use a cooling pad recommended by an HP dealer. The CD/DVD drive is not working properly. So I need support from HP. My email is {Personal Information Removed}

    The SMART error is usually a fatal hard drive error, and the drive probably needs to be replaced. Here is a quick search of this forum for SMART error 301:
    http://h30434.www3.hp.com/t5/forums/searchpage/tab/message?filter=location&location=forum-board%3ALa...
    I think the high-heat situation has damaged your video chip at the very least, and someone will have to open the laptop and reflow, reball, or replace the video chip. If this machine is under warranty, you need to call or contact HP before it goes out of warranty.
    Reminder: Please select the "Accept as Solution" button on the post that best answers your question. Also, you may click on the white star in the "Kudos" button for any helpful post to give that person a quick thanks. These feedback tools help keep our community active, so you receive better answers faster.

  • Solaris 10 & Hardware Support

    I understand that Linux depends on community support to develop drivers for hardware that isn't supported by the manufacturer. How does Solaris stand with support from manufacturers, and support from Sun or popular manufacturers? I'd like to install Solaris 10 on my x86 desktop, but if it can't support my nForce video chip then there isn't much question that I would stick with Linux.
    Is it possible, specifically for an nForce video chip, to get basic video output that doesn't necessarily take advantage of graphics acceleration?
    Since nVidia is partnered with Sun to bring some of their high-end cards to Solaris, should we expect to see some of their other hardware supported also?
    I'm curious about other hardware as well, such as Ethernet cards or sound cards. Is it really feasible to run Solaris on a desktop PC, or does it need to be a dedicated Solaris machine built from the ground up with support in mind, because supported hardware is so scarce?

    The answer isn't easy to find, because the breadth of the problem is wide.
    As I write, I'm on an Athlon 64 running an NVidia nForce3 250Gb chipset. Does it work? Yes. Does everything work? No. There are some parts of the chipset which don't have Solaris drivers yet. Unfortunately, NVidia also keeps their driver source closed, and even for Linux they don't release the source code. In my case, neither the network nor the audio drivers are available. It may be difficult to get the Solaris drivers ported unless NVidia does the work. AFAICT, that is not part of the Sun/NVidia announcement, though I'd be happy to be proved wrong.
    A number of sound cards do have drivers. On this machine, the NVidia audio device is not recognized, but I have an iMic USB audio dongle that seems to work well.
    For Ethernet, I have an old Rhine-based card (D-Link). A driver for this, including the needed 64-bit amd64 driver :-), is available from
    http://homepage2.nifty.com/mrym3/taiyodo/eng/
    Some other vendors offer Solaris drivers directly, so you'll find some have more support than others.
    -- richard
