Capture Camera Image
I want to capture an image from a camera, give it a specific
name and store it in a specific directory. In this case the camera
is the Microsoft Lifecam VX-3000.
The end result is to create a form that collects some
information about an item and has a button that makes the camera take a
picture, which is then renamed to the item number.
Any ideas which direction to go?
1) Detect the Webcam on the user's machine from the web application (i.e. when the user clicks the 'Add Photo' tool).
There's not really any good way to do this with JMF. You'd have to somehow create a JMF Web Start application that will install the native JMF binaries, then kick off the capture device scanning code from the application, and then scan through the list of devices found to get the MediaLocator of the web cam.
2) When the user confirms the save, save the image on the user's machine at some temporary location with a unique file name.
You'd probably be displaying a "preview" window, and then you'd just want to capture the image. There are a handful of ways you could capture the image, but it really depends on your situation.
3) Upload the image to the server from the temporary location.
You can find out how to do this on Google.
All things told, this application is probably more suited to being an FMJ (Freedom for Media in Java) application than a JMF application. JMF relies on native code to capture from web cams, whereas FMJ does not.
Alternately, you might want to look into Adobe Flex for this particular application.
Similar Messages
-
Camera - Torch 9800 - Unable to capture the image
Hey everyone!
Had a torch 9800 for 18 months now and always been working fine... until today.
I went to use the camera and it all seemed to open up fine but instead of seeing what the lens sees the screen was black. The centre focus [ ] was there and I could zoom back and forth just a black screen.
When I click for a photo I get a message " Unable to capture the image "
I have searched the web and found nothing conclusive. Battery pulls, a full wipe and reinstallation of the OS have made no difference to its lack of function.
Does anyone out there have a suggestion?
I am having the exact same issue. I also tried rebooting, taking out the battery and SIM card and cleaning out the back, alas no difference.
-
Capture Web Cam image in APEX and Upload into the Database
Overview
By using a Flash object, you should be able to interface with a USB web cam connected to the client machine. There are a couple of open source ones that I know about, but the one I chose to go with is by Taboca Labs and is called CamCanvas. This is released under the MIT license, and it is at version 0.2, so not very mature - but in saying that, it seems to do the trick. The next part is to upload a snapshot into the database - in this particular implementation, this is achieved by taking a snapshot and putting that data into the canvas object. This is a new HTML5 element, so I am not certain what the IE support would be like. Once you have the image in the canvas, you can then use the provided function convertToDataURL() to convert the image into a Base64 encoded string, which you can then convert into a BLOB. There is however one problem with the Base64 string - APEX has a limitation of 32k for an item value, so it can't be submitted by normal means, and a workaround (AJAX) has to be implemented.
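The browser-side result of all this is a data URL: the string "data:image/png;base64," followed by the Base64-encoded PNG bytes, and the server side has to strip that prefix before decoding. As a rough sketch of that transformation - in plain Java rather than the PL/SQL used later, and with class/method names that are mine, purely for illustration:

```java
import java.util.Base64;

public class DataUrlDecoder
{
    // Strip everything up to and including the first comma (the
    // "data:image/png;base64," prefix) and decode the remainder.
    public static byte[] decodeDataUrl(String dataUrl)
    {
        int comma = dataUrl.indexOf(',');
        String base64Part = dataUrl.substring(comma + 1);
        return Base64.getDecoder().decode(base64Part);
    }

    public static void main(String[] args)
    {
        // a tiny fake payload standing in for real PNG bytes
        String dataUrl = "data:image/png;base64,"
                + Base64.getEncoder().encodeToString("PNG bytes".getBytes());
        byte[] decoded = decodeDataUrl(dataUrl);
        System.out.println(new String(decoded)); // prints: PNG bytes
    }
}
```

The same substr/instr-on-comma idea shows up in the PL/SQL save process later in this post.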
Part 1. Capturing the Image from the Flash Object into the Canvas element
Set up the Page
Required Files
Download the tarball of the webcam library from: https://github.com/taboca/CamCanvas-API-/tarball/master
Upload the necessary components to your application. (The Flash swf file can be taken from one of the samples in the Samples folder. In the root of the tarball there is also a swf file, but it seems to be a different file from the one in the samples - so I just stick with the one from the samples.)
Page Body
Create a HTML region, and add the following:
<div class="container">
<object id="iembedflash" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=7,0,0,0" width="320" height="240">
<param name="movie" value="#APP_IMAGES#camcanvas.swf" />
<param name="quality" value="high" />
<param name="allowScriptAccess" value="always" />
<embed allowScriptAccess="always" id="embedflash" src="#APP_IMAGES#camcanvas.swf" quality="high" width="320" height="240"
type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" mayscript="true" />
</object>
</div>
<p><a href="javascript:captureToCanvas()">Capture</a></p>
<canvas style="border:1px solid yellow" id="canvas" width="320" height="240"></canvas>
That will create the webcam container, and an empty canvas element for the captured image to go into.
Also, have a hidden, unprotected page item to store the Base64 code in - I called mine P2_IMAGE_BASE64.
HTML Header and Body Attribute
Add the Page HTML Body Attribute as:
onload="init(320,240)"
JavaScript
Add the following in the Function and Global Variable Declarations for the page (mostly taken out of the samples provided)
//Camera related functions
var gCtx = null;
var gCanvas = null;
var imageData = null;
var c = 0;

function init(ww, hh) {
    gCanvas = document.getElementById("canvas");
    var w = ww;
    var h = hh;
    gCanvas.style.width = w + "px";
    gCanvas.style.height = h + "px";
    gCanvas.width = w;
    gCanvas.height = h;
    gCtx = gCanvas.getContext("2d");
    gCtx.clearRect(0, 0, w, h);
    imageData = gCtx.getImageData(0, 0, 320, 240);
}

//Called by the Flash object, one line of pixels at a time
function passLine(stringPixels) {
    var coll = stringPixels.split("-");
    for (var i = 0; i < 320; i++) {
        var intVal = parseInt(coll[i]);
        var r = (intVal >> 16) & 0xff;
        var g = (intVal >> 8) & 0xff;
        var b = intVal & 0xff;
        imageData.data[c + 0] = r;
        imageData.data[c + 1] = g;
        imageData.data[c + 2] = b;
        imageData.data[c + 3] = 255;
        c += 4;
        if (c >= 320 * 240 * 4) {
            c = 0;
            gCtx.putImageData(imageData, 0, 0);
        }
    }
}

function captureToCanvas() {
    var flash = document.getElementById("embedflash");
    flash.ccCapture();
    var canvEle = document.getElementById('canvas');
    $s('P2_IMAGE_BASE64', canvEle.toDataURL()); //Assumes the hidden item name is P2_IMAGE_BASE64
    clob_Submit(); //this is part of Part 2 (AJAX submit of the value to a collection)
}

In the footer region of the page, add a loading image to show whilst the data is being submitted to the collection (hidden by default):

<img src="#IMAGE_PREFIX#processing3.gif" id="AjaxLoading"
style="display:none;position:absolute;left:45%;top:45%;padding:10px;border:2px solid black;background:#FFF;" />

If you give it a quick test, you should be able to see the webcam feed and, by clicking the capture link between the two elements, capture it into the canvas element - it might throw a JS error, since the clob_Submit() function does not exist yet.
*Part 2. Upload the image into the Database*
As mentioned in the overview, the main limitation is that APEX can't submit item values larger than 32k. I hope the APEX development team will fix this limitation in a future release, as the workaround isn't really good from a maintainability perspective.
In the sample applications, there is one that demonstrates saving values to the database that are over 32k, which uses an AJAX technique: see http://www.oracle.com/technetwork/developer-tools/apex/application-express/packaged-apps-090453.html#LARGE.
*Required Files*
From the sample application, there is a script you need to upload and reference in your page. You can either install the sample application I linked to, or grab the script from the demonstration I have provided - it's called apex_save_large.js.
*Create a New Page*
Create a page to POST the large value to (I created mine as page 1000), and create the following process, with the condition Request = SAVE. (All this is in the sample application for saving large values.)

declare
    l_code clob := empty_clob();
begin
    dbms_lob.createtemporary( l_code, false, dbms_lob.session );
    for i in 1..wwv_flow.g_f01.count loop
        dbms_lob.writeappend(l_code, length(wwv_flow.g_f01(i)), wwv_flow.g_f01(i));
    end loop;
    apex_collection.create_or_truncate_collection(p_collection_name => wc_pkg_globals.g_base64_collection);
    apex_collection.add_member(p_collection_name => wc_pkg_globals.g_base64_collection, p_clob001 => l_code);
    htmldb_application.g_unrecoverable_error := TRUE;
end;

I also created a package for storing the collection name, which is referred to in the process:

create or replace package wc_pkg_globals
as
    g_base64_collection constant varchar2(40) := 'BASE64_IMAGE';
end wc_pkg_globals;

That is all that needs to be done for page 1000. You don't use this for anything else, *so go back to edit the camera page*.
*Modify the Function and Global Variable Declarations* (to be able to submit large values.)
The below again assumes the item you want to submit is named 'P2_IMAGE_BASE64', the condition of the process on the POST page is Request = SAVE, and the POST page is page 1000. This has been taken straight from the sample application for saving large values.

//32K limit workaround functions
function clob_Submit(){
    $x_Show('AjaxLoading');
    $a_PostClob('P2_IMAGE_BASE64','SAVE','1000',clob_SubmitReturn);
}

function clob_SubmitReturn(){
    if(p.readyState == 4){
        $x_Hide('AjaxLoading');
        $x('P2_IMAGE_BASE64').value = '';
    }else{return false;}
}

function doSubmit(r){
    $x('P2_IMAGE_BASE64').value = '';
    flowSelectAll();
    document.wwv_flow.p_request.value = r;
    document.wwv_flow.submit();
}

Also, reference the script that the above code makes use of in the page header:

<script type="text/javascript" src="#WORKSPACE_IMAGES#apex_save_large.js"></script>

This assumes the script is located in workspace images and not associated to a specific app; otherwise reference #APP_IMAGES#.
*Set up the table to store the images*

CREATE TABLE "WC_SNAPSHOT" (
    "WC_SNAPSHOT_ID" NUMBER NOT NULL ENABLE,
    "BINARY" BLOB,
    CONSTRAINT "WC_SNAPSHOT_PK" PRIMARY KEY ("WC_SNAPSHOT_ID")
);

create sequence seq_wc_snapshot start with 1 increment by 1;

CREATE OR REPLACE TRIGGER "BI_WC_SNAPSHOT" BEFORE
INSERT ON WC_SNAPSHOT FOR EACH ROW BEGIN
    SELECT seq_wc_snapshot.nextval INTO :NEW.wc_snapshot_id FROM dual;
END;
Then finally, create a page process to save the image:

declare
    v_image_input CLOB;
    v_image_output BLOB;
    v_buffer NUMBER := 64;
    v_start_index NUMBER := 1;
    v_raw_temp raw(64);
begin
    --discard the bit of the string we don't need (the data URL prefix)
    select substr(clob001, instr(clob001, ',')+1, length(clob001)) into v_image_input
    from apex_collections
    where collection_name = wc_pkg_globals.g_base64_collection;

    dbms_lob.createtemporary(v_image_output, true);

    for i in 1..ceil(dbms_lob.getlength(v_image_input)/v_buffer) loop
        v_raw_temp := utl_encode.base64_decode(utl_raw.cast_to_raw(dbms_lob.substr(v_image_input, v_buffer, v_start_index)));
        dbms_lob.writeappend(v_image_output, utl_raw.length(v_raw_temp), v_raw_temp);
        v_start_index := v_start_index + v_buffer;
    end loop;

    insert into WC_SNAPSHOT (binary) values (v_image_output);
    commit;
end;

Create a save button, and add some sort of validation to make sure the hidden item has a value (i.e. an image has been captured). Make the above process conditional on Request = button name so it only runs when you click Save (you probably want to disable this button until the data has been completely submitted to the collection - I haven't done this in the demonstration).
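One detail worth noting about that decode loop: v_buffer is 64, and the chunk-by-chunk decode only works because 64 is a multiple of 4 - every 4 Base64 characters encode exactly 3 bytes, so chunks aligned to 4-character boundaries decode independently. Here is the same chunked-decode idea sketched in plain Java (class and method names are mine, purely for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.Base64;

public class ChunkedBase64
{
    // Decode a Base64 string in fixed-size chunks, as the PL/SQL loop does.
    // chunkSize must be a multiple of 4, otherwise a chunk boundary would
    // split a 4-character encoding group and corrupt the output.
    public static byte[] decodeInChunks(String b64, int chunkSize)
    {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int start = 0; start < b64.length(); start += chunkSize)
        {
            int end = Math.min(start + chunkSize, b64.length());
            byte[] part = Base64.getDecoder().decode(b64.substring(start, end));
            out.write(part, 0, part.length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args)
    {
        byte[] original = new byte[300];
        for (int i = 0; i < original.length; i++)
            original[i] = (byte) i;
        String b64 = Base64.getEncoder().encodeToString(original);
        // decoding in 64-character chunks reassembles the original bytes exactly
        System.out.println(Arrays.equals(original, decodeInChunks(b64, 64))); // prints: true
    }
}
```

If you ever change v_buffer, keep it a multiple of 4 for the same reason.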
Voila, you should now be able to capture an image from a webcam. Take a look at the samples from the CamCanvas API for extra effects if you want to do something special.
And of course, all the above assumed you want a resolution of 320 x 240 for the image.
Disclaimer: At the time of writing, this worked with a Logitech something-or-other webcam, and is completely untested on IE.
Check out a demo: http://apex.oracle.com/pls/apex/f?p=trents_demos:webcam_i (my image is a bit blocky, but I think it's just my webcam - I've seen others that are much more crisp using this). Also, just be sure to wait for the progress bar to disappear before clicking Save.
Feedback welcomed.
Hmm, maybe for some reason you aren't getting the Base64 version of the saved image? Is the collection getting the full Base64 string? It seems like it's not getting anything if you're seeing a no-data-found error.
The JavaScript console is your friend.
Also, in the example I used an extra page, following one of the examples in the APEX packaged apps. But since then, I found this post by Carl: http://carlback.blogspot.com/2008/04/new-stuff-4-over-head-with-clob.html - I would use this technique for submitting the CLOB over what I have done, as it's less hacky. Just sayin'. -
How to capture an image from my usb camera and display on my front panel
How to capture an image from my usb camera and display on my front panel
Install NI Vision Acquisition Software and NI IMAQ for USB and open an example.
Christian -
Capture an image using the web camera from a web application
Hi All,
Could anyone please share the steps that have to be followed to capture an image using the web camera from a web application?
I have a similar requirement like this,
1) Detect the Webcam on the user's machine from the web application. (To be more clear: when the user clicks the 'Add Photo' tool in the web application.)
2) When the user confirms the save, save the image on the user's machine at some temporary location with a unique file name.
3) Upload the image to the server from the temporary location.
Please share the details like, what can be used and how it can be used etc...
Thanks,
Suman
1) Detect the Webcam on the user's machine from the web application.
There's not really any good way to do this with JMF. You'd have to somehow create a JMF Web Start application that will install the native JMF binaries, then kick off the capture device scanning code from the application, and then scan through the list of devices found to get the MediaLocator of the web cam.
2) When the user confirms the save, save the image on the user's machine at some temporary location with a unique file name.
You'd probably be displaying a "preview" window, and then you'd just want to capture the image. There are a handful of ways you could capture the image, but it really depends on your situation.
3) Upload the image to the server from the temporary location.
You can find out how to do this on Google.
All things told, this application is probably more suited to being an FMJ (Freedom for Media in Java) application than a JMF application. JMF relies on native code to capture from web cams, whereas FMJ does not.
Alternately, you might want to look into Adobe Flex for this particular application. -
How to capture an image with the camera axis
Hi,
I have an camera axis. I can receive the video. Now I try to capture an image, for example when I push the button take. I do the .vi but I confused between the different Method.
SaveCurrentImage: saves the current image.
GetImage: gets the data corresponding to the image currently being displayed.
SetImage: adds an image to the clipboard in the Bitmap format.
What I want to know is whether, to capture and save an image when I push a button, I need to chain all the icons: SetImage after GetImage, and then SaveCurrentImage.
I put my .vi.
Attachments:
Capture_image.vi 11 KB
Hello,
Did you download the SDK kit for using the AXIS Media Control ActiveX:
http://www.axis.com/fr/techsup/cam_servers/dev/activex.htm
Did you test the sample and did you test with another language?
Regards,
Nacer M. | Certified LabVIEW Architecte -
Raspberry Pi B+ capture a image from a video cam
Hello,
Is there any way to capture an image from the video camera? I didn't find any media class for video, nor did I find any Runtime.exec to use a native Linux command.
Did I overlook something?
Hi Klaus,
There are a few ideas in the design stage, but nothing solid enough to talk about. And actually it should not have been possible with older ME, as it's specifically prohibited by the spec - I mean native methods in Java applications. Of course there may be other means to communicate with native applications, like files, sockets etc. But no direct calls.
The date for ME 8.1 GA (General Availability) release is not confirmed yet. But it should happen quite soon, stay tuned
Regards.
Andrey -
GigE camera - blurry image capture
Hello everyone.
I'm using a GigE camera on my new vision machine. The algorithm I developed takes 3 successive captures of the same image, 2 seconds apart, and displays the image after each capture. The problem is that on the second (and sometimes the third) capture, the image looks a bit blurry compared to the previous one (noise!!!!).
I want to know whether the problem could be related to the camera configuration (acquisition attributes, speed...).
Is there an attribute that could cause this problem if badly configured?
I tried to handle the problem on the algorithm side, but it's always the same.
Thanks in advance for the help.
Hamd2015
Hello,
My code is written with LabWindows/CVI.
Attached are two captures of the same image: the first capture gave me "Image1" and the second capture gives "Image2", which contains noise at the bottom.
I tried adjusting the camera brightness, but the problem still persists.
Thanks
Hamd2015
Attachments:
Image1.png 261 KB
Image2.png 250 KB -
Windows XP can no longer "see" my iPhone as a camera/imaging device
can't sync photos from iPhone to computer; Windows XP doesn't even see it as a camera or imaging device. Weird, since iTunes is able to synch music/contacts.
No matter how I check, Windows XP can't find the device:
control panel --> scanners and cameras - nothing!
My Computer --> devices with removable storage - no iPhone
Computer Management --> Device Manager - no iPhone or scanner/camera
UPDATE - I think I've resolved my problem, and that it is related to having my iPhone "passcode locked".
1. I connected it and unlocked the iPhone, synced music, but had the problems I noted in my original discussion post
2. I disconnected the iPhone, closed iTunes, reopened iTunes and reconnected the iPhone (making sure it was NOT passcode locked before connecting it) -- everything works fine.
3. now I can get back to my original issue, was to figure out how to get only my NEW iPhone captured photos to synch, or at least have them date ordered. But that's a subject for another post.
BOTTOM LINE - passcode lock may be the key for those who are having problems with their PC not able to see the iPhone as a camera/imaging device -
Two Simultaneous USB Camera Image Acquisition and Saving
Hello everyone,
Recently my friends and I have begun a project which requires the simultaneous acquisition of images from two independent USB cameras. Throughout our project research we have encountered many difficulties. The major problem appears to be how to successfully interface two USB cameras and properly save the image to a median for further processing down the road.
Our first attempt at this was with the USB IMAQ drivers. Unfortunately, to our surprise, they do not support two independent and simultaneous camera acquisitions. To combat this problem we implemented a crude loop of closing one camera reference, then opening the other, acquiring an image, then closing, etc... This did provide us with functionality but was very inefficient, resulting in insufficient acquisition speeds.
Afterwards we began looking at other ways of capturing and writing the image, which are listed below.
Various Acquisition Interfaces / Strategies
USB IMAQ Driver - USB interfacing kit available for public download, (not officially supported by NI) zone.ni.com/devzone/cda/epd/p/id/5030
Status - Abandoned because of the lack of dual web camera support
Avicap32.dll - Windows 32 DLL responsible for all web camera communication, and other media devices.
Status - Currently acquired two streams of video on the front panel but with no way of saving the images
Directx - The core Windows technology that drives high-speed multimedia on a PC.
Status - Currently researching.
.NET - Microsoft .NET framework, possibly a valid interface to a video stream.
Status - No development until further progress.
We are still currently developing the Avicap32.dll code, but the major bottleneck is the learning curve on the property nodes, library calling nodes, .NET and other references. So far it's been a tedious effort of trying to understand example code and piece it together. As a former C programmer I know this is not the proper way of writing code, but I can't seem to grasp how these functions work (property nodes, library calling nodes, .NET and other references). Learning this may give us the knowledge to code a proper system. Does anyone know any good tutorials or learning material for such functions? So far my research has come up empty handed.
We are currently working with these interfaces and other reference examples on the NI site. Unfortunately we cannot seem to properly integrate the code for our system. Does anyone know if this setup is even possible? Also, if anyone has any experience with similar systems and can offer advice, code snippets, or any other assistance it would be greatly appreciated.
Our current projects will be posted below if you have any suggestions. (More will be added when I get to my desktop tomorrow)
Two Camera Avicap32.dll - Currently stuck on how to save the image, various tweaks have gotten us stuck at this point ( will have another Avicap32.dll implementation posted tomorrow)
USB Capture Example - This USB capture VI was an example that immediately had problems. We used it as a baseline for a crude open-close camera image grabbing system. (Will also be posted tomorrow)
Thanks for any help, it's much appreciated.
Taylor S. Amarel
Thanks,
Taylor S. Amarel
Learning is living.
Co-Founder and CEO of http://3dprintingmodel.com/
"If we did all the things we are capable of, we would literally astound ourselves."
-Thomas Edison
Attachments:
Two Camera Avicap32.dll.zip 84 KB
USB Capture Example.zip 17 KB
Thanks for the information, but that's not exactly what I'm looking for. I've been to that URL and read the information before. I don't mean to disparage your help, but I'm looking for ways to successfully implement a functional system, not information on a nonfunctional method. Maybe I was unclear before: I'm looking for any advice, knowledge or help on how to make this system work properly.
Thanks, have a great day.
Taylor S. Amarel
Learning is living.
Co-Founder and CEO of http://3dprintingmodel.com/
"If we did all the things we are capable of, we would literally astound ourselves."
-Thomas Edison -
Capture Pictures/Images from Video
Hello All,
I am just getting warmed up in these forums here. I wanted to know if you could capture still images from video clips and if they are Kodak clear. I have a couple of family videos that I would like to have pictures taken from and framed. Does anybody know much about this?
<[email protected]> wrote: "That's interesting. I had talked to a representative from Adobe and I was told it would be crystal clear of a photograph! That includes after the image being deinterlaced..."
It will be "clear." However, the clarity of a picture is a function of the original video and the resolution. Standard-definition digital video is 720 x 480 pixels, for a total of 345,600 pixels or, roughly, 1/3 of a megapixel. This is very low resolution compared to what a good digital still camera can produce, i.e. 4-8 megapixels. A high-definition frame will contain 1,920 x 1,080 pixels, totaling 2,073,600. A 2 megapixel frame will produce a reasonable 4 x 6 photograph.
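The frame-size arithmetic in that reply is easy to check directly, assuming the standard SD and HD dimensions it cites (Java used here purely for illustration):

```java
public class FrameResolution
{
    public static void main(String[] args)
    {
        int sdPixels = 720 * 480;     // standard-definition frame
        int hdPixels = 1920 * 1080;   // high-definition frame
        System.out.println("SD pixels: " + sdPixels);           // 345600
        System.out.println("HD pixels: " + hdPixels);           // 2073600
        System.out.println("SD megapixels: " + sdPixels / 1e6); // ~0.35, i.e. about 1/3 MP
    }
}
```

So a single SD frame carries roughly a tenth of the pixels of an entry-level still camera, which is why prints from video look soft.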
-
What is the best way to capture RGB8 image to memory for later write to disk?
HI-
I have been using the code below for some time now, but wonder if an expert can take a look and let me know if there are ways to improve it. The problem I run into is that I preallocate the memory beforehand (camera.buffering[]), and with a 5 MP camera this takes up a lot of RAM. It seems like when I am running this program in debug mode all day and constantly start/stop it without allowing it to free the memory gracefully, I eventually bog the system down and start getting Out of Memory error -12 in other places in my code, such as DisplayBitmap or GetBitmapFromFile. Is this all related to how much memory I am trying to allocate prior to beginning the camera capture routine below? Am I not allocating and freeing it properly?
The code below shows how I grab the image and then send it to memory using memcpy. Is there a better, more efficient or faster method? I have removed error checking etc. from the code below simply to save space. This operates at 15 fps.
I also included my allocation and free-ing code, too.
Thanks in advance.
Blue
// camera capture loop //
camera.dxerror = IMAQdxGrab (camera.session, camera.image, 1, &camera.bufNum);
stat = imaqRotate2 (rotateimage, camera.image, camera.rotate_image, pixval, IMAQ_BILINEAR, TRUE);
stat = imaqScale (scaledimage, rotateimage, 4, 4, IMAQ_SCALE_SMALLER,IMAQ_NO_RECT);
stat = imaqGetImageInfo(camera.image,&camera.info);
stat = imaqDisplayImage (scaledimage, 0, FALSE);
memcpy (camera.buffering[camera.count], camera.info.imageStart, camera.info.pixelsPerLine*camera.CAMY*camera.bytesperpixel);
//memory allocation code//
camera.buffering = (int **)malloc(camera.maxcount*sizeof(int*));
for(i=0;i<camera.maxcount;i++)
camera.buffering[i] = (int *)malloc(camera.info.pixelsPerLine*camera.CAMY*camera.bytesperpixel);
// free allocated memory code//
for(x=0;x<camera.maxcount;x++)
free(camera.buffering[x]);
free(camera.buffering);
How many images are you storing in RAM? Typically when I need to work with tens, hundreds, or even thousands of images, I save them to the hard drive after I capture them, as PNG with 1000 quality. Sometimes I process them before saving, sometimes I process them offline from the saved images, depending on the application. If you don't need all of the images in memory simultaneously, just save them to PNG. If you only process one image at a time, only allocate enough memory for the processing you need to perform on a single image.
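A quick back-of-the-envelope estimate shows why preallocating all of camera.buffering[] gets expensive. The resolution and frame count below are illustrative assumptions for a 5 MP RGB8 camera, not values from the original post, and the sketch is in Java rather than the poster's C, just for illustration:

```java
public class BufferMemoryEstimate
{
    public static void main(String[] args)
    {
        // hypothetical 5 MP sensor at 3 bytes per pixel (RGB8)
        long width = 2592, height = 1944, bytesPerPixel = 3;
        long frameBytes = width * height * bytesPerPixel;
        long maxcount = 100; // hypothetical camera.maxcount

        long totalBytes = frameBytes * maxcount;
        System.out.printf("one frame : %.1f MB%n", frameBytes / 1e6); // ~15.1 MB
        System.out.printf("100 frames: %.2f GB%n", totalBytes / 1e9); // ~1.51 GB
    }
}
```

At roughly 15 MB per frame, even a modest preallocated ring of frames dwarfs what a 32-bit CVI process can comfortably hold, which fits the Out of Memory symptoms described above.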
www.movimed.com - Custom Imaging Solutions -
hi, when zooming in to capture still images, the image is really blurred and out of focus. Is this a hardware issue, and is there any solution for this? Or is my device faulty?
If you are using the iPhone camera zoom then remember that this is a digital zoom not an optical zoom. The zoomed in area will have fewer pixels and not appear as clear. The more you zoom the worse it gets. It's like cropping a picture after it has been taken.
-
Capturing an image from webcam
hi, I'm trying to capture an image from a webcam, but I don't have a clue how to do this. Could anyone help me with samples or books?
Thanks.
hi
As I understood it, you want to access the USB camera and do what's called camera interaction. I worked with my webcam and generated a video stream, and I'm now working on sampling this video.
I found these classes on the internet - you have to make a few little changes, like the camera definition.
/******************************************************
 * File: DeviceInfo.java
 * created 24.07.2001 21:44:12 by David Fischer, [email protected]
 ******************************************************/
import java.awt.Dimension;
import javax.media.*;
import javax.media.control.*;
import javax.media.format.*;
import javax.media.protocol.*;

public class DeviceInfo
{
    public static Format formatMatches(Format format, Format supported[])
    {
        if (supported == null)
            return null;
        for (int i = 0; i < supported.length; i++)
        {
            if (supported[i].matches(format))
                return supported[i];
        }
        return null;
    }

    public static boolean setFormat(DataSource dataSource, Format format)
    {
        boolean formatApplied = false;
        FormatControl formatControls[] = ((CaptureDevice) dataSource).getFormatControls();
        for (int x = 0; x < formatControls.length; x++)
        {
            if (formatControls[x] == null)
                continue;
            Format supportedFormats[] = formatControls[x].getSupportedFormats();
            if (supportedFormats == null)
                continue;
            if (DeviceInfo.formatMatches(format, supportedFormats) != null)
            {
                formatControls[x].setFormat(format);
                formatApplied = true;
            }
        }
        return formatApplied;
    }

    public static boolean isVideo(Format format)
    {
        return (format instanceof VideoFormat);
    }

    public static boolean isAudio(Format format)
    {
        return (format instanceof AudioFormat);
    }

    public static String formatToString(Format format)
    {
        if (isVideo(format))
            return videoFormatToString((VideoFormat) format);
        if (isAudio(format))
            return audioFormatToString((AudioFormat) format);
        return ("--- unknown media device format ---");
    }

    public static String videoFormatToString(VideoFormat videoFormat)
    {
        StringBuffer result = new StringBuffer();
        // add width x height (size)
        Dimension d = videoFormat.getSize();
        result.append("size=" + (int) d.getWidth() + "x" + (int) d.getHeight() + ", ");
        // try to add color depth
        if (videoFormat instanceof IndexedColorFormat)
        {
            IndexedColorFormat f = (IndexedColorFormat) videoFormat;
            result.append("color depth=" + f.getMapSize() + ", ");
        }
        // add encoding
        result.append("encoding=" + videoFormat.getEncoding() + ", ");
        // add max data length
        result.append("maxdatalength=" + videoFormat.getMaxDataLength());
        return result.toString();
    }

    public static String audioFormatToString(AudioFormat audioFormat)
    {
        StringBuffer result = new StringBuffer();
        // short workaround
        result.append(audioFormat.toString().toLowerCase());
        return result.toString();
    }
}
the second class:

/******************************************************
 * File: MyDataSinkListener.java
 * created 24.07.2001 21:41:47 by David Fischer, [email protected]
 * Description: simple data sink listener, used to check for end of stream
 ******************************************************/
import javax.media.datasink.*;

public class MyDataSinkListener implements DataSinkListener
{
    boolean endOfStream = false;

    public void dataSinkUpdate(DataSinkEvent event)
    {
        if (event instanceof javax.media.datasink.EndOfStreamEvent)
            endOfStream = true;
    }

    public void waitEndOfStream(long checkTimeMs)
    {
        while (!endOfStream)
        {
            Stdout.log("datasink: waiting for end of stream ...");
            try { Thread.sleep(checkTimeMs); } catch (InterruptedException ie) {}
        }
        Stdout.log("datasink: ... end of stream reached.");
    }
}
the 3rd class:

/******************************************************
 * File: Stdout.java
 * created 24.07.2001 21:44:46 by David Fischer, [email protected]
 * Description: utility class for standard output
 ******************************************************/
public class Stdout
{
    public static void log(String msg)
    {
        System.out.println(msg);
    }

    public static void logAndAbortException(Exception e)
    {
        log("" + e);
        flush();
        System.exit(0);
    }

    public static void logAndAbortError(Error e)
    {
        log("" + e);
        flush();
        System.exit(0);
    }

    public static void flush()
    {
        System.out.flush();
    }
}
the 4rt is :
/******************************************************
 * File: TestQuickCamPro.java
 * created 24.07.2001 21:40:13 by David Fischer, [email protected]
 * Description: this test program will capture the video and audio stream
 * from a Logitech QuickCam® Pro 3000 USB camera for 10 seconds and store
 * it in a file named "testcam.avi". You can use the Microsoft Windows
 * Media Player to display this file.
 * operating system: Windows 2000
 * required hardware: Logitech QuickCam® Pro 3000
 * required software: jdk 1.3 or jdk1.4 plus jmf2.1.1 (www.javasoft.com)
 * source files: DeviceInfo.java, MyDataSinkListener.java,
 * Stdout.java, TestQuickCamPro.java
 * You can just start this program with "java TestQuickCamPro"
 * hint: please make sure that you first set up the Logitech camera drivers
 * and jmf2.1.1 correctly. "jmf.jar" must be part of your CLASSPATH.
 * useful links:
 * - http://java.sun.com/products/java-media/jmf/2.1.1/index.html
 * - http://java.sun.com/products/java-media/jmf/2.1.1/solutions/index.html
 * with some small modifications, this program will work with any USB camera.
 ******************************************************/
import java.io.*;
import javax.media.*;
import javax.media.control.*;
import javax.media.datasink.*;
import javax.media.format.*;
import javax.media.protocol.*;

public class TestQuickCamPro
{
    private static boolean debugDeviceList = false;
    private static String defaultVideoDeviceName = "vfw:Microsoft WDM Image Capture (Win32):0";
    private static String defaultAudioDeviceName = "DirectSoundCapture";
    private static String defaultVideoFormatString = "size=176x144, encoding=yuv, maxdatalength=38016";
    private static String defaultAudioFormatString = "linear, 16000.0 hz, 8-bit, mono, unsigned";
    private static CaptureDeviceInfo captureVideoDevice = null;
    private static CaptureDeviceInfo captureAudioDevice = null;
    private static VideoFormat captureVideoFormat = null;
    private static AudioFormat captureAudioFormat = null;

    public static void main(String args[])
    {
        // get command line arguments
        for (int x = 0; x < args.length; x++)
        {
            // -dd = debug devices list -> display list of all media devices - and exit
            if (args[x].toLowerCase().compareTo("-dd") == 0)
            {
                debugDeviceList = true;
            }
        }

        // get a list of all media devices, search default devices and formats,
        // and print it out if args[x] = "-dd"
        Stdout.log("get list of all media devices ...");
        java.util.Vector deviceListVector = CaptureDeviceManager.getDeviceList(null);
        if (deviceListVector == null)
        {
            Stdout.log("... error: media device list vector is null, program aborted");
            System.exit(0);
        }
        if (deviceListVector.size() == 0)
        {
            Stdout.log("... error: media device list vector size is 0, program aborted");
            System.exit(0);
        }
        for (int x = 0; x < deviceListVector.size(); x++)
        {
            // display device name
            CaptureDeviceInfo deviceInfo = (CaptureDeviceInfo) deviceListVector.elementAt(x);
            String deviceInfoText = deviceInfo.getName();
            if (debugDeviceList)
            {
                Stdout.log("device " + x + ": " + deviceInfoText);
            }

            // display device formats
            Format deviceFormat[] = deviceInfo.getFormats();
            for (int y = 0; y < deviceFormat.length; y++)
            {
                // search for default video device
                if (captureVideoDevice == null)
                {
                    if (deviceFormat[y] instanceof VideoFormat)
                    {
                        if (deviceInfo.getName().indexOf(defaultVideoDeviceName) >= 0)
                        {
                            captureVideoDevice = deviceInfo;
                            Stdout.log(">>> capture video device = " + deviceInfo.getName());
                        }
                    }
                }

                // search for default video format
                if (captureVideoDevice == deviceInfo)
                {
                    if (captureVideoFormat == null)
                    {
                        if (DeviceInfo.formatToString(deviceFormat[y]).indexOf(defaultVideoFormatString) >= 0)
                        {
                            captureVideoFormat = (VideoFormat) deviceFormat[y];
                            Stdout.log(">>> capture video format = " + DeviceInfo.formatToString(deviceFormat[y]));
                        }
                    }
                }

                // search for default audio device
                if (captureAudioDevice == null)
                {
                    if (deviceFormat[y] instanceof AudioFormat)
                    {
                        if (deviceInfo.getName().indexOf(defaultAudioDeviceName) >= 0)
                        {
                            captureAudioDevice = deviceInfo;
                            Stdout.log(">>> capture audio device = " + deviceInfo.getName());
                        }
                    }
                }

                // search for default audio format
                if (captureAudioDevice == deviceInfo)
                {
                    if (captureAudioFormat == null)
                    {
                        if (DeviceInfo.formatToString(deviceFormat[y]).indexOf(defaultAudioFormatString) >= 0)
                        {
                            captureAudioFormat = (AudioFormat) deviceFormat[y];
                            Stdout.log(">>> capture audio format = " + DeviceInfo.formatToString(deviceFormat[y]));
                        }
                    }
                }

                if (debugDeviceList)
                {
                    Stdout.log(" - format: " + DeviceInfo.formatToString(deviceFormat[y]));
                }
            }
        }
        Stdout.log("... list completed.");

        // if args[x] = "-dd" terminate now
        if (debugDeviceList)
        {
            System.exit(0);
        }

        // setup video data source
        MediaLocator videoMediaLocator = captureVideoDevice.getLocator();
        DataSource videoDataSource = null;
        try
        {
            videoDataSource = javax.media.Manager.createDataSource(videoMediaLocator);
        }
        catch (IOException ie) { Stdout.logAndAbortException(ie); }
        catch (NoDataSourceException nse) { Stdout.logAndAbortException(nse); }
        if (! DeviceInfo.setFormat(videoDataSource, captureVideoFormat))
        {
            Stdout.log("Error: unable to set video format - program aborted");
            System.exit(0);
        }

        // setup audio data source
        MediaLocator audioMediaLocator = captureAudioDevice.getLocator();
        DataSource audioDataSource = null;
        try
        {
            audioDataSource = javax.media.Manager.createDataSource(audioMediaLocator);
        }
        catch (IOException ie) { Stdout.logAndAbortException(ie); }
        catch (NoDataSourceException nse) { Stdout.logAndAbortException(nse); }
        if (! DeviceInfo.setFormat(audioDataSource, captureAudioFormat))
        {
            Stdout.log("Error: unable to set audio format - program aborted");
            System.exit(0);
        }

        // merge the two data sources
        DataSource mixedDataSource = null;
        try
        {
            DataSource dArray[] = new DataSource[2];
            dArray[0] = videoDataSource;
            dArray[1] = audioDataSource;
            mixedDataSource = javax.media.Manager.createMergingDataSource(dArray);
        }
        catch (IncompatibleSourceException ise) { Stdout.logAndAbortException(ise); }

        // create a new processor
        // setup output file format ->> msvideo
        FileTypeDescriptor outputType = new FileTypeDescriptor(FileTypeDescriptor.MSVIDEO);

        // setup output video and audio data format
        Format outputFormat[] = new Format[2];
        outputFormat[0] = new VideoFormat(VideoFormat.INDEO50);
        outputFormat[1] = new AudioFormat(AudioFormat.GSM_MS /* LINEAR */);

        // create processor
        ProcessorModel processorModel = new ProcessorModel(mixedDataSource, outputFormat, outputType);
        Processor processor = null;
        try
        {
            processor = Manager.createRealizedProcessor(processorModel);
        }
        catch (IOException e) { Stdout.logAndAbortException(e); }
        catch (NoProcessorException e) { Stdout.logAndAbortException(e); }
        catch (CannotRealizeException e) { Stdout.logAndAbortException(e); }

        // get the output of the processor
        DataSource source = processor.getDataOutput();

        // create a File protocol MediaLocator with the location
        // of the file to which bits are to be written
        MediaLocator dest = new MediaLocator("file:testcam.avi");

        // create a datasink to write the file
        DataSink dataSink = null;
        MyDataSinkListener dataSinkListener = null;
        try
        {
            dataSink = Manager.createDataSink(source, dest);
            dataSinkListener = new MyDataSinkListener();
            dataSink.addDataSinkListener(dataSinkListener);
            dataSink.open();
        }
        catch (IOException e) { Stdout.logAndAbortException(e); }
        catch (NoDataSinkException e) { Stdout.logAndAbortException(e); }
        catch (SecurityException e) { Stdout.logAndAbortException(e); }

        // now start the datasink and processor
        try
        {
            dataSink.start();
        }
        catch (IOException e) { Stdout.logAndAbortException(e); }
        processor.start();
        Stdout.log("starting capturing ...");
        try { Thread.currentThread().sleep(10000); } catch (InterruptedException ie) {} // capture for 10 seconds
        Stdout.log("... capturing done");

        // stop and close the processor when done capturing...
        // close the datasink when EndOfStream event is received...
        processor.stop();
        processor.close();
        dataSinkListener.waitEndOfStream(10);
        dataSink.close();
        Stdout.log("[all done]");
    }
}

Finally, search with the author's name; there is an additional program you must download to detect your cam.
I hope this works for you. -
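On the renaming step from the original question (storing the capture under the item number), a minimal sketch using plain java.io is below. Everything here is hypothetical: the class name RenameCapture, the helper renameToItemNumber, the "itemphotos" directory, and the item number "A-1234" are illustrative, and the temp file simply stands in for whatever file the capture code actually wrote.

```java
import java.io.File;
import java.io.IOException;

public class RenameCapture
{
    // Moves a freshly captured image to <targetDir>/<itemNumber>.jpg.
    // Returns the destination file, or null if the move failed.
    public static File renameToItemNumber(File captured, File targetDir, String itemNumber)
    {
        if (!targetDir.exists() && !targetDir.mkdirs())
        {
            return null; // could not create the target directory
        }
        File dest = new File(targetDir, itemNumber + ".jpg");
        return captured.renameTo(dest) ? dest : null;
    }

    public static void main(String[] args) throws IOException
    {
        // hypothetical example: pretend this temp file was just written by the capture code
        File captured = File.createTempFile("capture", ".jpg");
        File targetDir = new File(System.getProperty("java.io.tmpdir"), "itemphotos");
        File dest = renameToItemNumber(captured, targetDir, "A-1234");
        System.out.println(dest != null ? "saved as " + dest.getName() : "rename failed");
    }
}
```

Note that File.renameTo can fail when source and destination are on different volumes, so in a real form handler you may want to copy-then-delete as a fallback.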
Capturing still images using iDVD
Hi
Is it possible for me to play a personal DVD and capture still images from the movie that is being played, and then save the images for either putting on a DVD/CD or printing them? If so, how is this possible?
PS: I do not have iDVD at the moment, but am looking at getting it.
Thanks in advance
Abedabz latif wrote:
.. Is it possible for me to play a personal DVD and capture still images from the movie
sure, with the usual screen-capture shortcuts ⇧⌘-3, ⇧⌘-4, ⇧⌘-4-space, etc.
but you have to use VLC as the player, because DVD Player.app prohibits any captures.
ps i do not have IDVD at the moment ..
your Mac came pre-installed with iDVD...