Possible to use OpenGL to speed up graphics stuff?

Dear all,
An AI application I am presently implementing is fairly intensive on the old graphics. Is there any way to incorporate my graphics card directly - e.g. use OpenGL (I'm a Linux man) - and utilise its on-board graphics memory to speed it all up?
I've heard that such is possible with Java 3D, but I'm fairly reluctant to re-implement everything (it's all 2D anyway).
Cheers,
Ben

Images drawn using ManagedImage and VolatileImage will attempt to use hardware acceleration where possible. I'm not sure about primitives drawn using the Graphics draw methods, though.
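For illustration, a minimal sketch of the usual VolatileImage render loop (the 640x480 size and the blue fill are placeholders; the key point is that a volatile surface can be lost and must be validated before each use):

    import java.awt.*;
    import java.awt.image.VolatileImage;

    public class VolatileImageSketch {
        public static void main(String[] args) {
            GraphicsConfiguration gc = GraphicsEnvironment
                    .getLocalGraphicsEnvironment()
                    .getDefaultScreenDevice()
                    .getDefaultConfiguration();
            VolatileImage img = gc.createCompatibleVolatileImage(640, 480);
            do {
                // The surface can be lost at any time (e.g. a display mode
                // change), so validate it before each use.
                if (img.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                    img = gc.createCompatibleVolatileImage(640, 480);
                }
                Graphics2D g = img.createGraphics();
                g.setColor(Color.BLUE);
                g.fillRect(0, 0, 640, 480);   // ...your 2D drawing here...
                g.dispose();
                // ...copy img to the screen with drawImage() here...
            } while (img.contentsLost());
        }
    }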
As of Java 1.5, you can turn on OpenGL for Java 2D by setting the property sun.java2d.opengl:
http://java.sun.com/j2se/1.5.0/docs/guide/2d/new_features.html#ogl
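For example (the property must be set before any Java2D rendering starts; MyApp here is a hypothetical entry point):

    // Equivalent command-line form: java -Dsun.java2d.opengl=true MyApp
    public class Launcher {
        public static void main(String[] args) {
            // Must happen before any Java2D/Swing classes initialize;
            // "True" (capital T) additionally prints a confirmation line
            // to stdout when the OpenGL pipeline is really enabled.
            System.setProperty("sun.java2d.opengl", "true");
            MyApp.main(args);   // MyApp is a hypothetical entry point
        }
    }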

Similar Messages

  • Use OpenGL and DirectDraw

    Hi all,
    I have enabled OpenGL and DirectDraw in the JVM, and the settings are correct, but I find that graphics rendering is no faster than with them disabled. Does anyone know why that is? Thanks

    Yes, it is possible to use OpenGL with Java. I have written a very nice program that uses an API that I have at work. I will see if it is on the web and I will get back to you on how to begin to use this powerful tool.
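    One way to check whether acceleration is actually taking effect is to query the image capabilities and the device's accelerated memory. A rough sketch (the 256x256 size is arbitrary; what actually gets accelerated depends on the pipeline and image type):

        import java.awt.*;
        import java.awt.image.VolatileImage;

        public class AccelCheck {
            public static void main(String[] args) {
                GraphicsDevice dev = GraphicsEnvironment
                        .getLocalGraphicsEnvironment()
                        .getDefaultScreenDevice();
                GraphicsConfiguration gc = dev.getDefaultConfiguration();
                VolatileImage img = gc.createCompatibleVolatileImage(256, 256);
                // Reports whether this surface currently resides in hardware.
                System.out.println("accelerated: "
                        + img.getCapabilities().isAccelerated());
                // Rough indication of how much video memory is available.
                System.out.println("accelerated memory: "
                        + dev.getAvailableAcceleratedMemory() + " bytes");
            }
        }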

  • How can I use OpenGL for iPhone games?

    Hi! As you know, I really want to be an iOS developer. I know how Xcode works, but I don't know about the graphics and animation. I've heard that OpenGL can be used in games. But how? I mean, like connecting the animations in OpenGL to the program written in Xcode. I don't understand that part yet. So please, can you guys explain that to me? I thank you all.

    Hi Eric,
    it would help me frame answers if you give the background information:
    - do you know any 3D computer graphics theory? E.g. viewing transformations.
    - do you know any OpenGL already?
    - do you already know Objective-C and iOS programming?
    To answer your questions:
    - the OpenGL ES libraries are already included as part of the iOS SDK, so if you have that you are set.
    - I am using an Air myself; it works fine for developing for iOS, including code using OpenGL ES.
    - OpenGL ES is based on desktop OpenGL, so it is very similar. Almost the same.
    - read the post I linked to:
    http://db-in.com/blog/2011/01/all-about-opengl-es-2-x-part-13/
    He explains how to hook things up quite well. However, you do already need to understand how to develop for iOS.
    As I mentioned above, it helps to know 3D graphics principles. If you do not know them, then it is difficult to animate using OpenGL (or any other SDK). There are third-party tools available (e.g. Corona, Unity, Project Anarchy) that can make life easier, but at some point you simply have to learn 3D graphics. Trying to do animation without understanding graphics is like trying to write native apps without understanding programming. You can cheat to do basic stuff, but nothing useful or deep.
    This is not easy stuff; you will have to put some work into it. Start with the above link as an intro, and also the Apple docs pointed to by teacup.
    Good luck!

  • Is it possible to use Creative Cloud as a shared volume for collaboration between InCopy & InDesign?

    I'm a design student, and as a designer I constantly have to make decks for presentations. I typically work in a group of six, and because I'm confident with graphic design, I tend to be the leader of these tasks. I've found the best workflow for slide creation is InDesign. Unfortunately, I'm just an awful writer - so when I heard about InCopy's collaborative features (I saw the post from InDesign's Facebook page) I saw opportunity and got very excited. My teammates and I typically all have subscriptions to Adobe CC. Because of this, I was wondering (since I haven't used it at all) if the storage given out with Adobe CC can be the shared volume that is required for the live link between InCopy stories and InDesign layouts.
    Therefore,
    Is it possible to use Creative Cloud as a shared volume for collaboration between InCopy & InDesign?
    If this needs any further clarification I'll gladly write a more detailed question - lastly, if this is not possible, might you suggest an alternative? Please assume that we go to school together but we also work offsite.

    You will not be able to use the Files page at https://creative.adobe.com/files as a shared volume for collaboration in this way, though you can share individual files.
    When folder sharing becomes available, this will be possible.

  • [SOLVED] kwin crash when using OpenGL mode after upgrading to 4.7

    I have an IdeaPad Y460 (graphics: ATI Mobility Radeon 5650). Recently I upgraded my KDE from 4.7 RC (4.6.95) to 4.7 (4.7.00) and also Catalyst from 11.6 to 11.7.
    Update:
    The problem has been solved. I was using the [kde-unsupported] repository, and the kdebase-workspace package in that repository is built with OpenGL ES support, which Catalyst doesn't provide. Thanks @csslayer.
    But when I use OpenGL mode, kwin crashes. I run
    kwin --replace
    to start kwin, then the Desktop Effect shows:
    OpenGL compositing (the default) has crashed KWin in the past.
    This was most likely due to a driver bug.
    If you think that you have meanwhile upgraded to a stable driver,
    you can reset this protection but be aware that this might result in an immediate crash!
    Alternatively, you might want to use the XRender backend instead.
    And there is a re-enable button, but the System Settings window crashed when I pushed it.
    I checked ~/.xsession-errors and it shows this:
    X Error: BadWindow (invalid Window parameter) 3
    Major opcode: 20 (X_GetProperty)
    Resource id: 0x1c0015a
    I downgraded Catalyst but the problem still exists, while KDE 4.7 RC ran well on my laptop. So I'm not sure whether it is caused by kwin, by Catalyst 11.7, or by other packages.
    Does anyone know the solution? Thanks for your patience.
    Last edited by ukyoi (2011-08-01 08:34:20)

    I found a solution. Go to Settings > Mail, Contacts, Calendars > Fetch New Data, then enable Push and choose to fetch hourly or every 30 minutes. That way your iPad will stay connected to Wi-Fi even when the Smart Cover is closed. If you select Manually, then the Wi-Fi connection will be terminated in sleep mode.
    (Not a native English speaker, hope you'll understand.)

  • Is it possible to use a USB hub on a new MacBook Pro with Mavericks?

    I used a powered Q-Store USB 2 hub first on my old trusty MacBook with the 13-inch screen running OS X 10.6.8 for years; then I switched it over to my new 13 in. MacBook Pro (non-Retina) with Mavericks 10.9.2 and it still works great. It is powered and seems to run a full load including USB hard drives, a camera when wanted, a cooling pad with fan, a printer/scanner/copier combo, charging an MP3 player or Galaxy S4, and an Apple mouse and keyboard, all with no trouble and no dropouts. I let an old FireWire 400/800 hard disk stay with my old Macintosh for now as it mirrors it, but I sometimes hook it up with no problem except that it gets hot.
    I have a new USB 3 WD My Passport hard drive and a 32 GB USB 3 memory stick. I wanted to put these on a USB 3 hub and use one of the ports for the faster items. I can run the WD My Passport drive that I use to mirror my internal drive as long as it is alone. I can also plug in the 32 GB ADATA memory stick fine alone, except things get a bit snug, and there's no more room on the USB 2 hub for the memory stick. If I do not use a hub, with the one port I have to use either the hard drive or the memory stick, not both. I was using the memory stick to port files back and forth between my old and new computers, at least for a while. The USB 3 peripherals work reliably when plugged in alone, but the moment a hub gets in the picture, even if only one item is plugged into it, things drop in and out. It does not matter if it is the powered USB 3 hub or the non-powered 4-port Anker USB 3 hub. The USB 2 items simply work, but the USB 3 ones only work alone. I have looked at the power specifications; it seems like one hard drive uses all of the power the hub can put out.
    It looks like USB 3 is rather limited on Macintosh computers, is all I can conclude. There may be extra speed, but what good is it if you can only run one peripheral at a time? It seems like even a hub stresses it, unless it is a hub and a memory stick or two. I even changed the system settings to keep my hard drives from going to sleep, but still it drops out randomly if hooked to a hub, and runs perfectly when plugged in directly. I had hoped to use it for an emergency startup disk if needed, though I must say the rather incredible StoreEdge Micro SDX card works great for that purpose and seems to belong.
    That old USB 2 hub, even with a smaller one plugged in with 2 portable hard drives, still works flawlessly on the other port and never drops out.
    I have not tested the FireWire 800 port long enough to decide, except that it gets hot if it reads for very long. The old FireWire 800 / USB 2 hard disk has more than one port on it, unlike so many today. I had to use a FireWire 800 to 400 cable for my old computer. Anyway, it works with either computer and stays mounted even when asleep.
    Is it possible to use a USB 3 hub with more than one peripheral on a 2012-2013 13-inch MacBook Pro with Mavericks? If so, which one works? It cannot be good for hard disks to randomly disconnect and reconnect without being dragged to the Trash.

    No, you can't - you have to upgrade it to 16GB yourself.
    You can get your RAM from newegg.com or macsales.com.
    The RAM you need is 16GB (8GB x 2) DDR3 1600MHz.
    Upgrading the RAM yourself will not void your warranty.
    You need a #00 Phillips screwdriver - 10 screws on the bottom, 7 long ones and 3 short ones - just remember where they go when you put the screws back in.
    A simple 10-minute job.

  • [CS3 JS SDK] Possible to Use HTTP URLs for Links?

    I am attempting to use JavaScript to implement a connector from CS3 (InDesign/InCopy in my case) to our content management system. Our CMS provides an HTTP-based API. By using the HttpConnection object (from the Bridge code; see posts on the "httpwebaccess library") I can access our repository using HTTP URLs and, for example, get an InDesign document or INX file (and the UI support in CS3 scripting makes it possible for me to build all the UI components I need without having to write a true plug-in).
    However, what I *can't* seem to do is create Link objects that use a URL, rather than a File, to access resources.
    My goal is to be able to store in our repository InDesign documents that use URLs to other resources in the repository for links (e.g., to graphics and InCopy articles).
    However, as far as I can tell from the scripting documentation and my own experiments, the URL property on Link is read-only in JavaScript (even though the scripting API HTML indicates it's a read/write property), and Link instances can only be constructed using File objects.
    My question: have I missed some trick in the scripting API (or elsewhere) that would allow me to create links that use URLs instead of files (and, having done so, would InDesign resolve those URLs)? Our repository does support WebDAV, so that might be an option, but it would depend on mounting WebDAV services in consistent places on client machines, which is dicey at best given the weak nature of the WebDAV clients on both Windows and OS X.
    Or are my only options to either copy linked resources to the client's local file system (or a shared network drive) or implement a plugin that implements my desired behavior for links?
    And if the answer is a plugin, will that even work?
    This is in the context of CS3. Has the Link mechanism changed at all in CS4 such that I could do what I want in CS4 where I cannot in CS3?
    Thanks,
    Eliot

    Hi,
    It is not possible to use HTTP URLs in CS3. You will have to create a plug-in to use custom data links.
    I think it is possible to use HTTP URLs in CS4, as per the User Guide.
    Regards,
    Anderson

  • Creating active controls using OpenGL libraries in LabVIEW

    Hi,
    I want to create a 3D representation of some geographic data on a LabVIEW GUI. The need is to achieve something beyond a dumb representation: I would like to be able to choose any one of the points represented on the 3D GUI and trigger some activity, like showing a pop-up.
    I believe the LabVIEW 3D picture control allows creating the view in various ways, like meshes and bars, but it doesn't allow creating an active control.
    Attaching a sample picture of what I am expecting: I want to be able to click on each of the intersection points in the graph and get the pop-up.
    Since this seems not to be possible using just the built-in 3D picture capabilities of LabVIEW, the plan is to use OpenGL libraries to achieve this.
    Will it be possible to create this in LabVIEW?
    Has someone done this before? A code sample would be of great help.
    Attachments:
    Example 3D view.JPG ‏69 KB

    Ayman
    Take a look at this thread http://forums.ni.com/ni/board/message?board.id=BreakPoint&message.id=2391 where Joe Hoskins has kindly donated his collection of ActiveX VIs for Excel.
    There are a couple of examples shipped with LabVIEW. Go to the menu Help -> Find Examples; the NI Example Finder will appear. Go to the Search tab and search for "excel".
    And there are plenty of examples on the forum; you just need to search for them.
    Trying to get you started ...
    David

  • Using OpenGL ES in an AIR native extension on iPad

    Hi,
    I am trying to find out whether I can render images using the iPad GPU from a native extension loaded by an AIR application on the iPad. The main reason for this is that I need every bit of GPU power in my AIR application and I cannot use Stage3D (see http://forums.adobe.com/thread/1267084 for details).
    The idea is to pass a bitmap from the ActionScript code to the native extension, render it with Objective-C code and raw OpenGL ES, and send it back to the AIR code.
    Is it technically possible? I am afraid that the AIR runtime uses OpenGL ES for its own needs (at least for Stage3D support), so a native extension possibly cannot share OpenGL with it.
    Nevertheless I have made a try. Here is some code:
    The first strange thing is that the following initialization code:
    myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:myContext];
    does not make much sense to me. Instead, I can see that EAGLContext already contains some instance previously set up by someone else (maybe the AIR runtime did it), and I was able to get an image only when I did not create this context at all. So these two lines are actually commented out in my test app.
    Here is how I initialize framebuffer:
    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, width, height);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);
    I do not need 3D, so I am not creating a depth buffer. Instead I need to render a lot of 2D polygons, and the drawing order is OK for me.
    Then I tried the following code to render a single triangle specified in the vertexData array:
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // vertexData contains 2d float coordinates followed by 4 bytes of color for each vertex
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 12, vertexData);
    // The following two lines cause the whole screen to be filled with random gradient semi-transparent fill.
    // If I comment them out then it renders a black triangle that I really expect to get.
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, 12, (vertexData + 8));
    // Draw the triangle:
    glDrawArrays(GL_TRIANGLES, 0, 3);
    // Get the final image:
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    NSInteger datalength = width * height * 4;
    GLubyte* rawdata = (GLubyte*)malloc(datalength * sizeof(GLubyte));
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rawdata);
    // In this point the rawdata array contains an image that I am able to convert and send back to my AS code.
    So each time I try to specify colors for vertices, I get the whole iPad screen filled with some random gradient. I have also tried the glColor function and it causes this effect too. Disabling lighting and fog did not help.
    So my main question is the following: is it technically possible to render an offscreen image in the native extension using OpenGL?
    Maybe the black triangle that I was able to get from the rendered image is rendered accidentally, and the whole thing should not work at all?

    Hi there,
    I'm a total OpenGL newb, but after quite some struggle I managed to bend OpenGL from an AIR extension to do what I wanted in my FlashyWrappers library for iOS (for super-fast video capture from AIR apps).
    Basically you're right in the assumption that the OpenGL context is already initialized by AIR. I'm not sure about your code because, like I said, I'm practically a newb in that field, but overall, because you share a context with AIR's OpenGL, it's entirely possible you're messing with its own rendering pipeline if you don't properly unbind / reset stuff, and that might cause the screen to go crazy.
    Myself, I was able to do a slightly different thing: create a texture-backed FBO, bind it on every frame, and let AIR render into it (instead of the screen). I was then able to manipulate the contents of what AIR rendered (you can do glReadPixels on the content too, by the way).
    So I bet that if I can let AIR render its stuff into my texture-backed FBO, you can render your own things into it as well... I just bound it and waited for AIR to finish its rendering into it. Instead, you can probably render into it, then bind AIR's original FBO instead of waiting for AIR to render, so that AIR renders to the screen again on the next frame.
    I partially used this tutorial: OpenGL ES 2.0 for iPhone Tutorial Part 2: Textures - Ray Wenderlich.
    Oh, and I would like to add that playing with this works only in GPU or Direct mode, not CPU.

  • Is it possible to use any nvidia 8800GT in my Mac Pro?

    Hi,
    I have a 1st gen Mac Pro with PCIe slots. Is it possible to use any 8800GT, or do I have to get one from the Apple web shop? This upgrade hasn't come to Norway yet, but I've found some other great 8800GTs. I don't see why I can't buy an MSI 8800GT (as long as it's PCIe and not 2.0).
    What do you think? Please help me.. =/

    Thanks for the replies.
    That's a real shame, though. The upgrade kit costs the equivalent of $475 in Norway. That's insane for a $170 graphics card. I found one computer store that had it listed at about $355, which I find strange. How could another computer store sell Apple hardware cheaper than the official Apple reseller?
    I find this whole situation frustrating. I can't buy from nvidia.com either, as the taxes and stuff would likely end up in the $200 area. For once I wish I lived in the US.
    Thanks for the help. =)

  • Is it possible to use an Airport Extreme router with AT&T 2Wire modem/router?

    My AT&T 2Wire modem/router does not have great range, speed, or compatibility with my MacBook and iMac. I bought an Apple AirPort Extreme to try to remedy my problem; however, I did not realize it was purely a router. Is it possible to use the 2Wire modem/router as just a pure modem and connect it to the AirPort Extreme? Would it be more beneficial to purchase a new modem that does not have wireless functionality? I have looked at previous posts but couldn't find any definite answers. Any help will be GREATLY appreciated!! Thanks!

    The easiest route to take here would be to simply turn off the wireless function on the 2-Wire modem/router and then configure the AirPort Extreme in Bridge Mode to provide a dual band wireless network.
    If the 2-Wire's performance shortcomings are the result of other wireless interference issues at your location, unfortunately the AirPort Extreme may not perform any better, wireless-wise. It's not possible to know things like this in advance... more a case of you won't know until you try.
    In theory, it might be possible to configure the 2-Wire product as a simple modem...but I doubt that AT&T would be willing to tell you how to do this or provide support in the event of problems.
    It's getting to be hard to find a simple DSL modem these days, and even if you do, it will have to be re-configured as a Bridge to work correctly with the AirPort Extreme, so that complicates things as well.

  • Is it possible to use iChat for non-network video conferencing

    Hi all,
    I am looking for a little help, please. This may be a fruitless question, but here it goes. I have a deaf 5th grader. TTY relay is essentially a thing of the past, and all of her friends are using 'video phones' to keep in touch. Anyway, today we got a Sorenson VRS router and video phone camera from her school, and it uses our existing cable internet connection. It is a BUNCH of stuff to hook up to an already crowded system. Does anyone have any ideas about the possibility of using iChat for my daughter to connect to her friends who will be using a different system?
    Sorry if this is the question of a neophyte. I am learning this deaf world as I go.
    Thanks,
    Chad

    Hi Randell,
    Welcome to the Apple Discussion Pages.
    I can tell you what iChat will do and maybe that will help.
    iChat is an Instant Messaging service that uses the AIM servers. Therefore it is an AIM client.
    AIM stands for AOL Instant Messenger.
    You can join this service in several ways.
    1) By having the AOL Package where AOL is your Internet provider (ISP)
    2) By having the stand-alone application, referred to on these Discussions as the AIM application (it comes in several versions for PCs, Macs, Linux, and phones, although each version varies as to what it can do - see the sidebar at the link page).
    3) Or by another application that is also an AIM client.
    On a Mac that can be Proteus or AdiumX; there are PC AIM clients like Trillian.
    Some AIM clients only do some of the features.
    For instance, iChat to PC AIM can do Video (moving pictures and sound) but not Audio-only.
    iChat to Trillian on a PC can do both Video and Audio-only if the PC end has the Pro version of Trillian ($25).
    All clients can at least Text chat.
    Many can send files and pics as well as Text chatting.
    iChat can support Text chatting, Audio chats, and Video chats. Any USB mic will work for Audio chats. Many FireWire DV camcorders and FireWire webcams also work, and some USB cams can be made to work with iChat.
    To do this, iChat needs a fast enough Mac (processor speed) and a fast enough internet connection speed.
    In iChat 2, there is felt to be a better chance of success with connections to PC machines running either AIM (or the AOL package) or Trillian Pro for Video chats.
    Text chats and Video chats can be done with the same Buddy (contact) at the same time. I would have thought this would be ideal, as it provides a chance to practice lip-reading skills without missing any of the info.
    What iChat 2 cannot do:
    1) Chat to other services like MSN or Yahoo!
    2) Chat or link to Video phones.
    3) Video to Mobile (cell) phones.
    4) Connect to other Video conferencing services.
    For more info start here
    Faq on how to start in iChat
    This tells you more about connecting to PCs.
    More info than you might need at this stage http://www.ralphjohnsuk.dsl.pipex.com/
    What iChat 3 adds:
    1) Up to 4 people in a Video chat
    2) Access to the Jabber service (another Instant Messaging service). This can also use Jabber servers that provide things called Transports (gateways) to other IM services, which allows Text chatting to Buddies on those services.
    Ralph

  • Looking to buy new MacBook Pro for editing with Premiere/After Effects, but wondering about trade-off between Processor Speed and Graphics Card

    I'm a professional video editor (using Premiere and After Effects) looking to buy a new MacBook Pro and am deciding between two models. The slightly older model has a 2.8GHz i7 (3rd generation) quad-core processor with an NVIDIA GeForce GT 650M graphics card with 1GB of video memory. The newer model has a 2.3GHz i7 (4th generation) quad-core processor with an NVIDIA GeForce GT 750M with 2GB of video memory, plus Intel Iris Pro graphics.
    Which makes the most difference for editing with Premiere and After Effects: processor speed or the graphics card?
    Any help/guidance would be greatly appreciated.
    Thanks!
    mike

    Poikkeus wrote:
    1. Your MBP will be somewhat slower than your iMac, as reflected in the general speed; desktop Macs have more RAM and storage.
    You reckon? If he gets the 17", he would have up to 8x more RAM, 4x more GPU, and a bit faster CPU.
    2. Be aware of the advantages and disadvantages of extra RAM. Loading up the slots will make juggling multiple applications easier, like Photoshop, VLC, and Safari. However, more than 4 gigs of RAM will make loading your MBP on startup twice as slow - at least a minute, probably longer. That's why an MBP user with extra RAM should sleep their machine nearly always when not in use, rather than powering off.
    I did not know this; I just upgraded from 4GB to 8GB the other day. I have not noticed it being slower, but I don't often shut it down. It's nice to not even have to bother with ifreemem.
    3. Additional storage and RAM will maximize the basic capabilities of your MBP, but you won't be able to make a 2.3ghz machine any faster than it already is.
    SSD
    4. I still feel that your iMac will be faster than your prospective MBP. The only way to dramatically increase the speed would be the installation of a SSD drive (like the lauded OWC series). But they're not cheap.
    I don't want to rain on your parade, but want you to get a more realistic idea of your performance.
    I chose a MacBook Pro, 17" of course. I use it for gaming. Yes, an iMac is better for gaming, but it's nice to be able to move around - set up a man cave in the lounge one week, or in the bedroom the next. But you fork out a lot more dosh for that luxury. And yes, it has not as much power, as Poikkeus has said.

  • Is it possible to use Aperture to make a new image from two existing ones - i.e., a new JPG? I essentially want to create a collage that I can then print as a single image, rather than in a book. Any advice will be most welcome!

    Is it possible to use Aperture to make a new JPG from two existing ones? I'd essentially like to create a photo collage (without using existing templates) that I can then print as a single image, rather than in a book. Any advice will be most welcome! Thanks.

    https://discussions.apple.com/message/15678716#15678716
    (Added)
    The main point is that Aperture is used to make digital negatives as good as they can be (for the uses you define), but it does not ADD data to your digital negatives, nor does it produce NEW combination files. For those tasks you need a graphics program.
    Message was edited by: Kirby Krieger

  • Is it possible to use LTO tape library as an Archive Device

    Is it possible to use LTO tape library as an Archive Device in Final Cut Server System?
    Does it require any additional hardware/software? Are there any recommendations available?

    Hi Daro, you might want to look at TolisGroup's impending BRU "Producer's Edition": http://www.tolisgroup.com/products/macosx/pe/.
    I believe they are looking to have it interface with some of the pro apps. [Will stand corrected.]
    Whatever the archiver app, it needs to know where the objects start and end on the data tape volume (e.g. Ultrium LTO-4). This is simply done by knowing the BLOCKID of the first block of the file(s) that were archived on the tape (this was retrieved when the ARCHIVE operation was done, via the READ BLOCK ID command). That is the simplest form. The software can then use the LOCATE BLOCK SCSI tape command to seek to that spot on the tape volume using high-speed search and read in what it needs.
    Some rather spiffy apps, like SGL's FlashNet for OS X and others (DIVArchive on Windows), will resolve a TCR (timecode reference) to a location on the data tape volume based on the known codec or, in SGL's case, through its MXF archive implementation. This could be very handy for large, high-bitrate objects.
    I am waiting for this myself. It is currently in BETA.
    Contact TolisGroup and see what the status of BRU PE is. http://www.tolisgroup.com/products/macosx/pe/
    HTH
    w
