T/C dropout foils capture process - flywheel settings?

Hi all -
I'm trying to help someone else solve this. We have searched the manual to little avail.
Your time, experience and expertise are greatly appreciated. Here are the two issues.
Time code dropouts are causing FCE to hiccup and lose continuity while trying to capture DV from a:
Hi-8 dedicated playback deck connected via S-Video to a:
PYRO A/V analog-to-digital converter, saving files via FireWire to an:
iBook (1.5 GB RAM) running the most current versions of all software, storing to a:
Western Digital "My Book" hard drive - I forget the size - plenty of space though
I've got twenty years dealing with digital audio and computers so I'm not exactly new to this
but obviously stumped here.
We continually get T/C breaks making a smooth capture pass impossible.
Now I know consumer-grade cameras are going to have dropouts, but we need to transfer
this to DV and ignore them; currently they're causing FCE to pause, wrecking the capture.
The other problem is that when we do get an unbroken pass, we often get a message stating that we've run out of space for the capture. We're typically trying to capture 60 unbroken minutes.
Three questions:
Do any of the model/items mentioned above have known issues with FCE?
Is there some setting which will forgive the breaks and keep rolling (a.k.a. flywheeling), maintaining the capture pass?
Where does one adjust the amount of memory allotted to a capture?
ANY other suggestions would be appreciated please.
TIA,
Rob

You probably have multiple things going on.
First, are you using the _DV NTSC - DV Converter_ easy setup in FCE? I suspect you are using DV NTSC. When you use a converter like your Pyro you need to use the DV Converter easy setup. That alone could be causing your dropped frames. You shouldn't get any dropped frames with the DV Converter setup as long as everything else is working properly.
Which leads me to ...
Your WD My Book drive. _How is it formatted?_ It needs to be Mac OS Extended to work with Final Cut. If it's FAT32 (which I believe is how it comes formatted from the factory) you will basically never be able to capture more than about 9 minutes of video, and you will get an out-of-space error message. WD My Books are not my favorite drives; especially for video, they have caused difficulties for many people. But at least you should check the format.
Your FCE preference setting called _Limit Capture Now To_ ... This should be checked (on) and the limit set to 62 or 63 minutes if you are trying to capture 60 min of video. If it's set to 5 minutes, for example, you'll never capture more than 5 minutes of video.
Your system. _What are the exact specs of your iBook?_ Especially, what is the size of your system HD and how much free space is available on it? This is important. Also, _what versions of FCE and QT_ do you have on your iBook? And do you have _any other applications_ running at the same time you are capturing?

Similar Messages

  • Excise invoice capture process

    Hi,
    I want to know about the excise invoice capture process for a depot plant: which T-code is used for a depot plant, how to do Part 1 and Part 2, and also the reversal process for the same.
    Also, what is the difference between the excise invoice capture process for a depot and a non-depot plant?
    regards,
    zafar

    Hi Zafar,
    There are no Part 1 and Part 2 in RG23D for the depot scenario. You can update RG23D at the time of MIGO or with J1IG "Capture excise invoice for depot".
    For cancelling you can use the same transaction, and to send the goods out from the depot plant use T-code J1IJ to update RG23D.
    The rest of the process remains the same: extraction via J2I5 and printing through J2I6.
    BR

  • Instantiation and start_scn of capture process

    Hi,
    We are working on Streams replication, and I have a question about the behavior of the stream.
    During setup, we have to instantiate the database objects whose data will be transferred during the process. This instantiation creates the objects at the destination DB and sets the SCN beyond which changes from the source DB will be accepted. When the capture process is created, it is assigned a specific start_scn value; it starts capturing changes beyond this value and puts them in the capture queue.
    If in between the capture process gets aborted, and we have no alternative other than re-creation of the capture process, what will happen with the data created during that dropping/recreation of the capture process? Do I need to physically get the data and import it at the destination DB? When we have instantiated objects at the destination DB, why isn't there some mechanism by which the new capture process starts capturing changes from the lowest instantiation SCN among all instantiated tables? Is there any other workaround than exp/imp when both DBs (schemas) are not in sync at source/destination because of a failure of the capture process? We did face this problem, and could find only one workaround: exp/imp of the data.
    thanx,

    Thanks Mr SK.
    The following queries give some kind of confirmation:
    source DB
    SELECT SID, SERIAL#, CAPTURE#,CAPTURE_MESSAGE_NUMBER, ENQUEUE_MESSAGE_NUMBER, APPLY_NAME, APPLY_MESSAGES_SENT FROM V$STREAMS_CAPTURE
    target DB
    SELECT SID, SERIAL#, APPLY#, STATE,DEQUEUED_MESSAGE_NUMBER, OLDEST_SCN_NUM FROM V$STREAMS_APPLY_READER
    One more question :
    Is there any maximum limit on the number of DBs involved in Oracle Streams?
    Ths
    SM.Kumar
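    For what it's worth, here is a minimal sketch of re-setting the instantiation SCN at the destination after a manual resync (the table, global name and SCN below are illustrative, not from this thread):
    -- 1) On the source, note the SCN to export from:
    SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM dual;
    -- 2) Export with exp ... FLASHBACK_SCN=<that SCN> and import at the destination.
    -- 3) On the destination, record that SCN as the instantiation point so apply discards older changes:
    BEGIN
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name   => 'scott.emp',       -- illustrative table
        source_database_name => 'SRC.WORLD',       -- illustrative source global name
        instantiation_scn    => 123456789);        -- the SCN from step 1
    END;
    /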

  • Internal Error when creating Capture Process

    Hi,
    I get the following when trying to create my capture process:
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'capture_queue_table',
        queue_name  => 'capture_queue');
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'apply_queue_table',
        queue_name  => 'apply_queue');
    END;
    /
    BEGIN
    ERROR at line 1:
    ORA-00600: internal error code, arguments: [kcbgtcr_4], [32492], [0], [1], [],
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 408
    ORA-06512: at line 2
    Any ideas?
    Cheers,
    Warren

    Make sure that you have upgraded to the 9.2.0.2 patchset and, as part of the migration to 9202, that you have run the catpatch.sql script.
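    As a quick hedged check of what the database actually reports (not specific to this ORA-00600, just a sanity check):
    -- Confirm the patchset level and that the catalog components are VALID after catpatch.sql
    SELECT * FROM v$version;
    SELECT comp_name, version, status FROM dba_registry;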

  • Capture process: Can it write to a queue table in another database?

    The capture process reads the archived redo logs. It then writes the appropriate changes into the queue table in the same database.
    Can the Capture process read the archived redo logs and write to a queue table in another database?
    HP-UX
    Oracle 9.2.0.5

    What you are asking is not possible directly in 9i, i.e. the capture process cannot read the logs and write to a queue somewhere else.
    If the other database is also Oracle with platform and version compatibility then, you can use the 10g downstream capture feature to accomplish this.
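    As a rough sketch of the 9i workaround implied above - capture into a local queue, then propagate to the remote queue over a database link (all names are illustrative):
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
        table_name             => 'scott.emp',                       -- illustrative table
        streams_name           => 'prop_to_remote',                  -- illustrative propagation name
        source_queue_name      => 'strmadmin.capture_queue',         -- local queue fed by the capture process
        destination_queue_name => 'strmadmin.apply_queue@remote_db', -- queue in the other database
        include_dml            => TRUE,
        include_ddl            => FALSE);
    END;
    /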

  • Capture Process Error

    Hi,
    We are working on Oracle 9i bi-directional Streams replication. After setup, and a sufficient amount of testing on our side, we are facing a fatal error in
    the capture process in one of the databases. Both DB servers have similar setup parameters, similar hardware, and almost everything is the same. But we are facing this error in only one of them.
    The error is :
    Dump file e:\oracle\admin\repf\udump\repf_cp01_1620.trc
    Thu Apr 03 15:42:53 2003
    ORACLE V9.2.0.2.1 - Production vsnsta=0
    vsnsql=12 vsnxtr=3
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Oracle9i Enterprise Edition Release 9.2.0.2.1 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.2.0 - Production
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Instance name: repf
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Windows thread id: 1620, image: ORACLE.EXE (CP01)
    *** 2003-04-03 15:42:53.000
    *** SESSION ID:(21.548) 2003-04-03 15:42:53.000
    TLCR process death detected. Shutting down TLCR
    error 1280 in STREAMS process
    ORA-01280: Fatal LogMiner Error.
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01280: Fatal LogMiner Error.
    Dump file e:\oracle\admin\repf\udump\repf_cp01_1904.trc
    Tue Apr 01 18:44:27 2003
    ORACLE V9.2.0.2.1 - Production vsnsta=0
    vsnsql=12 vsnxtr=3
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Oracle9i Enterprise Edition Release 9.2.0.2.1 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.2.0 - Production
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Instance name: repf
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Windows thread id: 1904, image: ORACLE.EXE (CP01)
    *** 2003-04-01 18:44:27.000
    *** SESSION ID:(18.7) 2003-04-01 18:44:27.000
    error 604 in STREAMS process
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01423: error encountered while checking for extra rows in exact fetch
    ORA-01089: immediate shutdown in progress - no operations are permitted
    ORA-06512: at "SYS.LOGMNR_DICT_CACHE", line 1600
    ORA-06512: at "SYS.LOGMNR_GTLO3", line 33
    ORA-06512: at line 1
    OPIRIP: Uncaught error 1089. Error stack:
    ORA-01089: immediate shutdown in progress - no operations are permitted
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01423: error encountered while checking for extra rows in exact fetch
    ORA-01089: immediate shutdown in progress - no operations are permitted
    ORA-06512: at "SYS.LOGMNR_DICT_CACHE", line 1600
    ORA-06512: at "SYS.LOGMNR_GTLO3", line 33
    ORA-06512: at line 1
    Thanx,
    Kamlesh Chaudhary

    If you are configuring a Streams environment you don't have to specify the LogMiner tablespace, so I did not specify it manually when I was setting up my capture process, and I did not change it later.
    Prior to the ORA-01280 fatal LogMiner error I get the following errors:
    ORA-00353: log corruption near block string change string time string
    ORA-00354: corrupt redo log block header.
    I've checked the hard drive, and it is fine.
    Any suggestions?
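    To narrow down which archived log the ORA-00353 points at, a hedged query (the "change" in the error message is an SCN; substitute it below):
    SELECT thread#, sequence#, name, first_change#, next_change#
    FROM   v$archived_log
    WHERE  123456789 BETWEEN first_change# AND next_change#;  -- replace with the reported change#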

  • Capture process status waiting for Dictionary Redo: first scn....

    Hi
    I am facing an issue with Oracle Streams.
    The following message is shown in the capture state:
    waiting for Dictionary Redo: first scn 777777777 (e.g.)
    Archive_log_dest=USE_DB_RECOVERY_FILE_DEST
    I have a space-related issue, so I restored the archive logs to another partition, e.g. /opt/arc_log.
    What should I do:
    1) make the DB start reading archive logs from the above location, or
    2) how do I move some archive logs from /opt/arc_log back to USE_DB_RECOVERY_FILE_DEST so the DB starts processing?
    Regards

    Hi -
    Bad news.
    As per note 418755.1
    A. Confirm checkpoint retention. Periodically, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter, checkpoint_retention_time, controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible scn available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved and the Streams metadata tables previous to this scn (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
    Check if the archived redo log file it is requesting is about 60 days old. You need all archived redo logs from the requested log file onwards; if any are missing then you are out of luck. It doesn't matter that they have been mined and captured already; capture still needs these files for a restart. It has always been like this and IMHO is a significant limitation of Streams.
    If you cannot recover the log files, then you will need to rebuild the capture process and ensure that any gap in data capture has been resynced manually, using tags to fix the data.
    Rgds
    Mark Teehan
    Singapore
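    For reference, a minimal sketch of the ALTER_CAPTURE call the note refers to (the capture name and retention value are illustrative; the parameter exists in 10gR2 and later):
    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name              => 'MY_CAPTURE',   -- illustrative
        checkpoint_retention_time => 7);             -- keep 7 days of checkpoint data instead of the default 60
    END;
    /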

  • Location of Capture Process and Perf Overhead

    Hi,
    We are just starting to look at Streams technology. I am reading the doc and it implies that the capture process is run on the source database node. I am concerned about the overhead on the OLTP box. I have a few questions I was hoping to get clarification on.
    1. Can I send the redo log to another node/db with data dictionary info and run the capture there? I would like to offload the perf overhead to another box and I thought Logminer could do it, so why not Streams.
    2. If I run the capture process on one node/db can the initial queue I write to be on another node/db or is it implicit to where I run the capture process? I think I know this answer but would like to hear yours.
    3. Are there any performance figures on the cost of the capture process to an OLTP system? I realize there are many variables but am wondering if I should even be concerned with offloading the capture process.
    Many thanks in advance for your time.
    Regards,
    Tom

    In the current release, Oracle Streams performs all capture activities at the source site. The ability to capture the changes from the redo logs at an alternative site is planned for a future release. Captured changes are stored in an in-memory buffer queue on the local database. Multi-cpu servers with enough available memory should be able to handle the overhead of capture.
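    If it helps, capture throughput and latency can be sampled from V$STREAMS_CAPTURE - a hedged sketch, since the exact columns vary a little by release:
    SELECT capture_name, state,
           total_messages_captured,
           total_messages_enqueued,
           (capture_time - capture_message_create_time) * 86400 AS latency_seconds
    FROM   v$streams_capture;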

  • Create multiple capture processes for same table depending on column value

    Hi,
    is it possible to create multiple realtime downstream capture processes to capture changes for the same table depending on column value?
    Prakash

    I found it - by using subset rules.
    prakash
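    For anyone landing here, a rough sketch of a subset rule on a capture process (the table, predicate and names are illustrative):
    BEGIN
      DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
        table_name    => 'scott.orders',           -- illustrative table
        dml_condition => 'region = ''EAST''',      -- capture only rows matching this predicate
        streams_type  => 'capture',
        streams_name  => 'capture_east',           -- illustrative capture name
        queue_name    => 'strmadmin.capture_queue');
    END;
    /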

  • Rman-08137 can't delete archivelog because the capture process need it

    When I use the RMAN utility to delete the old archive logs on the server, it shows: RMAN-08137 can't delete archivelog because the capture process needs it. How do I resolve the problem?

    It is likely that the "extract" process still requires those archive logs, as it is monitoring transactions that have not yet been "captured" and written out to a GoldenGate trail.
    Consider the case of doing the following: ggsci> add extract foo, tranlog, begin now
    After pressing "return" on that "add extract" command, any new transactions will be monitored by GoldenGate. Even if you never start extract foo, the GoldenGate + rman integration will keep those logs around. Note that this GG+rman integration is a relatively new feature, as of GG 11.1.1.1 => if "add extract foo" prints out "extract is registered", then you have this functionality.
    Another common "problem" is deleting "extract foo", but forgetting to "unregister" it. For example, to properly "delete" a registered "extract", one has to run "dblogin" first:
    ggsci> dblogin userid <userid> password <password>
    ggsci> delete extract foo
    However, if you just do the following, the extract is deleted, but not unregistered. Only a warning is printed.
    ggsci> delete extract foo
    <warning: to unregister, run the command "unregister...">
    So then one just has to follow the instructions in the warning:
    ggsci> dblogin ...
    ggsci> unregister extract foo logretention
    But what if you didn't know the name of the old extracts, or were not even aware if there were any existing registered extracts? You can run the following to find out if any exist:
    sqlplus> select count(*) from dba_capture;
    The actual extract name is not exactly available, but it can be inferred:
    sqlplus> select capture_name, capture_user from dba_capture;
    <blockquote>
    CAPTURE_NAME CAPTURE_USER
    ================ ==================
    OGG$_EORADF4026B1 GGS
    </blockquote>
    In the above case, my actual "capture" process was called "eora". All OGG processes will be prefixed by OGG in the "capture_name" field.
    Btw, you can disable this "logretention" feature by adding in a tranlog option in the param file,
    TRANLOGOPTIONS LOGRETENTION DISABLED
    Or just manually "unregister" the extract. (Not doing a "dblogin" before "add extract" should also work in theory... but it doesn't. The extract is still registered after startup. Not sure if that's a bug or a feature.)
    Cheers,
    -Michael

  • Copy/Capture Adjustment Layer settings using Javascript

    Hi,
    For the last couple of days I've been trying to copy adjustment layer settings via JavaScript after the adjustment layer has already been created.
    The general idea is to capture the adjustment layer settings and save them to a JSON file for future presets, etc. I was hoping to capture the raw data (getData(stringIDToTypeID('legacyContentData'))) and store it in a JSON file base64-encoded.
    I'm able to capture the raw data, but I'm not able to restore it to the adjustment layer using putData or anything similar.
    Capture raw data of an adjustment layer:
    var doc = app.activeDocument;
    var layer = doc.activeLayer;
    var ref = new ActionReference();  // reference to the currently targeted layer
    ref.putEnumerated(charIDToTypeID("Lyr "), charIDToTypeID("Ordn"), charIDToTypeID("Trgt"));
    var desc = executeActionGet(ref).getList(stringIDToTypeID('adjustment')).getObjectValue(0);
    var rawData = desc.getData(stringIDToTypeID('legacyContentData'));
    Trying to restore the raw data (cTID/sTID are the usual shorthands for charIDToTypeID/stringIDToTypeID):
    var desc = new ActionDescriptor();
    var ref = new ActionReference();
    ref.putEnumerated(cTID('AdjL'), cTID('Ordn'), cTID('Trgt'));
    desc.putReference(cTID('null'), ref);
    aldesc = new ActionDescriptor();
    aldesc.putEnumerated(sTID("presetKind"), sTID("presetKindType"), sTID("presetKindCustom"));
    var list1 = new ActionList();
    // Not sure what to do here ...
    aldesc.putList(cTID('Adjs'), list1);
    desc.putObject(cTID('T   '), id, aldesc);   // note: 'id' and 'mode' are not defined in this snippet
    executeAction(cTID('setd'), desc, mode);
    If I use the above code with code generated by e.g. the ScriptListener, or with an action translated into JavaScript, it works (at least for curves), but I'd rather use getData/putData so I can capture already-created adjustment layers without needing to code each layer separately. toStream and fromStream would also work, but I wasn't able to restore data using those methods either.
    Help would be appreciated.
    Thanks!

    Thanks for the replies you guys. It's not hardware-related, Mylenium. My machine was purpose-built for AE work and this is a very simple comp that is nothing remotely as intensive as some of the truly complex and effects-laden projects I've done before. It's even simpler than the one in the same project where the other adjustment layer is working properly. The layer is going white before an effect is even being added, so it's not that. I'll double-check the OpenGL when I can get back on my machine in a bit, but I never changed anything before changing comps, so I don't think that's it.
    I'm guessing that it is probably related to the layer settings itself. There's no 3D layers so it's not that Dave. I'm going to check Rick's idea when I can because the white solid behavior sounds exactly what he is talking about.

  • Can I configure capture process of streams on physical standby

    We have a high-volume OLTP system that has a Data Guard setup with physical and logical standbys. We are planning to build a data warehouse and have Streams set up to feed data into it. Instead of taxing the primary, I was wondering if I could set up the physical standby as the source for my Streams (basically configure the capture process on the physical standby).
    Appreciate your help in advance!
    Thanks

    Thanks for the reply, Tekicora.
    This means that on the primary I will have another destination that I send the archives to (in addition to the physical standby), and that will be my source database for Streams, where I can configure the capture process. If this understanding is right, then I have the following questions:
    1) Can I use a cascaded standby to relieve my primary from having another log destination, and use that database as the source?
    2) Do you know if a physical standby can be used as the source in 11g? Because we are planning to move to 11g soon.
    Thanks
    Smitha
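    As a rough idea of what the extra destination on the primary looks like (a hedged sketch for 10g/11g; the service name and destination number are illustrative):
    -- Ship redo from the primary to the downstream capture database as well
    ALTER SYSTEM SET log_archive_dest_3 =
      'SERVICE=stream_src ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_state_3 = ENABLE SCOPE=BOTH;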

  • PPCS4 Clip "Description" etc. won't save after capture process

    During the logging/capturing process I have been entering information into the various Clip Data fields, such as Description, Scene, and Log Note. Once the clips have been logged and the project closed, when I start a new PPro CS4 session and retrieve those same clips, the Clip Data that I had entered does not appear next to the clips in the Project panel. Does this have to do with a metadata issue? Any help ASAP would be greatly appreciated. Thanks.

    They are blank... nothing under the field heading.
    Complete workflow/actions are as follows - may be helpful here...
    1. capture/log clips with added descriptions etc.
    2. Batch capture above - creating .avi files
    3. Close out of PPro CS4
    4. Re-open PPro - and import the above .avi files
    5. The .avi files now do not have the descriptions that were input during the capture/log process

  • RAC for downstreams capture process

    I have created a real-time downstream capture process in a RAC to protect the process from failure, but I have some doubts about this:
    1- Do I need to create a group of standby redo logs for each instance in the cluster, or is it shared by all?
    2- If one instance goes down and we send redo from the source via the following service configured in the source TNSNAMES:
    RAC_STR =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = RAC-global_name)
        )
      )
    will the configured process be able to continue capturing changes without redo data loss?
    Appreciate any explanation.

    >
    if one instance goes down and we send redo from the source via the following service configured in the source TNSNAMES:
    RAC_STR =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-instance1)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = RAC-global_name)
        )
      )
    will the configured process be able to continue capturing changes without redo data loss?
    You will not experience data loss if one of the RAC instances goes down - the next one will take over your downstream capture process and continue to mine redo from the source database. But you definitely need to correct your tnsnames, because it is pointing twice to the same RAC instance "VIP-instance1".
    Downstream capture on RAC unfortunately has other problems, which I've already experienced, but maybe they won't affect your configuration. The undocumented problems (or bugs which are open and not solved yet) are:
    1. If your RAC DB has a physical standby, it can happen that it stops registering redo from the upstream Streams database.
    2. If your RAC DB has both downstream and local capture, then when more than 2 RAC instances are running, the local capture can't continue with the current redo log (only after a log switch).
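    On question 1, a hedged sketch of adding standby redo logs per thread on the downstream RAC database (group numbers, paths and sizes are illustrative; size them like the source online logs):
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
      GROUP 11 ('/u01/oradata/rac/srl_t1_g11.log') SIZE 512M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
      GROUP 12 ('/u01/oradata/rac/srl_t2_g12.log') SIZE 512M;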

  • Resetting SCN from removed Capture Process

    I've come across a problem in Oracle Streams where the Capture Processes seem to get stuck. There are no reported errors in the alert log and no trace files, but the capture process fails to continue capturing changes. It stays enabled, but in an awkward state where the OEM Console reports zeros across the board (0 messages, 0 enqueued), when in fact there had been accurate totals in the past.
    Restarting the Capture process does no good. The Capture process seems to switch its state back and forth from Dictionary Initialization to Initializing and vice versa. The only thing that seems to kickstart Streams again is to remove the Capture process and recreate the same process.
    However, my problem is that I want to set the start_scn of the new capture process to the captured_scn of the removed capture process, so that the new one can start from where the old one left off. However, I'm getting an error that this cannot be performed (cannot capture from specified SCN).
    Am I understanding this correctly? Or should the new capture process start from where the removed one left off automatically?
    Thanks

    Hi,
    I seem to have the same problem.
    I now have a latency of roughly 3 days while nothing happened in the database, so I want to be able to set the capture process to a later SCN. Setting the start_scn gives me an error (can't remember it now, unfortunately). Sometimes it seems that the capture process gets stuck in an archived log. It then takes a long time for it to go further, and when it does it sprints through a bunch of logs before it gets stuck again. During that time all the statuses look good, and no heavy CPU usage is observed. We saw that the capture builder has the highest CPU load, where I would expect the capture reader to be busy.
    I am able to set the first_scn, so a rebuild of the LogMiner dictionary might help a bit. But then again: why would the capture process need such a long time to process the archived logs where no relevant events are expected?
    In my case the Streams solution is being considered as a candidate for a replication solution where Quest's SharePlex is considered too expensive and unable to meet the requirements. One main reason it is considered inadequate is that it is not able to catch up after a database restart or a heavy batch. Now it seems that our capture process might suffer from the same problem. I sincerely hope I'm wrong and it proves to be capable.
    Regards,
    Martien
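    For completeness, a hedged sketch of moving the capture SCNs forward (the capture must be stopped first; start_scn has to be at or above first_scn, and parameter availability depends on the release):
    BEGIN
      DBMS_CAPTURE_ADM.STOP_CAPTURE('MY_CAPTURE');   -- illustrative capture name
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name => 'MY_CAPTURE',
        first_scn    => 123456789);                  -- dictionary build SCN to restart from
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name => 'MY_CAPTURE',
        start_scn    => 123456789);                  -- where mining resumes (>= first_scn)
      DBMS_CAPTURE_ADM.START_CAPTURE('MY_CAPTURE');
    END;
    /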
