Recording longer than default 15min

I want to record from reel to reel, but recording stops after 15 minutes. I changed this once before but cannot remember where. Can someone tell me which menu setting to change?

Press "A" on the keyboard; this opens the "Set Audio Record Path" dialogue box.
You can either change the 15-minute setting or uncheck the "Maximum Recording Time" box.

Similar Messages

  • IMovie & iSight - can't record longer than 15 minutes?

    Hi there,
    I'm just curious to see if anyone can help me with the problems I'm having using iMovie and iSight. Basically, I'm trying to record a series of 2-hour lectures at my uni and am using the iSight as a firewire camera to record from into iMovie. However, every 15-20 minutes, iMovie stops recording and shows a blue screen with the message 'No Camera Attached'. I then have to start recording again, and wait another 15 minutes before the same thing occurs. This is very annoying! I have over 60 gigs of free HD space, so that shouldn't be an issue - hence, can anyone advise me what the problem might be?
    BTW - I did do a search on this topic, but the only thread of relevance I could find...
    Walt on line, "Making hour-long iMovie HD videos with iSight", 02:25pm Oct 20, 2005 CDT
    ...wasn't much help and itself pointed to a thread that has been deleted.
    Is it simply a case that long movies can't be recorded in iMovie whatever the input (not just the iSight)? If so, I'd be grateful for any advice as it might save me the money spent purchasing a cheap videocam. Also, I'm aware that I can record for a seemingly unlimited time in Quicktime Pro, but the quality is not as good as with iMovie which I'd obviously prefer.
    My set-up: PowerBook 15" 1.67ghz (rev.c), 80gb, 2gb RAM, Tiger 10.4.3, iMovie 5.0.2, iSight (rev.b)
    Many thanks for any assistance with this!
    R.

    We are having the same problem at my office. I have 5 iSights on 5 computers, all of which have been recording short interviews of 15 minutes or less, and now two of them have decided to stop: one just freezes, the other gets the blue screen.
    I have deleted preferences and repaired permissions; neither seems to work. The camera works fine on another computer. So it is something to do with iMovie, or a preference I am not finding?
    Thank you,
    Angie

  • Sqlldr error 510: Physical record in data file is longer than the max 1048576

    SQL*Loader: Release 10.2.0.2.0 - Production on Fri Sep 21 10:15:31 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Control File: /apps/towin_p/bin/BestNetwork.CTL
    Data File: /work/towin_p/MyData.dat
    Bad File: /apps/towin_p/bin/BestNetwork.BAD
    Discard File: none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Continuation: none specified
    Path used: Direct
    Load is UNRECOVERABLE; invalidation redo is produced.
    Table "BN_ADM"."DWI_USAGE_DETAIL", loaded from every logical record.
    Insert option in effect for this table: APPEND
    TRAILING NULLCOLS option in effect
    Column Name Position Len Term Encl Datatype
    USAGE_DETAIL_DT FIRST * , DATE MM/DD/YYYY HH24:MI:SS
    UNIQUE_KEY SEQUENCE (MAX, 1)
    LOAD_DT SYSDATE
    USAGE_DETAIL_KEY NEXT * , CHARACTER
    RATE_AREA_KEY NEXT * , CHARACTER
    UNIT_OF_MEASURE_KEY NEXT * , CHARACTER
    CALL_TERMINATION_REASON_KEY NEXT * , CHARACTER
    RATE_PLAN_KEY NEXT * , CHARACTER
    CHANNEL_KEY NEXT * , CHARACTER
    SERIALIZED_ITEM_KEY NEXT * , CHARACTER
    HOME_CARRIER_KEY NEXT * , CHARACTER
    SERVING_CARRIER_KEY NEXT * , CHARACTER
    ORIGINATING_CELL_SITE_KEY NEXT * , CHARACTER
    TERMINATING_CELL_SITE_KEY NEXT * , CHARACTER
    CALL_DIRECTION_KEY NEXT * , CHARACTER
    SUBSCRIBER_LOCATION_KEY NEXT * , CHARACTER
    OTHER_PARTY_LOCATION_KEY NEXT * , CHARACTER
    USAGE_PEAK_TYPE_KEY NEXT * , CHARACTER
    DAY_OF_WEEK_KEY NEXT * , CHARACTER
    FEATURE_KEY NEXT * , CHARACTER
    WIS_PROVIDER_KEY NEXT * , CHARACTER
    SUBSCRIBER_KEY NEXT * , CHARACTER
    SUBSCRIBER_ID NEXT * , CHARACTER
    SPECIAL_NUMBER_KEY NEXT * , CHARACTER
    TOLL_TYPE_KEY NEXT * , CHARACTER
    BILL_DT NEXT * , DATE MM/DD/YYYY HH24:MI:SS
    BILLING_CYCLE_KEY NEXT * , CHARACTER
    MESSAGE_SWITCH_ID NEXT * , CHARACTER
    MESSAGE_TYPE NEXT * , CHARACTER
    ORIGINATING_CELL_SITE_CD NEXT * , CHARACTER
    TERMINATING_CELL_SITE_CD NEXT * , CHARACTER
    CALL_ACTION_CODE NEXT * , CHARACTER
    USAGE_SECONDS NEXT * , CHARACTER
    SUBSCRIBER_PHONE_NO NEXT * , CHARACTER
    OTHER_PARTY_PHONE_NO NEXT * , CHARACTER
    BILLED_IND NEXT * , CHARACTER
    NO_USERS_IN_CALL NEXT * , CHARACTER
    DAP_NO_OF_DSAS_USED NEXT * , CHARACTER
    USAGE_SOURCE NEXT * , CHARACTER
    SOURCE_LOAD_DT NEXT * , DATE MM/DD/YYYY HH24:MI:SS
    SOURCE_UPDATE_DT NEXT * , DATE MM/DD/YYYY HH24:MI:SS
    RATE_PLAN_ID NEXT * , CHARACTER
    NETWORK_ELEMENT_KEY NEXT * , CHARACTER
    SQL string for column : "-2"
    SQL*Loader-510: Physical record in data file (/work/towin_p/MyData.dat) is longer than the maximum(1048576)
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    Table "BN_ADM"."DWI_USAGE_DETAIL":
    0 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Date conversion cache disabled due to overflow (default size: 1000)
    Bind array size not used in direct path.
    Column array rows : 5000
    Stream buffer bytes: 256000
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records read: 7000382
    Total logical records rejected: 0
    Total logical records discarded: 0
    Total stream buffers loaded by SQL*Loader main thread: 1666
    Total stream buffers loaded by SQL*Loader load thread: 4996
    Run began on Fri Sep 21 10:15:31 2007
    Run ended on Fri Sep 21 10:27:14 2007
    Elapsed time was: 00:11:43.56
    CPU time was: 00:05:36.81

    What options are you using in the CTL file? What does your data file look like (e.g. one line per record, or one single line)?
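    SQL*Loader-510 means a single physical record in the data file exceeded the read buffer, which defaults to 1048576 bytes. If the long records are legitimate, one common fix is to raise the buffer with the READSIZE option; a minimal sketch of an OPTIONS clause for the control file (the 20 MB value is illustrative, and the rest of BestNetwork.CTL stays as it is):

```text
-- Hypothetical OPTIONS clause added at the top of BestNetwork.CTL;
-- READSIZE is in bytes and must cover the longest physical record.
OPTIONS (DIRECT=TRUE, READSIZE=20971520)
```

    If the file is supposed to have one record per line, a record longer than 1 MB more often indicates a missing line terminator in the data than a genuinely long record, which is why the question above about the data file's layout matters.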

  • Can we record for longer than 1 hour 6 minutes?

    There are lectures that go longer than 1 hour and 6 minutes, the default in my version of Garageband.
    Is there a way to keep it recording for longer than that?
    Thanks!
    Alphawolf7

    http://www.bulletsandbones.com/GB/GBFAQ.html#recordlength

  • Fetching null records out of a function takes much longer than non-null

    Hi,
    We have a function that is called thousands of times from SQL. This function has a SELECT that can return at most one row.
    We realized that when the SQL statement doesn't return any record, it takes 3x longer in the fetch phase.
    I made a simple test with three functions, where each one was executed 1000 times. The first one has an extra outer join that guarantees it always returns one record, the second uses an explicit cursor that can return 0 records, and the third uses an implicit cursor that can also return 0 records.
    Here is the sample test code:
    DECLARE
      -- Local variables here
      CURSOR c IS
        SELECT teste_vasco.teste_vasco1(epis.id_episode) AS val
          FROM episode epis
         WHERE rownum <= 1000;
      TYPE t_c IS TABLE OF c%ROWTYPE;
      l_c t_c;
    BEGIN
      -- Test statements here
      OPEN c;
      FETCH c BULK COLLECT INTO l_c;
      CLOSE c;
      FOR i IN l_c.first .. l_c.last LOOP
        dbms_output.put_line(i || ' :' || l_c(i).val);
      END LOOP;
    END;
    The only difference between the tests is that vasco2 or vasco3 is called instead of vasco1.
    ###Test1
    -Function vasco1:
    FUNCTION teste_vasco1(i_episode IN episode.id_episode%TYPE) RETURN VARCHAR2 IS
    l_dt_set TIMESTAMP WITH LOCAL TIME ZONE;
    l_flg_stage VARCHAR2(3);
    l_dt_warn TIMESTAMP WITH LOCAL TIME ZONE;
    CURSOR c_care_stage IS
    SELECT cs.dt_set, cs.flg_stage, cs.dt_warn
    FROM episode epis
    LEFT JOIN care_stage cs ON (cs.id_episode = epis.id_episode AND cs.flg_active = 'Y')
    WHERE epis.id_episode = i_episode;
    BEGIN
    OPEN c_care_stage;
    FETCH c_care_stage
    INTO l_dt_set, l_flg_stage, l_dt_warn;
    CLOSE c_care_stage;
    IF l_dt_set IS NULL
    THEN
    RETURN NULL;
    END IF;
    RETURN l_dt_set || l_flg_stage || l_dt_warn;
    EXCEPTION
    WHEN OTHERS THEN
    pk_alert_exceptions.raise_error(error_code_in => SQLCODE, text_in => SQLERRM);
    pk_alert_exceptions.reset_error_state;
    RETURN NULL;
    END teste_vasco1;
    -Trace file:
    SELECT TESTE_VASCO.TESTE_VASCO1(EPIS.ID_EPISODE) AS VAL
    FROM
    EPISODE EPIS WHERE ROWNUM <= 1000
    call count cpu elapsed disk query current rows
    Parse 1 0.01 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.04 0.06 0 8 0 1000
    total        3      0.06       0.07          0          8          0        1000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 286 (recursive depth: 1)
    Rows Row Source Operation
    1000 COUNT STOPKEY (cr=8 pr=0 pw=0 time=2035 us)
    1000 INDEX FAST FULL SCAN EPIS_EPISODE_INFO_UI (cr=8 pr=0 pw=0 time=1030 us)(object id 153741)
    SELECT CS.DT_SET, CS.FLG_STAGE, CS.DT_WARN
    FROM
    EPISODE EPIS LEFT JOIN CARE_STAGE CS ON (CS.ID_EPISODE = EPIS.ID_EPISODE AND
    CS.FLG_ACTIVE = 'Y') WHERE EPIS.ID_EPISODE = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1000 0.07 0.05 0 0 0 0
    Fetch 1000 0.01 0.02 0 4001 0 1000
    total     2001      0.09       0.07          0       4001          0        1000
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 286 (recursive depth: 2)
    ###Test2
    -Function vasco2:
    FUNCTION teste_vasco2(i_episode IN episode.id_episode%TYPE) RETURN VARCHAR2 IS
    l_dt_set TIMESTAMP WITH LOCAL TIME ZONE;
    l_flg_stage VARCHAR2(3);
    l_dt_warn TIMESTAMP WITH LOCAL TIME ZONE;
    CURSOR c_care_stage IS
    SELECT cs.dt_set, cs.flg_stage, cs.dt_warn
    FROM care_stage cs
    WHERE cs.id_episode = i_episode
    AND cs.flg_active = 'Y';
    BEGIN
    OPEN c_care_stage;
    FETCH c_care_stage
    INTO l_dt_set, l_flg_stage, l_dt_warn;
    IF c_care_stage%NOTFOUND
    THEN
    CLOSE c_care_stage;
    RETURN NULL;
    END IF;
    CLOSE c_care_stage;
    IF l_dt_set IS NULL
    THEN
    RETURN NULL;
    END IF;
    RETURN l_dt_set || l_flg_stage || l_dt_warn;
    EXCEPTION
    WHEN OTHERS THEN
    pk_alert_exceptions.raise_error(error_code_in => SQLCODE, text_in => SQLERRM);
    pk_alert_exceptions.reset_error_state;
    RETURN NULL;
    END teste_vasco2;
    -Trace File:
    SELECT TESTE_VASCO.TESTE_VASCO2(EPIS.ID_EPISODE) AS VAL
    FROM
    EPISODE EPIS WHERE ROWNUM <= 1000
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.27 0 8 0 1000
    total        3      0.00       0.27          0          8          0        1000
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 286 (recursive depth: 1)
    Rows Row Source Operation
    1000 COUNT STOPKEY (cr=8 pr=0 pw=0 time=2048 us)
    1000 INDEX FAST FULL SCAN EPIS_EPISODE_INFO_UI (cr=8 pr=0 pw=0 time=1045 us)(object id 153741)
    SELECT CS.DT_SET, CS.FLG_STAGE, CS.DT_WARN
    FROM
    CARE_STAGE CS WHERE CS.ID_EPISODE = :B1 AND CS.FLG_ACTIVE = 'Y'
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1000 0.03 0.05 0 0 0 0
    Fetch 1000 0.00 0.00 0 2001 0 1
    total     2001      0.03       0.06          0       2001          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 286 (recursive depth: 2)
    Rows Row Source Operation
    1 TABLE ACCESS BY INDEX ROWID CARE_STAGE (cr=2001 pr=0 pw=0 time=11082 us)
    1 INDEX RANGE SCAN CS_EPIS_FACT_FST_I (cr=2000 pr=0 pw=0 time=7815 us)(object id 688168)
    ###Test3
    -Function vasco3
    FUNCTION teste_vasco3(i_episode IN episode.id_episode%TYPE) RETURN VARCHAR2 IS
    l_dt_set TIMESTAMP WITH LOCAL TIME ZONE;
    l_flg_stage VARCHAR2(3);
    l_dt_warn TIMESTAMP WITH LOCAL TIME ZONE;
    BEGIN
    BEGIN
    SELECT cs.dt_set, cs.flg_stage, cs.dt_warn
    INTO l_dt_set, l_flg_stage, l_dt_warn
    FROM care_stage cs
    WHERE cs.id_episode = i_episode
    AND cs.flg_active = 'Y';
    EXCEPTION
    WHEN no_data_found THEN
    RETURN NULL;
    END;
    IF l_dt_set IS NULL
    THEN
    RETURN NULL;
    END IF;
    RETURN l_dt_set || l_flg_stage || l_dt_warn;
    EXCEPTION
    WHEN OTHERS THEN
    pk_alert_exceptions.raise_error(error_code_in => SQLCODE, text_in => SQLERRM);
    pk_alert_exceptions.reset_error_state;
    RETURN NULL;
    END teste_vasco3;
    -Trace file:
    SELECT TESTE_VASCO.TESTE_VASCO3(EPIS.ID_EPISODE) AS VAL
    FROM
    EPISODE EPIS WHERE ROWNUM <= 1000
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.25 0.27 0 8 0 1000
    total        3      0.25       0.27          0          8          0        1000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 286 (recursive depth: 1)
    Rows Row Source Operation
    1000 COUNT STOPKEY (cr=8 pr=0 pw=0 time=2033 us)
    1000 INDEX FAST FULL SCAN EPIS_EPISODE_INFO_UI (cr=8 pr=0 pw=0 time=1031 us)(object id 153741)
    SELECT CS.DT_SET, CS.FLG_STAGE, CS.DT_WARN
    FROM
    CARE_STAGE CS WHERE CS.ID_EPISODE = :B1 AND CS.FLG_ACTIVE = 'Y'
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1000 0.07 0.06 0 0 0 0
    Fetch 1000 0.00 0.00 0 2001 0 1
    total     2001      0.07       0.06          0       2001          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 286 (recursive depth: 2)
    Rows Row Source Operation
    1 TABLE ACCESS BY INDEX ROWID CARE_STAGE (cr=2001 pr=0 pw=0 time=11119 us)
    1 INDEX RANGE SCAN CS_EPIS_FACT_FST_I (cr=2000 pr=0 pw=0 time=7951 us)(object id 688168)
    As you can see, in the first example the fetch of the SELECT inside function vasco1 takes 0.02 seconds and the SELECT of the test script takes 0.06 seconds. This test returned 1000 non-null records.
    In tests 2 and 3, the fetch phase of the SELECT in the test script takes much more time - 0.27 seconds - even though the fetch of the SELECT inside the functions (vasco2 and vasco3) took 0.00 seconds (as it returned only 1 row across the 1000 executions), and the only difference between the tests is the function being called. Both test2 and test3 returned 999 null records and 1 non-null record.
    How is it possible that fetching null records takes much longer than fetching non-null records?
    Hope you can understand the problem and thank you in advance for any help or suggestions.

    Thank you for the replies...
    But the thing is that the SELECT in the function is very fast and its fetch phase takes no time (0.00 seconds).
    And, as you can see in the execution plan, there's no need for a full index scan; only a range scan is performed:
    SELECT CS.DT_SET, CS.FLG_STAGE, CS.DT_WARN
    FROM
    CARE_STAGE CS WHERE CS.ID_EPISODE = :B1 AND CS.FLG_ACTIVE = 'Y'
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1000 0.07 0.06 0 0 0 0
    Fetch 1000 0.00 0.00 0 2001 0 1
    total     2001      0.07       0.06          0       2001          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 286 (recursive depth: 2)
    Rows Row Source Operation
    1 TABLE ACCESS BY INDEX ROWID CARE_STAGE (cr=2001 pr=0 pw=0 time=11119 us)
    1 INDEX RANGE SCAN CS_EPIS_FACT_FST_I (cr=2000 pr=0 pw=0 time=7951 us)(object id 688168)
    But the fetch phase of the SQL that calls the function containing this query takes 0.27 seconds.
    As for the context switches, we are aware of them, but this solution gives us modularity, and the first function also does many context switches and is much faster.
    Edited by: Pretender on Mar 18, 2009 3:38 PM
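    A minimal diagnostic sketch (run against DUAL, not the original schema) to separate how much of the extra fetch time comes from the PL/SQL not-found path itself, independent of the care_stage query:

```sql
-- Hypothetical micro-benchmark: 1000 SELECT INTOs that find a row vs.
-- 1000 that raise and handle NO_DATA_FOUND, timed in centiseconds.
DECLARE
  l_dummy NUMBER;
  l_t0    PLS_INTEGER;
BEGIN
  l_t0 := dbms_utility.get_time;
  FOR i IN 1 .. 1000 LOOP
    SELECT 1 INTO l_dummy FROM dual;                 -- always one row
  END LOOP;
  dbms_output.put_line('found:     ' || (dbms_utility.get_time - l_t0) || ' cs');

  l_t0 := dbms_utility.get_time;
  FOR i IN 1 .. 1000 LOOP
    BEGIN
      SELECT 1 INTO l_dummy FROM dual WHERE 1 = 0;   -- never a row
    EXCEPTION
      WHEN no_data_found THEN NULL;                  -- the path vasco3 takes
    END;
  END LOOP;
  dbms_output.put_line('not found: ' || (dbms_utility.get_time - l_t0) || ' cs');
END;
/
```

    If the two timings come out close, the overhead is not in the not-found handling itself, and the trace of the outer query is the place to keep digging.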

  • OL 5.6 mcelog - warning: record length longer than expected

    OS: Oracle Linux 5.6 x86_64 UEK 2.6.32-100 under Virtualbox.
    Problem: root mail account filling up with cron messages:
    /etc/cron.hourly/mcelog.cron:
    mcelog: warning: record length longer than expected. Consider update.
    Apparently this issue is still present in OL 5.6 as it was in OL 5.5:
    mcelog-error in mail from hourly cron job using Unbreakable Kernel
    The following may not be the correct or best way to fix the problem, but it works for me:
    rpm -ivh http://public-yum.oracle.com/repo/OracleLinux/OL5/6/base/x86_64/mcelog-0.9pre-1.30.el5.src.rpm
    cd /usr/src/redhat/SOURCES/
    tar zxvf mcelog-0.9pre.tar.gz
    cd mcelog-0.9pre
    make
    Test:
    /usr/sbin/mcelog --ignorenodev --filter >> /var/log/mcelog
    mcelog: warning: record length longer than expected. Consider update.
    ./mcelog --ignorenodev --filter >> /var/log/mcelog
    (quiet)
    Implement:
    mv /usr/sbin/mcelog /usr/sbin/mcelog.orig
    cp /usr/src/redhat/SOURCES/mcelog-0.9pre/mcelog /usr/sbin/mcelog
    Regards.

    SQL*Loader-350: Syntax error at line 14.
    What is line 14 in your script?
    Thanks,
    Hussein

  • Application crashes when recording for a long time (more than 1 hour 30 min)

    I implemented a Java app to record audio & video.
    For audio recording I use Java Sound; for video, I write all captured images to one file, capturing them with the java.awt.Robot class, and then generate the video file from the captured images using JMF.
    Once the stop button is clicked, I merge the audio & video files using JMF.
    The problem is that if I record for more than 1 hr 30 min and then click the STOP button, the app crashes.
    I am not getting any out-of-memory or other exceptions.
    Please suggest.
    Thanks
    GRR

    /**
     * This is called from the Processor to read a frame worth of video data.
     */
    public void read(Buffer buffer) throws IOException {
        logger.debug(" read(): nextImage = " + nextImage + ", numOfFrames: " + RecordingConverter.numOfImages);
        if (nextImage >= numOfImages) {
            start = System.currentTimeMillis();
            // We are done. Set EndOfMedia.
            logger.debug("Done reading all images.");
            buffer.setEOM(true);
            buffer.setOffset(0);
            buffer.setLength(0);
            buffer.setFormat(format);
            ended = true;
            return;
        }
        nextImage++;
        int imageSize = EndianFormat.readInt(recFile);
        byte[] data = new byte[imageSize];
        recFile.readFully(data);
        buffer.setData(data);
        buffer.setOffset(0);
        buffer.setLength(data.length);
        buffer.setFormat(format);
        buffer.setFlags(buffer.getFlags() | Buffer.FLAG_KEY_FRAME);
    }
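    Since no exception surfaces, one thing worth checking is plain size arithmetic: read() above sizes each frame with an int, and a back-of-the-envelope calculation (the capture parameters here are hypothetical, not taken from the app) shows how a 90-minute recording can push byte totals past what int arithmetic can represent:

```java
// Sketch: estimate the raw image-file size for a long recording and show
// that the same product overflows when computed in int arithmetic.
public class RecordingSizeCheck {

    // Total bytes for uncompressed frames: width x height x bytes/pixel x fps x seconds.
    static long estimateBytes(int width, int height, int bytesPerPixel,
                              int fps, int seconds) {
        return (long) width * height * bytesPerPixel * fps * seconds;
    }

    public static void main(String[] args) {
        // Hypothetical parameters: 1024x768 RGB at 5 fps for 90 minutes.
        long total = estimateBytes(1024, 768, 3, 5, 90 * 60);
        System.out.println("estimated bytes: " + total);

        // The same product in int arithmetic wraps around to a negative value.
        int wrapped = 1024 * 768 * 3 * 5 * (90 * 60);
        System.out.println("int product:     " + wrapped);
    }
}
```

    If any accumulated size or file offset in the recorder or in the JMF merge step is held in an int, totals of this magnitude overflow long before the heap runs out, which would match a crash with no OutOfMemoryError.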

  • LSMW - The source field is longer than the target field in MEK2

    Hi,
    I tried to create a simple LSMW script to end-date condition records using the MEK2 transaction.
    I provide Condition Type, Plant, Material, Start Date, and default Valid From and Valid To dates.
    When the program is generated, it defines just 1 character for Start Date, and when I try to map the date field from the source, it gives the following error message:
    Attention: The source field is longer than the target field.
    Message no. /SAPDMC/LSMW_OBJ_040013.
    Not sure why this is happening. I am on 46C.
    Please let me know if anybody has come across such a situation. I have checked all aspects and tried recreating it in different systems, but ended up with the same problem.
    Your help in this regard will be highly appreciated.
    Regards
    Kasi

    It is a standard error, and I have not found any OSS note that fixes it. To solve it, you can use this workaround:
    - Export the recording into a txt file
    - Edit the file, modify the information corresponding to the domain, data element, and length (internal and external) fields, and save it
    - Import the recording
    For example, I had problems in a recording with the field SEL_DATE of the transaction VB02. Then, I changed these values:
    Values before changes:
    /SAPDMC/LSGBDC                ZTEST          ZVB02_ZA01     0018                                        0000 SEL_DATE                                                                                DATE_ON                       Valid on                                                    CHAR1                         CHAR1                          SEL_DATE                                                    0000000000                              000001000001000001CHARC
    Values after changes:
    /SAPDMC/LSGBDC                ZTEST          ZVB02_ZA01     0018                                        0000 SEL_DATE                                                                                DATE_ON                       Valid on                                                    DATUM                         DATAM                          SEL_DATE                                                    0000000000                              000010000010000010CHARC
    (the values 01 have been replaced by the values 10)
    The values changed correspond to fields:
    /SAPDMC/LSGBDC-PARDOMAIN
    /SAPDMC/LSGBDC-PARDATAELEMENT
    /SAPDMC/LSGBDC-LENG
    /SAPDMC/LSGBDC-INTLEN
    /SAPDMC/LSGBDC-OUTPUTLEN
    The information that you have changed should be updated in table /SAPDMC/LSGBDC, where all the information of the recordings is stored. Check that this information has been updated after importing the recording.
    I hope this solution is useful for someone...
    Regards.

  • Error message "Page header or footer longer than a page"

    Hi
    Our application calls Crystal Reports to generate/display reports in PDF format. We have a dev system to test this and all looked good; however, when we moved to the production server, the attached error message is displayed when running a Crystal report:
    Invoking method GENERATEREPORT returned error Page Header plus Page Footer is too large for the page.
    Error in File C:\DOCUME~1\SERV
    IC1.LIM\LOCALS1\Temp\temp coq new xxxx.rpt:
    Page header or footer longer than a page.
    The same report doesn't get this error message when run from the Crystal Reports developer.
    Not all Crystal reports get this error, but some reports display a strange view in PDF: the content of the report is reduced to a paper size like a label size, not A4. I have checked the default printer on the server; it points to a normal Windows printer, not a label printer. I have checked all possible settings on the production server and compared them with the dev server; all look the same to me.
    I suspect the label printing size is somehow used when the report is previewed in PDF format instead of normal A4 size, although I have selected a normal Windows printer as the default printer. What else should I check?
    Could you please shed some light on this?
    Thanks!
    Li

    I found the issue comes from a label printer that is set up on the production server. After I removed the label printer from the server, the error message was gone and all reports ran fine.
    This label printer is not the default Windows printer; it is a network printer, but it is configured as a printer using a local port on the server.
    I think for some reason Crystal Reports always picks up the label printer's page size, not the normal page size. I am not sure why or how Crystal Reports does this; it should pick up the default printer, which is a normal printer.
    There is no error when I run the report within CR. The error happens only when I try to generate the Crystal report from our application.
    I do have a logo in the page header/footer and subreports in the report footer, but I don't think they cause the problem, because I can run the same report without any problem in the dev system.
    Any idea what I should check next?
    Thanks

  • I went to back up my iPhone4 and it said there was an update.  So I clicked  on 'update and backup' (or something close to that.)  It took FOREVER...way longer than usual. When it was done, it showed the power cord w/an arrow to iTunes.  ***??

    I went to back up my iPhone4 and it said there was an update.  So I clicked  on 'update and backup' (or something close to that.)  It took FOREVER...way longer than usual. When it was done, it showed the power cord w/an arrow to iTunes.  ***???  I restarted my computer and tried the whole process again and still...the power cord with an arrow to iTunes.  And I know this means I have to go to Apple and get another phone and will lose anything that wasn't backed up. Anyone else have this happen or have a solution so I don't have to do a restore???

    Reboot your computer.
    Disable your antivirus and firewall.
    Connect the phone to a USB port directly on your computer, not to a hub. If your computer is not a laptop and has ports on both front and back use a port on the back.
    If iTunes does not recognize the phone when you connect it see: http://support.apple.com/kb/HT1808
    If it STILL doesn't work open a command window and type:
       netsh winsock reset
    Then try again. (It seems that a recent Microsoft Windows update may have corrupted network settings; this restores them to the factory default).
    BTW, was there an error number associated with the failed update?

  • Photobooth won't record long videos.

    My apologies if this has been answered. I did my best to search these and other forums.
    Ever since upgrading to Yosemite, Photobooth has some sort of error when recording long videos.
    Short videos record just fine, and appear for playback in the gallery bar. They display the correct video length in the corner of the thumbnail.
    Longer videos (I'm not sure on the exact cut off point where failure begins) appear as a grey box in the gallery bar, and display a time of 0:00. When I attempt to play them they just show a black screen with the Play triangle with an X through it.
    The first time this occurred I went to my Photo Booth library, clicked Show Package Contents, and checked the file in Finder. The movie had a file extension of .MOV rather than the typical .mov; not sure if this means anything. The second time this happened (the next time I tried to record a long video) it had the typical .mov file name but also would not play.
    Another note is that I did not close the lid or cmd-q the program. I pressed stop, and left photobooth open for at least an hour before attempting playback hoping it would render correctly.
    Since then I have tried to record shorter videos and those work just fine.
    Can I give you any additional information to help solve this problem?
    Thank you very much in advance.

    I seem to be having the same issue as this guy here: How do I record a video in Yosemite?
    I'd like to be able to fix the videos that I have already recorded, though, even though I now have an alternative to using Photobooth.

  • I thought the NEW DVR's were supposed to be able to record more than 2 shows at 1 time

    I was told that the new DVR I have (I guess it's called a media center or something like that) was supposed to record more than 2 shows simultaneously, but I found out last night that is not true. I had 3 shows scheduled and 1 of them did not record.
    It's next to impossible to make a show record at a later time in the case of a series, because you would have to allow it to record both first runs and repeats, and I don't want it to record older episodes. If I set it to first-run only, it skips the later airing because it is no longer a first run; it is now a repeat.
    There needs to be a better option for series, or the box should truly allow recording more than 2 shows at a time.

    no-more-holds wrote:
    I was told that the new DVR I have (I guess it's called a media center or something like that) was supposed to record more than 2 shows simultaneously, but I found out last night that is not true. I had 3 shows scheduled and 1 of them did not record.
    It's next to impossible to make a show record at a later time in the case of a series, because you would have to allow it to record both first runs and repeats, and I don't want it to record older episodes. If I set it to first-run only, it skips the later airing because it is no longer a first run; it is now a repeat.
    There needs to be a better option for series, or the box should truly allow recording more than 2 shows at a time.
    If the media server you have (Arris VMS1100) is only recording 2 shows at once, then you aren't signed up for the enhanced service, which does cost more per month.
    All new customers are getting the VMS boxes; however, you have to add the correct service to be able to record 6 shows at once. If you don't have the enhanced service, Verizon limits the box to using only 2 tuners.
    If you want the 6-tuner option, you have to upgrade through your account online or call them to add the enhanced service.
    I don't know whether that's what you wanted and ordered and Verizon didn't configure your account correctly, but that's how it works.
    If you want 6 tuners and want to add the enhanced service, all you have to do is this, unless you'd prefer to call:
    Sign into your account online and go to add/change service. Once you are on the add/upgrade my services pages, on the left click on tv equipment & outlets. You should then see an option to add the enhanced service. If it is not there, then you will have to call.

  • "entry in field bank account number is longer than 10 characters"

    Hello Sap Experts,
    The user is getting an error on saving the bank infotype (0009). The user is trying to create an IT0009 record for an international assignee. The error is "entry in field bank account number is longer than 10 characters".
    The default maximum specified in the technical field is 18 characters. The ABAPer also checked and confirmed that it is specified as 18 characters.
    This is only for a single employee.
    Please provide your suggestions.
    Thanks
    Jyothsna

    Jyothsna,
    You need to extend the bank account number length. Even though 18 is the maximum length, the default length assignment differs from country to country. So extend the bank account number length in transaction OY17: select the country, change the bank account number length to 18, and set the checking rule to 5 (maximum value length).
    Mohan

  • Audio is 1 frame longer than video after encode

    My .ac3 files are coming out 1 frame longer than they should on SOME encodes. Unfortunately, I can't see any noticeable pattern to which files it happens to. It occurs on anything from audio-only files created in Soundtrack to an FCP video/audio export, and it's happened on both a DV sequence and a DVCPRO HD sequence. It's a bit unnerving to see the audio longer than the video, even if it is only one frame, because the end product is slightly (1 frame, I'd reckon) out of sync. I've tried setting a specific out point in my sequence and got the same effect.
    Does anyone have any ideas? Anyone else having their audio come up 1 frame too long on a Dolby encode?
    Thanks,
    Cameron

    Anyone else having their audio come up 1 frame too long on a Dolby encode?
    Yes.
    The rest is long-winded, with no answer to the problem.
    First the system config.
    Dual 1.8 GHz PCI-X G5
    4 GB Ram
    Media drive: 800 GB Raid FW 800
    (4 discs, 2 on internal bus, 2 on LaCie FW 800 card)
    162 MB/s Read, 91 MB/s Write.
    FCP 5.1.2
    Compressor 2.3
    DVD SP 4.1.1
    I created a 16 minute NTSC DVC PRO50 sequence, audio at 48KHz 16 bit.
    Not having a person recorded talking for that length of time I created a series of one frame "pops" of color bars and tone, followed by 10 seconds of video and audio slug.
    This sequence was copied and pasted to fill the timeline with a single frame of the "pop" at the very end.
    Everything perfectly in sync.
    I nested the sequence and rendered.
    No matter how the 16-minute timeline was exported and compressed to MPEG-2 at 7 Mb/s and Dolby 2.0 at 192 Kb/s
    (direct export to Compressor; exporting audio and video separately; or a sequence export, then to Compressor),
    there was always an audio sync error once imported into the DVD SP timeline.
    Not an entire frame out, but on zooming to max in the timeline in SP, about 7/8 of a frame extra.
    If I exported just the last 1:21 min of the NTSC DVC PRO50 timeline, and compressed with the same settings, the extra audio length was about 3/4 of a frame over.
    Next I exported the NTSC DVC PRO50 timeline as a DVCPRO HD100 1080 60i QT sequence.
    Cut the last 1:21 min and imported to Compressor.
    Compressed as 19 MB/s MPEG-2 and Dolby 2.0 192 Kb/s.
    Created a new DVD SP HD project and brought the files into the timeline.
    Zoomed in at the end, and the error in sync was minimal, not perfect, but difficult to make out even at full zoom.
    Anyone have any suggestions to get things, at least in Standard Definition, perfectly in sync?
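    For what it's worth, here is one plausible explanation, purely speculation on my part and not anything documented for Compressor's Dolby encoder: AC-3 audio is encoded in fixed frames of 1536 samples, so at 48 kHz the encoded duration gets rounded up to the next multiple of 32 ms. That can leave up to nearly one NTSC frame of extra audio, and for an 81-second (1:21) clip the padding works out to roughly 3/4 of a frame, which matches what you measured. A quick sketch of the arithmetic in Java:

    ```java
    public class Ac3Padding {
        static final int SAMPLE_RATE = 48000;       // Hz, per the sequence settings
        static final int AC3_FRAME_SAMPLES = 1536;  // samples per AC-3 frame
        static final double NTSC_FPS = 30000.0 / 1001.0;

        /** Extra audio, in NTSC video frames, after padding to whole AC-3 frames. */
        static double extraVideoFrames(double durationSeconds) {
            double samples = durationSeconds * SAMPLE_RATE;
            long ac3Frames = (long) Math.ceil(samples / AC3_FRAME_SAMPLES);
            double paddedSeconds = ac3Frames * (double) AC3_FRAME_SAMPLES / SAMPLE_RATE;
            return (paddedSeconds - durationSeconds) * NTSC_FPS;
        }

        public static void main(String[] args) {
            // 1:21 clip: 81 s isn't a multiple of 32 ms, so the encoder pads.
            System.out.printf("1:21 clip: %.2f frames extra%n", extraVideoFrames(81.0));
        }
    }
    ```

    If this is the cause, there is nothing wrong with the export itself; the only workarounds would be trimming the encoded .ac3 or choosing a program duration that is an exact multiple of 32 ms.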

  • Code of a method longer than 65536 bytes

    I just moved my web application (JSP/servlets, no EJB) from Apache Tomcat to the Oracle 9iAS Container for J2EE (OC4J). Everything is working fine except one JSP file, called "EditTmpForm.jsp". Basically, this JSP is used to update existing records, so it first needs to retrieve the data from the database using the primary keys. I used a JavaBean to handle the JDBC/SQL code and plugged that bean into the JSP. The page works fine on Tomcat, but when I run it on OC4J, I get the following error:
    500 Internal Server Error
    Error parsing JSP page /pirs/jsp/EditTmpForm.jsp
    Error creating jsp-page instance: java.lang.ClassFormatError: jspPage19jsp_EditTmpForm.jsp (code of a method longer than 65536 bytes)
    Why does it give me this error? 65536 bytes is only 64 KB, which would fit on a floppy disk.
    Thank you very much if you can give me some idea why this occurs.
    James

    I looked at the code after the JSPs were compiled to servlets. The
    _jspService method of "EditTmpForm.jsp" is about 801 KB, while for another JSP, "EditOperation.jsp", the _jspService method is about 236 KB.
    Both are much greater than 64 KB, so why does it run okay for "EditOperation.jsp" but give me an error for "EditTmpForm.jsp"?
    Furthermore, since the HTML form is long, the HTML code itself is bigger than 64 KB. How can you keep the _jspService method shorter than 64 KB?
    Thanks for your help!
    James
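    A note on the limit itself: the 65536-byte ceiling applies to the compiled bytecode of a single method, and most of a JSP's template text ends up inside the generated _jspService method. The usual workaround, sketched below with fragment file names I've made up for illustration, is to split the page using the request-time <jsp:include> action rather than the <%@ include %> directive: the directive merges the source before compilation, while <jsp:include> compiles each fragment into its own class with its own, much smaller, _jspService method.

    ```jsp
    <%-- EditTmpForm.jsp, rewritten as request-time includes.
         Each fragment below compiles to a separate servlet class,
         so no single _jspService method exceeds the 64 KB limit.
         The fragment names are hypothetical. --%>
    <jsp:include page="EditTmpForm_header.jsp" />
    <jsp:include page="EditTmpForm_fields.jsp" />
    <jsp:include page="EditTmpForm_footer.jsp" />
    ```

    The trade-off is a small runtime dispatch cost per include, which is usually negligible next to the JDBC work a form page like this does.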
