More time for recording sound...

The N73 has an option to record only 1 minute.

In the full development system, under Sound, you will find Snd Write Waveform and Snd Read Waveform.
That is all you need, and of course a sound card.
At least LabVIEW 5.1 is required.
Greetings from the Netherlands

Similar Messages

  • Taking more time for retrieving data from nested table

    Hi
    we have two databases, db1 and db2; in database db2 we have a number of nested tables.
    The problem is that there is a link between the two databases: whenever you fire any query in db1, it internally accesses the nested tables in db2.
    Fetching records takes much more time even though there are few records in the table. What could be the reason?
    Please help; we are facing this problem daily.

    Please avoid duplicate threads:
    quaries taking more time
    Nicolas.
    +< mod. action: thread locked >+

  • 39L4363D - Unable to set padding time for recording start and end time

    Hello,
    I have a problem with my 39L4363DG TV (software 7.1.90.34.01.1).
    I'm not able to set the padding time for the recording start and/or end time as described in the [manual|http://www.toshiba-om.net/LCD/PDF/English/L4363-323950-English.pdf] on page 51. Both menu items, "Start Padding Time" and "End Padding Time", are disabled.
    Can anyone help me and give me advice on how to solve this?

    Hi
    The padding time can be set for programmed recordings.
    There is also a scheduling priority.
    If scheduled time slots are next to each other and there is more than one minute between the end time of the first schedule and the start time of the next schedule, programmed recording will be performed correctly.
    When +Start Padding Time+ and +End Padding Time+ are set, the actual start and end times are adjusted by the additional minutes (see the illustration below).
    If scheduling times overlap, priority is given to the programmed recording that starts first.
    When the programmed recording that started first ends, recording switches to the next scheduled programme.
    At that point, depending on how far the scheduling times overlap, the beginning section of the next scheduled programme may not be recorded.
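
    To make the padding arithmetic concrete, here is a small illustrative sketch. It is not TV firmware code; the times and padding values are made up, and it assumes start padding moves the recording start earlier while end padding keeps the recording running longer.

    import java.time.LocalTime;

    public class PaddingIllustration {
        public static void main(String[] args) {
            // Hypothetical schedule: a programme from 20:00 to 21:00.
            LocalTime scheduledStart = LocalTime.of(20, 0);
            LocalTime scheduledEnd   = LocalTime.of(21, 0);

            int startPaddingMinutes = 2;  // "Start Padding Time" (assumed example value)
            int endPaddingMinutes   = 5;  // "End Padding Time" (assumed example value)

            // Assumption: start padding begins the recording earlier,
            // end padding lets it run longer.
            LocalTime recordingStart = scheduledStart.minusMinutes(startPaddingMinutes);
            LocalTime recordingEnd   = scheduledEnd.plusMinutes(endPaddingMinutes);

            System.out.println("Recording runs from " + recordingStart + " to " + recordingEnd);
            // Prints: Recording runs from 19:58 to 21:05
        }
    }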

  • 'BAPI_GOODSMVT_CREATE' takes more time for creating material document

    Hi Experts,
    I am using 'BAPI_GOODSMVT_CREATE' in my custom report, and it takes a long time to create material documents.
    Please let me know if there is any option to overcome this issue.
    Thanks in advance
    Regards,
    Leo

    Hi,
    please check whether some of the following OSS notes apply to your problem:
    [Note 838036 - AFS: Performance issues during GR with ref. to PO|https://service.sap.com/sap/support/notes/838036]
    [Note 391142 - Performance: Goods receipt for inbound delivery|https://service.sap.com/sap/support/notes/391142]
    [Note 1414418 - Goods receipt for customer returns: Various corrections|https://service.sap.com/sap/support/notes/1414418]
    Another idea is not to commit after each call, but to commit in packages, e.g. after every 1000 BAPI calls (see the sketch after this reply).
    Otherwise, I am afraid you cannot do much about the performance of a standard BAPI. Maybe there is some customer enhancement that takes too long inside the BAPI, but that has to be analysed by you. To analyse performance, just execute your program via transaction SE30.
    Regards
    Adrian
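
    A minimal sketch of the batching idea above, assuming hypothetical wrapper methods: callGoodsmvtCreate and commitWork are stand-ins for however your program wraps BAPI_GOODSMVT_CREATE and BAPI_TRANSACTION_COMMIT, and are not SAP-provided APIs.

    import java.util.List;

    public class BatchedCommitSketch {

        // Hypothetical wrapper around the BAPI calls (e.g. via an RFC connector).
        interface BapiClient {
            void callGoodsmvtCreate(Object movementData); // stand-in for BAPI_GOODSMVT_CREATE
            void commitWork();                            // stand-in for BAPI_TRANSACTION_COMMIT
        }

        static void postMovements(BapiClient client, List<Object> movements) {
            final int commitEvery = 1000;  // commit in packages instead of after every call
            int uncommitted = 0;

            for (Object movement : movements) {
                client.callGoodsmvtCreate(movement);
                uncommitted++;

                if (uncommitted >= commitEvery) {
                    client.commitWork();   // one commit for the whole package
                    uncommitted = 0;
                }
            }
            if (uncommitted > 0) {
                client.commitWork();       // flush the remainder
            }
        }
    }

    Whether batching commits is acceptable depends on your process: if a later call in a package fails, everything since the last commit is rolled back together, so choose the package size accordingly.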

  • 'BAPI_GOODSMVT_CREATE' takes more time for creating material document for the 1st time

    Hi Experts,
    I am doing a goods movement using BAPI_GOODSMVT_CREATE in my custom code.
    There is some functional configuration such that material documents, TRs and TOs get created.
    Now I need to get the TO and TR numbers from table LTAK by passing the material document number and year, which I get from the BAPI above.
    The problem I am facing is very strange.
    Only on the first run do I find no TR and TO values in the LTAK table. On subsequent runs I do get entries in LTAK if there is a wait time of 5 seconds after the BAPI call.
    I have found the thread 'BAPI_GOODSMVT_CREATE' takes more time for creating material document with a similar issue, but no solution or explanation.
    Note 838036 says something similar, but it seems obsolete.
    Kindly share your expertise and opinions.
    Thanks,
    Anil

    Hi,
    please check whether some of the following OSS notes apply to your problem:
    [Note 838036 - AFS: Performance issues during GR with ref. to PO|https://service.sap.com/sap/support/notes/838036]
    [Note 391142 - Performance: Goods receipt for inbound delivery|https://service.sap.com/sap/support/notes/391142]
    [Note 1414418 - Goods receipt for customer returns: Various corrections|https://service.sap.com/sap/support/notes/1414418]
    Another idea is not to commit after each call, but to commit in packages, e.g. after every 1000 BAPI calls.
    Otherwise, I am afraid you cannot do much about the performance of a standard BAPI. Maybe there is some customer enhancement that takes too long inside the BAPI, but that has to be analysed by you. To analyse performance, just execute your program via transaction SE30.
    Regards
    Adrian

  • Procedure is taking more time for execution

    Hi,
    when I try to execute the procedure below, it takes a long time to run.
    Can you please suggest possible ways to tune the query?
    PROCEDURE sp_sel_cntr_ri_fact (
       po_cntr_ri_fact_cursor OUT t_cursor
    )
    IS
    BEGIN
       OPEN po_cntr_ri_fact_cursor FOR
          SELECT c_ri_fact_id, c_ri_fact_code, c_ri_fact_nme,
                 CASE
                    WHEN EXISTS (SELECT 'x' FROM A_CRF_PARAM_CALIB t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM A_EMPI_ERV_CALIB_DETAIL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM A_IC_CNTRY_IC_CRF_MPG_DTL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM A_IC_CRF_CNTRYIDX_MPG_DTL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.x_axis_c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.y_axis_c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM A_PAR_MACRO_GAMMA_PRIME_CALIB t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM D_ANALYSIS_FACT t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM D_CALIB_CNTRY_RI_FACTOR t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM E_BUSI_PORT_DTL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM E_CNTRY_LOSS_DIST_RSLT t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM E_CNTRY_LOSS_RSLT t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM E_CRF_BUS_PORTFOL_CRITERIA t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM E_CRF_CORR_RSLT t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    WHEN EXISTS (SELECT 'x' FROM E_HYPO_PORTF_DTL t WHERE t.c_ri_fact_id = A_IC_CNTR_RI_FACT.c_ri_fact_id)
                       THEN 'Yes'
                    ELSE 'No'
                 END used_analysis_ind,
                 creation_date, datetime_stamp, user_id
          FROM A_IC_CNTR_RI_FACT
          ORDER BY c_ri_fact_id_nme DESC;
    END sp_sel_cntr_ri_fact;

    [When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=1812597]

  • ADF application taking more time for first time and less from second time

    Hi Experts,
    We are using ADF 11.1.1.2.
    Our application contains 5 JSP pages, 10-12 task flows, and 50 JSFF pages.
    The first time we use the application each day, some actions take more than 60 seconds.
    From then on, the same actions take 5 to 6 seconds.
    The same thing happens every day.
    Can anyone tell me why this application takes more time the first time and less time from the second time onwards?
    Regards
    Gayaz

    Hi,
    If you don't restart your WLS every day, then you should read about tuning Application Module pools and connection pools:
    http://docs.oracle.com/cd/E15523_01/web.1111/b31974/bcampool.htm#sm0301
    Pay attention to the parameters Maximum Available Size and Minimum Available Size:
    http://docs.oracle.com/cd/E15523_01/web.1111/b31974/bcampool.htm#sm0314
    and adjust them to suit your needs.

  • Self Service Password Registration Page taking more time for loading in FIM 2010 R2

    Hi,
    I have successfully installed FIM 2010 R2 SSPR and it is working fine,
    but my problem is that the Self Service Password Registration page takes a long time to load: when I provide my Windows credentials, it takes approximately 50 to 60 seconds to load the page in FIM 2010 R2.
    This is a very urgent requirement.
    Regards
    Anil Kumar

    Double-check that objectSid, accountname and domain are populated for the users in the FIM portal, and that each user is connected to their AD counterpart.
    Check here for more info:
    http://social.technet.microsoft.com/wiki/contents/articles/20213.troubleshooting-fim-sspr-error-3003-the-current-user-account-is-not-recognized-by-forefront-identity-manager-please-contact-your-help-desk-or-system-administrator.aspx

  • Unable to Determine the Change Date and Time for records in infotype 2011

    Hi Everyone,
    We need to know when the clock-in and clock-out records were interfaced to SAP in infotype 2011.
    The change date/time field in infotype 2011 is blank/not populated, hence we're unable to determine when the clock-in records were updated in infotype 2011.
    It is not possible to get the audit logs for infotype 2011 as auditing is switched off.
    We found the table TEVEN, but it is the same thing: the "changed on" field is blank.
    Kindly help us determine the change date and time for records in infotype 2011.

    Hi Prasad,
    Here is the scenario.
    On June 3, it was reported that a staff member's clock-in record for June 1, 7:00 AM was missing in infotype 2011.
    However, when I checked infotype 2011, the record was there.
    So they are now asking me when this record was updated in infotype 2011, as they suspect there might have been some delay in sending the data to SAP.
    The "created on" and "created at" fields in table TEVEN both show June 1, 7:00 AM, which seems not to be true, as on June 3 the record was reported missing.
    Can you help further with this?

  • Vi for recording sound/wav?

    Does anyone know if there are VIs available for recording sound (via microphone) as WAV files? I know the examples include VIs for playing WAV.

    In the full development system, under Sound, you will find Snd Write Waveform and Snd Read Waveform.
    That is all you need, and of course a sound card.
    At least LabVIEW 5.1 is required.
    Greetings from the Netherlands

  • Taking more time for loading Real Cost estimates

    Dear Experts,
    It is taking a long time to load data into the cube CO-PC: Product Cost Planning - Released Cost Estimates (0COPC_C09). The update mode is "Full Update". There are only 105,607 records. Other areas have more records than this, yet they load easily.
    I have this problem only with 0COPC_C09. Could anybody guide me?
    Rgds
    ACE

    suresh.ratnaji wrote:
    NAME                                 TYPE        VALUE
    _optimizer_cost_based_transformation string      OFF
    filesystemio_options                 string      asynch
    object_cache_optimal_size            integer     102400
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      choose
    optimizer_secure_view_merging        boolean     TRUE
    plsql_optimize_level                 integer     2
    please let me know why it is taking more time with an INDEX RANGE SCAN compared to the full table scan?

    Suresh,
    Any particular reason why you have a non-default value for the hidden parameter _optimizer_cost_based_transformation?
    On my 10.2.0.1 database, its default value is "linear". What happens when you reset the value of the hidden parameter to its default?

  • Thread behaves weirdly... takes more time for less code...

    Hi everyone,
    I am badly stuck on this problem; I have never seen anything like it before. A thread bottleneck somewhere is the issue, I guess. Please have a look at the code patiently and do reply; you are my last hope on this. Thank you.
    The Bank method is called by another method which runs 50 threads, so at any one time up to 50 threads could enter this method to record a debit.
    class BankHelper {
        // Some code of the bank ... not relevant to my problem
        public void Bank() {
            long time = System.nanoTime();
            abc.recordDebit();                                // want to measure how much time this method takes
            abc.recordTimeForDebit(System.nanoTime() - time); // accumulating the total time the above method takes
        }
    }

    class abc {
        // Some code of the bank ... not relevant to my problem
        public synchronized void recordDebit() {
            debit++;
        } // I had put timers around the ++ and it took no more than 700 to 900 nanoseconds

        public synchronized void recordTimeForDebit(long time) {
            record += time;
            Log.debug("Time taken to record drop for DEBIT " + record + " Millis " + time);
        }
    }
    Answer: the recorded time is 9014 millis for 5000 increments, i.e. 5000 calls to recordDebit().
    One can see that this is a huge number when one would expect it to be somewhere around the following.
    Expected answer: 5000 * 800 nanos (it takes about 800 nanos per increment) = 4,000,000 nanos = 4 millis, and yet it is 9014 millis.
    How is there such a huge difference? I have been over it for about 3 days and have lost all hope. Please help. Also look at the next piece of code.
    When I look in the log, it shows me that each record took anywhere from 2000 nanos up to 6 or 7 millis, i.e. 7,000,000 nanos.
    How is this even possible? Where did all this extra time come from? It should not be more than some 700 to 900 nanos.
    Is there a bottleneck somewhere?
    Now part 2:
    This has fazed, dazzled and destroyed me. I could not understand it, and it has tossed all my concepts into the cupboard.
    Same stuff: just have a look and tell me what is going on. It is the same code with a different way of incrementing the data, i.e. the synchronization is in a different place, but the results are vastly improved.
    class BankHelper {
        // Some code of the bank ... not relevant to my problem
        public void Bank() {
            long time = System.nanoTime();
            abc.recordDebit();                                // want to measure how much time this method takes
            abc.recordTimeForDebit(System.nanoTime() - time); // accumulating the total time the above method takes
        }
    }

    (As before, the Bank method is called by another method which runs 50 threads, so at any one time up to 50 threads could enter this method to record a debit.)

    class abc {
        // Some code of the bank ... not relevant to my problem
        public void recordDebit() {
            someotherclass.increment();
        } // this is NOT synchronized now
        // I have put timers here too and it took about 1500 to 2500 nanos

        public synchronized void recordTimeForDebit(long time) {
            record += time;
            Log.debug("Time taken to record drop for DEBIT " + record + " Millis " + time);
        }
    }

    class someotherclass {
        // Not relevant code
        public void increment() {
            someotherclass1.increment1();
        }
    }

    class someotherclass1 {
        // Not relevant code
        public void increment1() {
            someotherclass2.increment2();
        }
    }

    class someotherclass2 {
        // Not relevant code
        public synchronized void increment2() {
            someotherclass3.increment3();
        } // now it is synchronized
    }

    class someotherclass3 {
        // Not relevant code
        public synchronized void increment3() {
            debit++;
        } // now it is synchronized
    }
    Answer: the recorded time is 135 millis for 5000 increments, i.e. 5000 calls to recordDebit().
    Expected time was: 5000 * 2500 = 125000000 = 125 millis (wow, as expected).
    Please don't ask me why the code has been written this way; I know it goes off and increments and ultimately does the same thing. But somehow, when I measured it, the overall (accumulated) time for the two versions differed hugely, even though the latter code is effectively a superset of the former.
    How is there such a huge difference between the numbers? Could it be because of where the synchronization happens? (See the sketch after this thread.)
    Thank you for going through all this.

    Triple post - http://forums.sun.com/thread.jspa?threadID=5334258&messageID=10438241#10438241
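
    The likely explanation, sketched below, is that the nanoTime() delta measured around a synchronized call also counts the time the calling thread spends blocked waiting for the lock. In the first version, recordDebit() shares its monitor with recordTimeForDebit(), which does logging while holding that monitor, so with 50 contending threads the summed per-call timings can vastly exceed 5000 times the uncontended cost of debit++. This is a minimal, self-contained illustration, not the poster's code; the class and variable names are invented.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Minimal sketch: recordDebit() and recordTime() share one monitor, and
    // recordTime() does I/O while holding it, so the nanoTime() delta around
    // recordDebit() also counts time spent waiting for other threads' logging.
    public class ContentionTimingSketch {
        private long debit = 0;
        private long recordedNanos = 0;

        synchronized void recordDebit() {
            debit++;                      // cheap on its own (hundreds of nanos)
        }

        synchronized void recordTime(long nanos) {
            recordedNanos += nanos;
            System.out.println("debit call took " + nanos + " ns"); // slow work under the same lock
        }

        synchronized long debit() { return debit; }
        synchronized long recordedNanos() { return recordedNanos; }

        public static void main(String[] args) throws InterruptedException {
            ContentionTimingSketch bank = new ContentionTimingSketch();
            int threads = 50;
            int callsPerThread = 100;     // 50 * 100 = 5000 calls in total
            ExecutorService pool = Executors.newFixedThreadPool(threads);

            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    for (int i = 0; i < callsPerThread; i++) {
                        long start = System.nanoTime();
                        bank.recordDebit();                       // may block behind another thread's recordTime()
                        bank.recordTime(System.nanoTime() - start);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);

            System.out.println("debit = " + bank.debit());
            System.out.println("accumulated measured time (ms) = " + bank.recordedNanos() / 1_000_000);
        }
    }

    Moving the synchronization so that the cheap increment no longer shares a lock with the logging, as in the poster's second version, removes most of that queueing from the measured interval, which is consistent with the much smaller accumulated time.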

  • ME2O taking more time for execution

    It has been observed that ME2O sometimes takes a very long time to execute. Please refer to the selection screenshot below.
    If you provide the component number, the search time is somewhat shorter, but the execution can still take more than the expected time. This is because the SAP standard program searches all the deliveries, even those that are already completed.
    There is SAP Note 1815460 - "ME2O: Selection of delivery very slow". This helps to improve the execution time a lot.
    Regards,
    Krishnendu.

    Thanks for sharing this information.

  • T-code SUIM taking much more time for generating output for profile change

    Hi All
    We want to extract a report of profile additions and deletions for users in ECC 6. While executing the t-code SUIM, it takes a very long time (more than 20 hours). This problem appeared after a patch application.
    Please give a solution/suggestion to minimise the time taken for report generation.
    Thanks-
    Guru Prasad Dwivedi

    Hello Prasad,
    The reason for the performance trouble is a new feature regarding the user change documents. Since Notes 874850 and 1015043 you get a more complete overview of the changes regarding a user.
    The disadvantage of that new feature is that in some customer system usage scenarios the performance is very poor. That is the case if the central change documents are also used intensively by other applications and the tables CDPOS, CDHDR, etc. contain a very large number of rows. Unfortunately the user change documents cannot be searched by the key columns of the central change documents, so the bad response time can be explained.
    What now?
    There are some workarounds to get the change documents faster.
    1st - You can get the former report output and performance if you
          use the report RSUSR100 instead of the new RSUSR100N in
          separate mode.
    2nd - If you want to use the new report RSUSR100N directly and only
          want the information about the traditional topics
          (content of the USH* tables), you should mark only the search areas
          on the tab strip 'user attributes' to get better performance.
        - Furthermore, limit the date range if possible.
    3rd - You should regularly (monthly) archive the user-relevant documents
          for PFCG and IDENTITY from the central change documents.
          As per Note 1079207, chapter 3, you can reload those archives
          into more selective tables.
          The selection of change documents will be considerably faster over
          reloaded archived documents than over the central change
          document tables.
    Best Regards,
    Guilherme de Oliveira.

  • Query on DSO is taking more time for current month

    Hi All,
    I have a query built on a DSO which runs fine for all months; the problem is only with the current month. For April it takes more than 15 minutes to run the query even though it has fewer records. For other months it takes less than one second. Could you help me resolve the issue?
    Regards,
    J B

    Hi JB,
    We are facing the same problem. Last month, around 17.03.2011, users ran it for 01.03.2011 to 15.03.2011; even though the data volume is low, it ran slowly or sometimes did not come back at all. This month it is slow for 01.04.2011 to 11.04.2011.
    I also find it weird, and it started happening after the latest patch implementation; I am not exactly sure why it happens.
    The fix we applied was to restrict the date in the back end to 01.01.1900 - 31.12.9999 and then allow the filter on the same date again. Now, if we use the same date range, it runs fine.
    Also try entering 01.01.2011 - 31.12.2011 and check whether it runs fast or not; for us it runs normally.
    Regards
    vamsi
