Query could not fetch the record

Hello,
Could someone please help me?
I have a listing of my sales orders and I want to make changes to an order by opening the detail form fetched with that record. When I click a particular orderno in my order listing and call the form to display the details, the form opens but says "Query could not fetch the record". I do not know why. Please help me with a solution.
Thanks

Hello,
I think you are passing orderno to the called form as a parameter. If you are using a parameter list, check:
1. Is the parameter value arriving in the called form correctly?
2. Have you changed the WHERE clause (DEFAULT_WHERE) of the detail block so that it queries the record with the passed orderno?
Please post more details if this does not help.
Thanks
Adi
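
For illustration, a minimal sketch of the usual pattern (the block name ORDERS_BLK and the parameter P_ORDERNO are assumptions, not names from your form): set the detail block's DEFAULT_WHERE from the passed parameter and then execute the query, for example in a WHEN-NEW-FORM-INSTANCE trigger of the called form.

-- WHEN-NEW-FORM-INSTANCE trigger (sketch; block and parameter names are assumed)
BEGIN
  IF :PARAMETER.P_ORDERNO IS NOT NULL THEN
    SET_BLOCK_PROPERTY('ORDERS_BLK',
                       DEFAULT_WHERE,
                       'ORDERNO = ' || :PARAMETER.P_ORDERNO);
    GO_BLOCK('ORDERS_BLK');
    EXECUTE_QUERY;
  END IF;
END;

If ORDERNO is a character column, enclose the value in quotes when building the WHERE string, and make sure the calling form actually adds the parameter to the parameter list it passes to CALL_FORM or OPEN_FORM.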

Similar Messages

  • RSBBS jump query not fetching the document no in R3

    Dear Gurus,
    With RSBBS transaction, a jump is defined to R/3 transaction FB03. When the query is executed and I use right click > Go to > Display document (FB03), it fetches the document number on the DEVELOPMENT server. But after the query is transported to Production, it does not fetch the document number.
    Kindly help at the earliest.
    Regards,
    R.Satish

    Hi,
    You said it is not fetching the document number. Is it failing with some error?
    Have all the prerequisite settings been done via Tcode SICF?
    Regards
    Nageswara

  • Not fetching the records through DTP

    Hi Gurus,
    I am facing a problem while loading data into an InfoCube through a DTP.
    I have successfully loaded the data up to the PSA, but I am not able to load the records into the InfoCube.
    The request completed with status green, but I can see only 0 records loaded.
    Later, one of my colleagues executed the same DTP successfully with all the records loaded.
    Can you please tell me why it is not working with my user ID?
    I have found the following difference in the monitor:
    I am not able to see any selections for my request, but I can see REQUID = 871063 in the selections of the request started by my colleague.
    Can anyone tell me why that REQUID = 871063 is not filled automatically when I start the schedule?

    Hi,
    I guess the DTP is in delta update mode. You and your colleague executed the same DTP object within a short time of each other, and during that period no new transactions were posted in the source, so your request had nothing to extract.
    Try executing it again after a couple of hours.
    Regards

  • Should not fetch the records whose TRAN_EFCT_DTE is more than 550 days old

    I have a table BRKG_TRA and below is its structure:
    BRKG_ORDER_ID     VARCHAR2(15 BYTE)
    BRKG_ORDER_ID_CNTX_CDE     VARCHAR2(10 BYTE)
    BRKG_ACCT_SIDE_CDE     CHAR(1 BYTE)
    TRD_FTR     NUMBER(15,9)
    PGM_ORIG_TRAN_ID     VARCHAR2(6 BYTE)
    BRKG_OPT_OPEN_CLOS_CDE     VARCHAR2(5 BYTE)
    BRKG_ORDER_QTY     NUMBER(17,4)
    TRAN_ID     VARCHAR2(20 BYTE)
    TRAN_CNTX_CDE     VARCHAR2(10 BYTE)
    CRTE_PGM     VARCHAR2(50 BYTE)
    CRTE_TSTP     DATE
    UPDT_PGM     VARCHAR2(50 BYTE)
    UPDT_TSTP     DATE
    DATA_GRP_CDE     VARCHAR2(10 BYTE)
    TRAN_EFCT_DTE     DATE

    select * from <table name> where <dt_field> > sysdate - 550
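
    Applied to the table above, a sketch of the filter (assuming TRAN_EFCT_DTE is the date to test and that rows older than 550 days should be excluded):

    SELECT *
    FROM   brkg_tra
    WHERE  tran_efct_dte > SYSDATE - 550;

    Note that SYSDATE carries a time component; use TRUNC(SYSDATE) - 550 if the cut-off should fall at midnight.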

  • Best way to Fetch the record

    Hi,
    Please suggest the best way to fetch records from the table described below. It is Oracle 10gR2 on Linux.
    Whenever a client visits the office a record is created for him. Company policy is to keep 10 years of data in the transaction table, and the table grows by about 3 million records per year.
    The table has the following key columns for the select (sample table):
    Client_Visit
    ID NUMBER(12,0) -- sequence-generated number
    EFF_DTE DATE -- effective date for the customer (sometimes the client becomes invalid and later becomes valid again)
    CREATE_TS TIMESTAMP(6)
    CLIENT_ID NUMBER(9,0)
    CASCADE_FLG VARCHAR2(1)
    On most of the reports the records are fetched by MAX(eff_dte) and MAX(create_ts) with cascade flag = 'Y'.
    I have the following two queries, but neither is cost effective and both take 8 minutes to display the records.
    Code 1:
    SELECT au_subtyp1.au_id_k,
           au_subtyp1.pgm_struct_id_k
      FROM au_subtyp au_subtyp1
     WHERE au_subtyp1.create_ts =
              (SELECT MAX (au_subtyp2.create_ts)
                 FROM au_subtyp au_subtyp2
                WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
                  AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                  AND au_subtyp2.eff_dte =
                         (SELECT MAX (au_subtyp3.eff_dte)
                            FROM au_subtyp au_subtyp3
                           WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                             AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                             AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
       AND au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
        9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
    Code 2:
    I raised a thread a week back and Dom suggested the following query. Its estimated cost is lower, but the elapsed time is the same and it uses the same amount of temp tablespace.
    select au_id_k,pgm_struct_id_k from (
    SELECT au_id_k
          ,      pgm_struct_id_k
          ,      ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
          create_ts, eff_dte,exists_flg
          FROM   au_subtyp
          WHERE  create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
          AND    eff_dte  <= TO_DATE('2012-12-31','YYYY-MM-DD') 
          ) d  where rn =1   and exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
       3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
                   2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    Thanks,
    Vijay

    Hi Justin,
    Thanks for your reply. I am running this in our test environment, as I do not want to run it in production right now. The test environment holds 2,809,605 records (about 2.8 million).
    The query returns 281,699 records, so the selectivity is about 0.099. There are 2,808,905 distinct combinations of create_ts, eff_dte, and exists_flg, so as you said, an index scan is not going to help much.
    The core problem is that both queries use a lot of temp tablespace. When this query is joined to other tables (which have the same design), the temp usage grows even bigger.
    Both the production and test environments are 3-node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     5
    Thanks,
    Vijay
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:17 AM
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:19 AM
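
    One more variation that may be worth testing against the same au_subtyp columns (a sketch only; it still has to sort per au_id_k, so compare its temp usage and elapsed time before adopting it). MAX(...) KEEP (DENSE_RANK LAST ...) returns the values from the latest (eff_dte, create_ts) row per au_id_k in a single pass, which matches the rn = 1 logic of Code 2:

    SELECT au_id_k, pgm_struct_id_k
    FROM  (SELECT au_id_k,
                  MAX(pgm_struct_id_k) KEEP (DENSE_RANK LAST ORDER BY eff_dte, create_ts) AS pgm_struct_id_k,
                  MAX(exists_flg)      KEEP (DENSE_RANK LAST ORDER BY eff_dte, create_ts) AS exists_flg
           FROM   au_subtyp
           WHERE  create_ts < TO_DATE('2013-01-01', 'YYYY-MM-DD')
           AND    eff_dte  <= TO_DATE('2012-12-31', 'YYYY-MM-DD')
           GROUP  BY au_id_k)
    WHERE  exists_flg = 'Y';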

  • Report is not fetching the data from Aggregate..

    Hi All,
    I am facing a problem with aggregates.
    When I run the report using Tcode RSRT2, the BW report is not fetching the data from the aggregate; instead of going to the aggregate it scans the whole cube.
    FYI, I checked that the characteristics match the aggregate exactly.
    It also gives the message:
    "Characteristic 0G_CWWPTY is compressed but is not in the aggregate/query"
    Can somebody explain this error message? Please let me know the solution as soon as possible.
    Thank you in advance.
    With regards,
    Hari

    Hi,
    Deactivate the aggregates, rebuild the indexes, and then activate the aggregates again.
    GTR

  • Lookup in transformation not fetching all records

    Hi Experts,
    In the routine of the transformation of a DSO (say DSO1), I have written a look-up on another DSO (say DSO2) to fetch records. I have used all the key fields of DSO2 in the SELECT statement, but the look-up is still not fetching all the records into DSO1. There is a difference in the aggregated value of the key figure between the two DSOs. Please suggest how I can remove this error.
    Thanks,
    Tanushree

    Hi Tanushree,
    The code you have written in the field routine for the lookup is not fetching the data. You can debug the field routine code in simulation mode of the DTP execution by setting a breakpoint after the transformation.
    You can also test the routine without actually loading the data: double-click the rule where you have the routine, and below it there is an option called "test routine" where you can pass input parameters.
    I hope this gives you an idea.
    Regards
    Chandoo7

  • Why is this query not using the index?

    check out this query:-
    SELECT CUST_PO_NUMBER, HEADER_ID, ORDER_TYPE, PO_DATE
    FROM TABLE1
    WHERE STATUS = 'N'
    and here's the explain plan:-
    -------------------------------------------------------------------------------------
    | Id | Operation         | Name   | Rows  | Bytes | Cost (%CPU)|
    -------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT  |        | 2735K |  140M | 81036   (2)|
    |* 1 | TABLE ACCESS FULL | TABLE1 | 2735K |  140M | 81036   (2)|
    -------------------------------------------------------------------------------------
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - filter("STATUS"='N')
    There is already an index on this column, as is shown below:-
    INDEX_NAME    INDEX_TYPE   UNIQUENESS   TABLE_NAME   COLUMN_NAME   COLUMN_POSITION
    TABLE1_IDX2   NORMAL       NONUNIQUE    TABLE1       STATUS        1
    TABLE1_IDX    NORMAL       NONUNIQUE    TABLE1       HEADER_ID     1
    So why is this query not using the index on the 'STATUS' Column?
    I've already tried using optimizer hints and regathering the stats on the table, but the execution plan still remains the same, i.e. it still uses a FTS.
    I have tried this command also:-
    exec dbms_stats.gather_table_stats('GECS','GEPS_CS_SALES_ORDER_HEADER',method_opt=>'for all indexed columns size auto',cascade=>true,degree=>4);
    In spite of this, the query still uses a full table scan.
    The table has around 55 lakh (5.5 million) records across 60 columns, and because of the FTS the query takes a long time to execute. How do I make it use the index?
    Please help.
    Edited by: user10047779 on Mar 16, 2010 6:55 AM

    If the cardinality is really as skewed as that, you may want to look at putting a histogram on the column (sounds like it would be in order, and that you don't have one).
    create table skewed_a_lot
    as
       select
          case when mod(level, 1000) = 0 then 'N' else 'Y' end as Flag,
          level as col1
       from dual connect by level <= 1000000;
    create index skewed_a_lot_i01 on skewed_a_lot (flag);
    exec dbms_stats.gather_table_stats(user, 'SKEWED_A_LOT', cascade => true, method_opt => 'for all indexed columns size auto');
    is an example.
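
    For the table in question, gathering a histogram on just the skewed column might look like this (a sketch; substitute your own owner and table name, and check that SIZE 254 suits your data):

    -- sketch: collect a histogram on STATUS so the optimizer sees the skew
    exec dbms_stats.gather_table_stats(ownname => 'GECS', tabname => 'TABLE1', method_opt => 'for columns STATUS size 254', cascade => true);

    Once the optimizer knows that 'N' is rare, it should be able to choose TABLE1_IDX2 for the STATUS = 'N' predicate on its own.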

  • SQL query to fetch the records only when fewer than 10 were created in the week

    Dear All,
    I have a requirement to write a SQL query that returns records only for users who inserted fewer than 10 records.
    The table has a created_date column along with a key column that holds unique values.
    Ex1: A user may have inserted records from the application during a week, say between '01-jun-2013' and '08-jun-2013'; the number of records created by the user during that week may be fewer than 10, or more.
    I want to fetch the records for a given date range, but only when the count falls within 10 records.
    I do not want the query to return the records if the user inserted more than 10 records in the week.
    Ex2:
    User 1 created 15 records during week 1 (the query should not return these).
    User 2 created fewer than 10 records from the UI during week 2 (these should be returned).
    Thanks

    Use COUNT to find how many rows were inserted in a week.
    If this does not answer your question, then please read "Re: 2. How do I ask a question on the forums?" and provide the necessary details.
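
    As an illustration of the COUNT approach, a sketch using an analytic count per user per ISO week (the table and column names user_activity, created_by, and created_date are assumptions; substitute your own):

    SELECT *
    FROM  (SELECT t.*,
                  COUNT(*) OVER (PARTITION BY created_by, TRUNC(created_date, 'IW')) AS recs_in_week
           FROM   user_activity t
           WHERE  created_date >= DATE '2013-06-01'
           AND    created_date <  DATE '2013-06-08')
    WHERE  recs_in_week < 10;

    Rows belonging to a user/week combination with 10 or more inserts are filtered out, while the detail rows of the smaller groups are kept.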

  • Invoice hold workflow is not fetching the approver from ame

    Hi,
    I am trying to get the next approver (3rd level) in the WF process from AME through a profile option, but it is not fetching the approver.
    My query is:
    SELECT persion_id||employee_id
    FROM fnd_user
    WHERE employee_id = fnd_profile.VALUE('MG_AP09_PAYABLES_SUPERVISOR')
    I am getting the other two approvers (level 1 and level 2), not through a profile option but through a direct join of tables, as given below:
    SELECT 'person_id:'|| rcv.EMPLOYEE_ID
    FROM ap_holds_all aph
    ,po_distributions_all pd
    ,rcv_transactions rcv
    WHERE pd.line_location_id = aph.line_location_id
    AND pd.PO_DISTRIBUTION_ID= rcv.PO_DISTRIBUTION_ID
    AND aph.hold_id = :transactionId
    AND transaction_type = 'DELIVER'
    SELECT 'person_id:'|| HR2.attribute2
    from ap_holds_all AH
    ,po_line_locations_all PLL
    ,hr_locations_all HR1
    ,hr_locations_all HR2
    where pll.line_location_id = AH.line_location_id
    AND pll.ship_to_location_id = HR1.location_id
    AND nvl(HR1.attribute1,HR1.location_id) = HR2.location_id
    AND AH.hold_id = :transactionId
    what may be the issue?

    Hi Surjith,
    Please look at the code I have written in the user exit, which is just for testing purposes. In SPRO I set workflow 9 for all the release codes.
    IF i_eban-werks = '1000'.
      actor_tab-otype = 'US'.
      actor_tab-objid = 'S_RITESH'.
      APPEND actor_tab.
      CLEAR actor_tab.
    ENDIF.
    In the PR I am getting the user name correctly in the processor column.
    Please let me know if I am going wrong somewhere.
    Thank you.

  • Query Regarding fetching selected records from table

    Hi All,
    I have a table like below:
    NUM1     NUM2     TYPE
    1     2     A
    1     2     S
    2     3     S
    3     4     A
    3     4     S
    4     5     S
    If a (NUM1, NUM2) pair has a record with TYPE = 'A', then select only that record for the pair.
    E.g. for num1=1 and num2=2 there are two records, with type 'A' and 'S'; only the record with type 'A' should be selected.
    Output should be like this:
    NUM1     NUM2     TYPE
    1     2     A
    2     3     S
    3     4     A
    4     5     S
    Please anyone could help me in this query.
    Any help would be highly appreciated.
    Thanks & Regards
    Anuj

    A simple MIN will get what you want here (it works because 'A' sorts before 'S'):
    SQL> with t
      2  as
      3  (
      4     select 1 num1, 2 num2, 'A' type from dual union all
      5     select 1, 2, 'S' from dual union all
      6     select 2, 3, 'S' from dual union all
      7     select 3, 4, 'A' from dual union all
      8     select 3, 4, 'S' from dual union all
      9     select 4, 5, 'S' from dual
    10  )
    11  select num1, num2, min(type) type
    12    from t
    13  group by num1,num2
    14  order by num1,num2
    15  /
          NUM1       NUM2 T
             1          2 A
             2          3 S
             3          4 A
             4          5 S
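
    If the real TYPE values do not happen to sort with 'A' first, the same idea can be made explicit with a KEEP clause (a sketch against the sample data above):

    select num1, num2,
           min(type) keep (dense_rank first
                           order by case type when 'A' then 1 else 2 end) type
      from t
     group by num1, num2
     order by num1, num2;

    This prefers the 'A' row for each (num1, num2) pair and otherwise falls back to whatever else is there.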

  • Can we split and fetch the records in Database Adapter

    Hi,
    I designed a Database Adapter to fetch records from an Oracle database. Sometimes the Database Adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process chokes and I get the error:
    java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    Could someone help me resolve this?
    In the Database Adapter, can we split and fetch the records when there are more than 1000 of them,
    e.g. the first 100 records as one set, the next 100 as a second set, and so on?
    Thank you.

    You can send the records in batches using the debatching feature of the DB adapter. Refer to the documentation for implementation details.

  • Fetch the records from cache

    Say I have an emp table:
    eno ename sales
    1 david 1100
    2 lara 200
    3 james 1000
    1 david 1200
    2 lara 5400
    4 white 890
    3 james 7500
    1 david 1313
    eno can be duplicated.
    When I am given empno 1, I want to display his sales, i.e. 1100, 1200, 1313.
    The first time I will go to the database and fetch the records, but from then on I do not want to go to the database; I will fetch the records from a cache.
    I thought of doing it with a HashMap or Hashtable, but neither allows duplicate keys (and empno has duplicate values).
    How do I solve this problem?

    Hi,
    Have you considered splitting that table up? Caching is a very good idea, but doesn't this make it evident that your table structure keeps a lot of redundant data? It hardly makes sense to have sales figures in an emp table. Instead you can have an Emp table containing eno and ename, with eno as the primary key, and another table called Sales with eno and sales columns, where eno references the Emp table.
    If you still want to continue with this structure, then I think you can go ahead with the solution already suggested to you.
    Aviroop
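
    A sketch of the normalized design suggested above (the DDL and names are illustrative, not taken from your application):

    -- sketch: split the redundant emp/sales data into two tables
    CREATE TABLE emp (
      eno    NUMBER       PRIMARY KEY,
      ename  VARCHAR2(30) NOT NULL
    );
    CREATE TABLE sales (
      eno    NUMBER NOT NULL REFERENCES emp (eno),
      sales  NUMBER NOT NULL
    );
    -- all sales figures for employee 1
    SELECT sales FROM sales WHERE eno = 1;

    On the caching side, the usual workaround for duplicate keys is to map each eno to a collection of sales values rather than to a single value.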

  • Query to ignore the records which are 6 months older

    I need a query that ignores records which are more than 6 months old. My table has a column named quote_date. I need to ignore the records whose quote_date is more than 6 months in the past. Can anyone help me in this regard?
    thanks in advance
    rakesh

    Hi:
    SELECT *
      FROM table_name
     WHERE MONTHS_BETWEEN (SYSDATE, quote_date) <= 6
    HTH
    Saad,
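
    An index-friendly variant of the same filter (a sketch; it keeps only quotes from the last six months and leaves quote_date unwrapped, so an index on that column can be used):

    SELECT *
      FROM table_name
     WHERE quote_date >= ADD_MONTHS(SYSDATE, -6);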

  • GarageBand: no input level or sound when using an MBOX (Digidesign Audio Core) as the mic interface

    I am using an MBOX with a Digidesign Audio Core as a mic interface to record into GarageBand. When I change the preferences to use the MBOX, I cannot move the recording slider and no sound goes into GarageBand. Any solutions?

    I am on 10.5.8 with the latest GarageBand 5.1 and an MBOX 1. On Nov 4, 2009 I did a software update, and now I get no output or input volume on the computer itself. I get sound through the headphones on my MBOX, but not on the computer. Before the software update it worked great. I have tried upgrading the software through digidesign.com, but they do not seem to have what I need. Is there any way to get my old GarageBand back, i.e. go from 5.1 to an earlier version? Or maybe it was the update to 10.5.8? Can I go back in time, lol?
    Any help would be appreciated.
    BTW, I have tried all the system sound preferences and GarageBand drivers, back and forth.
    cheers
    ds
