Re: online number takes too long to transfer

Same problem.  Did you manage to get a fix?  I use it for my business and I'm missing too many calls!!

> Tools > Options > Calls > Call Forwarding > Forward calls if I don't answer within "x" seconds > SAVE

Similar Messages

  • RPURMP00 program takes too long

    Hi Guys,
    Need some help on this one, guys. Not getting anywhere with this issue.
    I am running RPURMP00 (Program to Create Third-Party Remittance Posting Run), and while running it in test mode for 1 employee it takes too long.
    I ran this in the background during off hours, but it takes 19,000+ seconds to run and then cancels.
    The long text message is "No entry in table T51R6_FUNDINFO (Remittance detail table for all entities) for key 0002485844" and "Job cancelled after system exception ERROR_MESSAGE".
    I checked the program, found a nested loop within it (include RPURMP02), and decided to debug it with a breakpoint.
    It short dumped; here is the ST22 message and source code extract.
          ---- Message ----
    "Time limit exceeded."
    "The program "RPURMP00" has exceeded the maximum permitted runtime without
    interruption and has therefore been terminated."
          ---- Source code extract ----
    Include RPURMP02
      172 *&---------------------------------------------------------------------*
      173 *&      Form  get_advice_info
      174 *&---------------------------------------------------------------------*
      175 *       text
      176 *----------------------------------------------------------------------*
      177 *  -->  p1        text
      178 *  <--  p2        text
      179 *----------------------------------------------------------------------*
      180 FORM get_advice_info .
      181
      182 * get information for advice form only if vendor sub-group and
      183 * employee detail is maintained
      184   IF ( NOT t51rh-lifsg IS INITIAL ) AND
      185      ( NOT t51rh-hrper IS INITIAL ).
      186
      187 *   get remittance items employee number
      188     SELECT * FROM t51r4 WHERE remky = t51r5-remky. "#EC CI_GENBUFF "SAM0632658
      189 *     get payroll seqno determined by PERNR and RDATN
    >>>>>       SELECT * FROM t51r8 WHERE pernr = t51r4-pernr
      191                             AND rdatn = t51r5-rdatn
      192                             ORDER BY PRIMARY KEY. "#EC CI_GENBUFF
      193         EXIT.
      194       ENDSELECT.
    Has anyone ever come across this situation? Any input from anyone on this?
    Regards.
    CJ

    Hi,
    What is your SAP version?
    Have you checked whether there are any OSS notes on performance?
    Regards,
    Atish

  • My Query takes too long ...

    Hi ,
    Environment: DB 10g, O/S Linux Red Hat; my DB size is about 80 GB.
    My query takes too long, about 5 days to get results. Can you please help rewrite this query in a better way?
    declare
    x number;
    y date;
    START_DATE DATE;
    MDN VARCHAR2(12);
    TOPUP VARCHAR2(50);
    begin
    for first_bundle in
    select min(date_time_of_event) date_time_of_event ,account_identifier  ,top_up_profile_name
    from bundlepur
    where account_profile='Basic'
    AND account_identifier='665004664'
    and in_service_result_indicator=0
    and network_cause_result_indicator=0
    and   DATE_TIME_OF_EVENT >= to_date('16/07/2013','dd/mm/yyyy')
    group by account_identifier,top_up_profile_name
    order by date_time_of_event
    loop
    select sum(units_per_tariff_rum2) ,max(date_time_of_event)
    into x,y
    from OLD_LTE_CDR
    where account_identifier=(select first_bundle.account_identifier from dual)
    and date_time_of_event >= (select first_bundle.date_time_of_event from dual)
    and -- no more than a month
    date_time_of_event < ( select add_months(first_bundle.date_time_of_event,1) from dual)
    and -- finished his bundle then buy a new one
      date_time_of_event < ( SELECT MIN(DATE_TIME_OF_EVENT)
                             FROM OLD_LTE_CDR
                             WHERE DATE_TIME_OF_EVENT > (select (first_bundle.date_time_of_event)+1/24 from dual)
                             AND IN_SERVICE_RESULT_INDICATOR=26);
    select first_bundle.account_identifier ,first_bundle.top_up_profile_name
    ,FIRST_BUNDLE.date_time_of_event
    INTO MDN,TOPUP,START_DATE
    from dual;
    insert into consumed1 VALUES(X,topup,MDN,START_DATE,Y);
    end loop;
    COMMIT;
    end;

    > where account_identifier=(select first_bundle.account_identifier from dual)
    Why are you doing this?  It's a completely unnecessary subquery.
    Just do this:
    where account_identifier = first_bundle.account_identifier
    Same for all your other FROM DUAL subqueries.  Get rid of them.
    More importantly, don't use a cursor for loop.  Just write one big INSERT statement that does what you want.
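    Following up on that advice, here is a hedged sketch of what the single INSERT could look like. The CONSUMED1 column order is assumed from the original VALUES list (x, topup, mdn, start_date, y), and note that, unlike the SELECT INTO version, this inner join silently skips bundles that have no matching OLD_LTE_CDR rows:
    insert into consumed1
    select sum(c.units_per_tariff_rum2),          -- x: units consumed in the window
           fb.top_up_profile_name,                -- topup
           fb.account_identifier,                 -- mdn
           fb.date_time_of_event,                 -- start_date
           max(c.date_time_of_event)              -- y: last event in the window
    from ( select min(date_time_of_event) date_time_of_event,
                  account_identifier, top_up_profile_name
           from bundlepur
           where account_profile = 'Basic'
           and account_identifier = '665004664'
           and in_service_result_indicator = 0
           and network_cause_result_indicator = 0
           and date_time_of_event >= to_date('16/07/2013','dd/mm/yyyy')
           group by account_identifier, top_up_profile_name ) fb,
         old_lte_cdr c
    where c.account_identifier = fb.account_identifier
    and c.date_time_of_event >= fb.date_time_of_event
    and c.date_time_of_event < add_months(fb.date_time_of_event, 1)
    and c.date_time_of_event < ( select min(date_time_of_event)
                                 from old_lte_cdr
                                 where date_time_of_event > fb.date_time_of_event + 1/24
                                 and in_service_result_indicator = 26 )
    group by fb.account_identifier, fb.top_up_profile_name, fb.date_time_of_event;
    commit;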

  • Quantity conversion takes too long

    Dear Gurus,
    I'm having a problem with the query execution time when I convert the quantities of the materials in KGs.
    I have done all the steps to set up material conversion with reference InfoObject 0MATERIAL, using dynamic determination of the conversion factor via central units of measurement (T006), otherwise the reference InfoObject.
    With these settings the query takes too long to execute because of the large number of material codes. If I remove the conversion, the query executes very fast.
    Any ideas? Do I have to create an index on the UOM0MATE ODS? What am I missing here?
    Regards,
    Panos

    Hi Panos,
    I had the same issue, but it's solved for me now. I tried the same approach you did, creating a secondary index on the active table of the DSO. The only difference is that I included all the SID fields in the index.
    Did you mark your index as unique? Also make sure that the index really is created on the DB.
    If performance still does not improve, check the statistics in RSRT to see whether the unit conversion really is the problem.
    Kind regards,
    Matthias

  • I don't know why it takes so long to sample a flat file

    I don't know why it takes so long to sample a flat file.
    OWB Client 10.1
    While importing a fixed-width flat file,
    the "Flat File Sample Wizard" screen shows a number-of-rows text box with a default value of 200.
    I want to extend this value to 700,000.
    But it takes too long (over 5 hours) to sample it.
    Do you know why this happens, or how I can fix this problem?
    Thanks in Advance.
    Regards,
    JWS.

    Hello,
    Actually, the flat file sampling process's goal is to capture the structure of the file. That's why the sample size is initially set to 200 lines.
    The question is why you are trying to sample 700,000 rows. Are you expecting some change in structure beyond this mark?
    If so, and you want to capture the fact that your source file is multi-typed, you had better prepare a small file for sampling outside of OWB (see the sketch after this reply).
    Sergey
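    If you do prepare a smaller sample file outside OWB, a minimal sketch of one way to cut it on a Unix-like host (the file names and row count are only examples, not anything from the original thread):
      # take the first 1,000 rows of the full extract as a sampling file
      head -n 1000 full_extract.dat > sample_extract.dat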

  • Why does it take too long to open attachments from email account?

    It takes forever to download attachments from my email account. I have tried the tasks explained on the Vista forum: http://windowshelp.microsoft.com/Windows/en-US/help/6b046ae9-1434-4423-9303-400ff6fe686b1033.mspx#ESD but none of the possible fixes work.
    After clicking Open on the pop-up box asking whether I want to open or save the attachment, it takes too long to download. The transfer window stays open showing that it is ready to download, but it just sits at that box. I press Cancel and try to open it again; if I'm lucky it opens the file, otherwise it takes forever, forcing me to cancel. The files are very small most of the time, usually around 50 KB, so they should take seconds.
    I have even tried saving the files, but it is the same process again. The transfer box stays open but does not download.
    Anyone any ideas?
    Thanx in advance.

    Hello
    It is not easy to say exactly what is happening, but it must be something with the email account provider and their page. To me this does not look like a typical Vista problem, but you can try to find a solution on the Microsoft Vista IT Pro forum.
    By the way, do you have an alternative mail address with some other provider? Is the situation the same there?

  • Why finding replication stream matchpoint takes too long

    hi,
    I am using BDB JE 5.0.58 HA (a two-node group, 6 GB JVM heap for each node).
    Sometimes I find that a BDB node takes too long to restart (about 2 hours).
    When this occurs, I capture the process stack of the BDB JVM process with jstack.
    After analyzing the stack, I found "ReplicaFeederSyncup.findMatchpoint()" taking all the time.
    I want to know why this method takes so much time, and how I can avoid this bad case.
    Thanks.
    Post edited by liang_mic

    Liang,
    2 hours is indeed a huge amount of time for a node restart. It's hard to be sure without doing more detailed analysis of your log as to what may be going wrong, but I do wonder if it is related to the problem you reported in "outOfMemory error presents when cleaner occurs" [#21786]. Perhaps the best approach is for me to describe in more detail what happens when a replicated node is connecting with a new master, which might give you more insight into what is happening in your case.
    The members of a BDB JE HA replication group share the same logical stream of replicated records, where each record is identified with a virtual log sequence number, or VLSN. In other words, the log record described by VLSN x on any node is the same data record, although it may be stored in a physically different place in the log of each node.
    When a replica in a group connects with a master, it must find a common point, the matchpoint, in that replication stream. There are different situations in which a replica may connect with a master. For example, it may have come up and just joined the group. Another case is when the replica is up already but a new master has been elected for the group. One way or another, the replica wants to find the most recent point in its log, which it has in common with the log of the master. Only certain kinds of log entries, tagged with timestamps, are eligible to be used for such a match, and usually, these are transaction commits and aborts.
    Now, in your previous forum posting, you reported an OOME because of a very large transaction, so this syncup issue at first seems like it might be related. Perhaps your replication nodes need to traverse a great many records, in an incomplete transaction, to find the match point. But the syncup code does not blindly traverse all records, it uses the vlsn index metadata to skip to the optimal locations. In this case, even if the last transaction was very long, and incomplete, it should know where the previous transaction end was, and find that location directly, without having to do a scan.
    As a possible related note, I did wonder if something was unusual about your vlsn index metadata. I did not explain this in outOfMemory error presents when cleaner occurs but I later calculated that the transaction which caused the OOME should only have contained 1500 records. I think that you said that you figured out that you were deleting about 15 million records, and you figured out that it was the vlsn index update transaction which was holding many locks. But because the vlsn index does not record every single record, it should only take about 1,500 metadata records in the vlsn index to cover 15 million application data records. It is still a bug in our code to update that many records in a single transaction, but the OOME was surprising, because 1,500 locks shouldn't be catastrophic.
    There are a number of ways to investigate this further.
    - You may want to try using a SyncupProgress listener described at http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/rep/SyncupProgress.html to get more information on which part of the syncup process is taking a long time (a small wiring sketch follows after the usage listing below).
    - If that confirms that finding the matchpoint is the problem, we have an unadvertised utility, meant for debugging, to examine the vlsn index. The usage is as follows, and you would use the -dumpVLSN option, and run this on the replica node. But this would require our assistance to interpret the results. We would be looking for the records that mention where "sync" points are, and would correlate that to the replica's log, and that might give more information if this is indeed the problem, and why the vlsn index was not acting to optimize the search.
    $ java -jar build/lib/je.jar DbStreamVerify
    usage: java { com.sleepycat.je.rep.utilint.DbStreamVerify | -jar je-<version>.jar DbStreamVerify }
    -h <dir> # environment home directory
    -s <hex> # start file
    -e <hex> # end file
    -verifyStream # check that replication stream is ascending
    -dumpVLSN # scan log file for log entries that make up the VLSN index, don't run verify.
    -dumpRepGroup # scan log file for log entries that make up the rep group db, don't run verify.
    -i # show invisible. If true, print invisible entries when running verify mode.
    -v # verbose
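    Regarding the SyncupProgress suggestion above, here is a minimal wiring sketch, assuming the JE 5.x HA API (ReplicationConfig.setSyncupProgressListener and com.sleepycat.je.ProgressListener); the group name, node names and ports below are hypothetical placeholders:
      import java.io.File;
      import com.sleepycat.je.EnvironmentConfig;
      import com.sleepycat.je.ProgressListener;
      import com.sleepycat.je.rep.ReplicatedEnvironment;
      import com.sleepycat.je.rep.ReplicationConfig;
      import com.sleepycat.je.rep.SyncupProgress;

      public class SyncupTiming {
          public static void main(String[] args) {
              EnvironmentConfig envConfig = new EnvironmentConfig();
              envConfig.setAllowCreate(true);
              envConfig.setTransactional(true);

              // Group/node/host values are placeholders, not taken from the thread.
              ReplicationConfig repConfig =
                  new ReplicationConfig("myGroup", "node2", "node2host:5002");
              repConfig.setHelperHosts("node1host:5001");

              // Print how long it takes to reach each syncup phase, so a slow
              // matchpoint search shows up clearly in the node's stdout.
              repConfig.setSyncupProgressListener(new ProgressListener<SyncupProgress>() {
                  private long lastPhase = System.currentTimeMillis();

                  public boolean progress(SyncupProgress phase, long n, long total) {
                      long now = System.currentTimeMillis();
                      System.out.println(phase + " reached after " + (now - lastPhase) + " ms");
                      lastPhase = now;
                      return true; // returning false would abort the syncup
                  }
              });

              ReplicatedEnvironment repEnv =
                  new ReplicatedEnvironment(new File(args[0]), repConfig, envConfig);
              // ... normal application work ...
              repEnv.close();
          }
      }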

  • Takes too long to hibernate when I close the lid - Also random device noise when it boots up

    Hello guys.
    Ever since I've wiped the machine, I've been having these two problems. When I close the lid, it used to go to sleep straight away, but now I can see the sleep light (and the power button) flash repeatedly before it goes to sleep.
    When waking up it goes through the Lenovo startup screen and "Resuming Windows", then it asks for a password; before, I used to open the lid and it would ask me for the password straight away. I know it was going to sleep because I could hear the beep straight away when I closed and opened it, but now it just takes too long.
    Also, every time I boot into Windows or resume from a sleep state, I can hear the device noise, like something is being plugged in or out. But there is nothing being plugged in or out at the time. I can't get to Device Manager quickly enough to see what is causing it.
    But all drivers seem okay.
    Thanks in advance.
    Sam.
    EDIT: Also noticed that when the lid is closed, the laptop randomly turns off (I hear the beep) and then turns back on again.
    Weird.
     T420 model number: 4180-PR1 with OS: Windows 7 Pro 64 bit

    Hi Sam,
    is this to do with the T420 model number: 4180-PR1 with OS: Windows 7 64 bit installed on it as in another thread you posted in?
    Maybe you could pop the information into your signature; members like to know which system and OS are involved.  At the top next to Sign Out choose   My Settings > Personal Profile > Personal Information - Signature
    Andy

  • MacBook takes too long to boot

    From one day to the next, our MacBook started taking too long to boot. The problem is that from when we push the power button until the first grey Apple shows up, it can take up to 30 seconds. Once the Apple shows up, it boots in a few seconds. The whole process takes up to 1:20 minutes.

    Also... once you are in OS X, go to System Preferences and Startup Disk. Make sure your OS X volume is selected. Sometimes this can change causing your system to look for another startup disk... and it needs to "time out" before it moves on to the next. This is most common after removing a BootCamp installation of Windows... but can happen for any number of reasons.

  • Having lagging problems with YouTube videos, it simply takes so long for the red bar to completely load and the videos frequently pause and take too long to restart playing again.

    I have been having lagging problems with YouTube videos for a number of months now. It simply takes so long for the red bar to completely load and the videos frequently pause and take too long to restart playing again. Even with little 2 and 3 minute videos.
    I have a fast computer and my webpages load really, really fast. I have the Firefox 4 browser and a Vista Home Premium 64-bit OS, so I don't have a slow computer or web browser. But these slow YouTube vids take way too long to load for some reason.
    Does anyone have any idea how I can speed up YouTube?

    Hi
    The forums are customer-to-customer in the first instance; only the mods (BT) will ask for personal information through an email link. Would you mind posting your Hub stats & BT speed test results? This will help everyone with diagnosis.
    To post the full stats from your router
    for home hub - 192.168.1.254
    Navigate to ADSL Settings or use the A-Z at the top right
    Click on More Details and then post the results.
    Run BT speed tester and post the results from http://speedtester.bt.com/
    If possible it would be best to connect to the BT master socket; this will rule out any telephone/broadband extension wiring. Also consider the housing of the hub/router, as anything electrical can cause problems. These are your responsibility; if they are found to be the cause, Openreach (the engineer) will charge BT Broadband, which will be passed on to you, around £130.00.
    Is the line noisy when making telephone calls? If so, this is not good for your broadband. You can check with a quiet line test: dial 17070, option 2, and listen - you should hear nothing. This is best done with an old-type analogue phone; a digital (DECT) phone will do but may have a slight hiss. If you hear noise - crackling, pops, etc. - report it as a noisy line on your phone (don't mention broadband) to BT Faults on 151.
    As for FTTC, it's available in some areas; between 40% & 80% of customers in enabled areas can receive it!

  • Takes too long to send photos

    The photos on my iPhone 4 are too large; when sent with a text it takes too long, they are megabytes instead of kilobytes. Can this be adjusted?

    Daniel:  I took a look at your code and don't see any problem with it.  However, doing the math:
    19,200 BAUD = 19,200 BITS per second.
    10 BITS per BYTE ( 1 BIT start + 8 BITS data + 1 BIT stop = 10)
    19,200 BAUD = 19,200/10 BITS per BYTE = 1,920 BYTES per second
    1,920 BYTES per second / 1000 milliseconds per second = 1.92 BYTES per millisecond (ms)
    Now the above would not be a problem since you are only waiting for one character (which should only take 0.52 ms), however, you are sending 5 bytes.  That's a total of 6 bytes round trip or 6 * 0.52 ms = 3.12 ms.
    In addition, having "Enable Terminating Char" on your VISA Configure Serial Port can't be helping unless your serial LIN device requires it, because having to wait for a terminating character on a read or write (in your case it appears you are using a 0x0A or line feed) increases the number of characters that need to be transmitted.
    Another thought just came to me:  If your serial device is sending a terminating character, it is possible you may have a second read operation each cycle due to the 0x0A (line feed) and the fact that your code only reads one byte at a time from the serial port.  This may also add some time to your response.
    I hope this helps.

  • Sometimes my computer takes too long to connect to a new website. I am running a pretty powerful work program at the same time; what is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I need to "clean out" the computer?

    Many times my computer takes too long to connect to a new website. I have wireless internet (Time Capsule) and I am running a pretty powerful real-time financial work program at the same time. What is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I only need to "clean out" the computer? Or all of the above... not too computer savvy. It is a MacBook Pro, OS X 10.6.8 (late 2010).

    Almost certainly none of the above!  Try each of the following in this order:
    Select 'Reset Safari' from the Safari menu.
    Close down Safari;  move <home>/Library/Caches/com.apple.Safari/Cache.db to the trash; restart Safari.
    Change the DNS servers in your network settings to use the OpenDNS servers: 208.67.222.222 and 208.67.220.220
    Turn off DNS pre-fetching by entering the following command in Terminal and restarting Safari:
              defaults write com.apple.safari WebKitDNSPrefetchingEnabled -boolean false
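    If you prefer to script the DNS step as well, a hedged command-line equivalent (the network service name is an assumption; on OS X 10.6 the wireless service is usually called "AirPort", and you can list the exact names with networksetup -listallnetworkservices):
              networksetup -setdnsservers AirPort 208.67.222.222 208.67.220.220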

  • Accessing BKPF table takes too long

    Hi,
    Is there another way to write a faster and more optimized SQL query that accesses the table BKPF? Or are there other, smaller tables that contain the same data?
    I'm using this:
       select bukrs gjahr belnr budat blart
       into corresponding fields of table i_bkpf
       from bkpf
       where bukrs eq pa_bukrs
       and gjahr eq pa_gjahr
       and blart in so_DocTypes
       and monat in so_monat.
    The report is taking too long and is eating up a lot of resources.
    Any helpful advice is highly appreciated. Thanks!

    Hi max,
    I also tried using BUDAT in the where clause of my sql statement, but even that takes too long.
        select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat in so_budat.
    I also tried accessing the table per day, but that didn't work either...
       while so_budat-low le so_budat-high.
         select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat eq so_budat-low.
         so_budat-low = so_budat-low + 1.
       endwhile.
    I think our BKPF table contains a very large set of data. Is there any other table besides BKPF where we could get all accounting document numbers in a given period?

  • Report Takes Too Long

    Hi!
    I am in trouble.
    The following is the query:
    SELECT inv_no, inv_name, inv_desc, i.cat_id, cat_name, i.sub_cat_id,
    sub_cat_name, asset_cost, del_date, i.bl_id, gen_desc bl_desc, p.prvcode, prvdesc, cur_loc,
    pldesc, i.pmempno, pmname, i.empid, empname
    FROM inv_reg i,
    cat_reg c,
    sub_cat_reg s,
    gen_desc_reg g,
    ploc p,
    province r,
    pmaster m,
    iemp_reg e
    WHERE i.sub_cat_id = s.sub_cat_id
    AND i.cat_id = s.cat_id
    AND s.cat_id = c.cat_id
    AND i.bl_id = g.gen_id
    AND i.cur_loc = p.plcode
    AND p.prvcode = r.prvcode
    AND i.pmempno = m.pmempno(+)
    AND i.empid = e.empid(+)
    &wc
    order by prvdesc, pldesc, cat_name, sub_cat_name, inv_no
    This query returns 32,000 records.
    When I run it in Reports 10g,
    it takes 10 to 20 minutes to generate the report.
    How can I optimize it?

    Hi Waqas Attari,
    Please study & try this:
    When your query takes too long ...
    Hope it helps (a sketch of the usual first step, capturing an execution plan, follows below).
    Regards,
    Abdetu...
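    The "When your query takes too long" thread generally starts by asking for the query's execution plan. A minimal sketch of capturing one in SQL*Plus, using a cut-down version of the report query just to show the mechanism (in practice you would wrap the full query, with the &wc condition filled in):
      EXPLAIN PLAN FOR
      SELECT i.inv_no, i.inv_name, c.cat_name
        FROM inv_reg i, cat_reg c, sub_cat_reg s
       WHERE i.sub_cat_id = s.sub_cat_id
         AND i.cat_id     = s.cat_id
         AND s.cat_id     = c.cat_id;

      -- then display the plan that was just explained
      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);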

  • OPM process execution process parameters take too long to complete

    PROCESS_PARAMETERS are inserted every 15 minutes using gme_api_pub packages. Sometimes it takes too long to complete the batch, i.e. completion of the request: it can take about 5-6 hours, while at other times it takes only 15-20 minutes. This happens at regular intervals... if anybody can guide me I will be thankful to him/her.
    Thanks in advance.
    regds,
    Shailesh

    Generally the slowest part of the process is in the extraction itself...
    Check in your source system and see how long the processes are taking, and whether there are delays, locks or dumps in the database... If your source is R/3 or ECC, transactions like SM37, SM21 and ST22 can help monitor this activity...
    Consider running fewer processes in parallel if you have too many and see delays in jobs... Also consider indexing some of the tables in the source system to expedite the extraction; make sure there are no heavy processes or interfaces running in the source system at the same time you're trying to load... Check with your Basis guys for activity peaks and plan accordingly...
    In BW also check in your SM21 for database errors or delays...
    Just some ideas...
