E52 bad GPS performance without A-GPS.

I'm facing a problem getting a lock with the integrated GPS on my E52. With A-GPS there is no problem; the fix is quick. But I don't see why I should have to pay for A-GPS data when there is a GPS receiver built in. I waited for over 30 minutes in the open and still got no lock.
Has anyone else faced this issue, and how did you get it resolved? Nokia Care here in Singapore has no idea about it and just asks me to send the phone in.

ryanismo
Unchecking Assisted GPS just makes getting a lock a complete nightmare. I once left my N82 outside to try to get a lock in near-perfect weather conditions, and an hour later I didn't have even one satellite locked.
With A-GPS enabled, even 2 minutes would be too long for me; it usually takes around 20 seconds to get a lock.
So I don't think disabling A-GPS is such a good idea, and it doesn't solve the problem at hand.
Show your appreciation. Hit that kudos button real hard

Similar Messages

  • Bad INSERT performance when using GUIDs for indexes

    Hi,
we use an Oracle 9.2.0.6 database on Windows XP Pro. The application (.NET v1.1) uses ODP.NET. All PKs of the tables are GUIDs, represented in Oracle as RAW(16) columns.
When testing with mass data we increasingly see bad INSERT performance on some tables that contain many rows (~10M). Those tables have a RAW(16) PK and an additional non-unique index, also on a RAW(16) column (both standard B*tree indexes). A PerfStat report shows heavy activity on the index tablespace.
When I analyze the related table and its indexes, I see a very high clustering factor.
Is there a way to improve the insert performance in this case? Use another type of index? Generally avoid indexing RAW columns?
Please help.
    Daniel
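The high clustering factor Daniel mentions is the usual culprit with random GUID keys: index order and table (insertion) order are completely uncorrelated, so an index range scan touches a different table block for almost every row. A small, self-contained Python simulation (not Oracle itself; the block size and row count are made up for illustration) makes the effect visible:

```python
import os

def clustering_factor(keys, rows_per_block=100):
    """Approximate an index clustering factor: walk the index in key
    order and count how often the next entry lives in a different
    table block than the previous one."""
    # Table blocks are filled in *insertion* order.
    block_of = {key: i // rows_per_block for i, key in enumerate(keys)}
    ordered = sorted(keys)
    transitions = 1  # the first entry's block counts once
    for prev, cur in zip(ordered, ordered[1:]):
        if block_of[prev] != block_of[cur]:
            transitions += 1
    return transitions

n = 10_000
sequential = list(range(n))                 # e.g. a sequence-based PK
guids = [os.urandom(16) for _ in range(n)]  # random RAW(16) GUIDs

print(clustering_factor(sequential))  # -> 100 (one per table block)
print(clustering_factor(guids))       # close to n: nearly every index
                                      # entry jumps to another block
```

A common mitigation, where the schema allows it, is to switch to sequence-based or time-ordered keys so that insertion order and key order roughly coincide, which keeps the clustering factor near the number of table blocks instead of the number of rows.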

    Hi
After my last tests I conclude the following:
The query returns 1-30 records.
Test 1: Using Form Builder
- Execution time: 7-8 seconds
Test 2: Using JDeveloper/TopLink/EJB 3.0/ADF and Oracle AS 10.1.3.0
- Execution time: 25-27 seconds
Test 3: Using JDBC/ADF and Oracle AS 10.1.3.0
- Execution time: 17-18 seconds
    When I use:
    session.setLogLevel(SessionLog.FINE) and
    session.setProfiler(new PerformanceProfiler())
    I don’t see any improvement in the execution time of the query.
    Thank you
    Thanos

  • HT1651 how can i improve my macbook's performance without installing memory

    how can i improve my macbook's performance without installing memory

More RAM and a bigger, faster hard drive will help, and maybe a better graphics card as well, since 10.5 uses the video hardware much harder.
Go to the Apple icon at top left > About This Mac.
Then click More Info > Hardware and report everything up to *but not including the Serial #*...
    Hardware Overview:
    Machine Name: Power Mac G5 Quad
    Machine Model: PowerMac11,2
    CPU Type: PowerPC G5 (1.1)
    Number Of CPUs: 4
    CPU Speed: 2.5 GHz
    L2 Cache (per CPU): 1 MB
    Memory: 10 GB
    Bus Speed: 1.25 GHz
    Boot ROM Version: 5.2.7f1
    Then click on More Info>Hardware>Graphics/Displays and report like this...
    NVIDIA GeForce 7800GT:
      Chipset Model:          GeForce 7800GT
      Type:          Display
      Bus:          PCI
      Slot:          SLOT-1
      VRAM (Total):          256 MB
      Vendor:          nVIDIA (0x10de)
      Device ID:          0x0092
      Revision ID:          0x00a1
      ROM Revision:          2152.2
      Displays:
    VGA Display:
      Resolution:          1920 x 1080 @ 60 Hz
      Depth:          32-bit Color
      Core Image:          Supported
      Main Display:          Yes
      Mirror:          Off
      Online:          Yes
      Quartz Extreme:          Supported
    Display:
      Status:          No display connected

  • Bad query performance - how to analyze it?

    Hi all,
for the past 8 weeks we have been seeing bad query performance (roughly 30% worse than before) in our BW system. At the moment we use a BIA on revision 49 with 4 blades (16 GB).
I have already read note 1318214 and found that most of the time is spent on the virtual provider (over 80%!).
I've seen that a lot of the time is spent in the "Datamanager".
For example: it takes 0.76 s to select 3.5 million items from the relative provider and 78 s(!) to select 0 items from the virtual provider.
Information from RSDDSTATTREXSERV:
RFC Server: 497 | BIA client: 464 | BIA Kernel: 450 | ABAP RFC: 619
So it seems to be a problem on the BW side. What can we do to improve the performance, or analyze the query performance further?
    Best Regards,
    Jens

    Hi Jens,
    A few checks you may consider doing.
BIA availability: check the BI connection with BIA.
Check if you need to rebuild the BIA indexes. SAP recommends doing this regularly, to repair degenerate indexes or to delete indexes that are no longer referenced (e.g. data in the cube was compressed or deleted and the indexes are no longer needed).
Check if a BIA reorganization is required. This is done to ensure the indexes are evenly distributed across the BIA landscape.
Try to find out from the BI admin whether major administration work was done within these 8 weeks, e.g. copying a cube, restructuring dimensions, copying data to a copy cube, archiving, etc.
You can use the BIA monitor to perform checks and monitor alerts from the BIA servers:
[BIA monitor|http://help.sap.com/saphelp_nw70/helpdata/en/43/7719d270d81a6ee10000000a11466f/content.htm]
This link tells you about the overall status of the BIA and any actions required.
It also has sublinks to other important transactions for BIA monitoring and maintenance.
To go to the BIA monitor: RSA1 -> BIA monitor icon.
Is your virtual provider reading data from R/3 or from BW?
Generally virtual providers are used to read data from other systems, so they would not have indexes in BIA, I believe, except for some applications like BCS where you may be reading data from BW itself.
    Hope this helps
Best regards,
    Sunmit.

  • BADI to perform a Cost Estimate Check in MD11

    I need a BADI to perform a cost estimate check in transaction MD11 whenever a planned order is getting created before SAVE. Can anyone please suggest me on this? The BADI MD_PLDORD_POST gets triggered after SAVE. I need a BADI which can be triggered before SAVE.

    Hi..
You can define costing variants which have a valuation variant for budgeted values, parallel to the standard cost estimate, in Customizing for Product Cost Controlling.
You can then define target cost versions using the above costing variants in customizing transaction OKV6.
When you calculate variances, you can set the "all target cost versions" flag to calculate variances for all target cost versions in the controlling area.
You can then analyze the difference between actual cost and the various budgeted costs using target cost versions.

  • BPC7-5 : Badi and Performance

    Hi experts,
Does anyone know how a BAdI impacts performance when executing simple reports?
And how can I optimize this?
    Tks for your help,
    Rgds,
    olivia;

    oops, I forgot some cases :
-> My report tries to retrieve data calculated through the BAdI.
Each time I do a refresh, it takes more than 1 hour :-(
How can I check the BAdI? How do I find the name of the BAdI?
Is it normal that the calculation is done on the fly? Can I avoid this? How?
Thanks a lot for your reply,
I'm not very technical... sorry,
    Tks,
    Olivia;

  • Increase Performance without changing  profile parameter

    Hi folks,
How can I increase performance without changing profile parameters?

    One of the easiest ways to gain some performance is to turn off any unnecessary services running on the server. Depending on the platform, you can gain some memory by just removing these (turn off sendmail or postfix if you are not sending any messages [monitoring or otherwise]).
    J. Haynes
    Denver, CO

  • How to improve select stmt performance without going for secondary index

    Hi friends,
I have a select statement which does not contain the key fields (primary index) in its WHERE condition, and I have to improve that select statement's performance without creating secondary indexes.
Can you please suggest an alternative way to do this?
    Thanks in advance,
    Ramesh.

    Hi,
If possible, create a secondary index of your own. But if you have a restriction on this, try to arrange the fields in the WHERE clause in the same order as they appear in the table definition.
This will help the performance a bit.
Another option: if your table doesn't contain any critical data, or the data in it is not updated frequently, you may go for buffering. It is a good alternative to indexing, within the above limitations.
For details on buffering, check the link below and all its sublinks.
    [concept of buffering|http://help.sap.com/saphelp_nw04/helpdata/en/cf/21f244446011d189700000e8322d00/content.htm]
    Regards,
    Anirban
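To illustrate Anirban's two points outside ABAP, here is a sketch in Python with sqlite3 (the table, field and value names are invented for the example): without an index on the predicate column, every such SELECT is a full table scan, while a one-time in-memory buffer keyed by the filter field turns later lookups into dictionary hits, which is roughly what SAP table buffering does for rarely updated tables.

```python
import sqlite3
from collections import defaultdict

# A stand-in table queried WITHOUT its primary key in the WHERE clause.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, plant TEXT, qty INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, f"P{i % 5}", i) for i in range(1000)])

# With no secondary index on 'plant', the optimizer can only scan:
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE plant = 'P3'").fetchone()
print(plan[-1])  # plan detail mentions a full SCAN of the table

# Poor man's table buffering: read the table once, then serve every
# later read from an in-memory map keyed by the filter field.
buffer = defaultdict(list)
for row in conn.execute("SELECT id, plant, qty FROM orders"):
    buffer[row[1]].append(row)

print(len(buffer["P3"]))  # -> 200 rows, with no further DB round trips
```

The same trade-off applies as in ABAP: buffering only pays off when the data is read often and changed rarely, since every update invalidates the buffered copy.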

  • E52 internal GPS fix problem only with OVI Maps

    Hi all,
I've read that many, many E52 phones suffer internal GPS fix problems.
I don't believe this is a hardware problem, only a software bug.
My experiments show this:
1) Start phone, start Ovi Maps -> no lock at all (only 1 blinking line in the bottom left corner).
2) Start phone, start the McGuider (Sygic) navigator -> the E52 locks almost immediately (in the same place!!).
3) After GPS is locked, start Ovi Maps again. Now Maps works great too!
4) The built-in GPS application under Menu -> Applications -> GPS -> GPS Data shows the same behaviour as Ovi Maps.
Can anyone try this and report the same or a similar experience?
    Thank you!

    uallaualla wrote:
    Thank you for the response.
I'm sure the fix is faster if I use A-GPS, but I have to pay for the data connection.
And, above all, Sygic works perfectly WITHOUT A-GPS, so why doesn't Ovi Maps?
Yes, you have to pay, but the connection only takes about 20 seconds. About Sygic you're right, I noticed that too. It locks pretty fast.
    ‡Thank you for hitting the Blue/Green Star button‡
    N8-00 RM 596 V:111.030.0609; E71-1(05) RM 346 V: 500.21.009

  • Nokia 5800 - Bad GPS reception lately (maybe after...

    Hello,
When I first got the phone (about 3 months back), the GPS reception was fine. Inside buildings it didn't find satellites, but outside it was no problem. Lately, though, the reception is really bad. Even when I'm outside and not near large buildings, my 5800 can only find a few satellites.
I think it may have started when I upgraded the firmware of my phone to v30, but I'm not sure.
Does anyone know what I can do about it? It's really annoying!
    Thank you all in advance

Maybe you have deleted the Maps data. Connect the phone to the PC through USB and, in Windows Explorer, browse for the 'Cities' folder and the 'qf' files on the phone / memory card. Delete them. Now disconnect the phone and open Maps, then close it after some 20-30 seconds so that the folder structure is recreated. Now connect the phone again and open Nokia Suite. Go to Maps in Nokia Suite, select the maps you want to download / install, and click the appropriate tab. Once installed, you should be able to use Maps once again...

  • N95 has bad GPS map inside!

    Guys, is anybody here, who uses N95 as GPS navigator in Russia? No?
    Don't try, I guess.
The N95 has a wrong, old and very unfinished map of Moscow inside. Streets and directions are invalid. Road junctions are absent. For example, this map doesn't show the Third Transport Ring of Moscow, but this is a very important and strategic highway for any driver! Or if you take a look at the Sadovoye ring, you will see some wrong crossroads, e.g. where Sadovaya-Koudrinskaya street intersects with Tverskaya.
I really appreciate Nokia for this beautiful and powerful smartphone, but I don't appreciate their map of Russia. I will probably use another device for GPS routing in Moscow.
Alexey. Message Edited by av_degt on 04-Jul-2007 01:28 PM

    Hi Alexey
I am sorry for your disappointment, but please be advised that this is a user-to-user forum and will not elicit a response from Nokia. Please use "Contact us" at the top right of your screen to bring this shortcoming to Nokia's attention.
    Happy to have helped forum in a small way with a Support Ratio = 37.0

  • Premiere Elements 10 Bad/No performance on Win 8 64-bit Core i5-4440 16gig ram and GeForce GTX-650

For just over a year I have owned Premiere Elements 10, but it has never pleased me, due to bad and inconsistent performance. Just putting videos one after another, without any transitions, plays back so-so at a few frames per second (which is ridiculous). But when I start doing things like motion keyframes, I have no preview at all, just a bit of choppy audio. First I thought it was my laptop (Win 7 64-bit, Core i7 3rd gen, GeForce 610M, 8 GB of RAM); not bad, but I thought, well, it is a laptop, so it might be a bit slower, and I'd have to deal with it. However, for a few months now I have owned a new PC (specs above), and I experience the very same issue. No performance! Every time I think about editing a video, I start the program and within a few minutes get so angry with it that I close it and leave the videos be.
Pre-rendering the workspace also has no effect at all. I have no red bar above the timeline (sometimes it is even green).
Worst of all, it keeps messing up the timing of the video while editing. If you change a clip in the timeline, all the following clips (the motion edits) are destroyed, which is only visible after a few hours of rendering, since the live preview is audio only.
What in the world can be done to fix this problem? As you can probably tell, I'm pretty annoyed.
The free video editing software that comes with the GoPro moves, rotates and scales smoothly in live playback, so it is not impossible on this machine. (Too bad the functionality of that program is so limited, else I could finally throw Premiere Elements away!)
Is there anybody who can help me in my despair?

An nVidia driver rollback may help with Premiere Elements 10: http://forums.adobe.com/thread/1317675

  • Bad NCP Performance vs CIFS

Our environment:
OES 11 SP1
Connected via 10 Gbit fibre (direct attach cable to the switch)
Clients: Windows 7
Connected via 1 Gbit
We see bad NCP performance on the same volume vs CIFS:
While copying a file from the workstation to a volume via NCP we get about ~45 MB/s.
While copying a file from the workstation to the same volume via CIFS we get about ~100 MB/s.
NCP can be slower, but that's a huge difference. Any ideas on how to investigate further are welcome.
    regards

    Originally Posted by mrosen
    On 29.10.2013 21:16, Bob-O-Rama wrote:
    >
    > No idea.
    >
    > I know people have reported performance differences, in my experience
    > its not been substantial. Perhaps we have some super awesome network
    > or something. ;-) Which we do... because "network" is in my job
    > title.
Does that mean you see ~100 MB/s via NCP on a Gbit connection?
    CU,
    Massimo Rosen
    Novell Knowledge Partner
    No emails please!
I finally found the time to work a little on this: dedicated boxes (pretty old but decent hardware) on a dedicated LAN, with no traffic apart from what I've triggered. The tests are far from real-life operations, as it's just copying a 3 GB ISO back and forth and doing something similar with "LAN Speed Test Lite" (totusoft.com). Observations as follows:
On XP the NCP operations outperform CIFS operations across the board. I get a constant 70 MB/s on both reads and writes via NCP; CIFS writes at about 60 and reads at 50 MB/s (server and workstations rebooted after each test).
On W7, NCP performance is about the same as on XP; CIFS reads are slightly below 70 MB/s, but CIFS writes run constantly at around 105 MB/s.
There are some pretty confusing things I wonder about:
In the past, the FILE_COMMIT setting on the server just decided whether or not to honor client-initiated commit requests (did this change?). Copying a file with Windows Explorer never triggers a commit, so neither the client nor the server setting should be a factor here. In fact, the server setting really has no effect, but on W7 the client setting makes a real difference (though there's no NCP 59 verb leaving the client). The 70 MB/s via NCP on W7 could only be reached with FILE_COMMIT enabled on the client. While I have set commit on and oplocks off all my life anyway, I'd be interested how setting commit off could drop transfer rates down to 60 MB/s without a commit request on the wire, regardless of this setting. Obviously there are effects beyond requesting a commit on a buffer-flush call made by an application. Maybe something Alan would like to share with us...
What makes the W7 CIFS writes significantly faster? I don't think that caching or lazy writes are a factor here, as tests with files larger than the server's or workstations' memory lead to identical results, i.e. continuous data flow on the network and a maximum of 2 seconds of I/O on the drives after the copy has finished. There is no difference when calling "sync" or "fsync" after NCP / CIFS operations; not sure how this integrates with NSS, though.
Write caches on the hard drives and the array controller are disabled, btw.
Finally: does anyone have an idea what the FILE_FLUSH parameter in cifs.conf is about? Setting it to yes or no didn't make a difference for these tests; I'm just curious.
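For anyone wanting to repeat these copy tests without Windows Explorer's behaviour in the way, a minimal timed-write probe is easy to script. The Python sketch below (the path is a placeholder; point it at a file on the NCP or CIFS mount to compare shares) reports sustained write throughput in MB/s, with a final fsync so the client-side write cache doesn't flatter the number:

```python
import os
import tempfile
import time

def write_throughput_mb_s(path, total_mb=64, chunk_mb=1):
    """Write `total_mb` of zeroes in `chunk_mb` chunks, fsync at the
    end, and return the sustained write rate in MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data to stable storage
    return total_mb / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as d:
    # Replace this local temp file with a path on the mapped
    # NCP or CIFS volume to measure the network share instead.
    rate = write_throughput_mb_s(os.path.join(d, "probe.bin"))
    print(f"{rate:.1f} MB/s")
```

Running it once against each protocol's mount from the same workstation gives a like-for-like comparison; a read probe would work the same way with `open(path, "rb")` and timed `read()` calls.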

  • Bad reporting performance after compressing infocubes

    Hi,
as I learned, we should compress requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as the database, we can use partitioning on the E fact table to further increase reporting performance. So far, all theory...
After getting complaints about worsening reporting performance, we tested this theory. I created four InfoCubes (same data model):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
Now I copied one query onto each cube, and with this I tested the performance (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes and one branch.
With this selection on each cube, I expected that cube D would be fastest, since then only one (small) partition holds the relevant data. But reality shows a different picture:
Cube A is fastest with an avg. time of 8.15 s, followed by cube B (8.75 s, +8%), cube C (10.14 s, +24%) and finally cube D (26.75 s, +228%).
Does anyone have an idea what's going wrong? Are there some DB parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

Hi Björn,
thanks for your hints.
1. After compressing the cubes, I refreshed the statistics in the InfoCube administration.
2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
3. Here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and retest at the end of this week.
4. The loaded data spans 10 months. The records are nearly equally distributed over these 10 months.
5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years. The 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after, but I set the max. number of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year, but roughly 8 months.
6. Since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that, it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the quoted times are averages over all runs; the averages show the same picture as the single runs (cube A is always fastest, cube D always the worst).
    Any further ideas?
    Greets,
    Knut

  • Macbook performance without the battery

I use the MacBook Pro for editing video in Final Cut Pro, and sometimes when I am at home I remove the battery.
I then choose better performance, but the performance is not so good; I constantly get dropped-frames warnings in Final Cut.
I've read about this a lot, and most people say that if you remove the battery, the MacBook Pro runs at half performance.
So my question is:
can I get the full performance of the MacBook Pro on mains power without the battery, like I have when the battery is installed?
    thanks

    > but can you explain me why??
Not fully; some call it a bug, others a feature / thermal protection.
    http://discussions.apple.com/thread.jspa?threadID=489707&tstart=0
