Simple count of points within a radius takes too long

I have reduced my table to just two columns - an ID and a Geography column. I have a spatial index on the latter and a clustered PK index on the former. There are 10 million rows in the table, and the indexes are not fragmented.
A query counting all points within 1000 km of a specified point returns around 7.5 million but takes 30+ seconds to execute.
The query plan shows the spatial index seek at 9% of the cost, followed by a clustered index seek on my primary key at 83%.
I have tested STDistance and STBuffer - the latter is slightly faster.
SELECT COUNT(1)
FROM testdoserate2
WHERE [Location].STDistance('POINT (16.373813 48.230391)') < 1000000;

DECLARE @referencepoint geography = 'POINT (16.373813 48.230391)';
DECLARE @radius geography = @referencepoint.STBuffer(1000000);

SELECT COUNT(1)
FROM testdoserate2 WITH (NOLOCK, INDEX(IDX_Location))
WHERE [Location].Filter(@radius) = 1;
Is there anything else I can possibly do to speed this up?

Have a look at this thread, especially the info on custom tessellations and the "optimizing point queries" whitepaper:
https://social.technet.microsoft.com/Forums/sqlserver/en-US/3bbdc511-2a08-4075-b989-d98b70d0bedd/sql-server-express-performance-limitations-with-ogc-methods-on-geometry-instances?forum=sqlspatial
In addition: if your query returns a lot of rows (or a large count of them), and/or your query and your spatial index don't eliminate most of the candidate rows, you can have perf problems. A 1000 km radius does sound like it could produce a lot of rows, which causes a lot of seeks on the base table in your query plan (capture the *actual* plan and look at the "Clustered Index Seek" iterator just above and to the right of the "Filter" iterator).
Look into tuning the parameters of your spatial index using sp_help_spatial_geography_index. You're looking for the largest Percentage/Number_Of_Rows_Selected_By_Primary_Filter and Percentage/Number_Of_Rows_Selected_By_Secondary_Filter, which the default spatial index parameters don't always give you.
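For reference, a minimal sketch of that tuning loop against the table and index named in the question; the grid densities and CELLS_PER_OBJECT value below are illustrative assumptions to vary and re-test, not recommendations:

-- Analyze the current index against a sample shaped like the real query window.
DECLARE @referencepoint geography = 'POINT (16.373813 48.230391)';
DECLARE @sample geography = @referencepoint.STBuffer(1000000);
EXEC sp_help_spatial_geography_index
     @tabname = 'testdoserate2',
     @indexname = 'IDX_Location',
     @verboseoutput = 1,
     @query_sample = @sample;

-- Rebuild the spatial index with explicit tessellation parameters.
CREATE SPATIAL INDEX IDX_Location
ON testdoserate2 ([Location])
USING GEOGRAPHY_GRID
WITH (GRIDS = (LEVEL_1 = HIGH, LEVEL_2 = HIGH, LEVEL_3 = MEDIUM, LEVEL_4 = LOW),
      CELLS_PER_OBJECT = 16,
      DROP_EXISTING = ON);

Re-run the procedure after each rebuild; you want the primary and internal filters to settle as many rows as possible, since every candidate they can't confirm has to be fetched from the base table and re-checked, which is where that 83% clustered index seek cost comes from.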
Hope this helps, Bob

Similar Messages

  • RPURMP00 program takes too long

    Hi Guys,
    Need some help on this one guys. Not getting any where with this issue.
I am running RPURMP00 (Program to Create Third-Party Remittance Posting Run), and while running it in test mode for 1 employee it takes too long.
I ran this in the background during off hours, but it takes 19,000+ seconds to run and then cancels.
The long text message is "No entry in table T51R6_FUNDINFO (Remittance detail table for all entities) for key 0002485844" and "Job cancelled after system exception ERROR_MESSAGE".
I checked the program, found a nested loop within it (include RPURMP02), and decided to debug it with a breakpoint.
It short-dumped; here is the ST22 message and source code extract.
---- Message ----
"Time limit exceeded."
"The program "RPURMP00" has exceeded the maximum permitted runtime without interruption and has therefore been terminated."
---- Source code extract ----
Include RPURMP02
  172 *&---------------------------------------------------------------------*
  173 *&      Form  get_advice_info
  174 *&---------------------------------------------------------------------*
  175 *       text
  176 *----------------------------------------------------------------------*
  177 *  -->  p1        text
  178 *  <--  p2        text
  179 *----------------------------------------------------------------------*
  180 FORM get_advice_info .
  181
  182 * get information for advice form only if vendor sub-group and
  183 * employee detail is maintained
  184   IF ( NOT t51rh-lifsg IS INITIAL ) AND
  185      ( NOT t51rh-hrper IS INITIAL ).
  186
  187 *   get remittance items employee number
  188     SELECT * FROM t51r4 WHERE remky = t51r5-remky. "#EC CI_GENBUFF "SAM0632658
  189 *     get payroll seqno determined by PERNR and RDATN
>>>>>       SELECT * FROM t51r8 WHERE pernr = t51r4-pernr
  191                             AND rdatn = t51r5-rdatn
  192                             ORDER BY PRIMARY KEY. "#EC CI_GENBUFF
  193         EXIT.
  194       ENDSELECT.
    Has anyone ever come across this situation? Any input from anyone on this?
    Regards.
    CJ

    Hi,
    What is your SAP version?
Have you checked whether there are any OSS notes on this performance issue?
    Regards,
    Atish

  • Why ipad 2 / iphone 3g resetting takes too long?

I'm trying to reset and delete all content on my iPad and iPhone 3G by choosing the reset options under General. Why does it take so long to reset my iPad / iPhone 3G? It has been almost 2 days that my iPad has been stuck on the Apple logo / looping circle, and I am still waiting for my iPad / iPhone 3G to return to the main screen. What should I do? Thanks...

    There is no other way except to restore the iPad - plain and simple. You have to restore the device within iTunes. You want to use the same computer that you always sync with so that you can restore your app data and settings. You can restore with any other computer, but you will lose everything on the iPad.
    You will need to use recovery mode
    iPad: Unable to update or restore

  • Sql Query takes too long to enter into the first line

    Hi Friends,
I am using SQL Server 2008 and running a query to fetch data from the database. When I run it the first time after executing "DBCC FREEPROCCACHE" to clear the cache, it takes too long (7 to 9 seconds) to enter the first line of the stored procedure. Once it enters the first statement of the SP, it fetches the data within a second, so I don't think there is a problem with the SQL query itself. Kindly let me know if you know the reason behind this.
Sample Example:
CREATE PROCEDURE Sp_Name
AS
BEGIN
    PRINT GETDATE();
    -- SQL statements for fetching the data
    PRINT GETDATE();
END
In the above example, there is no difference between the first date and the second date.
Please help me troubleshoot this problem.
    Thanks & Regards,
    Rajkumar.R

"When I run it the first time after executing "DBCC FREEPROCCACHE" to clear the cache, it takes too long (7 to 9 seconds)"
Additional to Manoj: DBCC FREEPROCCACHE clears the procedure cache, so every stored procedure must be newly compiled on its first call.
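To see the compile cost separately from the run cost, a quick check (using the placeholder procedure name from the example above) could look like this:

SET STATISTICS TIME ON;
EXEC Sp_Name;   -- first call after DBCC FREEPROCCACHE: expect a large "parse and compile time"
EXEC Sp_Name;   -- second call: the plan is cached, so compile time should be near zero
SET STATISTICS TIME OFF;

If the messages output shows the 7 to 9 seconds as "SQL Server parse and compile time" on the first call only, then plan compilation, not data access, is the delay.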
    Olaf Helper

  • Displaying PathGeometry into canvas take too long time

Hi everyone,
I am working on spatial data in a WPF application, and I would like to draw a map from data queries.
I get my data and I am able to draw it on a canvas with scroll bars and zoom.
My problem is that when I zoom or scroll, the canvas takes a long time to redraw the map.
The big problem is here:
Dim poly As New Polygon()
poly.Stroke = Brushes.Red
poly.Fill = Brushes.Orange
poly.StrokeThickness = 5
Dim colPoint As New PointCollection()
' Add every point of the geometry, offset by the map extent.
For i = 1 To geo.STNumPoints
    colPoint.Add(New Point(geo.STPointN(i).STX.Value - extentA, geo.STPointN(i).STY.Value - extentB))
Next
' Close the ring by repeating the first point.
colPoint.Add(New Point(geo.STPointN(1).STX.Value - extentA, geo.STPointN(1).STY.Value - extentB))
poly.Points = colPoint
masterCanvas.Children.Add(poly)
So the canvas is "masterCanvas", and I add each Polygon one by one. If possible, I would like to gather all the polygons into one "image" so that when the canvas refreshes it only has to draw one thing.
    Regards.

    Hi,
    I found that you have posted it in the forum Acamar suggested.
    http://social.msdn.microsoft.com/Forums/vstudio/en-US/e2d50b4e-0d76-4a3c-b229-521b648b54b3/displaying-pathgeometry-into-canvas-take-too-long-time?forum=wpf#e2d50b4e-0d76-4a3c-b229-521b648b54b3
Please just focus on that thread, and you can mark any reply that is helpful as the answer.
    Thanks for your understanding.
    Regards.

  • RTF export takes too long

    Hi all,
Do you know why a report with a lot of data and many pages (about 300), when exported in PDF format, takes a short time (1 min), but when exported in RTF format takes too long (even 10 minutes) and sometimes even times out?
Any ideas?
    Thanks

    re: Paul.  Not true, or shouldn't be.  XDCAM HD timeline, XDCAM HD output.
    re: Michael.  Well, close.  I mixdown the edited timeline, then place that into another sequence where I apply a Broadcast Safe filter and hit it with a bit of audio compression.  I render that, ridding myself of the dreaded "red lines".  However, when I splat-E, I get red lines all over again while I'm exporting.  When the export is done, the red lines disappear.  Now as an aside, I do experience actual "missing" render files, but that occurs when I quit a project and then re-open it later...they sometimes don't seem to re-link.  As I mentioned in this post, I have a feeling that's because this system is set up in the worst way possible (a single RAID5 which acts as system drive and media storage).
    Don't blame me, I'm just trying to work within the confines of what I'm given. My major concern here is that it seems to be getting worse.

  • Drill Through reports takes too long

    Hi all,
I need some suggestions/help with our drill-through reports. We are using Hyperion 11.1.1.3 and the cube is ASO.
We have drill-through reports set up in Essbase Studio for drilling down from Essbase to an Oracle database. It takes too long (about 30 minutes to fetch 1000 records), and the query is simple.
What changes can we make to bring this time down? Please advise.
    Thanks.

    Hi Glenn,
We tried optimizing the drill-through SQL query, but while running it directly in TOAD takes 23 secs, doing the drill-through on the same intersection takes more than 25 mins. The following is our query structure:
SELECT *
FROM "Table A" cp_594
INNER JOIN "Table B" cp_595
        ON cp_594.key = cp_595.key
WHERE Upper(cp_595."Dim1") IN
      (SELECT Upper(CHILD)
         FROM (SELECT * FROM DIM_TABLE_1 WHERE CUBE = 'ALL')
        WHERE CONNECT_BY_ISLEAF = 1
        START WITH PARENT = $$Dim1$$
        CONNECT BY PRIOR CHILD = PARENT
       UNION ALL
       SELECT Upper(CHILD)
         FROM DIM_TABLE_1
        WHERE CUBE = 'ALL'
          AND REPLACE('GL_' || CHILD, 'GL_IC_', 'IC_') = $$Dim1$$)
  AND -- the same pattern repeats for 5 more dimensions
Can you suggest some improvements? Please advise.
    Thanks

  • Update statement takes too long to run

    Hello,
I am running this simple update statement, but it takes too long. It ran for 16 hours and then I cancelled it; it still was not finished. The destination table that I am updating has 2.6 million records, but I am only updating 206K of them. If I add ROWNUM < 20 to the update statement, it works just fine and updates the right column with the right information. Do you have any ideas what could be wrong with my update statement? I am also using a DB link, since the CAP.ESS_LOOKUP table resides in a different DB from the destination table. We are running Oracle 11g.
UPDATE DEV_OCS.DOCMETA IPM
SET IPM.XIPM_APP_2_17 = (SELECT DISTINCT LKP.DOC_STATUS
                           FROM CAP.ESS_LOOKUP@... LKP
                          WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1
                            AND IPM.XIPMSYS_APP_ID = 2)
WHERE IPM.XIPMSYS_APP_ID = 2;
    Thanks,
    Ilya

    matthew_morris wrote:
In the first SQL, the SELECT against the remote table was a correlated subquery. The 'WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND IPM.XIPMSYS_APP_ID = 2' means that the subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated. This might have meant thousands of iterations, meaning a great deal of network traffic (not to mention each performing a DISTINCT operation). Queries where the data is split between two or more databases are much more expensive than queries using only tables in a single database.
Sorry to disappoint you again, but a WITH clause by itself doesn't prevent the "subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated" behavior. For example:
    {code}
    SQL> set linesize 132
    SQL> explain plan for
    2 update emp e
    3 set deptno = (select t.deptno from dept@sol10 t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3247731149
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
    | 1 | UPDATE | EMP | | | | | | |
    | 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
    | 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
    PLAN_TABLE_OUTPUT
    Remote SQL Information (identified by operation id):
    3 - SELECT "DEPTNO" FROM "DEPT" "T" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
    16 rows selected.
    SQL> explain plan for
    2 update emp e
    3 set deptno = (with t as (select * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3247731149
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
    | 1 | UPDATE | EMP | | | | | | |
    | 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
    | 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
    PLAN_TABLE_OUTPUT
    Remote SQL Information (identified by operation id):
    3 - SELECT "DEPTNO" FROM "DEPT" "DEPT" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
    16 rows selected.
    SQL>
    {code}
As you can see, the WITH clause by itself guarantees nothing. We must force the optimizer to materialize it:
    {code}
    SQL> explain plan for
    2 update emp e
3 set deptno = (with t as (select /*+ materialize */ * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3568118945
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | UPDATE STATEMENT | | 14 | 42 | 87 (17)| 00:00:02 | | |
    | 1 | UPDATE | EMP | | | | | | |
    | 2 | TABLE ACCESS FULL | EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
    | 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
    | 4 | LOAD AS SELECT | SYS_TEMP_0FD9D6603_1CEEEBC | | | | | | |
    | 5 | REMOTE | DEPT | 4 | 80 | 3 (0)| 00:00:01 | SOL10 | R->S |
    PLAN_TABLE_OUTPUT
    |* 6 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 | | |
    | 7 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6603_1CEEEBC | 4 | 80 | 2 (0)| 00:00:01 | | |
    Predicate Information (identified by operation id):
    6 - filter("T"."DEPTNO"=:B1)
    Remote SQL Information (identified by operation id):
    PLAN_TABLE_OUTPUT
    5 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'SOL10' )
    25 rows selected.
    SQL>
    {code}
I do know the materialize hint is not documented, but I don't know any other way, besides splitting the statement in two, to materialize it.
    SY.
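For what it's worth, a sketch of one way to avoid the per-row remote round trips entirely: pull the lookup across the link once and drive the update from a join with MERGE. The db link name is a placeholder, and it assumes each DOC_NUM maps to a single DOC_STATUS in CAP.ESS_LOOKUP (otherwise the MERGE raises ORA-30926):

-- Fetch the remote lookup once, then update matching rows via a join.
MERGE INTO DEV_OCS.DOCMETA IPM
USING (SELECT DISTINCT DOC_NUM, DOC_STATUS
         FROM CAP.ESS_LOOKUP@your_db_link) LKP  -- placeholder link name
ON (IPM.XIPM_APP_2_1 = LKP.DOC_NUM AND IPM.XIPMSYS_APP_ID = 2)
WHEN MATCHED THEN
  UPDATE SET IPM.XIPM_APP_2_17 = LKP.DOC_STATUS;

Note one semantic difference: this only touches rows that have a match, whereas the original correlated subquery also set XIPM_APP_2_17 to NULL on unmatched rows.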

  • Why finding replication stream matchpoint takes too long

    hi,
I am using BDB JE 5.0.58 HA (a two-node group, with a 6 GB JVM on each node).
Sometimes a BDB node takes too long to restart (about 2 hours).
When this occurs, I capture the process stack of the BDB JVM process with jstack.
After analyzing the stack, I found "ReplicaFeederSyncup.findMatchpoint()" taking all the time.
I want to know why this method takes so much time, and how I can avoid this bad case.
    Thanks.
Post edited by liang_mic.

    Liang,
2 hours is indeed a huge amount of time for a node restart. It's hard to be sure without a more detailed analysis of your log as to what may be going wrong, but I do wonder if it is related to the problem you reported in "outOfMemory error presents when cleaner occurs" [#21786]. Perhaps the best approach is for me to describe in more detail what happens when a replicated node is connecting with a new master, which might give you more insight into what is happening in your case.
    The members of a BDB JE HA replication group share the same logical stream of replicated records, where each record is identified with a virtual log sequence number, or VLSN. In other words, the log record described by VLSN x on any node is the same data record, although it may be stored in a physically different place in the log of each node.
    When a replica in a group connects with a master, it must find a common point, the matchpoint, in that replication stream. There are different situations in which a replica may connect with a master. For example, it may have come up and just joined the group. Another case is when the replica is up already but a new master has been elected for the group. One way or another, the replica wants to find the most recent point in its log, which it has in common with the log of the master. Only certain kinds of log entries, tagged with timestamps, are eligible to be used for such a match, and usually, these are transaction commits and aborts.
    Now, in your previous forum posting, you reported an OOME because of a very large transaction, so this syncup issue at first seems like it might be related. Perhaps your replication nodes need to traverse a great many records, in an incomplete transaction, to find the match point. But the syncup code does not blindly traverse all records, it uses the vlsn index metadata to skip to the optimal locations. In this case, even if the last transaction was very long, and incomplete, it should know where the previous transaction end was, and find that location directly, without having to do a scan.
    As a possible related note, I did wonder if something was unusual about your vlsn index metadata. I did not explain this in outOfMemory error presents when cleaner occurs but I later calculated that the transaction which caused the OOME should only have contained 1500 records. I think that you said that you figured out that you were deleting about 15 million records, and you figured out that it was the vlsn index update transaction which was holding many locks. But because the vlsn index does not record every single record, it should only take about 1,500 metadata records in the vlsn index to cover 15 million application data records. It is still a bug in our code to update that many records in a single transaction, but the OOME was surprising, because 1,500 locks shouldn't be catastrophic.
    There are a number of ways to investigate this further.
    - You may want to try using a SyncupProgress listener described at http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/rep/SyncupProgress.html to get more information on which part of the syncup process is taking a long time.
- If that confirms that finding the matchpoint is the problem, we have an unadvertised utility, meant for debugging, to examine the vlsn index. The usage is as follows; you would use the -dumpVLSN option and run this on the replica node. But this would require our assistance to interpret the results. We would be looking for the records that mention where "sync" points are, and would correlate that to the replica's log, and that might give more information if this is indeed the problem, and why the vlsn index was not acting to optimize the search.
    $ java -jar build/lib/je.jar DbStreamVerify
    usage: java { com.sleepycat.je.rep.utilint.DbStreamVerify | -jar je-<version>.jar DbStreamVerify }
    -h <dir> # environment home directory
    -s <hex> # start file
    -e <hex> # end file
    -verifyStream # check that replication stream is ascending
    -dumpVLSN # scan log file for log entries that make up the VLSN index, don't run verify.
    -dumpRepGroup # scan log file for log entries that make up the rep group db, don't run verify.
    -i # show invisible. If true, print invisible entries when running verify mode.
    -v # verbose

  • Indexing and categorization takes too long - why?

I have set up a news publishing system where journalists access a folder and publish their news there using an XML form. So far so good.
Readers of the news access a page with a KM navigation iView that points to a taxonomy folder. The query-based taxonomy is set up to categorize news based on property values chosen by the journalist in the XML form. This also works as expected.
Lifetime (time-based publishing) is activated for the folder the journalists use to publish their news. The corresponding start and end dates/times that set an article's lifetime are entered by the journalists. This also works as expected.
BUT: after saving each article, the system takes awfully long to actually categorize the news and thereby make the articles visible to anyone other than the journalists. It can be as long as half an hour or so, and this repeats every time an article is edited.
I also feel that basic indexing of all other documents in the portal takes too long. I want all new documents to be indexed as soon as they're saved.
Any tips?
    Henning


  • Takes too long to hibernate when I close the lid - Also random device noise when it boots up

    Hello guys.
Ever since I wiped the machine, I've been having these two problems. When I close the lid, it used to go to sleep straight away, but now I can see the sleep light (and the power button) flash and flash before it goes to sleep.
When waking up, it goes through the Lenovo startup screen and "Resuming Windows", and then it asks for a password; before, I could open the lid and it would ask me for the password straight away. I know it was going to sleep because I could hear the beep straight away when I closed and opened it, but now it just takes too long.
Also, every time I boot into Windows or resume from a sleep state, I can hear the device noise, like something being plugged in or out. But nothing is being plugged in or out at the time. I can't get to Device Manager quickly enough to see what is causing it.
    But all drivers seem okay.
    Thanks in advance.
    Sam.
EDIT: Also noticed that when the lid is closed, the laptop randomly turns off (I hear the beep) and then turns back on again.
Weird.
T420 model number: 4180-PR1 - OS: Windows 7 Pro 64 bit

    Hi Sam,
is this to do with the T420, model number 4180-PR1, with Windows 7 64-bit installed, as in the other thread you posted in?
Maybe you could pop that information into your signature; members like to know which system and OS are involved. At the top, next to Sign Out, choose My Settings > Personal Profile > Personal Information - Signature.
Andy

  • [SOLVED] initramfs takes too long to load

    Using systemd-analyze I found out that initramfs takes too long to load:
    463ms (kernel) + 11875ms (initramfs) + 6014ms (userspace) = 18353ms
    My HOOKS array in mkinitcpio.conf is the following:
    HOOKS="base udev autodetect modconf block encrypt lvm2 filesystems usbinput fsck"
I suspect that the long loading time of initramfs is caused by partition decryption (I am using dm-crypt / LUKS on top of LVM).
Is there any tool that can report the loading times of the HOOKS separately, just like systemd-analyze plot does for userspace?
    Last edited by nasosnik (2013-01-21 14:45:28)

    cfr wrote:
    In what sense is it "too long"?
    I'm just wondering: suppose that you find out that it is because you are using encryption. Would you then switch to a non-encrypted system? Would you make better use of the extra seconds you might save on those rare occasions when you reboot? Even if you reboot twice a day, you might save what? Suppose you would even save 5s per boot. That will give you a whole extra 1 minute and 10 seconds a week. Assuming you don't multitask. Obviously if you multitask, the gain will be less. Would that make it worth risking the security of your data?
EDIT: I didn't mean this to sound as confrontational as it does now that I read it back. It just always puzzles me that people are so concerned about shaving a few milliseconds here and there. I always hope that they put the time they save to good use, but then I realise that the time they spent shaving the milliseconds off will obviously outstrip the time saved.
I really don't care about the boot time, for the reasons you have already mentioned. I just want to figure out if there is any misconfiguration. I am just investigating why initramfs takes significantly longer to load compared with my desktop Arch installation (non-encrypted), which shows 1316 ms for initramfs. My desktop has a Pentium 4 CPU and the laptop has a quad-core i7.
roentgen wrote: 11875ms (initramfs) means the time it takes you to type the password.
systemd-analyze does not count the time spent typing the password.

  • Sometimes my computer takes too long to connect to new website. I am running a pretty powerful work program at same time, what is the best solution? Upgrading speed from cable network, is it a hard drive issue? do I need to "clean out" the computer?

Many times my computer takes too long to connect to a new website. I have wireless internet (Time Capsule), and I am running a pretty powerful real-time financial work program at the same time. What is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I only need to "clean out" the computer? Or all of the above? Not too computer savvy. It is a MacBook Pro, OS X 10.6.8 (late 2010).

    Almost certainly none of the above!  Try each of the following in this order:
    Select 'Reset Safari' from the Safari menu.
    Close down Safari;  move <home>/Library/Caches/com.apple.Safari/Cache.db to the trash; restart Safari.
    Change the DNS servers in your network settings to use the OpenDNS servers: 208.67.222.222 and 208.67.220.220
    Turn off DNS pre-fetching by entering the following command in Terminal and restarting Safari:
              defaults write com.apple.safari WebKitDNSPrefetchingEnabled -boolean false

  • Accessing BKPF table takes too long

    Hi,
Is there another way to write a faster, more optimized SQL query to access the table BKPF? Or are there other, smaller tables that contain the same data?
I'm using this:
       select bukrs gjahr belnr budat blart
       into corresponding fields of table i_bkpf
       from bkpf
       where bukrs eq pa_bukrs
       and gjahr eq pa_gjahr
       and blart in so_DocTypes
       and monat in so_monat.
    The report is taking too long and is eating up a lot of resources.
    Any helpful advice is highly appreciated. Thanks!

    Hi max,
    I also tried using BUDAT in the where clause of my sql statement, but even that takes too long.
        select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat in so_budat.
I also tried accessing the table per day, but that didn't work either...
       while so_budat-low le so_budat-high.
         select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat eq so_budat-low.
         so_budat-low = so_budat-low + 1.
       endwhile.
I think our BKPF table contains a very large set of data. Is there any other table besides BKPF where we could get all accounting document numbers in a given period?

  • Report Takes Too Long Time

    Hi!
I am in trouble.
The following is the query:
    SELECT inv_no, inv_name, inv_desc, i.cat_id, cat_name, i.sub_cat_id,
    sub_cat_name, asset_cost, del_date, i.bl_id, gen_desc bl_desc, p.prvcode, prvdesc, cur_loc,
    pldesc, i.pmempno, pmname, i.empid, empname
    FROM inv_reg i,
    cat_reg c,
    sub_cat_reg s,
    gen_desc_reg g,
    ploc p,
    province r,
    pmaster m,
    iemp_reg e
    WHERE i.sub_cat_id = s.sub_cat_id
    AND i.cat_id = s.cat_id
    AND s.cat_id = c.cat_id
    AND i.bl_id = g.gen_id
    AND i.cur_loc = p.plcode
    AND p.prvcode = r.prvcode
    AND i.pmempno = m.pmempno(+)
    AND i.empid = e.empid(+)
    &wc
    order by prvdesc, pldesc, cat_name, sub_cat_name, inv_no
This query returns 32,000 records.
When I run it in Reports 10g, it takes 10 to 20 minutes to generate the report.
How can I optimize it?

    Hi Waqas Attari
    Pls study & try this ....
    When your query takes too long ...
    hope it helps....
    Regards,
    Abdetu...
