Fetching SDO_GEOMETRY objects very slow

Hi,
while fetching all rows from a table
with an SDO_GEOMETRY column
(like: "SELECT LINE_ID, GEOMETRY FROM LINES"),
the rate at which rows are returned from the database
drops off dramatically.
I receive the first 5000 rows in about 5 seconds,
but the next 5000 rows take about a minute.
Can anyone help me get consistently good performance for the whole fetch?
(I'm using Oracle 8i on Windows 2000.)

Hi,
This could be a lot of different things - lack of memory on the system, bigger (more coordinates) geometries in the second 5000 rows, etc.
Also, 8.1.7 is pretty old, and more recent versions of Oracle do perform better with objects in general, and Spatial in particular.
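A quick way to test the "bigger geometries" theory is to compare vertex counts across the table. This is only a sketch, assuming the LINES table from the question; SDO_UTIL.GETNUMVERTICES is not available in 8.1.7, so there you would have to count GEOMETRY.SDO_ORDINATES in a small PL/SQL block instead:

-- Hypothetical check: are some geometries much larger than others?
SELECT MIN(SDO_UTIL.GETNUMVERTICES(geometry)) AS min_vertices,
       AVG(SDO_UTIL.GETNUMVERTICES(geometry)) AS avg_vertices,
       MAX(SDO_UTIL.GETNUMVERTICES(geometry)) AS max_vertices
FROM   lines;

If the maximum is far above the average, the slowdown in the later batches may simply reflect larger geometries being transferred.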

Similar Messages

  • Fetching data is very slow in workspace for planning application

    Hello Everyone,
    I am working in the web environment of a planning application, and recently I have seen some issues: fetching data through web forms is very slow, and the system locks up automatically and kicks me out of the workspace when I refresh. Could you suggest what the reasons for these issues might be?
    Thanks in advance

    Hi,
    Sounds like your form is very large, or perhaps there are a lot of dynamic calcs on the retrieval. How many rows and columns do you have on the form? Do you have dense or sparse dimensions as rows or columns?
    Brian

  • Why is first-time loading of the ReportViewer object very slow

    I am interested to know why a .NET application loads the ReportViewer object so slowly the first time. It can take over 60 s. Is this a performance issue, or do we need to preload the object?

    Hello Steve,
    if you view a report for the first time, the report viewer needs to load all the DLLs needed to connect to the database and to render and display the report. These DLLs stay in memory for a certain time afterwards.
    That is why the report takes much longer to display the first time.
    Falk

  • "Fetch Next from" very slow

    Hi all ~ There is a cursor inside a stored proc, and in the middle of the loop it became extremely slow.
    Looking into "sysprocesses",
    the "FETCH NEXT FROM cursor_name" statement SUDDENLY uses a huge number of logical reads.
    I am using SQL Server 2008 R2 SP2.
    Has anyone met this kind of case before?

    Hi sakuri_db,
    I’m writing to follow up with you on this post. Was the problem resolved after performing our action plan steps? If you would like to, you can post a reply to share your solution and I will mark it as the answer. That way, other community members can benefit
    from your sharing. If it is not resolved, as in other posts, please post more information or code for analysis.
    Regards,
    Sofiya Li
    TechNet Community Support

  • Documents with Smart Objects - Very slow to open and Save - CS6 Photoshop

    When opening and saving documents with smart objects, Photoshop freezes: the Adobe PS loader (circle dots) is replaced and the system loader (multi-coloured wheel of death) spins for 30 seconds or more.
    What I've tried so far, based on looking at various posts:
    Photoshop Preferences:
    Save in Background: off
    Maximise PSD and PSB file compatibility: never
    Cache Tile Size: 128k
    Advanced Graphic Processor Settings: Basic & Normal
    Layer Panel options: No Thumbnail
    Observations and workarounds to date:
    The file size and number of smart objects affect the problem exponentially, i.e. the more smart objects you have, the worse it gets.
    These files worked perfectly in PS CS5
    It also happens on files natively created in PS CS6
    The CPU is maxing out at 100% while PS loads
    Closing or opening Suitcase has no effect.
    System:
    iMac 27-inch, Mid 2011
    Processor  3.4 GHz Intel Core i7
    Memory  16 GB 1333 MHz DDR3
    Graphics  AMD Radeon HD 6970M 1024 MB
    Mac OS X Lion 10.7.5 (11G63)
    Suitcase 4
    Anyone got any ideas? This is making me go nuts!

    A solution!
    It turns out the problem in my case was in fact Suitcase. Previously, I'd tried turning it off, but that didn't fix the problem, so this time I uninstalled it completely and the problem disappeared. I then began re-adding it (installed 15.0.1, upgraded it, etc.) and the problem resurfaced with the addition of the Photoshop-specific plugin. Deleting that plugin solved the problem. So it seems that "disabling" Suitcase by stopping the TypeCore doesn't actually disable all of the tentacles it sticks into your system.
    You can find the plugin here: Applications / Adobe Photoshop CS6 / Plug-ins / Automate / ExtensisFontManagementPSCS6.plugin
    (After a restart, I also had to delete the font cache, as described here http://helpx.adobe.com/photoshop/kb/troubleshoot-fonts-photoshop-cs5.html but your mileage may vary.)
    Alternately, if you don't want to delete the plugin, disabling it from within Photoshop seems to work as well. To do that, go to File > Automate > Extensis, click Preferences..., then deselect Enable Suitcase Fusion 4 Auto-Activation.
    Fortunately, the plugin doesn't seem necessary at all to use the core functionality of Suitcase (enabling and disabling fonts) in Photoshop. I didn't even know what these app-specific plugins did until researching this problem, and I still don't quite understand the point of them. I guess they allow you to let the apps for which they're installed do a little bit more of their own management (enable a font via Suitcase that isn't enabled system-wide), but that seems like more control than I need--if I'm enabling a font, I want all my software to be able to use it.
    Anyway, the problem seems to be completely solved on my system now, though I just did all this, so more testing over the next few days is required. I'll post here if any issues crop up. I'm interested in hearing if this solves it for anyone else as well.

  • Very slow query on xml db objects

    Hi, I'm a DBA (new to XML DB) trying to diagnose some very slow queries against XML tables. The following takes 2+ hours to return a count of ~700,000 records. We have some that take 10-15 hours.
    select count(*)
    from MEDTRONICCRM a,
         table(xmlsequence(extract(value(a), '/MedtronicCRM/Counters/Histogram'))) b,
         table(xmlsequence(extract(value(b), '/Histogram/Row'))) c,
         table(xmlsequence(extract(value(c), '/Row/Column'))) d
    The explain plan from a tkprof looks like this:
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=1586294 r=27724 w=0 time=334399181 us)
    761020 NESTED LOOPS (cr=1586294 r=27724 w=0 time=6285597846 us)
    209395 NESTED LOOPS (cr=1586294 r=27724 w=0 time=1294406875 us)
    16864 NESTED LOOPS (cr=1586294 r=27724 w=0 time=188247898 us)
    544 TABLE ACCESS FULL OBJ#(26985) (cr=1993 r=548 w=0 time=171380 us)
    16864 COLLECTION ITERATOR PICKLER FETCH (cr=0 r=0 w=0 time=82831582 us)
    209395 COLLECTION ITERATOR PICKLER FETCH (cr=0 r=0 w=0 time=939691917 us)
    761020 COLLECTION ITERATOR PICKLER FETCH (cr=0 r=0 w=0 time=3399611143 us)
    I noticed the following statement in a previous thread:
    "Indexing is not the answer here... The problem is the storage model. You are currently storing all of the collections as serialized LOBs. This means that all manipulation of the collection is done in memory. We need to store the collections as nested tables to get performant queries on them. You do this by using the store VARRAY as table clause to control which collections are stored as nested tables."
    Could this be the problem? How do I tell which storage model we are using? (A data dictionary check for this is sketched at the end of this thread.)
    Thanks ,
    Heath Henjum

    With 10g I get the following
    SQL> begin
    2 dbms_xmlschema.registerSchema
    3 (
    4 schemaURL => '&3',
    5 schemaDoc => xdbURIType('/home/&1/xsd/&4').getClob(),
    6 local => TRUE,
    7 genTypes => TRUE,
    8 genBean => FALSE,
    9 genTables => &5
    10 );
    11 end;
    12 /
    old 4: schemaURL => '&3',
    new 4: schemaURL => 'MedtronicCRM.xsd',
    old 5: schemaDoc => xdbURIType('/home/&1/xsd/&4').getClob(),
    new 5: schemaDoc => xdbURIType('/home/OTNTEST/xsd/MedtronicCRM.xsd').getClob(),
    old 9: genTables => &5
    new 9: genTables => TRUE
    begin
    ERROR at line 1:
    ORA-31154: invalid XML document
    ORA-19202: Error occurred in XML processing
    LSX-00175: a complex base within "simpleContent" must have simple content
    ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 17
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 26
    ORA-06512: at line 2
    SQL> quit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
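    As a follow-up to the storage-model question above, here is a minimal data dictionary check. It is only a sketch, assuming the default object-relational table MEDTRONICCRM from the query: if schema registration generated nested tables for the collections, they will show up in USER_NESTED_TABLES; if nothing is returned, the collections are most likely still stored as serialized VARRAYs (LOBs), which would match the expensive COLLECTION ITERATOR PICKLER FETCH steps in the tkprof output.

    -- Check whether any collections under MEDTRONICCRM are stored as nested tables
    SELECT parent_table_name, parent_table_column, table_name
    FROM   user_nested_tables
    WHERE  parent_table_name = 'MEDTRONICCRM';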

  • Performance of fetching BLOB data over the network is very slow

    Dear Experts,
    We are deploying a GIS application which handles BLOB datatypes. The database server runs RHEL with Oracle 11gR2. The application uses ASP.NET 3.5 and ODP.NET to connect to Oracle. Here is a performance chart which shows that performance is very slow when the application and database server are on different subnetworks.
    Case      Login Time
    Same server (both on 192.168.9.23)     20-25 sec
    Same subnet (DB: 192.168.9.25, App: 192.168.9.23)     60-80 sec
    Different subnet (DB: 192.168.11.153, App: 192.168.9.23)     More than 5 minutes
    So what is the reason behind this, and how can I improve performance if I want to keep the application and database on different machines?
    Thanks in advance.

    Also, look into the various LOB settings in ODP.NET.
    I believe that by default ODP.NET fetches LOBs in separate round trips, but you can change this by adjusting certain properties or using certain methods - apologies, a bit imprecise.
    If you have a network issue, then doing more round trips than you need to is just going to exacerbate the situation.
    Trace the session - if it's networking or round trips, you should see symptoms of it in a 10046 trace (a sketch of enabling the trace follows below).
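    A minimal sketch of enabling that trace. The SID and serial# below are placeholders you would look up in V$SESSION for the application's connection; DBMS_MONITOR is available from 10g onwards, so it applies to the 11gR2 database described here:

    -- From within the session itself (e.g. issued by the application right after connecting):
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

    -- Or for another session, identified from V$SESSION (hypothetical SID 123, serial# 45):
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,
                                        serial_num => 45,
                                        waits      => TRUE,
                                        binds      => TRUE);
    END;
    /

    The resulting trace file will show whether the elapsed time is going into SQL*Net round trips and waits rather than server-side work.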

  • Business Objects in vmware cluster very slow

    Post Author: aboorkuma
    CA Forum: Administration
    Hi
    I recently moved BO from a standalone server to a cluster in VMware. Users are facing problems when accessing the
    reports: sometimes the page cannot be displayed or a timeout occurs, and now it is very, very slow. What are all the parameters I need to tune
    to bring the system back up to speed?
    rds

    Hello Smitha,
    please find some [end user documents|https://websmp106.sap-ag.de/~form/sapnet?_SHORTKEY=01100035870000713358&_SCENARIO=01100035870000000202] about lots of our products here.
    They will give you a brief overview.
    Please also visit [our BI community|https://boc.sdn.sap.com/] to see more solutions, samples and docs.
    Best regards
    Falk

  • Business Object Infoview very slow on my PC

    We are using BO XI 3.0 on a Windows server with an Oracle database on a Unix platform.
    We have built several dashboards for data analysis, each one with "interactive metric trend" and multidimensional query analysis.
    We find Infoview very slow to display the performance data and also to run and retrieve the multidimensional query data. Keep in mind that the DB is not big (fewer than 100 K rows).
    The users are using Windows Internet Explorer version 6 to access Infoview.
    Are there any requirements on the client side, such as required RAM or a specific version of Java?
    Is there any tool for analysing Infoview performance?
    Thank you.

    Can you try increasing your client-side Java virtual memory by setting the following variable:
    JAVA_OPTS = -Xmx512m
    Also, check your server's performance. It should have enough space on the drive on which BO is installed; also check memory. You can shut down servers you don't require, like Dashboard, PM, Desktop Intelligence and Crystal Reports, if you are not going to use them.
    --Kuldeep

  • SDO_AGGR_UNION very slow

    Hello friends,
    I'm having problems with the SDO_AGGR_UNION function in Oracle 11.2.0.3. The performance is very bad.
    I need to aggregate several polygons (645) using SDO_AGGR_UNION.
    My table is:
    SQL> DESC GC_MUNICIPALITY;
    Name Null? Type
    STATE_ID NOT NULL NUMBER(5)
    GEOLOC SDO_GEOMETRY()
    First I ran the following query:
    SELECT SDO_AGGR_UNION(SDOAGGRTYPE(geoloc, 0.5)) geoloc
    FROM GC_MUNICIPALITY
    WHERE STATE_ID = 35
    Execution time: 57 minutes
    Then I ran the aggregate union with groupings:
    SELECT SDO_AGGR_UNION(SDOAGGRTYPE(geoloc,0.5)) geoloc
    FROM
    (SELECT SDO_AGGR_UNION(SDOAGGRTYPE(geoloc,0.5)) geoloc
    FROM
    (SELECT SDO_AGGR_UNION(SDOAGGRTYPE(geoloc,0.5)) geoloc
    FROM
    (SELECT SDO_AGGR_UNION(SDOAGGRTYPE(geoloc, 0.5)) geoloc
    FROM GC_MUNICIPALITY
    WHERE STATE_ID = 35
    GROUP BY MOD(ROWNUM, 15))
    GROUP BY MOD (ROWNUM, 7))
    GROUP BY MOD (ROWNUM, 2)
    Execution time: 15 minutes
    The second execution was faster than the first, but still very slow.
    If I use ArcGIS, PostGIS or JTS, the same aggregation executes in a few seconds.
    Why is this function so slow in Oracle?
    Is there another way to accomplish this aggregation with better performance?
    I read in another forum that if I convert from SDO_GEOMETRY to SDO_TOPO_GEOMETRY the aggregation is faster. However, SDO_AGGR_UNION works only with the SDO_GEOMETRY type. Is there any way to aggregate SDO_TOPO_GEOMETRY?
    Thanks!
    Edited by: user12000327 on 03/04/2012 09:30
    Edited by: user12000327 on 03/04/2012 09:33

    Siva,
    "This group by won't work as you hit that internal limit in the server."
    Patently! I provided this example because this is what people want in an aggregate and it is the way most people think of it.
    "So try the other approach described in the user guide using a function that creates a geometry array and passes it to the union function."
    OK, two more assertions:
    1. Working with users over the years I have discovered they can use SQL reasonably well, but I am surprised how many simply don't want to go down a PL/SQL function route!
    2. The example given is very opaque as to how to do a SELECT ... GROUP BY aggregation using sdo_aggr_set_union.
    Personally, I would prefer to expose the grouping value....
    CREATE OR REPLACE FUNCTION Set_Geometry(p_value in varchar2)
    RETURN SDO_GEOMETRY_ARRAY
    deterministic
    AS
      c_query   SYS_REFCURSOR;
      v_g       sdo_geometry;
      v_GeomArr sdo_geometry_array;
    BEGIN
      v_GeomArr := SDO_GEOMETRY_ARRAY();
      OPEN c_query FOR 'select a.geometry from admin a where a.blockcode = :1'
                 USING p_value;
       LOOP
        FETCH c_query into v_g;
         EXIT when c_query%NOTFOUND ;
         v_GeomArr.extend;
         v_GeomArr(v_GeomArr.count) := v_g;
       END LOOP;
       RETURN v_GeomArr;
    END;
    -- Which is called like so:
    select a.group_code, count(*) as aggrCount, sdo_aggr_set_union(set_geometry(a.group_code),0.005) as geoms
    from admin a
    where a.hierarchy_code = 'WA'
    group by a.group_code;
    But this is quite inflexible because it really isn't a generic function, but rather a function for a specific piece of SQL.
    Because of this I prefer something more generic:
    CREATE OR REPLACE FUNCTION Set_Geometry(p_cursor in SYS_REFCURSOR)
    RETURN SDO_GEOMETRY_ARRAY
    DETERMINISTIC
    AS
       v_geom    sdo_geometry;
       v_GeomArr sdo_geometry_array;
    BEGIN
       v_GeomArr := SDO_GEOMETRY_ARRAY();
       LOOP
        FETCH p_cursor INTO v_geom;
         EXIT when p_cursor%NOTFOUND ;
         v_GeomArr.extend;
         v_GeomArr(v_GeomArr.count) := v_geom;
       END LOOP;
       RETURN v_GeomArr;
    END;
    -- Results
    FUNCTION Set_Geometry compiled
    -- Now here's how to use it in a SELECT .... GROUP BY ...
    select group_code,
           count(*) as aggrCount,
           sdo_aggr_set_union(set_geometry(CURSOR(SELECT b.geometry FROM test_table b WHERE b.group_code= a.group_code)),0.005) as geoms
    from test_table a
    where a.hierarchy_code = 'WA'
    group by a.group_code;
    Now I agree SDO_AGGR_SET_UNION is much faster and, with my function, more flexible, but it is still ugly. (A sketch applying it to the GC_MUNICIPALITY query from the original question is appended at the end of this post.)
    "I am not an expert on this issue of that system limit of 30k sort area. When we talk to internal groups about it, they always
    ask us to show the customer-filed ER. So it will really help if customers like you file an official ER for this."
    Surely someone in the spatial team is an expert!
    I can't file an ER because my Oracle use is under an OTN license for development purposes. I have asked complaining customers to do so over the years, but for some reason they don't, so I am left to make a fool of myself on their behalf.
    "Having said that, the main problem with this approach is still the multiple copies of the data, which tends to take up
    most of the time. So we are coming up with a better way to call sdo_aggr_set_union that avoids these multiple copies of the data."
    In your UC presentation? It is good for the public to know that the OS team is always trying to make things better and faster.
    regards
    Simon
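    For completeness, a minimal sketch of applying the cursor-based Set_Geometry function above to the table from the original question. It is only a sketch: it assumes that function is compiled in the same schema as GC_MUNICIPALITY and reuses the 0.5 tolerance from the original query.

    select a.state_id,
           sdo_aggr_set_union(
             set_geometry(CURSOR(SELECT m.geoloc
                                 FROM   gc_municipality m
                                 WHERE  m.state_id = a.state_id)), 0.5) as geoloc
    from   gc_municipality a
    where  a.state_id = 35
    group by a.state_id;

    With the filter on STATE_ID = 35 this returns a single aggregated geometry for that state; the GROUP BY simply mirrors the pattern shown above.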

  • Why reports created in 8.5 and saved in XI.5 are very slow?

    Hello,
    Here is my problem :
    I worked with Crystal Reports 8.5 and my reports were speedy.
    I saved a report in a newer version (Crystal XI.5) and the report generation is very slow.
    If I save the same report (saved in 8.5) in version XI (not XI.5), the report is speedy.
    If I save the report from XI.5 to XI, there is no change (the report is slow).
    Have you ever seen this problem?
    The database used is Advantage Database Server 8.1.0.26
    Thanks for the help.
    Mickael

    Hi Mariusz,
    It is expected that the reports will run a bit slower in the newer version, as the entire report engine has undergone change after version 8.5.
    With the new version, database security has been enforced a lot more compared to version 8.5. Along with this, the SQL generation process has also undergone change, and the SQL which is now generated is much more optimized.
    Also, there are a lot of new features in terms of formatting the report, which can cause a delay in showing you the first page of the report. It may not just be the database access that is taking time.
    As suggested earlier, if you find the query generated in 8.5 was fetching records faster, you can surely create a command object in XI R2 to check if you get the same speed of data retrieval.
    Regards,
    Abhishek Jain.

  • Time Machine Very Slow (321 Days) Please Help!

    Hi all, I've got a 2012 MBP and have had problems with TM since day one.
    I'm running all the latest software with a brand new Time Capsule and it states that it will take over 300 days to back up.
    I originally used a NAS, and TM was very, very slow, along with my having to reformat and start again every few weeks, so I gave in and got a TC thinking it would solve my problems, but the problem has returned.
    I've tried reformatting, disabling spotlight, wired and wireless connections, different hard drives and still the problem persists. Occasionally it also fails to find the backup disk so I have to reformat and start again.
    It's driving me absolutely insane because, despite messing and messing and trying different solutions, the problem always seems to return at random. It will work great for a week or two, then it's back to square one.
    Can someone please restore my sanity and help?

    To update: I got up this morning and it was still stating some 22 days after copying a few GB in 18 hours, so I restarted (again). TM then stated it couldn't find the backup disk, so I again pointed it to the right disk, and it now appears to be stuck again after doing over a GB in a minute or two.
    Here's the info requested.
    31/01/2013 09:53:34.380 com.apple.backupd[398]: Starting manual backup
    31/01/2013 09:53:46.000 kernel[0]: nfs server localhost:/jiWSkBZ-Y-Uq-EgHJVYVPN: not responding
    31/01/2013 09:53:47.000 kernel[0]: nfs server localhost:/jiWSkBZ-Y-Uq-EgHJVYVPN: can not connect, error 61
    31/01/2013 09:54:16.262 KernelEventAgent[46]: tid 00000000 received event(s) VQ_DEAD (32)
    31/01/2013 09:54:16.263 KernelEventAgent[46]: tid 00000000 type 'mtmfs', mounted on '/Volumes/MobileBackups', from 'localhost:/jiWSkBZ-Y-Uq-EgHJVYVPN', dead
    31/01/2013 09:54:16.000 kernel[0]: nfs server localhost:/jiWSkBZ-Y-Uq-EgHJVYVPN: dead
    31/01/2013 09:54:16.263 KernelEventAgent[46]: tid 00000000 force unmount localhost:/jiWSkBZ-Y-Uq-EgHJVYVPN from /Volumes/MobileBackups
    31/01/2013 09:54:16.263 KernelEventAgent[46]: tid 00000000 found 1 filesystem(s) with problem(s)
    31/01/2013 09:54:22.198 com.apple.backupd[398]: Failed to get NSURLVolumeURLForRemountingKey for /Volumes/MobileBackups, error: Error Domain=NSCocoaErrorDomain Code=4 "The file “MobileBackups” doesn’t exist." UserInfo=0x7f9934a04da0 {NSURL=file://localhost/Volumes/MobileBackups/, NSFilePath=/Volumes/MobileBackups}
    31/01/2013 09:54:22.198 com.apple.backupd[398]: Attempting to mount network destination URL: afp://Dave%[email protected]/AirPort%20Disk
    31/01/2013 09:54:23.036 com.apple.backupd[398]: Mounted network destination at mount point: /Volumes/AirPort Disk using URL: afp://Dave%[email protected]/AirPort%20Disk
    31/01/2013 09:54:23.000 kernel[0]: ASP_TCP asp_tcp_usr_control: invalid kernelUseCount 0
    31/01/2013 09:54:23.000 kernel[0]: AFP_VFS afpfs_mount: /Volumes/AirPort Disk, pid 435
    31/01/2013 09:54:23.000 kernel[0]: AFP_VFS afpfs_mount : succeeded on volume 0xffffff80946e1008 /Volumes/AirPort Disk (error = 0, retval = 0)
    31/01/2013 09:54:48.061 mdworker[434]: Unable to talk to lsboxd
    31/01/2013 09:54:48.000 kernel[0]: Sandbox: sandboxd(442) deny mach-lookup com.apple.coresymbolicationd
    31/01/2013 09:54:51.281 sandboxd[442]: ([434]) mdworker(434) deny mach-lookup com.apple.ls.boxd
    31/01/2013 09:54:51.000 kernel[0]: nspace-handler-set-snapshot-time: 1359626093
    31/01/2013 09:55:28.923 mds[40]: (Error) Volume: Root store set to FSOnly with matching create! (loaded:1)
    31/01/2013 09:55:29.892 com.apple.backupd[398]: Disk image /Volumes/AirPort Disk/Dave’s MacBook Pro.sparsebundle mounted at: /Volumes/Time Machine Backups
    31/01/2013 09:55:29.937 com.apple.backupd[398]: Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
    31/01/2013 09:55:30.908 com.apple.backupd[398]: Forcing deep traversal on source: "Macintosh HD" (mount: '/' fsUUID: 74A87C50-39EA-3B75-AACA-0647768C78F6 eventDBUUID: 539B3BBD-544C-444D-9C2C-14001A19F631)
    31/01/2013 09:55:32.032 com.apple.backupd[398]: Didn't get valid migration dates.
    31/01/2013 09:55:32.040 com.apple.backupd[398]: Found 806587 files (286.4 GB) needing backup
    31/01/2013 09:55:32.041 com.apple.backupd[398]: 343.68 GB required (including padding), 1.99 TB available
    31/01/2013 09:56:15.000 kernel[0]: (default pager): [KERNEL]: ps_allocate_cluster - send HI_WAT_ALERT
    31/01/2013 09:56:15.000 kernel[0]: macx_swapon SUCCESS
    31/01/2013 09:56:43.954 com.apple.usbmuxd[27]: _handle_timer heartbeat detected detach for device 0x1b-192.168.0.13:0!
    31/01/2013 09:56:51.626 mdworker[488]: Unable to talk to lsboxd
    31/01/2013 09:56:51.746 sandboxd[489]: ([488]) mdworker(488) deny mach-lookup com.apple.ls.boxd
    31/01/2013 09:56:52.000 kernel[0]: Sandbox: sandboxd(489) deny mach-lookup com.apple.coresymbolicationd
    31/01/2013 09:57:06.212 CVMServer[83]: Check-in to the service com.apple.cvmsCompAgent_x86_64 failed. This is likely because you have either unloaded the job or the MachService has the ResetAtClose attribute specified in the launchd.plist. If present, this attribute should be removed.
    31/01/2013 09:57:06.406 launchctl[493]: launchctl: Dubious ownership on file (skipping): /Library/LaunchAgents/com.davdaerve.UpdatePodcastsAgent.plist
    31/01/2013 09:57:07.174 CVMServer[83]: Check-in to the service com.apple.cvmsCompAgent_x86_64 failed. This is likely because you have either unloaded the job or the MachService has the ResetAtClose attribute specified in the launchd.plist. If present, this attribute should be removed.
    31/01/2013 09:57:14.580 Folder Sync[495]: Growl.framework loaded
    31/01/2013 09:57:14.645 Folder Sync[495]: Using existing log dir: /Users/davedaerve/Library/com.destek.foldersync
    31/01/2013 09:57:20.128 Folder Sync[495]: Scheduler: Run Check for 9:57
    31/01/2013 09:57:26.027 mDNSResponder[41]: ERROR: handle_resolve_request bad interfaceIndex 1
    31/01/2013 09:57:26.129 mDNSResponder[41]: ERROR: handle_resolve_request bad interfaceIndex 15
    31/01/2013 09:57:26.258 mDNSResponderHelper[497]: do_mDNSSendWakeupPacket write failed Device not configured
    31/01/2013 09:57:26.259 mDNSResponderHelper[497]: do_mDNSSendWakeupPacket write failed Device not configured
    31/01/2013 09:57:26.261 mDNSResponder[41]: ERROR: handle_resolve_request bad interfaceIndex 16
    31/01/2013 09:57:26.285 mDNSResponder[41]: ERROR: handle_resolve_request bad interfaceIndex 24
    31/01/2013 09:57:27.132 mDNSResponderHelper[497]: do_mDNSSendWakeupPacket write failed Device not configured
    31/01/2013 09:57:37.838 iTunes[496]:  AVF KeyExchange Version from driver for Certificates 1
    31/01/2013 09:57:46.991 Twitter[498]: font ChicagoBold loaded
    31/01/2013 09:57:47.054 Twitter[498]: font pixChicago loaded
    31/01/2013 09:57:48.244 Twitter[498]: could not fetch oAuthTokenSecret, this account will get removed
    31/01/2013 09:57:51.623 Mail[499]: Using V2 Layout
    31/01/2013 09:57:52.921 com.apple.SecurityServer[16]: Session 100012 created
    31/01/2013 09:57:56.499 com.apple.launchd[1]: (com.apple.qtkitserver[258]) Exited: Killed: 9
    31/01/2013 09:57:56.000 kernel[0]: memorystatus_thread: idle exiting pid 258 [com.apple.qtkits]
    31/01/2013 09:57:57.000 kernel[0]: memorystatus_thread: idle exiting pid 242 [cfprefsd]
    31/01/2013 09:57:58.338 librariand[500]: MMe quota status changed: under quota
    31/01/2013 09:57:58.350 com.apple.usbmuxd[27]: _handle_timer heartbeat detected detach for device 0x1c-192.168.0.6:0!
    31/01/2013 09:57:58.998 com.apple.usbmuxd[27]: _handle_timer heartbeat detected detach for device 0x1d-192.168.0.5:0!
    31/01/2013 09:57:59.088 Mail[499]: *** -[IADomainCache init]: IA domains cache is out of date.
    31/01/2013 09:58:00.137 Folder Sync[495]: Scheduler: Run Check for 9:58
    31/01/2013 09:58:00.147 com.apple.launchd.peruser.501[141]: ([0x0-0x14014].com.apple.inputmethod.ironwood[223]) Exited: Killed: 9
    31/01/2013 09:58:00.000 kernel[0]: memorystatus_thread: idle exiting pid 223 [DictationIM]
    31/01/2013 09:58:19.718 Mail[499]: Couldn't contact spell checker for Multilingual
    31/01/2013 09:58:48.777 com.apple.launchd[1]: (com.apple.iCloudHelper[232]) Exited: Killed: 9
    31/01/2013 09:58:48.000 kernel[0]: memorystatus_thread: idle exiting pid 232 [com.apple.iCloud]
    31/01/2013 09:58:49.895 com.apple.launchd.peruser.501[141]: (com.apple.tccd[213]) Exited: Killed: 9
    31/01/2013 09:58:49.000 kernel[0]: memorystatus_thread: idle exiting pid 213 [tccd]
    31/01/2013 09:58:49.937 WindowServer[106]: CGXRegisterWindowWithSystemStatusBar: window b already registered
    31/01/2013 09:58:50.000 kernel[0]: memorystatus_thread: idle exiting pid 211 [xpcd]
    31/01/2013 09:58:50.453 com.apple.launchd[1]: (com.apple.xpcd.F5010000-0000-0000-0000-000000000000[211]) Exited: Killed: 9
    31/01/2013 09:58:54.351 com.apple.launchd.peruser.0[278]: (com.apple.distnoted.xpc.agent[300]) Exited: Killed: 9
    31/01/2013 09:58:54.000 kernel[0]: memorystatus_thread: idle exiting pid 300 [distnoted]
    31/01/2013 09:58:54.000 kernel[0]: memorystatus_thread: idle exiting pid 240 [distnoted]
    31/01/2013 09:58:54.920 com.apple.launchd.peruser.501[141]: (com.apple.accountsd[210]) Exited: Killed: 9
    31/01/2013 09:58:55.000 kernel[0]: memorystatus_thread: idle exiting pid 210 [accountsd]
    31/01/2013 09:58:55.559 com.apple.launchd.peruser.501[141]: (com.apple.CalendarAgent[187]) Exited: Killed: 9
    31/01/2013 09:58:55.000 kernel[0]: memorystatus_thread: idle exiting pid 187 [CalendarAgent]
    31/01/2013 09:58:58.000 kernel[0]: (default pager): [KERNEL]: ps_select_segment - send HI_WAT_ALERT
    31/01/2013 09:58:58.000 kernel[0]: macx_swapon SUCCESS
    31/01/2013 09:59:00.149 Folder Sync[495]: Scheduler: Run Check for 9:59
    31/01/2013 09:59:06.000 kernel[0]: (default pager): [KERNEL]: ps_select_segment - send HI_WAT_ALERT
    31/01/2013 09:59:06.000 kernel[0]: macx_swapon SUCCESS
    31/01/2013 09:59:08.759 launchctl[512]: launchctl: Dubious ownership on file (skipping): /Library/LaunchAgents/com.davdaerve.UpdatePodcastsAgent.plist
    31/01/2013 10:00:00.158 Folder Sync[495]: Scheduler: Run Check for 10:00
    31/01/2013 10:00:01.180 launchctl[525]: launchctl: Dubious ownership on file (skipping): /Library/LaunchAgents/com.davdaerve.UpdatePodcastsAgent.plist
    31/01/2013 10:00:09.000 kernel[0]: memorystatus_thread: idle exiting pid 515 [cfprefsd]
    31/01/2013 10:00:33.665 mdworker[639]: Unable to talk to lsboxd
    31/01/2013 10:00:34.000 kernel[0]: Sandbox: sandboxd(640) deny mach-lookup com.apple.coresymbolicationd
    31/01/2013 10:00:34.311 sandboxd[640]: ([639]) mdworker(639) deny mach-lookup com.apple.ls.boxd
    31/01/2013 10:01:00.171 Folder Sync[495]: Scheduler: Run Check for 10:01
    31/01/2013 10:01:10.060 Folder Sync[495]: objc[495]: Object 0x10bc05ab0 of class NSUserDefaults autoreleased with no pool in place - just leaking - break on objc_autoreleaseNoPool() to debug
    31/01/2013 10:01:14.200 com.apple.launchd.peruser.501[141]: (com.apple.tccd[647]) Exited: Killed: 9
    31/01/2013 10:01:14.000 kernel[0]: memorystatus_thread: idle exiting pid 647 [tccd]
    31/01/2013 10:01:31.206 lsboxd[228]: @AE relay 4755524c:4755524c
    31/01/2013 10:01:31.502 WindowServer[106]: CGXRegisterWindowWithSystemStatusBar: window b already registered
    31/01/2013 10:01:57.362 CalendarAgent[660]: *** -[IADomainCache init]: IA domains cache is out of date.
    31/01/2013 10:02:00.182 Folder Sync[495]: Scheduler: Run Check for 10:02
    31/01/2013 10:02:14.579 CalendarAgent[660]: [com.apple.calendar.store.log.caldav.queue] [Account refresh failed with error: Error Domain=CoreDAVHTTPStatusErrorDomain Code=503 "The operation couldn’t be completed. (CoreDAVHTTPStatusErrorDomain error 503.)" UserInfo=0x7fc9524481d0 {AccountName=Yahoo!, CalDAVErrFromRefresh=YES, CoreDAVHTTPHeaders=<CFBasicHash 0x7fc9520cc0c0 [0x10aa25fd0]>{type = immutable dict, count = 11,
    entries =>
              0 : Case Insensitive Key: Connection = <CFString 0x7fc9520b7470 [0x10aa25fd0]>{contents = "keep-alive"}
              1 : Case Insensitive Key: Content-Type = <CFString 0x7fc9520bc410 [0x10aa25fd0]>{contents = "text/html; charset=UTF-8"}
              2 : Case Insensitive Key: Retry-After = <CFString 0x7fc952086140 [0x10aa25fd0]>{contents = "3600"}
              3 : Case Insensitive Key: Via = <CFString 0x7fc9520cb850 [0x10aa25fd0]>{contents = "HTTP/1.1 calgate003.cal.ac4.yahoo.com (YahooTrafficServer/1.19.11 [c s f ])"}
              4 : Case Insensitive Key: Age = <CFString 0x10a9fa110 [0x10aa25fd0]>{contents = "0"}
              5 : Case Insensitive Key: P3P = <CFString 0x7fc952086050 [0x10aa25fd0]>{contents = "policyref="http://info.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE LOC GOV""}
              6 : Case Insensitive Key: Date = <CFString 0x7fc95209e440 [0x10aa25fd0]>{contents = "Thu, 31 Jan 2013 10:02:14 GMT"}
              7 : Case Insensitive Key: Server = <CFString 0x7fc9520b7490 [0x10aa25fd0]>{contents = "YTS/1.19.11"}
              8 : Case Insensitive Key: Transfer-Encoding = <CFString 0x10bf826f8 [0x10aa25fd0]>{contents = "Identity"}
              11 : Case Insensitive Key: Cache-Control = <CFString 0x7fc9520cff40 [0x10aa25fd0]>{contents = "private"}
              12 : Case Insensitive Key: Vary = <CFString 0x7fc9520b67f0 [0x10aa25fd0]>{contents = "Accept-Encoding"}
    31/01/2013 10:03:00.193 Folder Sync[495]: Scheduler: Run Check for 10:03
    31/01/2013 10:03:23.921 iTunes[496]: _NotificationSocketReadCallbackGCD (thread 0x10717e180): Unexpected connection closure...
    31/01/2013 10:03:23.923 ath[665]: _NotificationSocketReadCallbackGCD (thread 0x101b21180): Unexpected connection closure...
    31/01/2013 10:03:25.439 mds[40]: (Warning) FMW: event:1 had an arg mismatch.  ac:2 am:51
    31/01/2013 10:03:29.551 com.apple.launchd.peruser.501[141]: (com.apple.CalendarAgent[660]) Exited: Killed: 9
    31/01/2013 10:03:29.000 kernel[0]: memorystatus_thread: idle exiting pid 660 [CalendarAgent]
    31/01/2013 10:03:45.425 iTunes[496]: _NotificationSocketReadCallbackGCD (thread 0x10717e180): Unexpected connection closure...
    31/01/2013 10:03:45.427 ath[665]: _NotificationSocketReadCallbackGCD (thread 0x101b21180): Unexpected connection closure...
    31/01/2013 10:04:00.203 Folder Sync[495]: Scheduler: Run Check for 10:04
    31/01/2013 10:04:13.627 CalendarAgent[673]: *** -[IADomainCache init]: IA domains cache is out of date.
    31/01/2013 10:04:16.750 Twitter[498]: will terminate
    31/01/2013 10:04:16.789 Twitter[498]: Error: no oAuthTokenSecret set for account
    31/01/2013 10:04:23.078 CalendarAgent[673]: [com.apple.calendar.store.log.caldav.queue] [Account refresh failed with error: Error Domain=CoreDAVHTTPStatusErrorDomain Code=401 "The operation couldn’t be completed. (CoreDAVHTTPStatusErrorDomain error 401.)" UserInfo=0x7fca11285a50 {AccountName=Yahoo!, CalDAVErrFromRefresh=YES, CoreDAVHTTPHeaders=<CFBasicHash 0x7fca1160a910 [0x10f5b6fd0]>{type = immutable dict, count = 11,
    entries =>
              0 : Case Insensitive Key: Connection = <CFString 0x7fca13581410 [0x10f5b6fd0]>{contents = "keep-alive"}
              1 : Case Insensitive Key: Content-Type = <CFString 0x7fca11607420 [0x10f5b6fd0]>{contents = "text/html; charset=UTF-8"}
              2 : Case Insensitive Key: Server = <CFString 0x7fca11607450 [0x10f5b6fd0]>{contents = "YTS/1.19.11"}
              3 : Case Insensitive Key: Via = <CFString 0x7fca1160a8b0 [0x10f5b6fd0]>{contents = "HTTP/1.1 calgate007.cal.ac4.yahoo.com (YahooTrafficServer/1.19.11 [c s f ])"}
              4 : Case Insensitive Key: Age = <CFString 0x10f58b110 [0x10f5b6fd0]>{contents = "0"}
              5 : Case Insensitive Key: P3P = <CFString 0x7fca11610b00 [0x10f5b6fd0]>{contents = "policyref="http://info.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE LOC GOV""}
              6 : Case Insensitive Key: Date = <CFString 0x7fca135f2890 [0x10f5b6fd0]>{contents = "Thu, 31 Jan 2013 10:04:22 GMT"}
              7 : Case Insensitive Key: Transfer-Encoding = <CFString 0x110b206f8 [0x10f5b6fd0]>{contents = "Identity"}
              9 : Case Insensitive Key: Www-Authenticate = <CFString 0x7fca135f28c0 [0x10f5b6fd0]>{contents = "Basic realm="Zimbra""}
              11 : Case Insensitive Key: Cache-Control = <CFString 0x7fca116d2810 [0x10f5b6fd0]>{contents = "private"}
              12 : Case Insensitive Key: Vary = <CFString 0x7fca135f28f0 [0x10f5b6fd0]>{contents = "Accept-Encoding"}
    31/01/2013 10:04:27.512 iTunes[496]: _NotificationSocketReadCallbackGCD (thread 0x10717e180): Unexpected connection closure...
    31/01/2013 10:04:27.514 ath[665]: _NotificationSocketReadCallbackGCD (thread 0x101b21180): Unexpected connection closure...
    31/01/2013 10:04:27.851 Dock[156]: trying to take pid 499 out of fullscreen but it is not the current space
    31/01/2013 10:04:27.972 Mail[499]: CGSGetWindowTags: Invalid window 0x0
    31/01/2013 10:04:27.972 Mail[499]: void _NSMoveWindowToSpaceUnlessSticky(NSInteger, CGSSpaceID): CGSGetWindowTags(cid, win, tags, kCGSRealMaximumTagSize) returned CGError 1000 on line 1247
    31/01/2013 10:04:27.977 WindowServer[106]: CGXSetWindowListTags: Invalid window 0
    31/01/2013 10:04:33.945 iTunes[496]: 2013-01-31 10:04:33.944722 AM [AVSystemController] Stopping AirPlay
    31/01/2013 10:04:34.218 WindowServer[106]: CGXGetConnectionProperty: Invalid connection 82119
    31/01/2013 10:04:34.218 WindowServer[106]: CGXGetConnectionProperty: Invalid connection 82119
    31/01/2013 10:04:34.219 WindowServer[106]: CGXGetConnectionProperty: Invalid connection 82119
    31/01/2013 10:05:07.671 mdworker[671]: Unable to talk to lsboxd
    31/01/2013 10:05:08.000 kernel[0]: Sandbox: sandboxd(680) deny mach-lookup com.apple.coresymbolicationd
    31/01/2013 10:05:08.457 sandboxd[680]: ([671]) mdworker(671) deny mach-lookup com.apple.ls.boxd
    31/01/2013 10:05:21.289 com.apple.SecurityServer[16]: Killing auth hosts
    31/01/2013 10:05:21.289 com.apple.SecurityServer[16]: Session 100012 destroyed
    31/01/2013 10:09:59.228 WindowServer[106]: CGXRegisterWindowWithSystemStatusBar: window b already registered
    31/01/2013 10:10:15.787 mdworker[697]: Unable to talk to lsboxd
    31/01/2013 10:10:16.000 kernel[0]: Sandbox: sandboxd(810) deny mach-lookup com.apple.coresymbolicationd
    31/01/2013 10:10:16.708 sandboxd[810]: ([697]) mdworker(697) deny mach-lookup com.apple.ls.boxd

  • Queries very slow in JSP

    Hi,
    I am working with an application server and an Oracle 10g database. In the JSPs I have a problem with the queries: some queries run very slowly, which makes no sense to me, because the same query executed in Toad or SQL*Plus runs very fast.
    For example this query:
    SELECT MAX(COL1) FROM MYTABLE WHERE COL2 = +PARAMETER
    it is very slow in the JSP. I have tried writing a stored function that returns the result of this query, but it is also slow:
    SELECT MY_FUNCTION FROM DUAL
    I don't have a lot of experience with JSP. Please, can someone help me?
    Thanks in advance.
    Fernando.

    Hello Krystian.
    Yes, I'm using prepared statements, but I don't know if this can alter the result of a query.
    I'm going to add the result of the trace:
    /* THIS IS THE TOAD EXECUTION */
    SELECT MAX(MED2.MEDICION_ID)
    FROM
    T_MEDICIONES MED2 WHERE MED2.DISPOSITIVO_SENSOR_ID=340 AND VALIDA = 'S'
    call count cpu elapsed disk query current rows
    Parse 2 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.00 0 6 0 2
    total 6 0.00 0.00 0 6 0 2
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 58
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=3 pr=0 pw=0 time=181 us)
    1 FIRST ROW (cr=3 pr=0 pw=0 time=121 us)
    1 INDEX RANGE SCAN (MIN/MAX) IND_MEDICIONES_DISP_SENS (cr=3 pr=0 pw=0 time=110 us)(object id 50555)
    explain plan set statement_id='ROOT:020306140415' into PLAN_TABLE For SELECT MAX(MED2.MEDICION_ID)
    FROM
    T_MEDICIONES MED2 WHERE MED2.DISPOSITIVO_SENSOR_ID=340 AND VALIDA = 'S'
    call count cpu elapsed disk query current rows
    Parse 1 0.01 0.01 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.01 0.01 0 0 0 0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 58
    Rows Row Source Operation
    0 SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
    0 FIRST ROW (cr=0 pr=0 pw=0 time=0 us)
    0 INDEX RANGE SCAN (MIN/MAX) IND_MEDICIONES_DISP_SENS (cr=0 pr=0 pw=0 time=0 us)(object id 50555)
    /* THIS IS THE JSP EXECUTION */
    explain plan set statement_id='ROOT:020306121435' into PLAN_TABLE For /* Formatted on 2006/02/03 12:14 (Formatter Plus v4.8.5) */
    SELECT MAX (med2.medicion_id)
    FROM t_mediciones med2
    WHERE med2.dispositivo_sensor_id = 340
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.01 0.01 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.02 0.02 0 0 0 0
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 58
    Rows Row Source Operation
    0 SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
    0 INDEX FAST FULL SCAN IND_MEDICIONES_DISP_SENS (cr=0 pr=0 pw=0 time=0 us)(object id 50555)
    ********************************************************************************
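    One observation from the traces above (a hedged reading, not a confirmed diagnosis): the Toad execution includes the AND VALIDA = 'S' predicate and gets the fast INDEX RANGE SCAN (MIN/MAX), while the statement captured from the JSP omits that predicate and falls back to an INDEX FAST FULL SCAN. A first step is simply to make the prepared statement in the JSP issue the same text as the fast query, with a bind variable for the sensor id (table and column names taken from the trace):

    SELECT MAX(med2.medicion_id)
    FROM   t_mediciones med2
    WHERE  med2.dispositivo_sensor_id = :sensor_id
    AND    med2.valida = 'S';

    If the plan still differs with identical SQL and binds, a 10046 trace of the JSP session would show where the extra time goes.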

  • iPhone Core Data - fetched managed objects not being autoreleased on device (fine on simulator)

    I'm currently struggling with a core data issue with my app that defies (my) logic. I'm sure I'm doing something wrong but can't see what. I am doing a basic executeFetchRequest on my core data entity, but the array of managed objects returned never seems to be released ONLY when I run it on the iPhone, under the simulator it works exactly as expected. This is despite using an NSAutoreleasePool to ensure the memory footprint is minimised. I have also checked with Instruments and there are no leaks, just ever increasing allocations of memory (by '[NSManagedObject(_PFDynamicAccessorsAndPropertySupport) allocWithEntity:]'). In my actual app this eventually leads to a didReceiveMemoryWarning call. I have produced a minimal program that reproduces the problem below. I have tried various things such as faulting all the objects before draining the pool, but with no joy. If I provide an NSError pointer to the fetch no error is returned. There are no background threads running.
    +(natural_t) get_free_memory {
        mach_port_t host_port;
        mach_msg_type_number_t host_size;
        vm_size_t pagesize;
        host_port = mach_host_self();
        host_size = sizeof(vm_statistics_data_t) / sizeof(integer_t);
        host_page_size(host_port, &pagesize);
        vm_statistics_data_t vm_stat;
        if (host_statistics(host_port, HOST_VM_INFO, (host_info_t)&vm_stat, &host_size) != KERN_SUCCESS) {
            NSLog(@"Failed to fetch vm statistics");
            return 0;
        }
        /* Stats in bytes */
        natural_t mem_free = vm_stat.free_count * pagesize;
        return mem_free;
    }
    - (void)viewDidLoad
    {
        [super viewDidLoad];
        // Set up the edit and add buttons.
        self.navigationItem.leftBarButtonItem = self.editButtonItem;
        UIBarButtonItem *addButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(insertNewObject)];
        self.navigationItem.rightBarButtonItem = addButton;
        [addButton release];
        // Obtain the Managed Object Context
        NSManagedObjectContext *context = [(id)[[UIApplication sharedApplication] delegate] managedObjectContext];
        // Check the free memory before we start
        NSLog(@"INITIAL FREEMEM: %d", [RootViewController get_free_memory]);
        // Loop around a few times
        for(int i=0; i<20; i++) {
            // Create an autorelease pool just for this loop
            NSAutoreleasePool *looppool = [[NSAutoreleasePool alloc] init];
            // Check the free memory each time around the loop
            NSLog(@"FREEMEM: %d", [RootViewController get_free_memory]);
            // Create a minimal request
            NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"TestEntity" inManagedObjectContext:context];
            // 'request' released after fetch to minimise use of autorelease pool
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            [request setEntity:entityDescription];
            // Perform the fetch
            NSArray *array = [context executeFetchRequest:request error:nil];
            [request release];
            // Drain the pool - should release the fetched managed objects?
            [looppool drain];
        }
        // Check the free memory at the end
        NSLog(@"FINAL FREEMEM: %d", [RootViewController get_free_memory]);
    }
    When I run the above on the simulator I get the following output (which looks reasonable to me):
    2011-06-06 09:50:28.123 renniksoft[937:207] INITIAL FREEMEM: 14782464
    2011-06-06 09:50:28.128 renniksoft[937:207] FREEMEM: 14807040
    2011-06-06 09:50:28.135 renniksoft[937:207] FREEMEM: 14831616
    2011-06-06 09:50:28.139 renniksoft[937:207] FREEMEM: 14852096
    2011-06-06 09:50:28.142 renniksoft[937:207] FREEMEM: 14872576
    2011-06-06 09:50:28.146 renniksoft[937:207] FREEMEM: 14897152
    2011-06-06 09:50:28.149 renniksoft[937:207] FREEMEM: 14917632
    2011-06-06 09:50:28.153 renniksoft[937:207] FREEMEM: 14938112
    2011-06-06 09:50:28.158 renniksoft[937:207] FREEMEM: 14962688
    2011-06-06 09:50:28.161 renniksoft[937:207] FREEMEM: 14983168
    2011-06-06 09:50:28.165 renniksoft[937:207] FREEMEM: 14741504
    2011-06-06 09:50:28.168 renniksoft[937:207] FREEMEM: 14770176
    2011-06-06 09:50:28.174 renniksoft[937:207] FREEMEM: 14790656
    2011-06-06 09:50:28.177 renniksoft[937:207] FREEMEM: 14811136
    2011-06-06 09:50:28.182 renniksoft[937:207] FREEMEM: 14831616
    2011-06-06 09:50:28.186 renniksoft[937:207] FREEMEM: 14589952
    2011-06-06 09:50:28.189 renniksoft[937:207] FREEMEM: 14610432
    2011-06-06 09:50:28.192 renniksoft[937:207] FREEMEM: 14630912
    2011-06-06 09:50:28.194 renniksoft[937:207] FREEMEM: 14651392
    2011-06-06 09:50:28.197 renniksoft[937:207] FREEMEM: 14671872
    2011-06-06 09:50:28.200 renniksoft[937:207] FREEMEM: 14692352
    2011-06-06 09:50:28.203 renniksoft[937:207] FINAL FREEMEM: 14716928
    However, when I run it on an actual iPhone 4 (4.3.3) I get the following result:
    2011-06-06 09:55:54.341 renniksoft[4727:707] INITIAL FREEMEM: 267927552
    2011-06-06 09:55:54.348 renniksoft[4727:707] FREEMEM: 267952128
    2011-06-06 09:55:54.702 renniksoft[4727:707] FREEMEM: 265818112
    2011-06-06 09:55:55.214 renniksoft[4727:707] FREEMEM: 265355264
    2011-06-06 09:55:55.714 renniksoft[4727:707] FREEMEM: 264892416
    2011-06-06 09:55:56.215 renniksoft[4727:707] FREEMEM: 264441856
    2011-06-06 09:55:56.713 renniksoft[4727:707] FREEMEM: 263979008
    2011-06-06 09:55:57.226 renniksoft[4727:707] FREEMEM: 264089600
    2011-06-06 09:55:57.721 renniksoft[4727:707] FREEMEM: 263630848
    2011-06-06 09:55:58.226 renniksoft[4727:707] FREEMEM: 263168000
    2011-06-06 09:55:58.726 renniksoft[4727:707] FREEMEM: 262705152
    2011-06-06 09:55:59.242 renniksoft[4727:707] FREEMEM: 262852608
    2011-06-06 09:55:59.737 renniksoft[4727:707] FREEMEM: 262389760
    2011-06-06 09:56:00.243 renniksoft[4727:707] FREEMEM: 261931008
    2011-06-06 09:56:00.751 renniksoft[4727:707] FREEMEM: 261992448
    2011-06-06 09:56:01.280 renniksoft[4727:707] FREEMEM: 261574656
    2011-06-06 09:56:01.774 renniksoft[4727:707] FREEMEM: 261148672
    2011-06-06 09:56:02.290 renniksoft[4727:707] FREEMEM: 260755456
    2011-06-06 09:56:02.820 renniksoft[4727:707] FREEMEM: 260837376
    2011-06-06 09:56:03.334 renniksoft[4727:707] FREEMEM: 260395008
    2011-06-06 09:56:03.825 renniksoft[4727:707] FREEMEM: 259932160
    2011-06-06 09:56:04.346 renniksoft[4727:707] FINAL FREEMEM: 259555328
    The amount of free memory reduces each time round the loop in proportion to the managed objects I fetch, e.g. if I fetch twice as many objects then the free memory reduces twice as quickly - so I'm pretty confident it is the managed objects that are not being released. Note that the entities being fetched are very basic, just two attributes, a string and a 16-bit integer. There are 1000 of them being fetched in the examples above. The code I used to generate them is as follows:
    // Create test entities
    for(int i=0; i<1000; i++) {
        id entity = [NSEntityDescription insertNewObjectForEntityForName:@"TestEntity" inManagedObjectContext:context];
        [entity setValue:[NSString stringWithFormat:@"%d",i] forKey:@"name"];
        [entity setValue:[NSNumber numberWithInt:i] forKey:@"value"];
    }
    if (![context save:nil]) {
        NSLog(@"Couldn't save");
    }
    If anyone can explain to me what is going on I'd be very grateful! This issue is the only one holding up the release of my app. It works beautifully on the simulator!!
    Please let me know if there's any more info I can supply.

    Update: I modified the above code so that the fetch (and looppool etc.) take place when a timer fires. This means that the fetches aren't blocked in viewDidLoad.
    The result of this is that the issue happens exactly as before, but the applicationDidReceiveMemoryWarning is fired as expected:
    2011-06-08 09:54:21.024 renniksoft[5993:707] FREEMEM: 6131712
    2011-06-08 09:54:22.922 renniksoft[5993:707] Received memory warning. Level=2
    2011-06-08 09:54:22.926 renniksoft[5993:707] applicationDidReceiveMemoryWarning
    2011-06-08 09:54:22.929 renniksoft[5993:707] FREEMEM: 5615616
    2011-06-08 09:54:22.932 renniksoft[5993:707] didReceiveMemoryWarning
    2011-06-08 09:54:22.935 renniksoft[5993:707] FREEMEM: 5656576

  • Export to text file very, very slow

    Hi, I am running 2.1.0.63 and doing an export of a query to a text file. The export was 66,000 rows, but incredibly it took almost 2 hours to finish. The row counter in the status bar was very slow. The exact same query/export under 1.5.4 ran in about 5 minutes, with the row counter on both the fetching and the exporting just flying by.
    Has anybody else experienced this? What could be causing it to run that slowly under this release?
    The other issue I have is that once there has been no activity for a while, SQL Developer hangs and has to be restarted. I saw this being addressed in another post, however, and will follow it there.
    Any ideas on when a patch will be made available for this release?
    BTW, I really like some of the other new features of this release.
    Thanks,
    Mark

    Ok, I think the issue was that I was exporting to a shared network drive. I redid it, saving it locally and it worked fine.
    I still have the issue of inactivity and having to restart sqldeveloper.
    Thanks.
