Better debugging than cfdebug in cflayout?

I am using cflayout extensively in an app.
But I find the debugging useless, so I must be doing something wrong.
The page loads and I get the error:
"Error retrieving markup for element cf_layoutareamanage: OK
[Enable debugging by adding 'cfdebug' ...
Adding that provides me with an AJAX debug window showing:
error:http: Error retrieving markup for element cf_layoutareamanage : OK
More helpful? I think not.
What does one do to see the actual error?


Similar Messages

  • Is a PowerMac G5 2.4 Dual Chip (used) a better option than an iMac?

    Hi Guys
    I'm still reasonably new to Macs, and I am really more of a music person than a computer person (which is frustrating at times!!).
    I am looking at selling my iMac 2.4GHz (4GB RAM) and replacing it with a current iMac 2.66GHz, because the new iMacs can handle 8GB of RAM, which would really help me run BFD2 with fewer problems.
    But I noticed this on eBay:
    APPLE POWERMAC G5 2.3 GHZ DUAL CHIP-240gb HD-1.5 gb RAM (eBay item 230346540784) - would this be a better choice than going with a new iMac?
    I don't even know if this one is Quad Core. My knowledge of Mac Pros is zero! But I do know a lot of people go for them over iMacs, and even suggest getting a second hand Mac Pro in preference to a new iMac if it's for music.
    Could someone help out and tell me if the above machine is a better bet than going for a new iMac?
    Thanks heaps,
    Mike

    I don't have any evidence to prove this, but I would think that your iMac would run circles around the much older Power Mac G5 system.
    A Mac Pro would be a different story; Mac Pros are the 'new' version of the Power Macs, now running Intel chips.
    www.everymac.com can give you all the tech specs you need on Macs.

  • IPhoto 09 Gives Better Results than Aperture 2.0 on RAW

    I wanted to compare the results of Aperture 2.0 to iPhoto '09 on a RAW photo. To my chagrin, iPhoto had done a better job on the photo just by using the instant fix wizard and then one click on the color saturation effect.
    With Aperture, I imported the iPhoto library and made sure I was working on the "Original" RAW photo from that library. My workflow started with autoexposure, then the two auto levels (Luminance & RGB) by the histogram. I maxed the Sharpening in the RAW fine tuning.
    I tried tweaking the contrast, saturation, exposure and recovery. Nothing I seem to do in Aperture can get the photo's details as sharp or its colors as vibrant. iPhoto's automatic algorithms appear to be far superior to Aperture's auto exposure and auto levels of Luminance and RGB. Plus, iPhoto appears to do a better job than all the other controls will allow me to do manually.
    Has anyone else compared the two products side by side on the same photo and experienced the same?
    Thanks,
    Don

    Hmmmm,
    This is a meaningless comparison. If you cannot get the results you want in Aperture, just use iPhoto and its auto-everything button and be happy. There is nothing magic going on in iPhoto: it uses the same CoreImage subroutines as Aperture for RAW interpretation prior to applying any of its adjustments. One thing the iPhoto auto functions will do is sacrifice more highlight and shadow detail than the Aperture auto levels will; you can set that in prefs if you prefer more contrast.
    RB
    PS: the RAW fine tuning Sharpen slider will make virtually no visual difference; it is NOT for final output sharpening.

  • Better results than you could achieve in a real-time onlining suite?

    Another for Zeb and anyone else who cares about color. HD for Indies (http://www.hdforindies.com/) reviews The DV Rebel's Guide by Stu Maschwitz. In a section, "Why so AE centric when such a pain - why not do in FCP?", Mike Curtis pseudo-interviews Stu, who says "When onlining, the rule is the polar opposite: no amount of render time is too much to endure in the name of increased image quality. I detail some techniques for extracting every last bit of luminance range from your video source, removing color subsampling artifacts (without plug-ins), and a complete pipeline for multiple digital masters. If you follow these guidelines you can actually achieve better results than you could in an expensive real-time onlining suite."
    He is talking about the poor boy with a three-year-old laptop, for whom long render times don't cost too much. Sounds like me.
    Comment?

    I have a valid example data file attached to this thread.
    If you open BEXTEST.bin in a hex-editor of your choice, you'll see the BEXUS as 42 45 58 55 53 and then the time as 00 28 09 etc.
    I couldn't get Joe Guo's VI to work. It doesn't count packages correctly, and the time is not displayed correctly either.
    The file was saved using a straight save to file VI.
    The data is from actual launch-area tests performed a few minutes ago. The time displayed is "On time", e.g. how long the gondola has been powered up.
    I have a spare T-junction, so I can hook into the balloon's real-time data as we fly, in case anyone cares to see if they can figure out why the latest version of Joe Guo's program is not displaying correctly.
    I will monitor this thread during and after flight to see if anyone can make it in time!
    Thanks for the great effort!!
    Attachments:
    bextest.bin ‏53 KB
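    The hex layout described above (the ASCII magic "BEXUS" as 42 45 58 55 53, followed by the on-time bytes 00 28 09 ...) can be checked with a short Python sketch. Everything past the magic check is an assumption: the thread only says the next bytes are the "On time" counter, so `parse_bexus_header` and its field names are illustrative, not the real telemetry spec.

```python
def parse_bexus_header(data: bytes) -> dict:
    """Parse the start of a BEXTEST.bin-style packet.

    Assumed layout (from the hex dump described in the thread):
      bytes 0-4: ASCII magic "BEXUS" (42 45 58 55 53)
      bytes 5-7: on-time counter (00 28 09 ...) -- interpretation assumed
    """
    if data[:5] != b"BEXUS":
        raise ValueError("not a BEXUS packet: magic bytes missing")
    # Take the next three bytes as the raw on-time counter, matching
    # the 00 28 09 seen in the dump; adjust to the real telemetry spec.
    t0, t1, t2 = data[5:8]
    return {"magic": "BEXUS", "time_bytes": (t0, t1, t2)}

# First eight bytes as they appear in a hex editor's view of the file
sample = bytes([0x42, 0x45, 0x58, 0x55, 0x53, 0x00, 0x28, 0x09])
print(parse_bexus_header(sample))
```

    Comparing a hex editor's view of BEXTEST.bin against a tiny parser like this is a quick way to see whether a VI (or any other reader) is splitting packets at the wrong offsets.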

  • Better client than sqlplus for bulk fetch testing .

    Hi,
    I'm doing some tests with row retrieval speed via Net8 and need a better client than sqlplus itself.
    There is an arraysize limit of 5000 in sqlplus, and it's not oriented toward massive row fetching, although I'm using set termout off.
    Tests are in a 10.2.0.3 environment on a 100Mbit Ethernet network.
    So is there any better client I can use? Or do I need to write it myself :) ?
    I've tried pmdtm (the Informatica fetch utility) but it has some problems with thread synchronization; basically, strace profiling returns
    % time     seconds  usecs/call     calls    errors syscall
    57.35    1.738975         161     10819      2145 futex
    41.35    1.253799       32149        39           poll
      1.21    0.036717           3     11869           read
      0.08    0.002491           1      2163           write
      0.00    0.000000           0        50           fcntl
      0.00    0.000000           0        19           clock_gettime
    100.00    3.031982                 24959      2145 total
    So instead of reading, it's latching :).
    Regards
    GregG

    GregG wrote:
    its not oriented for massive row fetching , although Im using set termout off .
    You can use the SQL*Plus AUTOTRACE command to disable query result printing:
    SQL> set autotrace  traceonly;
    SQL> select * from dba_objects;
    18816 rows selected.
    Execution Plan
    Plan hash value: 1919983379
    | Id  | Operation                      | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |             | 16154 |  3265K|    75   (3)| 00:00:01 |
    |   1 |  VIEW                          | DBA_OBJECTS | 16154 |  3265K|    75   (3)| 00:00:01 |
    |   2 |   UNION-ALL                    |             |       |       |            |          |
    |*  3 |    TABLE ACCESS BY INDEX ROWID | SUM$        |     1 |    26 |     0   (0)| 00:00:01 |
    |*  4 |     INDEX UNIQUE SCAN          | I_SUM$_1    |     1 |       |     0   (0)| 00:00:01 |
    |   5 |    TABLE ACCESS BY INDEX ROWID | OBJ$        |     1 |    25 |     3   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | I_OBJ1      |     1 |       |     2   (0)| 00:00:01 |
    |*  7 |    FILTER                      |             |       |       |            |          |
    |*  8 |     HASH JOIN                  |             | 19706 |  2290K|    72   (3)| 00:00:01 |
    |   9 |      TABLE ACCESS FULL         | USER$       |    66 |  1122 |     3   (0)| 00:00:01 |
    |* 10 |      HASH JOIN                 |             | 19706 |  1962K|    69   (3)| 00:00:01 |
    |  11 |       INDEX FULL SCAN          | I_USER2     |    66 |  1452 |     1   (0)| 00:00:01 |
    |* 12 |       TABLE ACCESS FULL        | OBJ$        | 19706 |  1539K|    67   (2)| 00:00:01 |
    |* 13 |     TABLE ACCESS BY INDEX ROWID| IND$        |     1 |     8 |     2   (0)| 00:00:01 |
    |* 14 |      INDEX UNIQUE SCAN         | I_IND1      |     1 |       |     1   (0)| 00:00:01 |
    |  15 |     NESTED LOOPS               |             |     1 |    30 |     2   (0)| 00:00:01 |
    |* 16 |      INDEX SKIP SCAN           | I_USER2     |     1 |    20 |     1   (0)| 00:00:01 |
    |* 17 |      INDEX RANGE SCAN          | I_OBJ4      |     1 |    10 |     1   (0)| 00:00:01 |
    |  18 |    NESTED LOOPS                |             |     1 |    43 |     3   (0)| 00:00:01 |
    |  19 |     TABLE ACCESS FULL          | LINK$       |     1 |    26 |     2   (0)| 00:00:01 |
    |  20 |     TABLE ACCESS CLUSTER       | USER$       |     1 |    17 |     1   (0)| 00:00:01 |
    |* 21 |      INDEX UNIQUE SCAN         | I_USER#     |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter(BITAND("S"."XPFLAGS",8388608)=8388608)
       4 - access("S"."OBJ#"=:B1)
       6 - access("EO"."OBJ#"=:B1)
       7 - filter(("O"."TYPE#"<>1 AND "O"."TYPE#"<>10 OR "O"."TYPE#"=1 AND  (SELECT 1
                  FROM "SYS"."IND$" "I" WHERE "I"."OBJ#"=:B1 AND ("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR
                  "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR "I"."TYPE#"=7 OR
                  "I"."TYPE#"=9))=1) AND ("O"."TYPE#"<>4 AND "O"."TYPE#"<>5 AND "O"."TYPE#"<>7 AND
                  "O"."TYPE#"<>8 AND "O"."TYPE#"<>9 AND "O"."TYPE#"<>10 AND "O"."TYPE#"<>11 AND
                  "O"."TYPE#"<>12 AND "O"."TYPE#"<>13 AND "O"."TYPE#"<>14 AND "O"."TYPE#"<>22 AND
                  "O"."TYPE#"<>87 AND "O"."TYPE#"<>88 OR BITAND("U"."SPARE1",16)=0 OR ("O"."TYPE#"=4 OR
                  "O"."TYPE#"=5 OR "O"."TYPE#"=7 OR "O"."TYPE#"=8 OR "O"."TYPE#"=9 OR "O"."TYPE#"=10 OR
                  "O"."TYPE#"=11 OR "O"."TYPE#"=12 OR "O"."TYPE#"=13 OR "O"."TYPE#"=14 OR
                  "O"."TYPE#"=22 OR "O"."TYPE#"=87) AND ("U"."TYPE#"<>2 AND
                  SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' OR "U"."TYPE#"=2 AND
                  "U"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')) OR  EXISTS
                  (SELECT 0 FROM SYS."USER$" "U2",SYS."OBJ$" "O2" WHERE "O2"."OWNER#"="U2"."USER#" AND
                  "O2"."TYPE#"=88 AND "O2"."DATAOBJ#"=:B2 AND "U2"."TYPE#"=2 AND
                  "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))))))
       8 - access("O"."SPARE3"="U"."USER#")
      10 - access("O"."OWNER#"="U"."USER#")
      12 - filter("O"."NAME"<>'_NEXT_OBJECT' AND "O"."NAME"<>'_default_auditing_options_'
                  AND BITAND("O"."FLAGS",128)=0 AND "O"."LINKNAME" IS NULL)
      13 - filter("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR
                  "I"."TYPE#"=6 OR "I"."TYPE#"=7 OR "I"."TYPE#"=9)
      14 - access("I"."OBJ#"=:B1)
      16 - access("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','curren
                  t_edition_id')))
           filter("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','curren
                  t_edition_id')))
      17 - access("O2"."DATAOBJ#"=:B1 AND "O2"."TYPE#"=88 AND "O2"."OWNER#"="U2"."USER#")
      21 - access("L"."OWNER#"="U"."USER#")
    Statistics
              0  recursive calls
              0  db block gets
           3397  consistent gets
             78  physical reads
              0  redo size
         908471  bytes sent via SQL*Net to client
          14213  bytes received via SQL*Net from client
           1256  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          18816  rows processed
    SQL>

  • Why does the Fireworks save for web function give better results than in Photoshop?

    Having used the trial version of Fireworks, I have noticed that the save for web function gives greater compression and better image quality than saving for web in Photoshop. Why is this?
    As Adobe is not continuing to develop Fireworks, does anyone know if they will improve the save for web function in Photoshop to match the Fireworks version?
    Also, can anyone recommend any third-party companies who will process large volumes of images for web?
    Thanks

    One of my favourite topics ;-P
    First, the save for web function in Photoshop has not seen any real updates in a long time. In Fireworks, PNG export allows for fully transparent, optimized files indexed to 256 or fewer colours, which is impossible in the save for web function in Photoshop; it is simply unsupported.
    This is one of the reasons why Fireworks does a much better job than Photoshop. Another reason is that Photoshop adds meta junk to its exported files, which also increases the file size (and should be removed, because a number of those fields include information about your setup).
    One other caveat is that Photoshop's save for web function does not allow a choice of chroma subsampling; instead it automatically decides, below a certain quality threshold, to degrade the colour sharpness. The user has no control over this. (Fireworks also limits the user this way.)
    One thing to be careful of: FW's jpg quality setting, and PS's quality settings are very different - a 50 in Photoshop is not the same 50 setting in Fireworks.
    For jpg optimization I generally use RIOT (free): http://luci.criosweb.ro/riot/
    (When you install, be careful NOT to install the extra junkware!)
    Fireworks cannot change the chroma subsampling to 4:2:0, which does degrade the quality a bit compared to RIOT and Photoshop in my experience. Photoshop adds useless meta information, even if the user tells it not to do that. RIOT allows you to remove that information, saving 6k. RIOT also offers an automatic mode that optimizes existing jpg images without degrading the quality further, and often saves 10k or more, depending on the images.
    Interestingly enough, in my tests exported Fireworks jpg images are always reduced in file size by RIOT, due to FW's jpg export limitations, without any image degradation.
    In my tests FW's jpg quality versus file size turns out to be the worst of all three. RIOT generally wins, or is at least on par with Photoshop.
    As for PNG export, Photoshop's save for web function is quite abysmal. No 256 colour full transparency export is possible, while Fireworks does support this.
    Having said that, there is a free alternative that leaves both Photoshop AND Fireworks in the dust: Color Quantizer. http://x128.ho.ua/color-quantizer.html
    CQ is an amazing little tool: with it anyone can create PNG files with full transparency and reduced to ANY number of colours! It means that a 512 colour PNG with full transparency is now very easy to do. On top of that, for more difficult images a simple quality mask brush tool allows the user to control and retain even small colour details in a PNG, while reducing the file size to an absolute minimum.
    CQ is one of the best kept secrets of a Web Developer's toolkit. And it is free!
    Both RIOT and Color Quantizer have a built-in batch mode. Both are available for Windows only, not for Mac. If you are on a Mac, try ImageOptim; it is not nearly as good as RIOT and CQ, but quite passable.
    PS: to be fair, the newest versions of Photoshop do allow export of 8-bit PNGs with full transparency through the Generator functionality. But again, it cannot compete with CQ. And as far as I am aware, Generator cannot be used in Photoshop's batch processing (which, by the way, is very slow; for simpler daily image processing tasks I have to do in batches, I prefer IrfanView, which is lightning fast).

  • Will insert (ignore duplicates) have a better performance than merge?

    Will insert (ignore duplicates) have a better performance than merge (insert if not duplicate)?

    OK, here is exactly what is happening:
    We had a table with no unique index on it. We used an 'insert all' statement to insert records.
    But later, when we found duplicates in there, we started removing them manually.
    Now, to resolve the issue, we added a unique index and added exception handling to ignore the DUP_VAL_ON_INDEX exception.
    But with this, all records being inserted by the 'INSERT ALL' statement get rejected even if only one record is a duplicate.
    Hence we have finally replaced 'insert all' with a merge statement, which inserts only if a corresponding record (matched on the unique index column) is not found in the table.
    But I am wondering how much performance will be impacted.
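    The statement-level behaviour described above can be sketched outside Oracle. The snippet below uses Python's sqlite3 purely as a stand-in (the syntax differs: Oracle would use MERGE, a DUP_VAL_ON_INDEX handler, or DML error logging) to illustrate why one duplicate sinks the whole multi-row insert, while an ignore-duplicates insert keeps the good rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t VALUES (1)")  # pre-existing row
conn.commit()

rows = [(2,), (1,), (3,)]  # one duplicate (id=1) among the new rows

# A plain multi-row insert fails as a whole when any row is a
# duplicate -- like the 'INSERT ALL' described above: ids 2 and 3
# are rolled back along with the offending row.
try:
    with conn:  # commits on success, rolls back on exception
        conn.executemany("INSERT INTO t VALUES (?)", rows)
except sqlite3.IntegrityError:
    pass

# Inserting with duplicate handling keeps the non-duplicate rows,
# which is the behaviour the merge-based rewrite was after.
with conn:
    conn.executemany("INSERT OR IGNORE INTO t VALUES (?)", rows)

print(sorted(r[0] for r in conn.execute("SELECT id FROM t")))  # prints [1, 2, 3]
```

    As for the performance question: both approaches still have to probe the unique index once per row, so the difference is usually small; measuring against your real data volumes is the only reliable answer.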

  • Is m4v better quality than a remuxed MKV file via Subler?

    I'm slowly building my home media server and putting my Blu-Ray's onto it. I normally do a Handbrake conversion for ATV3, so the MKV files end up being significantly smaller m4v's and the quality is outstanding. But I recently learned about the quick remux method using Subler, which quickly converts the MKV container into an m4v container without any quality loss and while keeping the same size file.  But I noticed that, say, a 29GB MKV file is a much poorer softer picture on my plasma TV than the same movie that's only a 9GB MKV file (remuxed to m4v with subler for streaming over ATV3). I'm running a 300mbps cable modem so the streaming shouldn't be a problem over my home wifi.  But I also noticed that the smaller m4v's (say a 3.5GB file that comes from a 9GB MKV file via Handbrake) seem to be slightly better quality than the 9GB file that was remuxed.  So it seems like the larger file should be even higher quality -- but I'm getting better results with a smaller file that's Handbrake'd from MKV to m4v.
    Is there some sort of streaming setting on the ATV3 that needs to be set or adjusted that will allow the full gorgeous pic quality of a 29GB file to stream right through to it, and look better than the Handbrake'd m4v file?  It feels like there's a bottleneck somewhere that's not letting all of the complete picture information through, and an intact, perfect 29GB file should look light years better than that 29GB file Handbrake'd down to 4GB.  Trying to figure this out before I continue down this home media server path cuz it's a lot of work to do these Blu's one at a time.
    Kirby

    I have no experience of the remuxing you describe, but interesting observations.
    There is nothing you can adjust on AppleTV - it will either play the encoded movie or it won't.
    AppleTVs generally play back the h264 codec (in an m4v container). There are many, many versions/levels of this codec, and each generation of AppleTV has been able to play slightly more sophisticated versions.
    I suspect, but cannot prove, that the issue you notice is due to the AppleTV attempting to support advanced h264 features while making compromises that affect playback quality; in other words, it is cutting corners to play back advanced h264 profile features rather than refusing. Handbrake, on the other hand, has time at its disposal. It has been refined over many years by dedicated enthusiasts, so if a simple remux were all that's required, I'd be surprised they had not implemented it. Instead, I suspect it more accurately processes enhanced h264 features before transcoding into a new, smaller m4v file. Equally, there might be settings in HB which artificially sharpen or otherwise alter the video in a way you prefer. I'd compare the Blu-Ray to the remuxed or HB versions to attempt to decide which was more faithful to the original, but even then it would depend on the Blu-Ray player's settings in some cases.

  • I want to transfer my iPhoto from my old MacBook Pro to my new MacBook Pro. I have a firewire or I could also do it from my TimeCapsule. Would one be a better method than the other?

    I want to transfer my iPhoto from my old MacBook Pro to my new MacBook Pro. I have a firewire or I could also do it from my TimeCapsule. Would one be a better method than the other?

    Hi brotherbrown,
    A direct FireWire transfer (especially if it's 800 to 800) is going to be the fastest method. TimeCapsule would work, if you connect to it via Ethernet, via wireless it would be quite slow (especially if you have a large library).

  • Adobe has made upgrades with the Dreamweaver CS6 fluid grids. Is there a better tutorial than this?

    Is there a better tutorial than this anywhere: http://blogs.adobe.com/dreamweaver/2013/02/updated-fluid-grids-in-dreamweaver.html ? I can't seem to understand this guy. I know he means well. I really want to learn fluid grids. Thanks, any recommendations?

    If you want to learn to make responsive Web pages, do not rely on Adobe's fluid grid system. Learn CSS yourself. If you need to rely on a grid framework, then you are barking up the wrong tree. I can absolutely guarantee that if you do not fully understand the theories behind responsive design and you use the Adobe fluid grid (or the open source frameworks/scripts that Adobe uses), your site will eventually break. Study and learn. If you are sharp enough, you will quickly come to understand that nearly every article and tutorial on the subject is plain wrong. The essence of responsive design is very easy and does not require anyone's code but your own.
    Al Sparber - PVII
    http://www.projectseven.com
    The Finest Dreamweaver Menus | Galleries | Widgets
    Since 1998

  • Does AirPort Express require better Internet than Ethernet?

    Wondering, since I was having intermittent problems with my Linksys router and trouble updating it, so I thought switching to an AirPort Express would give me a simple way to get things working.
    Very disappointed that now it's worse than before. Yesterday, after a number of tries, I did get it partly set up, and although the lights were blinking amber, I was able to use the Internet.
    I called AppleCare to resolve the amber light issue. She had me reset the modem, and when we were done, I again had a green light on the AirPort BUT NO INTERNET CONNECTION. She said to call my ISP, Cox.
    After a reset I did try connecting the MacBook Pro directly to Ethernet and got it to connect. Then I unplugged the Ethernet from the Mac and plugged it into the AirPort, and no connection again.
    So instead of the AirPort Express making things better, they are worse. I'm typing this on my iPhone 4S, which for some reason has really slowed down.
    I learned that my old Motorola Surfboard modem is DOCSIS 2.0 and Cox is switching to 3.0.
    But it was working better before, despite frequent brief losses of connection.
    So does the AirPort require a better modem or better connection than just Ethernet or my Linksys E1200?
    I'm also not happy that I'm functionally worse off after calling AppleCare. I know she didn't cause the issue, but her suggestion, while standard, tipped it over into not connecting.
    And typing all this on my iPhone is not fun, although it did just now temporarily get back to its usual speed for a bit, but now I'm back to having to wait for it.

    Well, I did the power-down reset. But I think you missed the import of my question.
    I'm talking about an older modem that may be marginal. I'm not positive about the two Ethernet cables I have used either.
    So yes, if the modem is working as it should, I guess it shouldn't make a difference.
    But something is weird when I can connect via Ethernet, and then I just unplug the Ethernet from my Mac and plug it into the WAN port on the AirPort Express, and there's no connection.
    Looking at the modem and AirPort across the room, the lights make it look like they are fine.
    I just don't have an Internet connection to my Mac. I did yesterday, with the amber lights flashing.
    And I was connecting with the Linksys, but had frequent losses of connection.
    So I'm wondering if maybe the AirPort has slightly greater requirements that act as a tipping point, so the connection worked without it but doesn't work with it. But the green light supposedly shows it's working fine.
    Well, I did put out a wanted request on FreeOC for a compatible DOCSIS 3.0 cable modem. Supposedly they work better, and that might just make the difference.
    I don't really want to buy a new modem when I'm about to move to a new area where requirements might be different.

  • A better way than a global temp table to reuse a distinct select?

    I get the impression from other threads that global temp tables are frowned upon so I'm wondering if there is a better way to simplify what I need to do. I have some values scattered about a table with a relatively large number of records. I need to distinct them out and delete from 21 other tables where those values also occur. The values have a really low cardinality to the number of rows. Out of 500K+ rows there might be a dozen distinct values.
    I thought that rather than 21 cases of:
    DELETE FROM x1..21 WHERE value IN (SELECT DISTINCT value FROM Y)
    It would be better for performance to populate a global temp table with the distinct first:
    INSERT INTO gtt SELECT DISTINCT value FROM Y
    DELETE FROM x1..21 WHERE value IN (SELECT value FROM GTT)
    People asking questions about GTT's seem to get blasted so is this another case where there's a better way to do this? Should I just have the system bite the bullet on the DISTINCT 21 times? The big table truncates and reloads and needs to do so quickly so I was hoping not to have to index it and meddle with disable/rebuild index but if that's better than a temp table, I'll have to make do.
    As far as I understand WITH ... USING can't be used to delete from multiple tables or can it?

    Almost, but not quite, as efficient as using a temporary table would be to use a PL/SQL collection with FORALL statements (and/or to reference the collection in your subsequent statements). Something like
    DECLARE
      TYPE value_nt IS TABLE OF y.value%type;
      l_values value_nt;
    BEGIN
      SELECT distinct value
        BULK COLLECT INTO l_values
        FROM y;
      FORALL i IN 1 .. l_values.count
        DELETE FROM x1
         WHERE value = l_values(i);
      FORALL i IN 1 .. l_values.count
        DELETE FROM x2
         WHERE value = l_values(i);
    END;
    or
    CREATE TYPE value_nt
      IS TABLE OF varchar2(100); -- Guessing at the type of y.value
    DECLARE
      l_values value_nt;
    BEGIN
      SELECT distinct value
        BULK COLLECT INTO l_values
        FROM y;
      DELETE FROM x1
       WHERE value IN (SELECT /*+ cardinality(v 10) */ column_value from table( l_values ) v );
      DELETE FROM x2
       WHERE value IN (SELECT /*+ cardinality(v 10) */ column_value from table( l_values ) v );
    END;
    Justin

  • What is better choice than Time Machine

    I have a Seagate GoFlex for Mac (2TB) external HD. I have fought with this drive and Time Machine for over a year. I have partitioned the drive (into 2) to use for other purposes; 500 GB is for backup. Often it won't back up. Sometimes it indicates it's read-only. I end up erasing the disk, etc. It's terrible as a backup situation. I'm wondering if I would be better off with another application for my backup process, something different than Time Machine.
    Or is there something I can do to correct my existing problem?
    One of my situations is that if I shut down my computer, then when I start up, the backup partition does not mount. Is the volume just too large? I'm really confused and upset with this ongoing problem.

    Different backup software won't help if the external drive chipset is the source of the flaky mounting behaviour and other issues.
    I use CCC and maintain an at-home and an off-site clone, but neither CCC nor SuperDuper really replaces TM as a file backup system. TM is ideally suited to finding and restoring specific files, or really, specific versions or time points of specific files. CCC/SuperDuper are single-time-point clones of your entire system, not really the same thing or useful in the same ways as TM.
    Relying on a drive that you know has problems, or issues that are likely or known to be tied to the actual hardware & firmware of the drive itself, is a disaster waiting to happen. I would never trust any backup solution to such a device; I would bite the bullet and buy a drive that was, in fact, a reliable hardware device instead.
    I've been using a 1TB WD My Book USB2 drive for TM for several years (so OS X 10.7, 10.8 and up) without any errors or problems with mounting on reboot or after ejection (my clones are on portable FireWire bus-powered drives). Had it ever shown any such issues, I would have stopped using it and gone to something that did not cause such problems; an unreliable backup volume or drive is an unreliable backup, period.
    It is also generally considered a "best practice" to use your backup drives for nothing but that: backups. A dedicated backup drive is far preferable to a backup volume on a drive that sees all sorts of alternate uses as well; using a drive for both backup and general file use just increases the chance of some logical or physical error leading to the loss of the entire drive's contents.

  • Better options than DAVE for Win NT connection?

    I work for a small Mac design group in a large business environment. We store Quark, InDesign, and Photoshop files on the company's Windows NT server. To do this they have installed DAVE, because they said just using OS X's built-in PC connectivity was causing the documents to be seen as .exe or unrecognizable files by the Mac. Lately some files are giving I/O errors on the server and getting corrupted. Is there a better option for communicating with a PC server than DAVE?

    For just using OS X, I've never heard of any problems like that; I'm wondering if they're just confused, or are they using a fairly fancy, unusual setup of some sort? Has anyone tried just connecting with the standard Mac feature? I would be curious to hear what breaks (since if something does break, it probably implies that someone either misconfigured the Macs or misconfigured the server).
    For the files in question: are you able to access these files from a Windows computer, and can you read them there? I'm wondering if the I/O error was a network problem (which DAVE might have a connection to), or just that their server's hard disk is failing.

  • Better software than Stickies (go with iCal and Omni Focus and Dropbox)

    I love iCal and I am digging Omni Focus.
    I use Dropbox.
    Is there something a little better than Stickies to go with this suite?
    Is there a better place to post this?
    Thanks

    I have all of the same issues, Jeff. What gives? This needs an immediate fix.
