Using TABLE() to improve performance... Am I on the right track?

I have a situation where I read data into a set of collections (let's assume 10,000 records and an emp_no collection).
I then process each record in a for loop. Based on conditions, a subsequent query is issued to one of two tables:
For i in emp_no.first .. emp_no.last loop
<<processing>>
if <<some condition>> then
select emp_age into age from tab_a where employee_number=emp_no(i);
else
select spouse_age into age from tab_b
where employee_number=emp_no(i) and {other conditions};
end if;
age_array(i) := age;
<<processing>>
end loop;
After the additional fields are retrieved, processing continues using the retrieved data.
<<additional processing>>
At the end of the processing I want to update a table's records with the values calculated during processing:
ForAll i in emp_no.first .. emp_no.last
Update retirement Set age = age_array(i) ......
where employee_number = emp_no(i);
I imagine the single-row SELECTs inside the loop will cause a lot of context switches between PL/SQL and SQL, which will significantly hurt performance.
After some review of the Oracle website I found the TABLE function. It appears I can use this to change my routine into a more efficient bulk-processing structure. Something like:
-- In the loop build a collection of emp_no's associated to each query
For i in emp_no.first .. emp_no.last loop
<<processing>>
if mod(emp_no(i), 2) = 0 then -- i.e. emp_no is even, standing in for the real condition
tab_a_emp_no_array.extend;
tab_a_emp_no_array(tab_a_emp_no_array.last) := emp_no(i);
else
tab_b_emp_no_array.extend;
tab_b_emp_no_array(tab_b_emp_no_array.last) := emp_no(i);
end if;
<<processing>>
end loop;
--After the loop use a Select... Bulk Collect Into statement with a where condition that references the collection values
Select employee_number, emp_age
bulk collect into emp_no_a, age_a
from tab_a
where employee_number in (select column_value from table(tab_a_emp_no_array));
Select employee_number, spouse_age
bulk collect into emp_no_b, age_b
from tab_b
where employee_number in (select column_value from table(tab_b_emp_no_array));
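For reference, here is roughly how I would declare the collections so that TABLE() accepts them (simplified; the type name emp_no_tab is made up, and as far as I understand the collection has to be a SQL-level type, at least on older versions):
create type emp_no_tab as table of number;
/
declare
  tab_a_emp_no_array emp_no_tab := emp_no_tab();
  tab_b_emp_no_array emp_no_tab := emp_no_tab();
  ...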
Using emp_no_a and emp_no_b, the age values can then be re-associated with the correct employees for further processing.
I have three concerns:
1. Am I understanding and using the TABLE function correctly? I don't think "pipelined processing" would help in this situation, correct?
2. I may end up with an IN clause that has thousands of elements. Will this perform poorly and eliminate any performance gains obtained from the bulk collect? Would "where exists (select 1 from table(tab_a_emp_no_array) where column_value=employee_number)" work any better?
3. Is there a better way to solve this issue of optimizing performance when various tables are conditionally queried during a loop?
I hope my issue is clear (obviously the code isn't accurate) and I thank you in advance for any insights!
Peace,
Larry

No.
I will repeat one of Tom Kyte's mantras here:
1. When you can do it in one SQL statement, you should do it in SQL.
2. When you cannot do it in SQL, you should do it in PL/SQL.
3. When you cannot do it in PL/SQL, you should do it in Java.
Which means: you should do things non-procedurally as often as possible. Quite often people resort to 3GL strategies too early.
An UPDATE inside a loop raises a red flag, especially if there is also a COMMIT inside that loop. That not only means you are into slow-by-slow (row-by-row) programming, it also increases the possibility of ORA-01555 (snapshot too old) errors.
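Just to illustrate the direction (a sketch only: the even/odd test stands in for the real condition, "{other conditions}" stays a placeholder, and source_of_emp_nos is a made-up name for wherever the 10,000 employee numbers come from), the whole loop could collapse into a single UPDATE along these lines:
update retirement r
set    r.age = case
                 when mod(r.employee_number, 2) = 0   -- stand-in for <<some condition>>
                 then (select a.emp_age
                       from   tab_a a
                       where  a.employee_number = r.employee_number)
                 else (select b.spouse_age
                       from   tab_b b
                       where  b.employee_number = r.employee_number
                       /* and {other conditions} */)
               end
where  r.employee_number in (select employee_number from source_of_emp_nos);
Only if the intermediate "processing" really cannot be expressed in SQL should you fall back to PL/SQL with bulk operations.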
Sybrand Bakker
Senior Oracle DBA

Similar Messages

  • Hi! I'm having problems with showing video files in Qlab on my Macbook Air. A sound/video technician told me to "blow out" my Mac. Was told to use cmd+r when restarting. Is this the right way?

    Hi! I'm having problems with showing video files in Qlab on my Macbook Air. It started suddenly. I consulted a sound/video technician who told me to "blow out" my Mac. I was told to use cmd+r when restarting. Is this the right way to clean up my Mac? And is it likely that some kind of bug is causing problems for Qlab to show video files? I've already tried a bunch of different video files and sometimes Qlab plays them and sometimes not. I need the Qlab playlist for a theatre show and only have a week until showtime, so I'm starting to really worry. Is there anyone out there who can help?

    Your Mac runs maintenance in the background for you.
    Command + R gives you access to restore, repair, or reformat the drive using OS X Recovery
    No idea why that was suggested.
    You may have a third party video player installed that's causing an incompatibility issue.
    Check these folders:
    /Library/Internet Plug-Ins/
    /Library/Input Methods/
    /Library/InputManagers/
    /Library/ScriptingAdditions
    ~/Library/Internet Plug-Ins/
    ~/Library/Input Methods/
    ~/Library/InputManagers/
    ~/Library/ScriptingAdditions
    The first four locations listed are in the root-level Library on your hard disk, not the user-level Library in your Home folder. The tilde (~) represents your Home folder.
    To access the Home folder in OS X Lion or Mountain Lion, open the Finder, hold the Option key, and choose Go > Library.

  • I'm unable to view a multiple page pdf document using Safari. A locked padlock appears in the right corner of the tab. Thanks

    I'm unable to view a multiple page pdf document using Safari. A locked padlock appears in the right corner of the tab. Thanks

    Hello prgc37, your original problem was likely caused by a faulty extension that you have installed. Can you try to [[Reset Firefox – easily fix most problems|reset Firefox]] and see if this can address the issue...
    Otherwise, such connection issues are also often caused by a firewall/security software which doesn't recognize and therefore blocks new Firefox versions. Please remove all program rules for Firefox from your firewall and let it detect the new version of the browser again.
    [[Fix problems connecting to websites after updating Firefox]]
    Finally, please also [https://www.mozilla.org/en-US/plugincheck/ update your plugins] (Flash seems a bit out of date).

  • Using hints to improve performance

    Hi I am trying to create table using the below stmt. I am giving the counts on the tables also.
    It is taking 5-6 minutes to create the table. I am trying to improve its performance.
    Are my hints correct? What is nested look?
    Any suggestions.
    Thanks.
    create table dt_25_temp as
    select /*+ index(dt,DT_25_IDX) index(cl, T_CL_CLIENT_ID_CL_ID_IDX1 ) index(ml,T_MAILING_IDX2) index (ms ,t_ms_pk) use_nl(dupe,master)*/
    dt.id ,dt.l_id , dt.cl_id
    , ml.ml_id
    , ml.ml_u3
    , ms.t_cd
    , ml.ml_u1
    , ml.ml_u2
    , ml.ml_u4
    , ml.ml_u5
    from Dt_25 dt
    , t_ml ml
    , t_cl cl
    , t_ms ms
    where dt.cl_id = cl.id
    and cl.ml_id = ml.id
    and dt.ms_id = ms.id
    and cl.client_id = 12345
    and ml.client_id = 12345
    select count(1) from t_cl--1576262 records
    select count(1) from t_ml--280682
    select count(1) from t_ms--10341585
    select count(1) from dt_25--1092469 ( this is a temporary table I create daily; it may have 1,000 to 1M records )
    DT_25_IDX is on column ID
    T_CL_CLIENT_ID_CL_ID_IDX1 is on client_id and id
    T_ML_IDX2 on id,cl_id
    t_ms_pk is on column id

    user10405034 wrote:
    Hi I am trying to create table using the below stmt. I am giving the counts on the tables also.
    It is taking 5-6 minutes to create the table. I am trying to improve its performance.
    Are my hints correct? What is nested look?
    Any suggestions.
    First of all, why are you using hints? If you have representative statistics in place and you're using the cost-based optimizer, it should be able to come up with a reasonable plan by itself. If not, you need to find out why. You should use hints only as a last resort, if you know something about your data that the optimizer simply doesn't know and therefore can't recognize.
    I wonder why you use the USE_NL hint if you're asking what a "nested look" (I assume you meant NESTED LOOP) is, which suggests that you don't know what this particular hint is about.
    - Can you show us the output of EXPLAIN PLAN using DBMS_XPLAN.DISPLAY? Wrap it in the forum's code tags to get proper formatting in a fixed font. (A short example of how to generate that output follows after these questions.)
    - What 4-digit version (e.g. 10.2.0.4) of Oracle are you using?
    - Given the fact that your DT_25 table might have significantly different sizes, the question is whether the data from the other tables joined to will be restricted by the join or not. If your DT_25 table is small, an index access into the other tables might be faster, whereas if your DT_25 table is large and covers a lot of data from the other tables, a full table scan of those tables might be more appropriate.
    - How restrictive are these filter predicates specified:
    and cl.client_id = 12345
    and ml.client_id = 12345
    i.e. how many rows from the totals provided correspond to those predicates?
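    In case it helps, the pattern for producing that output in SQL*Plus is shown below (illustrative only, using the statement from above with the column list shortened and the hints left out):
    explain plan for
    create table dt_25_temp as
    select dt.id, dt.l_id, dt.cl_id, ml.ml_id
    from   dt_25 dt, t_ml ml, t_cl cl, t_ms ms
    where  dt.cl_id = cl.id
    and    cl.ml_id = ml.id
    and    dt.ms_id = ms.id
    and    cl.client_id = 12345
    and    ml.client_id = 12345;
    select * from table(dbms_xplan.display);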
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Table prefix improving performance

    I read in the Oracle manual that when using joins it is good practice to use table prefixes, as it increases performance. How?

    f7218ad2-7d9f-4e71-ba26-0d6e4b38f87e wrote:
    I read in the Oracle manual that when using joins it is good practice to use table prefixes, as it increases performance. How?
    Uh, maybe so the parser doesn't have as many hoops to jump through to resolve non-specific object references.
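    For instance (illustrative only, using the standard SCOTT demo tables):
    select e.empno, e.ename, d.dname      -- columns qualified with table aliases ("prefixes")
    from   emp e, dept d
    where  d.deptno = e.deptno;
    select empno, ename, dname            -- unqualified: the parser must first work out which table owns each column
    from   emp, dept
    where  dept.deptno = emp.deptno;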
    You could cite your reference ...
    ============================================================================
    BTW, it would be really helpful if you would go to your profile and give yourself a recognizable name.  It doesn't have to be your real name, just something that looks like a real name.  Who says my name is really Ed Stevens?  But at least when people see that on a message they have a recognizable identity.  Unlike the system generated name of 'ed0f625b-6857-4956-9b66-da280b7cf3a2', which is no better than posting as "Anonymous".
    All you ed0f625b-6857-4956-9b66-da280b7cf3a2's look alike . . .
    ============================================================================

  • I'm embarrassed to ask this: when I click to initiate a new email message in the "From" box it automatically chooses the wrong email address (I have three but use one mainly). How can I get the right choice to appear automatically?

    I'm embarrassed to ask this: when I click on the new mail icon to write a new email message, the wrong email address appears in the "FROM" box: I have three but mainly use one. How can I correct this so that the right email address appears in the "FROM" box?

    I assume this is Apple Mail. Try this:
    Preferences > Accounts > your-first-account
    Select Outgoing Mail Server of account/name you wish to use and check Use only this server
    Repeat for other accounts.

  • Using 16:9 and 4:3 footage in the same track

    Hi,
    I've just made the step up to DVD Studio Pro and am currently trying to create a new showreel based on a previous iDVD project. In terms of using DVD Studio Pro - some of it is very simple to get to grips with, but I have a question about using footage with different aspect ratios in the same track.
    If I have, say, one movie which is 16:9 FHA (Clip A), and another which is 4:3 (Clip B) - is there a way to have these play in the same track in their correct format. For example, when viewing the track on a 16:9 screen I would like Clip A to fill the screen, and Clip B to appear with a black bar on the left and right of the picture. When watching the track on a 4:3 monitor, however, I would like Clip B to fill the screen, and Clip A to appear letterboxed. Is this possible?
    FYI, I create the track in Final Cut Pro - marking the sequence as anamorphic and then exporting as a DV Pal Anamorphic Quicktime movie.
    Any help would be greatly appreciated,
    Jack

    No - to do what you're asking, you cannot mix aspect ratios or Display Modes in a single DVDSP track. You must use two tracks and connect them (usually via scripting).
    In case that's confusing, you have two anamorphic clips, right? Clip A is a true Anamorphic 16:9 clip (it fills the full frame) while Clip B is a pillarboxed 4:3 clip. That is, for clip B, you dropped it into an anamorphic 16:9 sequence in FCP so that there are black bars on the side, and then exported it, right? In case you haven't that's what you need to do.
    Now, in order to get the playback you desire (Clip A at full frame, Clip B pillarboxed on a 16:9 display; Clip A letterboxed, Clip B full frame on a 4:3 display) you'll need to place the two clips on different tracks. For the track housing Clip A, the Display Mode (in the Inspector palette) should be set to 16:9 Letterbox (which is responsible for letterboxing when connected to a 4:3 display). For the track containing Clip B, set the Display Mode to 16:9 Pan-Scan (which tells the DVD player to crop off the left and right edges when connected to a 4:3 display). When connected to a 16:9 display - provided the DVD player is not misconfigured - both tracks will play out correctly since both tracks are tagged as 16:9 (no letterboxing or cropping will occur).
    Of course, the key thing is to join the tracks. For that, it's best to use scripting (in my opinion) if you're planning on doing a Play A, Play B, Play All scenario. Or, if you're keeping it extra simple (Play means it'll play both segments) you can simply join one track to another via the Connections tab. In both cases though, there will be a slight pause during the jump from one track to another. How much of a pause is dependent on the DVD player.
    Or did I misunderstand what you're hoping to do?

  • Modify a SELECT Query on ISU DB tables to improve performance

    Hi Experts,
    I have a SELECT query in a Program which is hitting 6 DB tables by means of 5 inner joins.
    The outcome is that the program takes an exceptionally long time to execute, the SELECT statement being the main time consumer.
    Need your expertise on how to split the Query without affecting functionality -
    The Query :
    SELECT fkkvkp~gpart eabl~ablbelnr eabl~adat eabl~istablart
      FROM eabl
      INNER JOIN eablg  ON eablg~ablbelnr = eabl~ablbelnr
      INNER JOIN egerh  ON egerh~equnr    = eabl~equnr
      INNER JOIN eastl  ON eastl~logiknr  = egerh~logiknr
      INNER JOIN ever   ON ever~anlage    = eastl~anlage
      INNER JOIN fkkvkp ON fkkvkp~vkont   = ever~vkonto
      INTO TABLE itab
    WHERE eabl~adat GT [date which is (sy-datum - 3 years)]
    Thanks in advance,
    PD

    Hi Prajakt
    There are a couple of issues with the code provided by Aviansh:
    1) Higher Memory consumption by extensive use of internal tables (possible shortdump TSV_NEW_PAGE_ALLOC_FAILED)
    2) In many instances multiple SELECT ... FOR ALL ENTRIES... are not faster than a single JOIN statement
    3) In the given code the timeslices tables are limited to records active of today, which is not the same as your select (taking into account that you select for the last three years you probably want historical meter/installation relationships as well*)
    4) Use of sorted/hashed internal tables instead of standard ones could also improve the runtime (in case you stick to all the internal tables)
    Did you create an index on EABL including columns MANDT, ADAT?
    Did you check the execution plan of your original JOIN Select statement?
    Yep
    Jürgen
    You should review your selection, because you probably want the business partner that was linked to the meter reading at the time of ADAT, while your select doesn't take the specific Contract / Device Installation at the time of ADAT into account.
    Example your meter reading is from 16.02.2010
    Meter 00001 was in Installation 3000001 between 01.02.2010 and 23.08.2010
    Meter 00002 was in Installation 3000001 between 24.08.2010 and 31.12.9999
    Installation 3000001 was linked to Account 4000001 between 01.01.2010 and 23.01.2011
    Installation 3000001 was linked to Account 4000002 between 24.01.2010 and 31.12.9999
    This means your select returns four lines and you probably want only one.
    To achieve that you have to limit all timeslices to the date of EABL-ADAT (selects from EGERH, EASTL, EVER).
    Update:
    Coming back to point one and the memory consumption:
    What are you planning to do with the output of the select statement?
    Did you get a shortdump TSV_NEW_PAGE_ALLOC_FAILED with three years of meter reading history?
    Or did you never run on production like volumes yet?
    Dependent on this you might want to redesign your program anyway.
    Edited by: sattlerj on Jun 24, 2011 10:38 AM

  • What would be a sensible way to setup my Gigabit network? Will using both NICs improve performance at all?

    Hi everyone,
    I work in a small  digital agency, we do some post production, animation and some web dev.
    I'm looking to improve slightly our networking setup as it is a little disorganised and not always as fast as it should be. In the long term we're looking at spending sensible money on faster centralised storage, fibre etc, but for another year or so I need to do something more affordable based around what we already have.
    In brief, we have currently one Mac Pro Da Vinci grading workstation with attached highspeed storage RAID, and four Mac Pro edit machines for Premiere/After Effects. There are also various other machines and NAS devices, printers etc.
    Currently everything is on one unmanaged Gigabit switch. Each edit machine has a 4 disk RAID set of SATA disks, two of which are in what used to be the SuperDrive bays before we took them out. There is an SSD for the boot drive, and then one empty slot for transfering data in and out on other disks etc. When a project is ready to grade it's moved over to the storage attached to the grading machine over the network. That all works fairly well for us at the moment.
    However, with increasing frequency we're now working on projects with a few animators and editors at once, working over the network on the same files rendering them to one or other Mac Pro machine. I was wondering whether there is any way to improve our network architecture to make this a little faster (I know that ultimately we need a more expensive multi user centralised storage system of some sort, but in the short term..).
    I was considering a couple of options-
    1. Create a content only network on one NIC of each Mac Pro, into a dedicated switch, with jumbo frames turned on identically configured across all machines. Then on the second NIC join them into the 'everything else' network so they can access printers, the Internet, the slower NAS admin shares etc.
    or
    2. Configure a virtual ethernet device on each Mac aggregating the two NICs together (is that LACP?), taking both ethernet lines into the switch, and also using that switch for every other device (but connected in the normal manner).
    Is either of those a sensible way to go? I understand that there isn't any speed increase over link aggregated NICs when only doing one transfer, but is there any performance benefit at all if the machine is getting different files from different machines simultaneously, or if two machines are rendering over the network to one?
    I currently have one unmanaged Netgear JGS516 switch and one TP Link SG1024DE half-smart switch (it has some useful functions but not what you'd expect from a more expensive model). Happy to buy a different switch if there's more management to be done, though I don't want to spend a huge amount at this moment.
    Any advice whatsoever gratefully received, I could be way off here
    Thanks all.

    Thanks Grant,
    My knowledge of Jumbo Frames is limited.
    A lot of what I know comes from here- http://www.smallnetbuilder.com/lanwan/lanwan-features/30201-need-to-know-jumbo-frames-in-small-networks?start=3
    One suggestion in there is to separate Jumbo and non-Jumbo supporting devices so that there is no impact on speed.
    I don't know anything about store-and-forward, I'll have to read up.
    I'm the same about Full Duplex

  • Does using TABLE FUNCTIONS degrade performance

    Hi,
    I am using a TABLE function: TABLE(TABLEA).
    I am using a SQL statement where I am joining the TABLE function TABLE(TABLEA) and a normal table, TABLEB,
    i.e.
    SELECT CODE, SUD FROM TABLEB, TABLE(TABLEA)
    WHERE CODE = CODE1 (CODE1 is an Object Type variable).

    user598986 wrote:
    I am using a TABLE function: TABLE(TABLEA).
    I am using a SQL statement where I am joining the TABLE function TABLE(TABLEA) and a normal table, TABLEB,
    i.e.
    SELECT CODE, SUD FROM TABLEB, TABLE(TABLEA)
    WHERE CODE = CODE1 (CODE1 is an Object Type variable).
    One particular issue with table functions and joins is that the optimizer doesn't have a clue about the cardinality, i.e. the number of rows returned by the table function, and therefore applies defaults which might be way off. If you somehow know roughly how many rows are going to be returned (it may be a fairly constant number of rows, or your process knows the number of rows in a collection, etc.) then you can help the optimizer by using the (undocumented) CARDINALITY hint, e.g.
    SELECT /*+ CARDINALITY (A, 100) */ CODE,SUD FROM TABLEB B,TABLE(TABLEA) A...
    tells the optimizer that the table function is going to return 100 rows. Note the usage of the alias in the hint.
    But bear in mind that this only helps partially, some other basic information like column statistics are still missing and therefore the estimates of the optimizer still might be inaccurate.
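    For context, a minimal self-contained sketch of the hint and the alias used together with a collection (the type id_tab and the values are made up, and the join on COLUMN_VALUE is just for illustration; TABLEB and CODE are taken from your post):
    create type id_tab as table of number;
    /
    declare
      l_ids id_tab := id_tab(101, 102, 103);
      l_cnt number;
    begin
      select /*+ cardinality(t, 3) */ count(*)   -- roughly the number of elements in the collection
      into   l_cnt
      from   tableb b, table(l_ids) t
      where  b.code = t.column_value;
      dbms_output.put_line(l_cnt);
    end;
    /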
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Need help with PC build for use with Photoshop CS4 Extended - Have I made the right choices?

    Hello,
    I'm very new to the inner workings of a PC...this will be first time building one from scratch.  I know enough to be dangerous but not enough to be confident.  I will use the PC primarily for editing, secondarily for iTunes management.  I've heard that I should have, at the very least, an SSD for PS, other apps and the O/S.  I'll also have a hard drive for data storage.  Do I need yet another SSD for scratch?  If so, any recs for one?  Also, my current build (which does not include any SSDs) is about $800.  What can I skinny down, without sacrificing a huge amount of speed/performance?  I am looking for major guidance here...I'm ready to order, but I keep second-guessing my choices.  I want to make sure I get this right the first time around.  I want this machine to last me a good long while.  Any help at all will be HUGELY appreciated! 
    Here are my current picks:
    Part list permalink: http://pcpartpicker.com/p/1bwi
    Part price breakdown by merchant: http://pcpartpicker.com/p/1bwi/by_merchant
    CPU: Intel Core i5-2500K 3.3GHz Quad-Core Processor  ($214.99 @ SuperBiiz)
    Motherboard: Gigabyte GA-P67A-D3-B3 ATX  LGA1155 Motherboard  ($104.99 @ Newegg)
    Memory: G.Skill Ripjaws Series 16GB (4 x 4GB) DDR3-1333 Memory  ($85.00 @ Newegg)
    Hard Drive: Samsung Spinpoint F3 1TB 3.5" 7200RPM Internal Hard Drive  ($49.99 @ Newegg)
    Video Card: XFX Radeon HD 6850 1GB Video Card  ($129.99 @ Newegg)
    Case: Cooler Master Elite ATX Mid Tower Case  ($60.00 @ Amazon)
    Power Supply: Corsair 500W ATX12V Power Supply  ($49.99 @ Newegg)
    Optical Drive: Lite-On iHAS124-04 DVD/CD Writer  ($19.99 @ SuperBiiz)
    Operating System: Microsoft Windows 7 Home Premium SP1 (64-bit)  ($91.98 @ Amazon)
    Total: $806.92
    (Prices include shipping and discounts when available.)
    (Generated 2011-09-19 18:29 EDT-0400)
    Thanks!
    Andrea

    Make sure the motherboard is the third revision as the earlier two had problems with the i5-2500K (Sandy Bridge)
    I prefer Asus motherboards but doesn't mean the Gigabyte is a problem. (I note that it scored 4 eggs...)
    May not make a difference in Photoshop but for Premiere the Geforce video cards with GDDR5 are preferred.
    The LiteOn iHas 224 does lightscribe which is a nice touch for clients/gifts.
    There is no room to save as I would consider your build minimum specs for a nice photoshop experience.
    As for SSD drives...my research of reviews and articles makes me feel the technology is not ready for mission critical work. So if you are running a business then I recommend not using SSD drives. Rather look at Raptor 10,000 rpm drive for Windows and Photoshop and use the Spinpoint for storage. If not business then I've read the intel x25 SSD drives are solid but I have no personal experience with them.
    Cheers,
    Steve

  • HT201317 I can't upload my videos from my iPhone 4S to my computer. I'd appreciate it if anybody could help me with this... someone said use Safari or Dropbox... is this the right way to go about it?

    I can't upload my videos from my iPhone 4S to my computer. What software should I use to do this?

    As far as I know you can't delete the primary email address for an iCloud account.  It's assigned when the account is created.  But your neighbor wouldn't have been able to get into your iCloud account without your Apple ID and password.  Are you sure the account wasn't still on your phone when you gave it to him?
    You could migrate a copy of your data to a new iCloud account but I would still be concerned that someone else was using my old account, which presumably still has your data in it.
    I'm fairly certain that you're going to have to have iCloud support help you sort this one out as they may have the ability to make changes to an existing account that users can't.  Make an appointment with the genius bar at a nearby Apple store and have them take a look at it.  If necessary, they should be able to contact iCloud support for you.

  • Multi table inheritance and performance

    I really like the idea of multi-table inheritance, since I have a main
    class and three subclasses which just add one integer to the main class.
    It would be a waste to spend 4 tables on this, so I decided to put them
    all into one.
    My problem now is, that when I query for a specific class, kodo will build
    SQL like:
    select ... from table where
    JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
    this is pretty slow when the table grows, because string comparisons are
    awful - and even worse: the database has to compare nearly the whole
    string because it differs only in the last letters.
    Indexing would help a bit but wouldn't outperform integer comparisons.
    Is it possible to get kodo to do one more step of normalization ?
    Having an extra table containing all class names and ids for them (and
    references in the original table) would improve performance of
    multi-tables quite a lot !
    Even with standard classes it would save a lot of memory not having the full
    classname in each row.

    Stefan-
    Thanks for the feedback. Note that 3.0 does make this simpler: we have
    extensions that allow you to define the mechanism for subclass
    identification purely in the metadata file(s). See:
    http://solarmetric.com/Software/Documentation/3.0.0RC1/docs/manual.html#ref_guide_mapping_classind
    The idea for having a separate table mapping numbers to class names is
    good, but we prefer to have as few Kodo-managed tables as possible. It
    is just as easy to do this in the metadata file.
    In article <[email protected]>, Stefan wrote:
    First of all: thx for the fast help, this one (IntegerProvider) helped and
    solves my problem.
    kodo is really amazing with all it's places where customization can be
    done !
    Anyway, as a wish for future releases: exactly this technique - using
    integers as class identifiers rather than the full class names - is what I
    meant by "normalization".
    The only thing missing, is a table containing information of how classIDs
    are mapped to classnames (which is now contained as an explicit statement
    in the jdo-File). This table is not mapped to the primary key of the main
    table (as you suggested), but to the classID integer which acts as a
    foreign key.
    A query for a specific class would be solved with a query like:
    select * from classValues, classMapping where
    classValues.JDOCLASSX=classmapping.IDX and
    classmapping.CLASSNAMEX='de.company.whatever'
    This table should be managed by kodo of course !
    Imagine a table with 300.000 rows containing only 3 different derived
    classes.
    You would have an extra table with 4 rows (base class + 3 derived types).
    Searching for the classID is done in that 4row table, while searching the
    actual class instances than would be done over an indexed integer-classID
    field.
    This is much faster than having the database doing 300.000 String
    comparisons (even when indexed).
    (By the way - it would save a lot of memory as well, even on classes which
    are not derived)
    If this technique were done by kodo transparently, maybe turned on with an
    extra option ... that would be great, since you wouldn't need to take care
    of different "subclass-indicator-values", could go on as usual, and would have
    far better performance ...
    Stephen Kim wrote:
    You could push off fields to separate tables (as long as the pk column
    is the same); however, I doubt that would add much performance benefit
    in this case, since we'd simply add a join (e.g. select data.name,
    info.jdoclassx, info.jdoidx from data, info where data.jdoidx = info.jdoidx and
    info.jdoclassx = 'foo'). One could turn off default fetch group for
    fields stored in data, but now you're adding a second select to load one
    "row" of data.
    However, we DO provide an integer subclass provider which can speed
    these sorts of queries a lot if you need to constrain your queries by
    class, esp. with indexing, at the expense of simple legibility:
    http://solarmetric.com/Software/Documentation/2.5.3/docs/ref_guide_meta_class.html#meta-class-subclass-provider
    Stefan wrote:
    I really like the idea of multi-table inheritance, since I have a main
    class and three subclasses which just add one integer to the main class.
    It would be a waste to spend 4 tables on this, so I decided to put them
    all into one.
    My problem now is, that when I query for a specific class, kodo will build
    SQL like:
    select ... from table where
    JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
    this is pretty slow when the table grows, because string comparisons are
    awful - and even worse: the database has to compare nearly the whole
    string because it differs only in the last letters.
    Indexing would help a bit but wouldn't outperform integer comparisons.
    Is it possible to get kodo to do one more step of normalization ?
    Having an extra table containing all class names and ids for them (and
    references in the original table) would improve performance of
    multi-tables quite a lot !
    Even with standard classes it would save a lot of memory not having the full
    classname in each row.
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Best Upgrade to Improve Performance?

    I'm looking for the most economical way to boost Aperture's performance. The specs on my equipment are as follows:
    Dual 2.0 ghz G5 Tower (first generation)
    160 gb internal HD (where my current projects and masters are stored)
    320 gb external Lacie HD (for backup, and where I move older master files)
    ATI Radeon 9600 Pro 64 mb (stock)
    2 gb RAM
    OS X 10.4.9, just reformatted and reinstalled over the weekend
    20" Apple Cinema Display (ADC)
    All photos are from my Nikon D80, which is 10 mp, so 2888x3600 pixels.
    In a nutshell, Aperture really chokes on these files. It can handle 10 mp JPEG's OK, or 6 mp RAW files OK, but 10 mp RAW files are just a bit too much for it. Loading an image can take a long time, and as adjustments add up it really bogs down. I basically can't use sliders for anything, I just have to type in a value or click along the slider bar and let it jump there. Rotate is completely unusable. It still WORKS, but it's just slow and inefficient. Exporting JPEGs for uploading to my website takes painfully long.
    I bumped my RAM up to 2 gb a few months ago, and it hasn't made as much of an improvement as I'd hoped. Of course Aperture and OSX have the tendency to gobble up whatever RAM they can, and I suspect the machine will still end up paging out at some point no matter how much RAM I add.
    Hard drive access doesn't seem to be too much of a problem, at least until the memory starts paging anyway. I have an ok amount of capacity so far, but my gut is telling me that this isn't the bottleneck. From everything I've read on the forum here, I suspect it's my lame graphics card. While I wouldn't mind having 4 gb of RAM and 1 tb of hard disk, I doubt that's going to help me as much as a Radeon X800 XT would in the near-term. Do you all agree with that? Is that the only real solution, this being an AGP Mac with an ADC monitor?
    As much as I'd like a new Mac Pro, I just don't see that happening. What I'd want would be in the $3,000-4,000 range, and some of my workhorse programs are not universal yet. I still use Photoshop 7, Illustrator 10, and GoLive 6, so I'd need to up to CS3 for full compatibility. $$$ I do some CAD work with PowerCADD, and that's not scheduled to go universal for a while yet. Then there's my one favorite game, SimCity 4. I did download a beta universal binary from Aspyr, just to see if they'd fixed any of the other long-standing bugs in the program (they haven't). SimCity is really the only other place where I've been disappointed in my computer's performance. I think it's more processor bound than Aperture is in this case, but a better video card should certainly help it some. I do also have a few classic programs that I need to access from time to time (namely ArcView), which are pretty processor intense. Thus it doesn't look like taking the plunge to an Intel Mac would be a good idea just yet. Maybe in another year or so.
    I did have a chance to play with Aperture on a Mac Pro at the Apple Store a month or so ago. While it wasn't glass smooth with the 10 mp files on the machine, it was certainly a lot better than my computer, just not worth the price of a whole new machine. Does it look like I'm on the right track here, or might there be a different area I can optimize performance?
    Thanks

    I agree, if you can find a video card, that would help a lot.
    On the other hand, you might consider getting an iMac 20" and selling your current machine. I have an iMac 20" with 2 gigs of RAM and it runs Aperture very well. You can get a deal on one of Apple's refurbished units too. The 24" has the better video cards and FW800. But, if money is the issue, I would buy the 20" and upgrade your software. Again, with Aperture, buy the best video card that you can. You will love Photoshop CS3.
    I hope this helps.
    Kevin Hawkins

  • Brush Lag? - Here are OpenGL settings that improve performance

    For anyone having horrible Brush lag, please try the following settings and report your findings in this thread.
    After much configuring and comparing CS4 vs CS3, I found that these settings do improve CS4's brush lag significantly. CS3 is still faster, but these settings made CS4 brush strokes a lot more responsive.
    Please try these settings and share your experiences.
    NOTE: these do not improve clone tool performance; the best way to improve clone performance right now seems to be to turn off the clone tool's overlay feature.
    Perhaps Adam or Chris from Adobe could explain what is happening here. The most significant option that improved performance appears to be the "Use for Image Display - OFF". I have no idea what this feature does or does not do but it does seem to be the biggest performance hit. The next most influential setting seems to be "3D Interaction Acceleration - OFF"
    Set the following settings in Photoshop CS4 Preferences:
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - OFF
    Color Matching - ON

    Hi guys,
    As I am having very few problems with my system, I thought I should post my specs and settings for comparison purposes.
    System - Asus p5Q deluxe,
    Intel quad 9650 3ghz,
    16gb pc6400 800Mhz ram,
    loads of drives ( system drive on 10K 74gb Raptor, Vista partition on fast 500gb drive, Ps scratch on an another fast 500gb drive, the rest are storage/bk-ups ),
    Gainward 8800GTS 640mb GPU,
    30ins monitor @2560x1600 and a 24ins 1920x1200,
    Wacom tablet.
    PS CS4 x64bit
    Vista x64
    All latest drivers
    No Antivirus. Index and superfetch is ON, Defender is ON
    No internet connection except for updates
    No faffing around with vista processes
    Wacom Virtual HID and Wacom Mouse Monitor are disabled
    nVidia GPU set to default settings
    On this system I am able to produce massive images; the last major size was 150x600cm @ 200ppi, and the brushes are smooth until they increase to around 700+ pixels, when there is a slight lag of around 1 second if I draw fast; if I take it slow then there's no lag. All UI is snappy.
    I have the following settings in Photoshop CS4 Preferences:
    Actual Ram: 14710MB
    Set to use (87%) :12790MB
    Scratch Disk
    on a separate fast 500gb - to become a 80gb ssd soon
    History state: 50
    Cache: 8
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - ON
    Color Matching - ON
    I hope this helps in some way too...
    EDIT: I should also add that I defrag all my drives with third-party defrag software every night due to the large image files.
