Huge VO Performance Difference (Repost with corrections)

Performance question
I'm comparing the performance of two view object instance models.
The structure is shown below:
SalesActivityTView1
--ActivityEntryViewLink1
--ActivityEntryTView1
--------ViewLink1
--------ViewLink2
--------ViewLink3
--------ViewLink8
SalesActivityTView2
--ActivityEntryViewLink2
--ActivityEntryTView2
(no view links below ActivityEntryTView2)
- both SalesActivityTView1 and SalesActivityTView2 are based on the same view object definition
- both ActivityEntryTView1 and ActivityEntryTView2 are based on the same view object definition
- both ActivityEntryViewLink1 and ActivityEntryViewLink2 are based on different view link XML definitions, but the entries of both files are identical
Using the Oracle Business Component Browser, I clicked on the ActivityEntryViewLinks (one at a time) to view the SalesActivityTView and ActivityEntryTView screens in a master-detail form. I tried scrolling through the data using the navigation bar of SalesActivityTView and noticed a huge difference in performance.
Scrolling through each SalesActivityTView1 entry took about 5-6 seconds (before the data were refreshed), whereas using SalesActivityTView2 took less than a second.
Please note that I haven't opened the forms for the other view objects of the view links under ActivityEntryTView1 yet.
Removing the eight view links under ActivityEntryTView1 is not an option for me, as I'm using the panel binding in my Swing-based application to automatically synchronize the data based on the selected ActivityEntryTView row.
I'm currently using JDeveloper 9i v9.0.3.2; the performance difference is about the same whether I run in three-tier or two-tier mode.
Can anyone help me out in resolving this performance issue?
-Neil

The difference is due to view-link coordination: there are more view links in the first application module.
See the OTN article "Performance Tips for Swing-Based BC4J Applications".
In particular, this part, which I have copied for you:
Keep an Eye Out for Lazy Master/Detail Coordination Opportunities
More sophisticated user interfaces might make use of Swing's tabs or card layouts to have a set of panels which are conditionally displayed (or displayed only when the user brings them to the foreground). While not automatic in the 9.0.3 release, BC4J does offer APIs like setMasterRowSetIterator and removeMasterRowSetIterator on any RowSet, which allow you to dynamically add and remove iterators from the list of those that will cause that row set to be actively coordinated by the framework. Using these APIs in a clever way (where possible, from within a server-side AM custom method, of course!) you can have your application automatically coordinate the detail queries for regions of the screen that the user can see, and suppress the active coordination for data that the user cannot currently see on the screen.
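The pattern the article describes can be sketched in plain Java. This is an illustration only: every class below is an invented stand-in, not a BC4J type; in real BC4J code the relevant calls are setMasterRowSetIterator and removeMasterRowSetIterator on oracle.jbo.RowSet.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: a "master iterator" re-queries each registered detail
// row set whenever the current row changes. Details that are not registered
// (e.g. panels on a hidden tab) pay no coordination cost.
public class LazyCoordination {

    static class DetailRowSet {
        int refreshCount = 0;
        void refresh() { refreshCount++; } // stands in for a detail re-query
    }

    static class MasterIterator {
        private final List<DetailRowSet> coordinated = new ArrayList<>();

        // analogous to calling setMasterRowSetIterator on the detail row set
        void addDetail(DetailRowSet d) { coordinated.add(d); }
        // analogous to removeMasterRowSetIterator
        void removeDetail(DetailRowSet d) { coordinated.remove(d); }

        void next() { // scrolling the master row by row
            for (DetailRowSet d : coordinated) {
                d.refresh(); // every actively coordinated detail re-queries
            }
        }
    }

    public static void main(String[] args) {
        MasterIterator master = new MasterIterator();
        DetailRowSet visible = new DetailRowSet();
        DetailRowSet hidden = new DetailRowSet();

        // Only the visible panel's detail is coordinated while scrolling.
        master.addDetail(visible);
        for (int i = 0; i < 5; i++) master.next();

        // User brings the hidden tab forward: start coordinating it,
        // and stop coordinating the now-hidden one.
        master.addDetail(hidden);
        master.removeDetail(visible);
        master.next();

        System.out.println(visible.refreshCount + " " + hidden.refreshCount);
    }
}
```

With eight coordinated details, every master navigation triggers eight detail queries; deregistering the ones the user cannot see is what recovers the performance.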

Similar Messages


  • Email event generator support secure password exchange with Exchange? [repost w/ corrected subject]

    [repost with corrected subject - not an Email adapter with events, but the
    Email event generator]
    IHAC who's trying to use the Email event generator against an Exchange server (as POP3). However, he can't get the EG to connect successfully, as the Exchange server is configured to refuse cleartext passwords.
    Has anyone run into this, and how did they solve it? Or is this not supported with the 8.1sp3 email EG?
    TIA for any help.
    Regards,
    Steve Elkind
    BEA PS

    In this case, the SMTP domain is the same as the AD domain. If the wrong domain were configured, then the connection would never work, as opposed to working only sometimes.
    RunspaceId            : abb30c12-c578-4770-987f-41fe6206a463
    ForestName            : adatum.local
    UserName              : adatum\availtest
    UseServiceAccount     : False
    AccessMethod          : OrgWideFB
    ProxyUrl              :
    TargetAutodiscoverEpr :
    ParentPathId          : CN=Availability Configuration
    AdminDisplayName      :
    ExchangeVersion       : 0.1 (8.0.535.0)
    Name                  : adatum.local
    DistinguishedName     : CN=adatum.local,CN=Availability Configuration,CN=Wayport,CN=Microsoft
                            Exchange,CN=Services,CN=Configuration,DC=contoso,DC=local
    Identity              : adatum.local
    Guid                  : 3e0ebc2c-0ebc-4be8-83d2-077746180d66
    ObjectCategory        : contoso.local/Configuration/Schema/ms-Exch-Availability-Address-Space
    ObjectClass           : {top, msExchAvailabilityAddressSpace}
    WhenChanged           : 4/15/2014 12:33:53 PM
    WhenCreated           : 4/15/2014 12:33:35 PM
    WhenChangedUTC        : 4/15/2014 5:33:53 PM
    WhenCreatedUTC        : 4/15/2014 5:33:35 PM
    OrganizationId        :
    OriginatingServer     : dc01.contoso.local
    IsValid               : True
    ObjectState           : Unchanged

  • Huge performance differences between a map listener for a key and filter

    Hi all,
    I wanted to test the different kinds of map listeners available in Coherence 3.3.1, as I would like to use the cache as an event bus. The result was that I found huge performance differences between them. In my use case, the data are time-stamped, so the full key of a datum is the key that identifies its type plus the time stamp. Unfortunately, when I add my map listener to the cache, I only know the type id and not the time stamp; thus I cannot add a listener for a key, only for a filter that tests the value of the type id. When I launch my test I get terrible performance results. I then tried a listener for a key, which gave me much better results, but in my case I cannot use it.
    Here are my results with a Dual Core at 2.13 GHz:
    1) Map Listener for a Filter
    a) No Index
    Create (data always added, the key is composed by the type id and the time stamp)
    Cache.put
    Test 1: Total 42094 millis, Avg 1052, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 2: Total 43860 millis, Avg 1096, Total Tries 40, Cache Size 80000
    Update (data added then updated, the key is only composed by the type id)
    Cache.put
    Test 3: Total 56390 millis, Avg 1409, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 4: Total 51734 millis, Avg 1293, Total Tries 40, Cache Size 2000
    b) With Index
    Cache.put
    Test 5: Total 39594 millis, Avg 989, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 6: Total 43313 millis, Avg 1082, Total Tries 40, Cache Size 80000
    Update
    Cache.put
    Test 7: Total 55390 millis, Avg 1384, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 8: Total 51328 millis, Avg 1283, Total Tries 40, Cache Size 2000
    2) Map Listener for a Key
    Update
    Cache.put
    Test 9: Total 3937 millis, Avg 98, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 10: Total 1078 millis, Avg 26, Total Tries 40, Cache Size 2000
    Please help me find what is wrong with my code, because for now it is unusable.
    Best Regards,
    Nicolas
    Here is my code:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import com.tangosol.io.ExternalizableLite;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.MapEventFilter;

    public class TestFilter {

        /**
         * To run a specific test, just launch the program with one parameter which
         * is the test index.
         */
        public static void main(String[] args) {
            if (args.length != 1) {
                System.out.println("Usage : java TestFilter 1-10|all");
                System.exit(1);
            }
            final String arg = args[0];
            if (arg.endsWith("all")) {
                for (int i = 1; i <= 10; i++) {
                    test(i);
                }
            } else {
                final int testIndex = Integer.parseInt(args[0]);
                if (testIndex < 1 || testIndex > 10) {
                    System.out.println("Usage : java TestFilter 1-10|all");
                    System.exit(1);
                }
                test(testIndex);
            }
        }

        @SuppressWarnings("unchecked")
        private static void test(int testIndex) {
            final NamedCache cache = CacheFactory.getCache("test-cache");
            final int totalObjects = 2000;
            final int totalTries = 40;
            if (testIndex >= 5 && testIndex <= 8) {
                // Add index
                cache.addIndex(new ReflectionExtractor("getKey"), false, null);
            }
            // Add listeners
            for (int i = 0; i < totalObjects; i++) {
                final MapListener listener = new SimpleMapListener();
                if (testIndex < 9) {
                    // Listen to data with a given filter
                    final Filter filter = new EqualsFilter("getKey", i);
                    cache.addMapListener(listener, new MapEventFilter(filter), false);
                } else {
                    // Listen to data with a given key
                    cache.addMapListener(listener, new TestObjectSimple(i), false);
                }
            }
            // Load data
            long time = System.currentTimeMillis();
            for (int iTry = 0; iTry < totalTries; iTry++) {
                final long currentTime = System.currentTimeMillis();
                final Map<Object, Object> buffer = new HashMap<Object, Object>(totalObjects);
                for (int i = 0; i < totalObjects; i++) {
                    final Object obj;
                    if (testIndex == 1 || testIndex == 2 || testIndex == 5 || testIndex == 6) {
                        // Create data with a key containing a time stamp
                        obj = new TestObjectComplete(i, currentTime);
                    } else {
                        // Create data with a key without a time stamp
                        obj = new TestObjectSimple(i);
                    }
                    if ((testIndex & 1) == 1) {
                        // Load data directly into the cache
                        cache.put(obj, obj);
                    } else {
                        // Load data into a buffer first
                        buffer.put(obj, obj);
                    }
                }
                if (!buffer.isEmpty()) {
                    cache.putAll(buffer);
                }
            }
            time = System.currentTimeMillis() - time;
            System.out.println("Test " + testIndex + ": Total " + time + " millis, Avg " + (time / totalTries) + ", Total Tries " + totalTries + ", Cache Size " + cache.size());
            cache.destroy();
        }

        public static class SimpleMapListener implements MapListener {
            public void entryDeleted(MapEvent evt) {}
            public void entryInserted(MapEvent evt) {}
            public void entryUpdated(MapEvent evt) {}
        }

        public static class TestObjectComplete implements ExternalizableLite {
            private static final long serialVersionUID = -400722070328560360L;
            private int key;
            private long time;
            public TestObjectComplete() {}
            public TestObjectComplete(int key, long time) {
                this.key = key;
                this.time = time;
            }
            public int getKey() {
                return key;
            }
            public void readExternal(DataInput in) throws IOException {
                this.key = in.readInt();
                this.time = in.readLong();
            }
            public void writeExternal(DataOutput out) throws IOException {
                out.writeInt(key);
                out.writeLong(time);
            }
        }

        public static class TestObjectSimple implements ExternalizableLite {
            private static final long serialVersionUID = 6154040491849669837L;
            private int key;
            public TestObjectSimple() {}
            public TestObjectSimple(int key) {
                this.key = key;
            }
            public int getKey() {
                return key;
            }
            public void readExternal(DataInput in) throws IOException {
                this.key = in.readInt();
            }
            public void writeExternal(DataOutput out) throws IOException {
                out.writeInt(key);
            }
            public int hashCode() {
                return key;
            }
            public boolean equals(Object o) {
                return o instanceof TestObjectSimple && key == ((TestObjectSimple) o).key;
            }
        }
    }

    Here is my Coherence config file:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>test-cache</cache-name>
                   <scheme-name>default-distributed</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>          
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <backing-map-scheme>
                        <class-scheme>
                             <scheme-ref>default-backing-map</scheme-ref>
                        </class-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
              <class-scheme>
                   <scheme-name>default-backing-map</scheme-name>
                   <class-name>com.tangosol.util.SafeHashMap</class-name>
              </class-scheme>
         </caching-schemes>
    </cache-config>

    Hi Robert,
    Indeed, only the Filter.evaluate(Object obj) method is invoked, but the object passed to it is a MapEvent.<< In fact, I do not need to implement EntryFilter to get a MapEvent; I could get the same result (in my last message) by writing
    cache.addMapListener(listener, filter, true)
    instead of
    cache.addMapListener(listener, new MapEventFilter(filter), true)
    I believe, when the MapEventFilter delegates to your filter, it always passes a value object to your filter (old or new), meaning a value will be deserialized.
    If you instead used your own filter, you could avoid deserializing the value, which usually is much larger, and go only to the key object. This would of course only be noticeable if you indeed used a much heavier cached value class.
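The key-only idea above can be sketched outside Coherence. The EntryFilter interface and LazyEntry class below are invented stand-ins, not Coherence types; "deserialization" is modeled as a lazy Supplier so we can observe exactly when a filter forces the value to be materialized.

```java
import java.util.function.Supplier;

// Sketch only: a filter that evaluates only the entry key never forces the
// (potentially heavy) value to be "deserialized", while a value-based filter does.
public class KeyOnlyFilterDemo {

    static int valueDeserializations = 0;

    static class LazyEntry {
        final int key;
        final Supplier<String> lazyValue;
        LazyEntry(int key, Supplier<String> lazyValue) {
            this.key = key;
            this.lazyValue = lazyValue;
        }
        int getKey() { return key; }
        String getValue() {              // forces "deserialization"
            valueDeserializations++;
            return lazyValue.get();
        }
    }

    interface EntryFilter { boolean evaluateEntry(LazyEntry e); }

    public static void main(String[] args) {
        EntryFilter keyOnly = e -> e.getKey() == 42;             // touches the key only
        EntryFilter valueBased = e -> e.getValue().length() > 3; // touches the value

        LazyEntry entry = new LazyEntry(42, () -> "heavy-payload");

        keyOnly.evaluateEntry(entry);
        int afterKeyOnly = valueDeserializations;    // still 0

        valueBased.evaluateEntry(entry);
        int afterValueBased = valueDeserializations; // now 1

        System.out.println(afterKeyOnly + " " + afterValueBased);
    }
}
```

The heavier the cached value class, the more the key-only filter saves, which is why the effect is barely visible with the tiny test objects in the benchmark above.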
    The hashCode() and equals() do not matter on the filter class<< I'm not so sure, since I noticed that these methods are implemented in the EqualsFilter class, that they are called at runtime, and that the performance results are better when you add them.
    That interests me... In what circumstances did you see them invoked? On the storage node before sending an event, or upon registering a filtered listener?
    If the latter, then I guess the listeners are stored in a hash-based map of collections keyed by filter, and indeed that might be relevant, as in that case it will cause fewer passes on the filter for multiple listeners with equal filters.
    DataOutput.writeInt(int) writes 4 bytes. ExternalizableHelper.writeInt(DataOutput, int) writes 1-5 bytes (or 1-6?), with numbers of small absolute value consuming fewer bytes. Similar differences exist for the long type as well, but your stamp attribute will probably be a large number...<< I tried it, but in my use case I got the same results. I guess it would become interesting if I serialized/deserialized many more objects.
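The fixed-size vs. packed-size difference is easy to see with a toy variable-length encoder. This is a generic varint scheme for illustration, not necessarily the exact byte layout ExternalizableHelper uses.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Toy comparison: DataOutput.writeInt always costs 4 bytes, while a
// variable-length encoding spends fewer bytes on small magnitudes.
public class VarIntDemo {

    // Unsigned LEB128-style varint size: 7 payload bits per byte.
    static int varIntSize(int value) {
        long v = value & 0xFFFFFFFFL; // treat as unsigned for the sketch
        int bytes = 0;
        do {
            v >>>= 7;
            bytes++;
        } while (v != 0);
        return bytes;
    }

    static int fixedIntSize(int value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeInt(value);
        return bos.size(); // always 4
    }

    public static void main(String[] args) throws IOException {
        System.out.println(fixedIntSize(5) + " vs " + varIntSize(5));           // 4 vs 1
        System.out.println(fixedIntSize(100000) + " vs " + varIntSize(100000)); // 4 vs 3
    }
}
```

A large time stamp (a big long value) gains little from such packing, which matches the observation above that the savings only show up when many small values are serialized.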
    Also, if Coherence serializes an
    ExternalizableLite object, it writes out its
    class-name (except if it is a Coherence XmlBean). If
    you define your key as an XmlBean, and add your class
    into the classname cache configuration in
    ExternalizableHelper.xml, then instead of the
    classname, only an int will be written. This way you
    can spare a large percentage of bandwidth consumed by
    transferring your key instance as it has only a small
    number of attributes. For the value object, it might
    or might not be so relevant, considering that it will
    probably contain many more attributes. However, in
    case of a lite event, the value is not transferred at
    all.<< I tried it too, and in my use case I noticed that we get objects nearly twice as light as an ExternalizableLite object, but it's slower to get them. Still, it is very interesting to keep in mind if we would like to reduce the network traffic.
    Yes, these are minor differences at the moment.
    As for the performance of XmlBean, it is a hack, but you might try overriding the readExternal/writeExternal methods with your own usual ExternalizableLite implementation. That way you get the advantage of the XmlBean classname cache and avoid its reflection-based operation, at the cost of having to extend XmlBean.
    Also, sooner or later the TCMP protocol and the distributed cache storages will also support using PortableObject as a transmission format, which enables using your own classname resolution and allows you to omit the classname from your objects. Unfortunately, I don't know when it will be implemented.
    But finally, I guess that I found the best solution for my specific use case, which is to use a map listener for a key that has no time stamp; since the time stamp is never null, I just had to check the time stamp properly in the equals method.<<
    I would still recommend using a separate key class, a custom filter which accesses only the key and not the value, and, if possible, registering a lite listener instead of a heavy one. Try it with a much heavier cached value class, where the differences are more pronounced.
    Best regards,
    Robert

  • Performance difference with my MBP 15" 2010 and the new 2011 one(gaming?)

    Hey guys,
    I have a 2.67 i7 2010 15" MPB(the higher end model)that I got last year. I use it for work and for college but I also like to play games on it.
    I recently got an Apple newsletter and noticed the huge performance boost of the newer models. I went to a review site, and my specific model nearly doubled in benchmark scores with the newer model. That doesn't seem like something that would normally happen; it seems like a rarity that's just too good to ignore. And this summer, with the semester being over, I will probably go back to full-on gaming (probably Portal 2 and WoW for the summer).
    My question to you is, since I can't go by the opinion of one site, what is the performance difference between the 2010 model I have and the newer 2011 one? Basically, I am wondering if it's worth it to sell it for several hundred dollars less than I paid for it and get the newer model now, or wait until 2012 to get the newest model then (if it's going to be an even bigger difference in performance)?
    Excuse my poor judgement if I'm wrong; I probably seem like a maniac for using this computer for a year and already wanting the newer model. It's just that a lot of sites are making it seem like my model is fodder compared to the newer ones, and if so, I might not let this offer pass by.

    bump for good measure!

  • SQL Server 2008R2 vs 2012 OLTP performance difference - log flushes size different

    Hi all,
    I'm doing some performance tests against 2 identical virtual machines (each VM has the same virtual resources and uses the same physical hardware).
    The first VM has Windows Server 2008R2 and SQL Server 2008R2 Standard Edition;
    the second VM has Windows Server 2012R2 and SQL Server 2012 SP2 + CU1 Standard Edition.
    I'm using HammerDB (http://hammerora.sourceforge.net/) as the benchmark tool to simulate a TPC-C test.
    I've noticed a significant performance difference between SQL2008R2 and SQL2012; 2008R2 performs better. Let me explain what I've found:
    I use a third VM as the client where the HammerDB software is installed, and I run the test against the two SQL Servers (one server at a time); on SQL2008R2 I reach a higher number of transactions per minute.
    HammerDB creates a database on each database server (so the databases are identical except for the compatibility level), and then HammerDB executes a sequence of queries (insert-update) simulating the TPC-C standard; the sequence is identical on both servers.
    Using perfmon on the two servers I've found a very interesting thing:
    On the disk used by the HammerDB database's log (I use separate disks for data and log) I've monitored Avg. Disk Bytes/Write and noticed that SQL2012 writes to the log in smaller packets (say an average of 3 KB against an average of 5 KB written by SQL2008R2).
    I've also checked the value of Log Flushes/sec on both servers and noticed that SQL2012 does, on average, more log flushes per second, so more log flushes of fewer bytes each...
    I've searched for any documented difference in the way log buffers are flushed to disk between 2008R2 and 2012 but found none.
    Can anyone point me in the right direction?

    Andrea,
    1) First of all, fn_dblog exposes a lot of fields that do not exist in SQL2008R2.
    This is correct, though I can't elaborate, as I do not know how/why the changes were made.
    2) For the same DML or DDL, the number of log records generated is different.
    I thought as much (but didn't know the workload).
    I would like to read and study what these changes are! Do you have some useful links to internals docs?
    Unfortunately I cannot offer anything, as the function used is currently undocumented and there are no published papers or documentation by MS on reading log records or on the why/how. I would assume this is all NDA information from Microsoft.
    Sorry I can't be of more help, but you at least know that the different versions do have behavior changes.
    Sean Gallardy | Blog | Microsoft Certified Master

  • IOS7 op syst induces to motion sickness, vertigo, nauseas, and eye hurting. I took all indicated steps as a solution and nothing happened. I wonder if APPLE is taking this HUGE issue seriously? People with Epilepsia has ended in the hospital!!

    The iOS 7 operating system induces motion sickness, vertigo, nausea, and eye strain for me. I took all the indicated steps as a solution and nothing happened. I wonder if APPLE is taking this HUGE issue seriously? People with epilepsy have ended up in the hospital!!
    Today I went to the APPLE STORE in the Aventura Mall, Miami FL, and the staff pretended not to have heard anything about the subject; this could imply an I-don't-care attitude or a lack of knowledge of what is all over the news and the internet on this major issue.
    The suggested solution of going to Settings, General, Accessibility, Reduce Motion ON does not solve the serious problem. What worries me most is: if I keep on using the iOS 7 operating system, could this motion sickness, dizziness, nausea, etc. cause brain damage?

    I don't think Apple would actually do anything about it. They have a 30-day return policy, and if customers aren't happy with the product they can return it for a full refund. In fact, I am pretty sure that when you agree to the "Terms and Conditions" of using the phone, it covers this:
    "7.4 APPLE DOES NOT WARRANT AGAINST INTERFERENCE WITH YOUR ENJOYMENT OF THE iOS
    SOFTWARE AND SERVICES, THAT THE FUNCTIONS CONTAINED IN, OR SERVICES PERFORMED OR
    PROVIDED BY, THE iOS SOFTWARE WILL MEET YOUR REQUIREMENTS, THAT THE OPERATION OF
    THE iOS SOFTWARE AND SERVICES WILL BE UNINTERRUPTED OR ERROR-FREE, THAT ANY
    SERVICE WILL CONTINUE TO BE MADE AVAILABLE, THAT DEFECTS IN THE iOS SOFTWARE OR
    SERVICES WILL BE CORRECTED, OR THAT THE iOS SOFTWARE WILL BE COMPATIBLE OR WORK
    WITH ANY THIRD PARTY SOFTWARE, APPLICATIONS OR THIRD PARTY SERVICES. INSTALLATION
    OF THIS iOS SOFTWARE MAY AFFECT THE USABILITY OF THIRD PARTY SOFTWARE, APPLICATIONS
    OR THIRD PARTY SERVICES."
    Full terms and conditions - http://www.apple.com/legal/sla/docs/iOS7.pdf
    People have to understand that Apple is a business and it is up to them whether they think it's a big enough problem to put in a patch to remove the effects.
    Simple solution is if you feel that you don't agree to their way of business and unable to use the iPhone to your fullest comfort, either return the product for a full refund or sign a petition...

  • Performance difference between 6490M and 6750M?

    I'm looking at a new 15" MBP. What performance difference will I likely see between the Radeon 6490M and 6750M? It's a huge jump in graphics memory, but how much will it help me? I'm not a gamer at all, but I do a fair amount of work with photos and video.
    Thanks!

    Since you don't play games, you'll notice a difference in graphic and video applications that are GPU accelerated, like Motion, and Photoshop CS4 & 5.

  • Logic Pro X performance difference between iMac i7 3.5ghz (non-Retina) and the i7 4.0 ghz (Retina)?

    Hello - I'm on the verge of getting an iMac - it's for Logic Pro and some video editing with Premiere Pro (I don't *really* need the Retina screen, though) - I'd just like to get an idea of the performance difference (especially in Logic Pro X) between the quad i7 3.5 GHz (non-Retina) and the quad i7 4.0 GHz (Retina).
    I use loads of plugin instruments (incl. big Kontakt libraries) and effects with just a few audio tracks (I only ever record a couple of audio tracks at a time, max).
    Thanks!

    I owned the iMac and then returned it for the 2.6 MacBook Pro. After using both, there is a noticeable speed difference between the two. Not a huge difference, but there is certainly more lag using the MacBook Pro. I found that much of the lag went away when I attached an external display, though. At the end of the day, I think you just need to decide whether you need the portability or not. At first I thought I would keep the iMac and then get a cheap MacBook Pro to use on the road, but the thought of having multiple Lightroom catalogs sounds very annoying to me, plus all the transferring back and forth. I will also say that I've been getting a lot of freezes in Photoshop on my MBP, and weird errors, like the sleep-wake failure error some people are talking about when connected to a USB device. I didn't have any of these problems with the iMac.

  • Data in table is huge so performance is reduced

    Hi all,
    I have a table in the Oracle database where the number of records is huge, i.e. 6 years of data, due to which the retrieval of data is slow when I generate the report.
    How can I increase the performance so that the report can be generated quickly?
    Thanks in advance

    Thank you for polluting TWO forums with the exact same request.
    Please stick to the thread with answers in it.
    Data in table is huge so performance is reduced

  • Help!  Just did the Update (ML 10.8.4) after receiving a new HD since mine crashed last week, and now my Mac won't let me past the log in screen even with correct password.

    Help!  I just did the update (ML 10.8.4) after receiving a new HD since mine crashed last week, and the shop didn't update it back to Mavericks, which I had previously installed... and now my Mac won't let me past the login screen even with the correct password entered. It just spins like I'm logging in and then goes back to the user login screen. I did the process where I restarted, pressed Control-S, and created a new user and password, but once created, same issue: I can't get past that screen.
    Super frustrated and any help is appreciated!

    I have decided to dedicate this thread to the wonderful errors of Lion OSX. Each time I find a huge problem with Lion I will make note of it here.
    Today I discovered a new treasure of doggie poop in Lion. No Save As......
    I repeat. No Save As. In text editor I couldn't save the file with a new extension. I finally accomplished this oh so majorly difficult task (because we all know how difficult it should be to save a file with a new extension) by pressing duplicate and then saving a copy of the file with a new extension. Yet then I had to delete the first copy and send it to trash. And of course then I have to secure empty trash because if I have to do this the rest of my mac's life I will be taking up a quarter of percentage of space with duplicate files. So this is the real reason they got rid of Save As: so that it would garble up some extra GB on the ole hard disk.
    So about 20 minutes of my time were wasted while doing my homework and studying for an exam because I had to look up "how to save a file with a new extension in  mac Lion" and then wasted time sitting here and ranting on this forum until someone over at Apple wakes up from their OSX-coma.
    are you freaking kidding me Apple? I mean REALLY?!!!! who the heck designed this?!!! I want to know. I want his or her name and I want to sit down with them and have a long chat. and then I'd probably splash cold water on their face to wake them up.
    I am starting to believe that Apple is Satan.

  • SQL Loader and Insert Into Performance Difference

    Hello All,
I'm in a situation where I need to measure the performance difference between SQL*Loader and INSERT INTO. Say there are 10,000 records in a flat file and I want to load them into a staging table.
I know that if I use PL/SQL UTL_FILE to do this job, performance will degrade (don't ask me why I'm going for UTL_FILE instead of SQL*Loader), but I don't know by how much. Can anybody tell me the performance difference in % (e.g. a 20% decrease) for 10,000 records?
    Thanks,
    Kannan.

    Kannan B wrote:
Do not confuse the topic; as I told you, I'm not going to use external tables. This post is to discuss the performance difference between SQL*Loader and a simple INSERT statement.
I don't think people are confusing the topic.
    External tables are a superior means of reading a file as it doesn't require any command line calls or external control files to be set up. All that is needed is a single external table definition created in a similar way to creating any other table (just with the additional external table information obviously). It also eliminates the need to have a 'staging' table on the database to load the data into as the data can just be queried as needed directly from the file, and if the file changes, so does the data seen through the external table automatically without the need to re-run any SQL*Loader process again.
    Who told you not to use External Tables? Do they know what they are talking about? Can they give a valid reason why external tables are not to be used?
    IMO, if you're considering SQL*Loader, you should be considering External tables as a better alternative.
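To make the external-table suggestion concrete, here is a minimal sketch of what it could look like for a flat file of this kind. The directory path, file name, and column layout are assumptions for illustration only, not taken from the thread:

```sql
-- Hypothetical example: expose the flat file as an external table.
-- Directory object, file name, and columns are illustrative only.
CREATE DIRECTORY staging_dir AS '/data/staging';

CREATE TABLE staging_ext (
  id     NUMBER,
  name   VARCHAR2(100),
  amount NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY staging_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('records.csv')
);

-- The 10,000 rows can then be loaded (or queried in place) with plain SQL:
INSERT INTO staging_table SELECT * FROM staging_ext;
```

No SQL*Loader control file or command-line invocation is needed; if the file's contents change, the external table reflects the new data on the next query.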

  • I'm using the latest photoshop cc 2014 with most updated camera raw... i am having A LOT/REPEATED trouble getting files to "synch" with corrections, no matter what i options i select (i.e. "everything")... WTH is wrong with me/my computer/adobe?! help. fast. please

    I'm using the latest photoshop cc 2014 with most updated camera raw... i am having A LOT/REPEATED trouble getting files to "synch" with corrections, no matter what i options i select (i.e. "everything")... WTH is wrong with me/my computer/adobe?! help. fast. please

    BOILERPLATE TEXT:
    Note that this is boilerplate text.
    If you give complete and detailed information about your setup and the issue at hand,
    such as your platform (Mac or Win),
    exact versions of your OS, of Photoshop (not just "CS6", but something like CS6v.13.0.6) and of Bridge,
    your settings in Photoshop > Preference > Performance
    the type of file you were working on,
    machine specs, such as total installed RAM, scratch file HDs, total available HD space, video card specs, including total VRAM installed,
    what troubleshooting steps you have taken so far,
    what error message(s) you receive,
    if having issues opening raw files also the exact camera make and model that generated them,
    if you're having printing issues, indicate the exact make and model of your printer, paper size, image dimensions in pixels (so many pixels wide by so many pixels high). if going through a RIP, specify that too.
    etc.,
    someone may be able to help you (not necessarily this poster, who is not a Windows user).
    a screen shot of your settings or of the image could be very helpful too.
    Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
    http://forums.adobe.com/thread/419981?tstart=0
    Thanks!

  • Change of Video Frame One Sample Out of Sync with Correct Point on Timeline

    Change of Video Frame One Sample Out of Sync with Correct Point on Timeline.
    It has been found on the author's PC that, in Audition 3 in Multitrack View, a video file displayed in the video window will change its frame one sample later than it should do with respect to the correct point on the timeline. This has led to extra work having to be done by the author, as detailed later.
    To demonstrate this issue:
    Create a new multitrack session. Enable the Time and Video windows. Enable only 'Snap to Frames'.
    Using an external program, render a two-frame video clip at 25fps. Let the first frame contain the letter 'A', and the second frame contain the letter 'B'. Making sure that the Display Time Format is set appropriately, in this case at 'SMPTE 25 fps (EBU)', lock this clip on the multitrack timeline, ensuring that the clip starts at time 00:00:00:00.
    Create a mono audio clip of silence at 48kHz 32-bit that is two frames long. Lock it on an audio track, ensuring that the clip starts at time 00:00:00:00.
    Place the cursor at time 00:00:00:01 and create a cue marker at this point by pressing F8.
    Zoom into the timeline so that approximately twenty samples are visible, with the cursor lying on the cue marker in the middle of the screen. The Time window should display 00:00:00:01. The Video window should display the letter 'A'.
    Move the cursor back in time (leftwards) by one sample. The Time window should change its display to 00:00:00:00, whilst the Video window should still show the letter 'A'.
    Move the cursor forwards in time (rightwards) by one sample, so that it overlays the cue marker again. The Time window should change its display back to 00:00:00:01, whilst the Video window remains showing the letter 'A'.
    Move the cursor forwards in time (rightwards) by one more sample. The Time window's display should remain at 00:00:00:01, but the Video window should now display the letter 'B'.
    As can be seen from the above steps, the video's change of frame content occurs incorrectly one sample later than it should; the change of video frame content should occur simultaneously with the change of the frame number displayed in the Time window, which is at the point that the cue marker has been placed.
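The arithmetic behind the repro can be sketched as follows. This is a minimal illustration assuming the 48 kHz / 25 fps settings used in the steps above; the `observed_frame` function models the one-sample-late behaviour described in this report, not Audition's actual code:

```python
# Frame/sample arithmetic for the repro above: 48 kHz audio, 25 fps video.
SAMPLE_RATE = 48000
FPS = 25
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS  # 1920 samples per video frame

def correct_frame(sample):
    """Frame that *should* be displayed at a given sample position."""
    return sample // SAMPLES_PER_FRAME

def observed_frame(sample):
    """Models the reported behaviour: the displayed frame changes one sample late."""
    return max(sample - 1, 0) // SAMPLES_PER_FRAME

boundary = SAMPLES_PER_FRAME  # sample 1920 = start of frame 1 (00:00:00:01)
print(correct_frame(boundary))       # 1 -> 'B' should already be shown here
print(observed_frame(boundary))      # 0 -> 'A' is still shown
print(observed_frame(boundary + 1))  # 1 -> 'B' only appears one sample later
```

At the frame boundary itself the Time window already reads 00:00:00:01, yet the modelled display still returns the previous frame, matching steps 6-8 of the repro.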
    The practical effect of this issue is that it has caused the author a huge amount of extra work on an 89-minute long soundtrack-for-video timeline. There were over 1100 changes of shot which the author wished to mark by placing cue markers at the beginning of every shot. This should have been a quite simple process of enabling 'Snap to Frames', stepping through the video, and laying cue markers down by pressing F8 when the shot was seen to change.
    Because the Video window currently (with the cursor lying over a cue marker that has been placed on the timeline using the 'Snap to Frames' option) actually displays the end moments of the previous frame rather than the beginning moments of the current frame, laying down a cue marker when the shot is seen to change will actually place that marker at the beginning moments of the second frame of the new shot, not at the beginning moments of the first frame of the new shot; i.e. the cue marker placed will be one frame later in time than it should be. If the cue marker is to represent, as best it can, the beginning moments of the first frame of the new shot, it has to be manually repositioned so that it is at least a sample later in time than the time at which the shot actually changed, in order for the Video window to display the content correctly.
    The author had to manually reposition all 1100+ cue markers.
    Ideally, the Video window should display a change of video frame in synchronization with the change of the frame display in the Time window, which occurs at the position of the 'Snap to Frames' point.
    Part of the PC system's specification is detailed below:
    Windows XP Professional Version 2002 SP3;
    Adobe Audition 1.0 Build 3211.2;
    Adobe Audition 2.0 Build 5306.2;
    Adobe Audition 3.0 Build 7283.0;
    ASRock 775dual-VSTA motherboard, BIOS Version P.3.00;
    2.80 GHz Intel Pentium D Processor 915 2x2MB L2 Cache;
    2GB DDRII 667 RAM;
    Matrox G550 DualHead AGPx4 VGA card for Audition's two main work screens;
    Matrox Productiva G100 MMS PCI VGA for two additional "static" information screens;
    4 x PATA HDDs for OS & programs, swapfile & temp, project file sources, and guide audio;
    Onboard Realtek ALC888 7.1 channel audio CODEC with High Definition audio;
    Lite-On DVDRW LH-18A1P;
    VIA OHCI Compliant IEEE 1394 Host Controller.
    ASIO4ALL version 2.8;
    Via Hyperion Drivers 5.16a - (VIA Chipsets INF Update Utility V3.00A, VIA PATA IDE Driver Package V1.90, VIA SATA IDE Driver Package V2.30A, AGP V4.60A);
    Matrox PowerDesk-SE 11.10.400.03;
    Matrox Millennium G550 Display Driver Version 5.99.005;
    Realtek High Definition Audio System Software Version R1.91, Audio Driver Version 5.10.0.5605;
    DirectX 9.0c March 2008;
    Microsoft .NET frameworks 1.1, 2.0 and 3.0 with all service packs.

  • What kind of performance difference would cuda and quad core make in Premiere?

    I am torn between spending $1.5k and $2k on a new MacBook, because the 15-inch one has a 650M and an i7 but is more expensive. I currently run a beastly desktop system and want a mobile rig for more casual projects. How much of a performance boost would I actually get with CUDA and a quad-core versus a dual-core i5?

    Lots of discussions like this in the hardware sub-forum http://forums.adobe.com/community/premiere/hardware_forum?view=discussions
    Many good links, with some performance charts, at http://ppbm7.com/index.php/tweakers-page
    Bottom line... a fast CPU and CUDA DO make a difference, along with two or more hard drives (or an SSD plus a 7200 RPM hard drive), especially for HiDef editing
