DRI (declarative referential integrity) and speed improvements.

EDITED: See my second post--in my testing, the relevant consideration is whether the parent table has a compound primary key or a single-column primary key. If the parent has a simple primary key, there is a trusted (checked) DRI relation
with the child, and a query requests records only from the child on an inner join with the parent, then SQL Server (correctly) skips the join entirely (visible in the execution plan). However, if the parent has a compound primary key, SQL Server
performs a useless join between parent and child. Tested on SQL Server 2008 R2 and Denali. If anyone can get SQL Server NOT to perform the join with a compound primary key on the parent, let me know.
ORIGINAL POST: I'm not seeing the join behavior described in the link below (namely, that the optimizer does not bother joining to the parent table when a query needs information from the child side only, trusted DRI exists between the tables, AND the foreign key columns are defined as NOT NULL). The foreign key relation "is trusted" by SQL Server ("is not trusted" is false), but the plan always includes both tables in the join even though only one is needed.
If anyone has comments on whether declarative referential integrity does produce speed improvements on certain joins, please post. Thanks.
http://dinesql.blogspot.com/2011/04/does-referential-integrity-improve.html
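For anyone reproducing this, the trust status of a foreign key can be checked directly (a quick sanity check I use; not from the linked article):
SELECT name, is_not_trusted, is_disabled
FROM sys.foreign_keys;
/* is_not_trusted must be 0 for the optimizer to rely on the constraint */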

I'm running SQL Server Denali CTP3 x64 and SQL Server 2008 R2 x64 on Windows 7 SP1. I've tested this on dozens of tables, and I defy anyone to provide a counter-example (you can create ANY parent table with two ints as a composite primary key, then a child table using
that compound key as a foreign key, create a trusted DRI link between them, and run the query I posted below). Any pair of tables with a compound foreign key as the basis for the DRI apparently gets no performance benefit from the referential integrity between them.
Or, to be more precise, the execution plan reveals that SQL Server performs a costly and unnecessary join in those cases, but not when the trusted DRI relation between them is on a single-column primary key. If anyone has seen a different result,
please let me know, since it influences my design decisions.
FWIW, a similar limitation applies to SQL Server's date correlation optimization: it doesn't work if the tables are joined by a composite key, only if they are joined by a single column:
"There must be a single-column foreign key relationship between the tables."
So I speculate, knowing absolutely nothing, that there must be something deep in the bowels of the engine that doesn't optimize compound-key relations as well as single-column ones.
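As an aside, the date correlation optimization is a database-level option; a minimal sketch of enabling it, assuming a database named MyDb (the name is just a placeholder):
ALTER DATABASE MyDb
SET DATE_CORRELATION_OPTIMIZATION ON;
GO
The compound-key repro script: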
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[parent](
[pId1] [int] NOT NULL,
[pId2] [int] NOT NULL,
CONSTRAINT [PK_parent] PRIMARY KEY CLUSTERED
(
[pId1] ASC,
[pId2] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Children](
[cId] [int] IDENTITY(1,1) NOT NULL,
[pid1] [int] NOT NULL,
[pid2] [int] NOT NULL,
CONSTRAINT [PK_Children] PRIMARY KEY CLUSTERED
(
[cId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Children] WITH CHECK ADD CONSTRAINT [FK_Children_TO_parent] FOREIGN KEY([pid1], [pid2])
REFERENCES [dbo].[parent] ([pId1], [pId2])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
/* The DRI must be trusted for join elimination to work, but with a compound key it doesn't help anyway */
ALTER TABLE [dbo].[Children] CHECK CONSTRAINT [FK_Children_TO_parent]
GO
/* Enter data in parent and children */
SELECT c.cId FROM dbo.Children c INNER JOIN dbo.parent p
ON p.pId1 = c.pid1 AND p.pId2 = c.pid2;
/* The execution plan is blind to the trusted DRI -- it performs the join anyway! */
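For comparison, here is a single-column-key variant (same style of schema; these object names are mine, not from the linked article) where, in my testing, the plan does skip the join:
CREATE TABLE [dbo].[parentSimple](
[pId] [int] NOT NULL,
CONSTRAINT [PK_parentSimple] PRIMARY KEY CLUSTERED ([pId] ASC)
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[ChildrenSimple](
[cId] [int] IDENTITY(1,1) NOT NULL,
[pId] [int] NOT NULL,
CONSTRAINT [PK_ChildrenSimple] PRIMARY KEY CLUSTERED ([cId] ASC)
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[ChildrenSimple] WITH CHECK ADD CONSTRAINT [FK_ChildrenSimple_TO_parentSimple] FOREIGN KEY([pId])
REFERENCES [dbo].[parentSimple] ([pId])
GO
/* Child columns only, trusted DRI, NOT NULL foreign key: the plan scans ChildrenSimple and never touches parentSimple */
SELECT c.cId FROM dbo.ChildrenSimple c INNER JOIN dbo.parentSimple p
ON p.pId = c.pId;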

Similar Messages

  • Schema referential Integrity and Portal

    Please bear with me - I'm relatively new to Oracle and Portal. When using Portal forms to access schemas, how does Portal handle referential integrity? Does Portal handle this through the database object interface, or is this done when building the schema, with Portal simply working with whatever schema attributes already exist? I.e., will it be perfectly comfortable with existing triggers, sequences, and keys?
    It doesn't look like it would be easy to use Portal forms where there are sub-group tables, e.g. different categories of people - sales, support, customers, etc.
    Could anyone comment please?

    We had some performance issues with the retrochange log enabled. We ended up making sure the retrochange log database was on its own disk (we had it in the same physical volume/volume group, etc., in AIX). Everything is working okay now.

  • 2.0.1 Update and speed improvement...

    Has anyone noticed a speed increase when doing MPEG encoding after installing the 2.0.1 update and the QuickTime update?
    I have just downloaded the update, but I have no source media to do a test/compare with on my PowerBook..
    N

    This topic has been discussed several times before, where several issues and workarounds have "arrived". I was just interested to hear whether someone found the new updates to be the eureka for all these issues (too slow when exporting from the FCP timeline, etc.).
    Personally, I have done tests on the old FCP 4.5/Compressor 1.x combo and compared this with FCP 5.0/Compressor 2.x using the same source material, and the results were:
    Comp. 1.x - 60 min HQ setting: 9 min 34 s
    Comp. 2.x - 90 min HQ setting: 11 min 8 s
    This was done on a 1.6GHz PowerBook maxed out with RAM before any updates arrived (and the export was done from the FCP timeline).
    My G5 setup is still on the old FCP 4.5/Comp 1.x combo, and I was interested to see whether it was about time to do the swap..
    N

  • Distributed referential integrity

    Hi, I have a question. If I am on the wrong forum, please direct me, as I could not find a specific forum for distributed databases and this may be an application development question.
    I am implementing a simple distributed database application. At this point I am only concerned with creating two tables, each on a different server. For simplicity I will name them table1 and table2. This would be the schema if the tables were not distributed and were located on the same server.
    CREATE TABLE table1 (
    table1_id NUMBER CONSTRAINT table1_pk PRIMARY KEY
    );
    CREATE TABLE table2 (
    table2_id NUMBER,
    table1_id NUMBER,
    CONSTRAINT table2_pk PRIMARY KEY (table2_id, table1_id),
    CONSTRAINT table2_fk FOREIGN KEY (table1_id) REFERENCES table1(table1_id)
    );
    I have referred to the documentation in both the Oracle® Database Administrator's Guide 10g Release 2 (10.2) and the Oracle Database Application Developer's Guide. I know that I cannot use declarative referential integrity across the two databases, but how can I distribute the tables so that I can be sure that when I insert a row into table2 there is a matching row in table1 for table1_id?

    Realistically, I expect that you need to reconsider your architecture.
    It does not make sense to check for a matching row in table1 in a remote database while inserting data into table2 in a local database. The best case scenario would be that every insert into table2 would incur the overhead of a network round-trip plus the cost of querying the table in the remote database. If the network went down or if either of the two databases went down, the application would fail, which generally defeats the purpose of a distributed application. Plus, there would be all sorts of concurrency issues (i.e. I delete a row, but before I commit you query the row, see that it exists, and insert a child row. I commit, leaving your row orphaned).
    Assuming you really need a distributed architecture, you would want to replicate table1 to both the local and remote nodes. You would then declare referential integrity constraints between your local copy of table1 and table2 (as well as between table1 and table2 on the remote database, assuming you want table2 data available there as well). Your replication process (preferably using Streams but potentially using multi-master materialized views instead) would then have to be coded to deal with errors caused by the asynchronous nature of replication (i.e. to notice that database1 deleted a parent row that you just inserted a child row for, and to resolve the conflict appropriately).
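    Concretely, once table1 is replicated to the local database, the constraint is just an ordinary foreign key against the local copy (an illustrative sketch; the replication setup itself is not shown here):
    ALTER TABLE table2
    ADD CONSTRAINT table2_fk FOREIGN KEY (table1_id)
    REFERENCES table1 (table1_id);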
    Justin

  • What is meant by Referential Integrity? Where do we use it and why?

    Hi All,
    Can anybody tell me, what is meant by Referential Integrity? Where do we use it and why?
    Regards,
    Kiran Telkar

    Dear Kiran Telkar,
    You might know that referential integrity is generally concerned with the primary key and foreign key relationship. We generally use it to check the uniqueness of records.
    In SAP we use it during flexible update, to check the data records of transaction data against master data.
    In other words, it checks, before the data is loaded, whether the load will run properly or not.
    We tick the option in the maintenance of the
    InfoSource --> communication structure.
    It would be better if you clearly described your problem, if further help is needed.
    Hope this helps.
    Regards
    Vinay
    Please assign points to all who help you.

  • I have been interested in how lightroom uses the catalog so was poking around a backup of the catalog. I found it rather concerning that although the database (catalog) is pretty well designed, there is no referential integrity defined or enforced.

    I have been interested in how Lightroom uses the catalog, so I was poking around a backup of the catalog.  I am a database administrator, and I found it rather concerning that although the database (catalog) is pretty well designed, there is no referential integrity defined or enforced. This is non-standard practice and could well be the source of the corrupt catalogs I have seen many people complain about. I would strongly recommend the developers modify the catalog and adopt best practices if they want to improve the stability of Lightroom and the catalog.

    I would imagine that data integrity is not enforced for performance reasons. In a closed environment like LR, where the application has complete control over the data, enforcing data integrity may not be worth the performance hit. Often what is done in an environment like this is to have data integrity turned on in test environments, which exposes data integrity bugs where the impact on performance is low. In "production" it is then turned off to get as much performance as possible. I would say there are many more complaints about performance than about corrupt catalogs. And corrupt catalogs are more likely due to interruptions in writing to the catalog (crashes, backups or Dropbox activity while LR is running, etc.). Data integrity would not help in these cases, as they are outside the database's control.

  • Issue with Referential Integrity check in Oracle VPD Policy

    Hi,
    Let's assume I have two tables - Customer and Order, with cust_id in the Order table referring to the primary key of the Customer table.
    Example Data;
    Customer
    cust_id Name
    1 abc
    2 def
    3 ghi
    Order
    Order_id cust_id Order_type
    1 1 A
    2 2 A
    3 1 B
    Now I have policies defined on both the tables;
    - for "Select, Insert, Update" queries on Customer table.
    - for "Select" queries on Order Table.
    Policy 1 on Order Table;
    Irrespective of the user, predicate = 'Order_type = ''A'''
    Policy 2 on Customer Table;
    Irrespective of the user, predicate = '(select count(1) from order o where o.cust_id = customer.cust_id and o.order_type = ''B'') > 0'
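    (For reference, a policy like the one on the Order table would be registered roughly as follows; the schema, function and policy names are placeholders, not my actual ones.)
    BEGIN
    DBMS_RLS.ADD_POLICY(
    object_schema   => 'APP',
    object_name     => 'ORDERS',
    policy_name     => 'ORDER_TYPE_POLICY',
    function_schema => 'APP',
    policy_function => 'ORDER_TYPE_PREDICATE',  -- returns 'Order_type = ''A'''
    statement_types => 'SELECT');
    END;
    /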
    My intention is to show only those customers who have at least one order of type 'B'. This policy works fine when a user reads data from the Customer table (for example, the record for cust_id = 2 will not be returned, since it doesn't have any orders of type "B").
    However, when a user tries to insert a record into the Order table, the existing referential integrity constraint also causes the policy on the Customer table to be triggered, and an exception is raised: "ORA-28113: policy predicate has error".
    Could someone please explain why this is happening?

    I'm afraid there is no such means.
    At least, I don't know of one.

  • Massive Disk Speed Improvement Plan

    I am moving forward with a disk storage speed improvement plan using my Dell Precision T5400 workstation as the test bed.
    Specifically, my goal is to create a super fast 2 TB drive C: from four OCZ Vertex 3 480GB SATA3 SSD drives in RAID 0 configuration.  This will replace an already fast RAID 0 array made from two Western Digital 1TB RE4 drives.
    So far I have ordered two of these fast SSD drives, along with what is touted to be a very good value in high performance SATA3 RAID controllers, a Highpoint 2420SGL.  I'll get started with this combination and get to know it first as a data drive before trying to make it bootable.
    Getting any kind of hard information online about putting SSDs into RAID is a bit like pulling teeth, so I'm not 100% confident that these parts will work perfectly together, but I think the choice of SSD drives is the right one.  I had briefly considered a PCIe RevoDrive SSD card made by OCZ, but it was just too esoteric...  I'm actually getting double the storage this way for the same price, I can swap to a different RAID controller if need be, and these drives can easily be ported to any new workstation I may get in the future.
    Notably, some early concerns with using SSD in RAID configurations (and things like TRIM commands) have already been alleviated, as the drives are now quite intelligent in their internal "garbage collection" processes.  I've verified this with the engineers at OCZ.  They have said that with these modern SSD drives you really don't have to worry about them being special - just use them as you would a normal drive.
    Once I get the first two SSDs set up in RAID 0 I'll specifically do some comparisons with saving large files and also using the array as the Photoshop scratch drive, vs. the spinning 1 TB drive I have in that role now.
    Assuming all goes well, I'll then add the additional two SSDs to complete the four drive array.  After a quick test of that, I'll see if I can restore a Windows System Image backup made from my 2 TB C: (spinning drive) array, which (if it works) will let me hit the ground running using the same exact Windows setup, just faster.
    My current C: drive, made from two Western Digital 1 TB RE4 drives, delivers about 210 MB/sec throughput with very large files, with 400 MB/sec bursts with small files (these drives have big caches).  Where they fall down dismally (by comparison to SSD) is in operations involving seeking...  The PassMark advanced "Workstation" benchmark, which generates random small accesses such as you might see during real work (and I can hear the drives seeking like crazy), yields a meager 4 MB/sec result.
    My current D: drive, a single Hitachi 1 TB spinning drive, clocks in at about 100 MB/sec for large reads/writes.
    The SSD array should push the throughput up at least 5x compared to my current drive C: array, to over 1 GB/sec, but the biggest gain should be with random small accesses (no seek time in an SSD), where I'm hoping to see at least a 25x improvement, to over 100 MB/second.  That last part is what's going to speed things up from an everyday-usage perspective.
    I imagine that when the dust settles on this build-up, I'll end up pointing virtually everything at drive C:, including the Photoshop scratch file, since it will have such a massively fast access capability.  It will be interesting to experiment.  I suppose I'll have to come up with some gargantuan panoramas to stitch in order to force Photoshop to go heavily to the scratch drive for testing.
    I'll let you all know how it works out, and I'll be sure to do before/after comparisons of real use scenarios (big files in Photoshop, and various other things).  Perhaps my "real world" results can help others looking to get more Photoshop performance out of their systems understand what SSD can and can't do for them.
    I welcome your thoughts and experiences.
    -Noel

    Not sure who might be following this thread, but I have executed the final phase of this plan, restoring a system backup from my spinning drive array onto the new 4 drive SSD array.
    All went off without a hitch, I have my same system configuration including all apps and everything just as it was, except everything is now MUCH faster.
    The 4 drive array achieves a staggering 1.74 gigabytes/second sustained throughput rate.
    Windows 7 WEI score is 7.9 for the Primary hard disk category.
    Windows boots up quickly, everything starts immediately, nothing bogs the system down, and just overall everything feels very fluid and snappy.  And there is no seeking noise from the drives.
    Regarding what this has done for Photoshop...  I've only tested on Photoshop CS6 beta so far today, but everything is incrementally improved.  Startup time is faster, things seem more smooth and fluid while editing overall, and a benchmark I created using an action to run a lot of image adjustment operations on a big, multi-layer image ran this long to completion:
    When the file is opened from (and the Photoshop scratch file is on) a single spinning disk: 
    4 minutes 26 seconds (266 seconds)
    When the file is opened from (and the scratch file is on) a fast array of spinning drives: 
    3 minutes 45 seconds (225 seconds)
    When the entire system is run from the SSD array: 
    2 minutes 31 seconds (151 seconds)
    During the action, because so many steps are performed on the big file, Photoshop writes a 30+ gigabyte scratch file on the scratch drive.
    Summary
    Clearly the very fast disk access markedly improves Photoshop's speed when it uses scratch space. 
    Plus copying big image files around is virtually instantaneous. 
    I don't use Bridge myself, but I have noticed that all the image thumbnails (via FastPictureViewer Codec Pack) just show up immediately in Explorer windows and Photoshop File Open/Save dialogs.  We can only assume this kind of drive speed would really make Bridge blaze through its operations as well.
    Following my footsteps would be expensive, but it can really work.
    -Noel

  • What is self-referential integrity? How does it affect the database?

    Hello Gurus,
    What is self-referential integrity?
    How is it achieved and implemented?
    And what is its effect on the database?
    Thanks in advance.
    ~ SubbaReddy .M

    Self-referential integrity simply means that the parent end of a foreign key constraint is in the same table as the child end. Consider the SCOTT.EMP table. Every manager is also an employee, hence there is a foreign key between the MGR column and the EMPNO column.
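    A minimal sketch of that constraint on the standard EMP table (assuming it is not already defined in your schema):
    ALTER TABLE scott.emp
    ADD CONSTRAINT fk_emp_mgr FOREIGN KEY (mgr)
    REFERENCES scott.emp (empno);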
    rgds, APC

  • Can I increase storage and speed to my iMac computer?

    I have close to 20,000 pictures and videos, and I'm not sure how much more my iMac can handle. Is there a way to increase storage to several terabytes and increase the speed through Apple?
    Thanks!
    T

    T ~ Welcome to the Support Communities. If you want to use an external hard drive for your iPhoto library, this Apple doc may help:
    iPhoto '11: Move your iPhoto library to a new location
    And regarding improving your iMac's speed by increasing its RAM:
    iMac: Memory specifications and upgrades
    ...Found by searching HERE.

  • Maintaining referential integrity using MS SQL server

    Some time ago I posted a question relating to the following extension:
    <extension vendor-name="kodo" key="jdbc-delete-action" value="null"/>
    Kodo generates an "on delete set null" constraint for this; however, MS SQL
    Server does not support this. Since I don't want to maintain this referential
    integrity in my Java code (meaning, as soon as an object is deleted, setting
    all references to it to null), I tried to implement a custom dictionary with
    the default MS SQL Server solution for this problem: creating a trigger by
    overriding the "getAddForeignKeySQL" method. This works fine when creating a
    database from scratch using the schema tool; however, when updating an
    existing database schema, this is ignored, thereby not solving the
    problem of having database maintenance automated. I suppose I have to
    write code to check whether the trigger already exists. Browsing through
    the code, I couldn't figure out how this is done. Can someone give me
    suggestions on how to do this (and whether it would take a lot of effort)?
    I would also like to know whether SolarMetric intends to deal with
    this problem in their framework. To be honest, I was quite surprised that
    Kodo doesn't take care of this, leaving my database in an inconsistent state.
    kind regards,
    Christiaan

    "Abe White" <[email protected]> schreef in bericht
    news:caaunu$ecj$[email protected]..
    >
    I also would like to know whether SolarMetric intends to deal with
    this problem in their framework. To be honest, I was quite surprised that
    Kodo doesn't take care of this, leaving my database in an inconsistent state.
    >
    Well, I would say that you're the one leaving the database in an
    inconsistent state by not keeping your object model consistent :) From the manual:
    6.2.2.12. jdbc-delete-action
    If a field holds a relation to another object, you can use the
    jdbc-delete-action field extension to control the delete action of the
    database foreign key that models this relation. Possible values are:
    null: Null the column(s) of this foreign key when the related record is
    deleted.
    It does mean that if the primary key record is deleted, all foreign keys to
    the record are set to null, right? Since the JDO framework is about database
    independence, not writing SQL code (and of course lots more ;), and Kodo
    supports MS SQL Server, I would have expected Kodo to set the foreign
    key to null when the object is deleted, even though I do know SQL Server
    does not support an 'on delete set null' SQL statement ;)
    >
    We have no plans to create triggers at this time.
    What do you mean when you say "it is ignored" when you're updating?
    Exactly what tool are you running, what actions are you running with, is
    the foreign key extension present, and what is the outcome?
    Like I said, I wrote code in the "getAddForeignKeySQL" method in the dictionary, so
    a trigger is created when creating the database in the workbench (by running
    the schema tool). However, if the table already exists but not the trigger,
    refreshing the database from the workbench does not call
    getAddForeignKeySql to add the trigger to the table.
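    For reference, the kind of trigger being discussed, an emulation of ON DELETE SET NULL, looks roughly like this (table and column names are illustrative, not the SQL Kodo generates):
    CREATE TRIGGER trg_parent_delete_set_null ON dbo.parent_table
    INSTEAD OF DELETE
    AS
    BEGIN
        -- Null out the child references first so the foreign key check passes...
        UPDATE c SET c.parent_id = NULL
        FROM dbo.child_table AS c
        INNER JOIN deleted AS d ON c.parent_id = d.id;
        -- ...then perform the actual delete of the parent rows.
        DELETE p
        FROM dbo.parent_table AS p
        INNER JOIN deleted AS d ON p.id = d.id;
    END;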

  • Can anybody explain to me the role of XI in IS-Retail integration and POS cons

    Can anybody explain to me the role of XI in IS-Retail integration and POS cons

    Hi AnilKumar,
    Find the list below:
    Q: Role of XI in IS-Retail integration
    Ans: Business Content Scenario – Retail
    Why use XI in this scenario:
    - A push of 'message type' data to BW is required
    - XI supports the quality of service 'Exactly Once In Order' in push scenarios
    - Stores deliver the data according to ARTS/IX-Retail
    - XI supports ARTS/IX-Retail
    - In case the stores deliver the data as flat files, they can easily be transferred to XML format via XI
    Business Content Scenario – Retail
    Store Connectivity Scenario:
    - Increase profitability by utilizing POS data for controlling retail processes and by understanding customer behavior in a better way.
    - SAP XI as a single point to collect POS sales information as mass data from (3rd-party) store systems via an open, industry-specific interface (ARTS/IX-Retail compliant).
    - Using SAP XI as an additional source for SAP BW, improved by Retail POS Data Management to ensure better data quality.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/0ccae190-0201-0010-1593-c90ef3c1d159
    Please award points if you find this helpful.
    BR,
    Alok Sharma

  • Bravo on great speed improvement in LR5!

    I'm currently in the process of editing large sets of close to 300 photos each.  My workflow requires me to go through every single shot, calibrate them, and eventually make a selection of around 100 good photos per set.  I often navigate between the Library and Develop modules, and I use pretty much all tools (as necessary) when calibrating/editing the shots.  My files are D800 raw files. Note also that my library contains 13 years of photography work, with around 250,000 photos (of various sources, evidently).
    I just wanted to take a few minutes off my editing process to comment on the huge improvement in speed I am experiencing in LR5 over LR4.
    In LR4, my workflow with my D800 files was very painful, with all the sluggishness I would get every time I moved from the Develop module to the Library module, and every time I used tools in the Develop module.  There was lag virtually everywhere.  I'm on a very up-to-date system, with SSD drives both for the photos I work on and for my system disk. 
    Since LR5, I can honestly say it feels pleasant once again to edit my photos.  Whereas I dreaded opening LR4 to do my job, I now look forward again to visiting my shots. 
    I read on the forum about the sharpness issue in lower-res exports, and I'm sure there are a few more things to fix here and there.  But the HUGE improvement in responsiveness and speed in the application makes LR5 a real winner to me.  Speed is definitely a main issue that should always be addressed first with each update, always trying to improve on it and make the workflow as pleasant as possible so that we, as photographers, have to focus on only one thing: creativity. 
    I'm glad this issue was addressed with LR5, and my only wish would have been to see it addressed sooner in LR4.
    Back to editing I go...
    cheers!


  • Have speed improvements been made?

    I just want to say that my posts on the forums today have appeared in the topic list nearly instantaneously — which wasn't the case just a few days ago. And if the forum administrators have made speed improvements recently, but are disappointed no-one has noticed, then perhaps this will bring a modicum of satisfaction.
    ...I wonder if someone "in the know" (Eric?) can indicate whether this is a permanent improvement. If it's merely due to draining the "internet tubes", it may not be...

    Thanks for your reply, but I'm not sure it explains the particular slow-down I'm seeing...
    Displaying forums is as fast as ever, as is the system's acceptance of new posts/replies. The problem is, as I said, that "posts take a long time to appear". ...I load the main forum page and expect to see my just-accepted post at the top of the list, but it doesn't appear until between one and a few minutes after my post has been accepted. Yes, there's a warning to expect a delay, but I'm pointing out that these forums go through periods of near-instantaneous appearance of new posts (as was the case a week ago) to the current sluggish appearance of new posts.
    The idea that there is a geographic aspect to the problem is perhaps nullified by the comment by MGW (in New Hampshire?) above:
    "Actually, the speedup has been noticeable for the past couple of days, having complained bitterly about the clog..."
    Also during "clogged" periods, duplicate posts from all over the world tend to appear as members mistakenly think their post, although accepted by the system, didn't "take" and re-enter it — because it doesn't show up in the forum's main list for a couple of minutes or so.
    ...We seem to regularly go through these alternating multi-week periods of members reporting speed and sluggishness but, so far, with no acknowledgement or explanation from the hosts.
    By the way, this post itself took a minute to appear in the main forum list — a week ago, it would have appeared almost instantaneously.
    Message was edited by: Alancito

  • Speed improvements in JDeveloper 3.0

    Will the overall response time/speed improve in JDeveloper 3.0?
    Thanks
    Mike

    Hi
    JDeveloper 3.0 is in beta right now, and it has improvements in
    response time/speed.
    For example the deployment wizard is way faster.
    regards
    raghu
    Michael Maculsay (guest) wrote:
    : Will the overall response time/speed improve in JDeveloper 3.0?
    : Thanks
    : Mike
