Performance Comparisons of ColdFusion on Different Operating Systems
I have been unable to find any information comparing the performance of ColdFusion on various platforms. Most of this depends on the JVM, correct?
Has anyone done any testing like this to compare ColdFusion on Windows vs. Linux?
<cfloop index="pageIndex" from="1" to="#num#" step="1">
    <cfthread name="thr#pageIndex#" threadIndex="#pageIndex#" action="run">
        <cftry>
            <!--- cfhttp timeout is in seconds; the original value of 26000000 (roughly 300 days) was surely unintended --->
            <cfhttp url="#watson_url#" method="post" charset="utf-8" timeout="300" useragent="#CGI.http_user_agent#" result="httpResult">
                <cfhttpparam type="formfield" name="appGUID" value="#guid#">
                <cfhttpparam type="formfield" name="userId" value="#user_id#">
                <cfhttpparam type="formfield" name="password" value="#user_pass#">
                <!--- attributes passed to cfthread are read from the Attributes scope inside the thread --->
                <cfhttpparam type="formfield" name="sql" value="#TempArr[attributes.threadIndex]#">
            </cfhttp>
            <!--- serialize writes to the shared array with a named lock to avoid a race between threads --->
            <cflock name="resultArrLock" type="exclusive" timeout="10">
                <cfset resultArr[attributes.threadIndex] = DeserializeJSON(httpResult.fileContent)>
            </cflock>
            <cfcatch type="any">
                <!--- the original swallowed all errors silently; consider logging cfcatch.message --->
            </cfcatch>
        </cftry>
    </cfthread>
</cfloop>
TempArr is an array of SQL statements; each thread runs one statement by posting it over HTTP and stores the result.
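The same fan-out pattern translates to plain Java, which is useful when reasoning about what the JVM does underneath ColdFusion. This is a sketch only, with a stub fetch() standing in for the HTTP POST; all names here are hypothetical:

```java
import java.util.concurrent.*;

public class FanOut {
    // Stub standing in for the HTTP POST that executes one SQL statement.
    static String fetch(String sql) {
        return "result:" + sql;
    }

    public static void main(String[] args) throws Exception {
        String[] tempArr = {"select 1", "select 2", "select 3"};
        String[] resultArr = new String[tempArr.length];
        ExecutorService pool = Executors.newFixedThreadPool(tempArr.length);
        Future<?>[] futures = new Future<?>[tempArr.length];
        for (int i = 0; i < tempArr.length; i++) {
            final int idx = i; // each task writes only its own slot, so no lock is needed
            futures[i] = pool.submit(() -> resultArr[idx] = fetch(tempArr[idx]));
        }
        for (Future<?> f : futures) f.get(); // join, surfacing any task exception
        pool.shutdown();
        for (String r : resultArr) System.out.println(r);
    }
}
```

Because every task owns a distinct array slot, no locking is required here; the named-lock approach is only needed when threads share mutable state, as in the CFML above.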
Similar Messages
-
I am looking for performance comparisons of WebLogic Server on a Sun server compared
to an Intel server. I am looking for the number of transactions per second that
can be processed on say a 4-way Wintel server as opposed to a Sun UE or, better
yet, a SunFire 280. Can anyone provide me with this information or point me towards
a source where I can find it?
Thanks!
Greg
Greg Wojtak wrote:
>
I am looking for performance comparisons of WebLogic Server on a Sun server compared
to an Intel server. I am looking for the number of transactions per second that
can be processed on say a 4-way Wintel server as opposed to a Sun UE or, better
yet, a SunFire 280. Can anyone provide me with this information or point me towards
a source where I can find it?
Thanks!
Good question in my opinion, but this is really a more general question about Java on Solaris vs. Wintel.
It seems you have a 4-processor Intel box and only a 2-processor Solaris box, and the UltraSPARC III also has a far lower clock rate, right? The result should be obvious.
The more general question is whether the Java threads can run as well across all the processors as they do on Solaris, and whether you can fit as much memory as you can under Solaris.
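Whether the threads of a Java process actually spread across all available processors can be checked empirically. The sketch below (workload sizes are arbitrary) times a CPU-bound job with one thread and then with one thread per processor:

```java
import java.util.concurrent.*;

public class ThreadScaling {
    // CPU-bound busywork so elapsed time reflects how well threads
    // spread across processors.
    static long burn(int iters) {
        long acc = 0;
        for (int i = 0; i < iters; i++) acc += (long) Math.sqrt(i);
        return acc;
    }

    // Runs `tasks` identical jobs on a pool of `threads` threads and
    // returns the elapsed wall-clock time in nanoseconds.
    static long timeWith(int threads, int tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long t0 = System.nanoTime();
        Future<?>[] fs = new Future<?>[tasks];
        for (int i = 0; i < tasks; i++) fs[i] = pool.submit(() -> burn(2_000_000));
        for (Future<?> f : fs) f.get();
        pool.shutdown();
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) throws Exception {
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("processors: " + cpus);
        for (int n : new int[] {1, cpus}) {
            System.out.printf("%d thread(s): %d ms%n", n, timeWith(n, 8) / 1_000_000);
        }
    }
}
```

On an OS and JVM where threads schedule well across cores, the second timing should be noticeably lower than the first for the same total work.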
Thread handling is still totally different under Windows and Solaris: Wintel does time-sliced scheduling between the threads, while Solaris still uses the concept of LWPs, which are managed by the kernel; the threads themselves are not scheduled by the kernel.
I recently heard during a Sun talk that Sun now thinks the kernel overhead for scheduling is no longer too high, and the upcoming Solaris 2.9 will provide a new thread model that does the scheduling inside the kernel.
I would assume that a heavily threaded application performs better on Solaris, but this is only an assumption; I never had the time to measure it.
Good luck!
Frank -
Performance comparisons of DES and AES
Hi,
Has anyone gathered data in regards to performance comparison of AES vs DES using the JCE in java?
I would be interested in finding which is faster.
Thanks,
Dan
Oh, sorry about the truncated table; I had pasted an abridged version of the "openssl speed" output.
I will paste the unabridged version of the "openssl speed des" output (openssl is a C crypto toolkit, not a Java toolkit).
To get the most accurate results, try to run this
program when this computer is idle.
First we calculate the approximate speed ...
Doing des cbc 20971520 times on 16 size blocks: 20971520 des cbc's in 8.11s
Doing des cbc 5242880 times on 64 size blocks: 5242880 des cbc's in 7.97s
Doing des cbc 1310720 times on 256 size blocks: 1310720 des cbc's in 7.86s
Doing des cbc 327680 times on 1024 size blocks: 327680 des cbc's in 7.78s
Doing des cbc 40960 times on 8192 size blocks: 40960 des cbc's in 7.24s
Doing des ede3 6990506 times on 16 size blocks: 6990506 des ede3's in 6.33s
Doing des ede3 1747626 times on 64 size blocks: 1747626 des ede3's in 6.31s
Doing des ede3 436906 times on 256 size blocks: 436906 des ede3's in 6.27s
Doing des ede3 109226 times on 1024 size blocks: 109226 des ede3's in 6.26s
Doing des ede3 13653 times on 8192 size blocks: 13653 des ede3's in 6.08s
OpenSSL 0.9.7d 17 Mar 2004
built on: Thu Apr 22 13:21:37 2004
options:bn(64,32) md2(int) rc4(idx,int) des(idx,cisc,4,long) aes(partial) idea(int) blowfish(idx)
compiler: cl /MD /W3 /WX /G5 /Ox /O2 /Ob2 /Gs0 /GF /Gy /nologo -DOPENSSL_SYSNAME_WIN32 -DWIN32_LEAN_AND_MEAN -DL_ENDIAN -DDSO_WIN32 -DBN_ASM -DMD5_ASM -DSHA1_ASM -DRMD160_ASM /Fdout32dll -DOPENSSL_NO_KRB5
available timing options: TIMEB HZ=1000
timing function used: ftime
The 'numbers' are in 1000s of bytes per second processed.
type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
des cbc       41374.15k    42106.20k    42695.55k    43123.55k    46377.93k
des ede3      17675.11k    17719.91k    17849.97k    17852.74k    18398.65k -
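The openssl figures above measure C code; for the Java-side numbers the original question asks about, a minimal JCE timing sketch follows. ECB mode and the block size are arbitrary choices, and with no JIT warmup this is only a rough micro-benchmark:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class JceSpeed {
    // Times encryption of `reps` buffers of `size` bytes with the given
    // JCE algorithm; returns throughput in bytes per second.
    static double throughput(String algo, int size, int reps) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance(algo);
        SecretKey key = kg.generateKey();
        Cipher c = Cipher.getInstance(algo + "/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key);
        byte[] buf = new byte[size];
        long t0 = System.nanoTime();
        for (int i = 0; i < reps; i++) c.doFinal(buf);
        long t1 = System.nanoTime();
        return (double) size * reps / ((t1 - t0) / 1e9);
    }

    public static void main(String[] args) throws Exception {
        for (String algo : new String[] {"DES", "AES"}) {
            System.out.printf("%s: %.0f bytes/sec on 1024-byte buffers%n",
                    algo, throughput(algo, 1024, 20000));
        }
    }
}
```

On most modern hardware AES tends to come out well ahead of DES, but run it on your own JVM and provider before drawing conclusions.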
Looking for Performance Comparisons Between JRockit 6 and Sun Java SE 6
Hello,
Can someone point me to some performance comparisons (benchmarks, etc.) between the JRockit and Sun JVMs?
Thanks in advance.
Hi Ben.
Before I send you to the SPEC sites (which can be a tad hard to parse) I must ask: what application or type of application are you interested in? The answer will vary a bit depending on what you need. -
Performance comparison of J2sdkee1.3.1's JMS and iPlanet MQ 2.0
I am evaluating two JMS APIs: The free one comes with J2sdkee 1.3.1 and the iPlanet Message Queue for Java 2.0 (free for development and evaluation, not free on Production environment).
I created two JSP pages running under Tomcat 3.3. One JSP page calls a Java class which uses the j2sdkee 1.3.1 JMS API: it creates an InitialContext(), looks up the QueueConnectionFactory and Queue, then sends a text message to the Queue.
I made a small performance improvement by putting this setup into a static init() method, so it is called only once; subsequent requests only send the message.
The second JSP page is calling a Java class 2 which doesn't use JNDI, instead, it calls the new QueueConnectionFactory/QueueConnection classes provided by iPlanet MQ API.
I found out that the InitialContext() call and lookup take quite a long time in the first case. After that, sending messages is quite fast. However, if "j2ee" is shut down in the middle, the JSP page can't recover unless I restart the Tomcat server.
The performance of iPlanet MQ API is pretty good even if the QueueConnectionFactory/QueueConnection classes are created for each request. And it can recover after the Broker is restarted.
Is anybody experienced in using the j2sdkee 1.3.1 JMS API? If you know a better way to improve performance other than the static init() method, which can't recover, please share your information. I appreciate it.
Thanks,
Ye
Your performance comparison should be identical in all ways except for the particular server you are trying to evaluate, which should be relatively painless given the use of JNDI.
At the very least, ignore the JNDI lookup in your first test.
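One way to keep the one-time-lookup optimization while still recovering after a broker restart is to cache lazily and drop the cached handle on failure. A sketch, with a Supplier standing in for the real JNDI lookup (the names are illustrative, not any vendor's API):

```java
import java.util.function.Supplier;

// Caches an expensive handle (e.g. the result of a JNDI lookup for a
// QueueConnectionFactory) but drops it on failure, so the next call
// re-creates it instead of staying broken until a server restart.
public class RecoveringCache<T> {
    private final Supplier<T> create;
    private volatile T cached;

    public RecoveringCache(Supplier<T> create) {
        this.create = create;
    }

    public T get() {
        T c = cached;
        if (c == null) {
            synchronized (this) {
                if (cached == null) cached = create.get();
                c = cached;
            }
        }
        return c;
    }

    // Call this when an operation on the cached handle fails
    // (e.g. the broker was restarted underneath it).
    public synchronized void invalidate() {
        cached = null;
    }
}
```

A send would then catch JMSException, call invalidate(), and retry once, so a restarted broker no longer requires restarting Tomcat.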
I have found the j2ee JMS provider (the free one) to be quite slow, and I have also found a bug where the shutdown and startup process changes the message order, which is a fundamental error.
I have used IBM MQ (WebSphere MQ) and found it to be very fast; it worked as expected. I have not used their pub/sub product (which I suspect is based on Talarian).
I favour servers built in native code, integrated using JMS, just as I prefer Oracle over a pure-Java RDBMS but like the ease of integration offered by JDBC.
I would avoid web-style start-up companies like iPlanet. That joint effort seems like a desperate attempt at reviving Netscape through technology rather than through a business concept. -
Apache-Netscape webserver plugin performance comparison WL8.1
Hi,
Can anyone guide me about the performance comparisons between the Apache and Netscape plug-ins?
Which one of the above would be best for WLP 8.1, Windows 2003 Server, Oracle 9i?
Thanks,
sumit
([email protected]) -
Database Performance Comparison on Linux and Windows NT
Hi everybody,
We have an application running on Oracle 7.3 and Windows NT. We are now moving it to Oracle8i (Release 2); the OS platform is not yet decided. We have been asked to submit a performance report of Oracle8i on Linux and Oracle8i on Windows NT. How can we make a comparison of Oracle8i on Windows NT and Linux (Red Hat 6.2)?
Also, if anybody knows any web site related to this, please let me know that as well.
Thanks and regards,
Hari
I did something like that a little while ago, so here's a summary of what I did and what I learned:
I also wanted to select a server OS for a data warehouse, and I did some head-to-head comparisons (same machine, just booted into different OS varieties, loading the same data). I used Red Hat 6.0 and 6.2, SuSE 6.3 and 6.4, and Mandrake 7.0 and 7.1, plus NT4.
Overall, I found that any Linux outperformed NT--but not by enough to make that a major factor in selection. Subjectively, I also found that everything I did was just a little bit less straightforward, a little bit less convenient in Red Hat than it was in any other Linux distribution I tried--but again, not necessarily enough to make a big difference in selection.
I also had more crashes and lockups with NT than I did with any Linux, but that may be more a reflection of my skill at administering NT vs. my skill at administering Linux.
I'd say it's definitely worth your while to evaluate Linux, and I'd suggest that you have a look at several Linux distributions while you're at it. Personally, I like SuSE and dislike Red Hat. -
Performance comparisons between Apple's SSDs and hard drives
I am looking for objective performance data comparing the SSDs in Apple's MacBook Pro versus Apple's hard drives in the MacBook Pro. I've read some material on Tom's Hardware but am looking for specific device comparisons of these storage types in MacBooks: seek/latency/read transfer rates/write transfer rates/reliability/etc.
Thanks for the information!! I've book-marked the site and plan to refer to it often.
-
Servlets/JDBC vs. servlets/EJB performance comparison/benchmark
I have a PHB who believes that EJB has no ___performance___ benefit
against straightforward servlets/JSP/JDBC. Personally, I believe that
using EJB is more scalable instead of using servlets/JDBC with
connection pooling.
However, I am at a loss on how to prove it. There is all the theory, but
I would appreciate it if anyone has benchmarks or comparisons of
servlets/JSP/JDBC and servlets/JSP/EJB performance, assuming that they
were tasked to do the same thing (e.g., perform the same SQL
statement on the same set of tables, etc.).
Or some guide on how to setup such a benchmark and prove it internally.
In other words, the PHB needs numbers, showing performance and
scalability. In particular, I would like this to be in WLS 6.0.
Any help appreciated.
First off, whether you use servlets + JDBC or servlets + EJB, you'll
most likely spend much of your time in the database.
I would strongly suggest that you avoid the servlets + JDBC
architecture. If you want to do straight JDBC code, then it's
preferable to use a stateless session EJB between the presentation layer
and the persistence layer.
So, you should definitely consider an architecture where you have:
servlets/jsp --> stateless session ejb --> JDBC code
Your servlet / jsp layer handles presentation.
The stateless session EJB layer abstracts the persistence layer and
handles transaction demarcation.
Modularity is important here. There's no reason that your presentation
layer should be concerned with your persistence logic. Your application
might be re-used or later enhanced with an Entity EJB, or JCA Connector,
or a JMS queue providing the persistence layer.
Also, you will usually have web or graphic designers who are modifying
the web pages. Generally, they should not be exposed to transactions
and jdbc code.
We optimize the RMI calls so they are just local method calls. The
stateless session ejb instances are pooled. You won't see much if any
performance overhead.
-- Rob
--
Coming Soon: Building J2EE Applications & BEA WebLogic Server
by Michael Girdley, Rob Woollen, and Sandra Emerson
http://learnweblogic.com -
Performance with having tables in different schemas
Hi all,
We have a requirement to segregate tables into different schemas based on their sources, but everything is going to be on the same instance and database.
Is there a performance hit (when querying across tables) from having the tables in different schemas as opposed to having them all in the same schema?
Thanks
Narasimha
Most likely there is a bit of a performance impact if your application references tables from different schemas. When the schemas are in different databases you need a database link to access them, and once a database link is used the network also comes into the picture: even queries on the same instance get routed through a network connection to fetch the data, and distributed transactions can also have issues at times. So, as far as possible, the distribution of objects across different schemas should be avoided.
-
SUN DSEE 6.2 vs Fedora DS 1.1 performance comparison
Hi all,
I've just discovered a nice tool from SUN about performance analysis for ldap servers named SLAMD (http://www.slamd.com)
So I configured it and tried to analyze my servers. I've set up one SUN DSEE 6.2 and one Fedora DS 1.1
on my workstation, both populated with the same data (160 sample entries from Sun) and using the same file descriptors.
My workstation is running fedora 8, Core(TM)2 Duo CPU E6550 @ 2.33GHz / 2 GB ram.
I did a couple of tests but all of them had the same search filters
Entry DN ou=people,dc=example,dc=com
Search Filter objectClass=*
Attribute(s) to Compare/Modify Add Operation Frequency 3
Compare Operation Frequency 7
Delete Operation Frequency 4
Modify Operation Frequency 4
Modify RDN Operation Frequency 1
Search Operation Frequency 10 description
I will give the results of my final test which lasted 240 seconds / 200 threads from one client
DS Overall Operations (Average/sec)
SUN *35,858*
Fedora *304,867*
It seems to me there is a huge difference! I didn't expect to get such numbers. To tell you the truth,
I expected SUN DS to be much faster than Fedora DS, instead of being *10 times slower*.
Furthermore, while running the test against Fedora DS the system reached a max load of around 7-8, which implied that the system
worked hard to perform the test (CPU always at 100%).
On the other hand, while running the SUN DS test, the system never got a load of more than 1 (CPU no more than 22%).
It was as if SUN DS were capable of doing better but never bothered. I played with indexes, file descriptors, and the number of threads without
any significant change in performance.
I'm sure SUN DS can do better, so I'm looking for thoughts on the subject as well as performance tuning/optimization documentation.
Is the resource kit also available for 6.2 or is it just for SUN ONE server?
regards
Giannis
Giannis,
Giving raw performance numbers doesn't mean anything unless you also provide the details of the data in your directory server, the settings and the exact tests performed (if it's a slamd standard job, give its name).
Slamd contains many jobs that are doing many different things leading to completely different numbers in term of operations per second.
That said, the numbers you show puzzle me: SUN 35,858 vs. Fedora 304,867 (operations/second)?
I assume the comma is the decimal separator (and not, as in the US, the separator between thousands and hundreds).
If so, there is definitely something badly configured on Sun DS and/or Slamd.
Regards,
Ludovic. -
SQL Server 2008R2 vs 2012 OLTP performance difference - log flushes size different
Hi all,
I'm doing some performance test against 2 identical virtual machine (each VM has the same virtual resources and use the same physical hardware).
The first VM has Windows Server 2008R2 and SQL Server 2008R2 Standard Edition;
the second VM has Windows Server 2012R2 and SQL Server 2012 SP2 + CU1 Standard Edition.
I'm using HammerDB (http://hammerora.sourceforge.net/) as the benchmark tool to simulate a TPC-C test.
I've noticed a significant performance difference between SQL 2008R2 and SQL 2012; 2008R2 performs better. Let me explain what I've found:
I use a third VM as the client where the HammerDB software is installed, and I run the test against the two SQL Servers (one server at a time); on SQL 2008R2 I reach a higher number of transactions per minute.
HammerDB creates a database on each database server (so the database are identical except for the compatibility level), and then HammerDB execute a sequence of query (insert-update) simulating the TPC-C standard, the sequence is identical on both servers.
Using perfmon on the two servers I've found a very interesting thing:
On the disk used by the HammerDB database's log (I use separate disks for data and log) I've monitored Avg. Disk Bytes/Write and noticed that SQL 2012 writes to the log in smaller packets (say an average of 3k, against an average of 5k written by SQL 2008R2).
I've also checked the value of Log flushes / sec on both servers and noticed that SQL2012 do, on average, more log flushes per second, so more log flushes of less bytes...
I've searched for any documented difference in the way log buffers are flushed to disk between 2008r2 and 2012 but found no difference.
Can anyone point me in the correct direction?
Andrea,
1) First of all, fn_dblog exposes a lot of fields that do not exist in SQL 2008R2.
This is correct, though I can't elaborate as I do not know how/why the changes were made.
2) for the same DML or DDL the number of log record generated are different
I thought as much (but didn't know the workload).
I would like to read about and study what these changes are! Do you have any useful links to internals docs?
Unfortunately I cannot offer anything as the function used is currently undocumented and there are no published papers or documentation by MS on reading log records/why/how. I would assume this to all be NDA information by Microsoft.
Sorry I can't be of more help, but you at least know that the different versions do have behavior changes.
Sean Gallardy | Blog | Microsoft Certified Master -
Performance comparisons between POF & open source serialization mechanism?
I'm curious whether anyone has done any comparisons of performance and serialized object sizes between POF and open source mechanisms such as Google Protocol Buffers and Thrift, both of which seem to be becoming quite popular. Personally, I dislike having to write a separate schema and then generate classes from it, which Protocol Buffers and Thrift require you to do, and I vastly prefer POF's mechanism of keeping everything in the code (although I wish the POF annotation framework was officially supported). But aside from that, I'd prefer to use Coherence for many of the purposes that some of my co-workers are currently using other solutions for, and this would be useful information to have in making the case.
FWIW, I hope someone at Oracle is seriously considering open-sourcing POF. I don't think that anyone who would've bought a Coherence license would decide not to because they could get POF for free. They'd just go and use something else, like the aforementioned Protocol Buffers and Thrift. Not only are many companies adopting these as standards, but as has been mentioned in other threads on this forum, that's exactly what even some Coherence users are doing:
Re: POF compatibility across Coherence versions
I really wish I could to encourage developers that I work with to give POF a look as an alternative to those two (both of which we're currently using), regardless of whether or not they plan on using Coherence in the immediate future. As things stand right now, I can't use Coherence for code that needs to be shared with people in other groups who haven't adopted Coherence yet. But if I could use POF outside of Coherence, it would probably be acceptable to those folks as a generic serialization mechanism, and it would make migrating such code to Coherence at some point down the road that much easier. If, on the other hand, I have to write that code around, say, Protocol Buffers, then it becomes much harder to later justify creating and maintaining POF as a second serialization mechanism for the same set of objects, which means it's much harder to justify using Coherence for those objects.
In short, making POF usable outside of Coherence, and who knows, maybe even getting it supported in popular open source projects such as Cassandra (which, as I understand it, uses Thrift) would make it easier to adopt Coherence in environments where objects are already persisted in other systems.
That's my two cents.
Hi,
Thank you for the links. It is very interesting.
I have implemented a POF serialization plugin for this benchmark: http://wiki.github.com/eishay/jvm-serializers/
You can get the code, run the benchmark yourself, and compare the results.
Handmade POF serialization: http://gridkit.googlecode.com/svn/wiki/snippets/CoherencePofSerializer.java
Reflection POF serialization: http://gridkit.googlecode.com/svn/wiki/snippets/CoherencePofReflection.java
Also, you need to add two lines in BenchmarkRunner.java; all other instructions are on the jvm-serializers project page.
Protobuf.register(groups);
Thrift.register(groups);
ActiveMQProtobuf.register(groups);
Protostuff.register(groups);
Kryo.register(groups);
AvroSpecific.register(groups);
AvroGeneric.register(groups);
// register POF tests here
CoherencePofSerializer.register(groups);
CoherencePofReflection.register(groups);
CksBinary.register(groups);
Hessian.register(groups);
JavaBuiltIn.register(groups);
JavaManual.register(groups);
Scala.register(groups);
A few comments on the results:
* A micro benchmark is a micro benchmark; I saw quite different results when comparing Java vs. POF vs. POF reflection on my own domain objects.
* POF scores very well compared to protocols like Protobuf or Thrift, especially on deserialization.
* The Kryo project is quite interesting; I'm going to give it a try in my next project for sure.
Again, thanks a lot for the link. -
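The round trip such a benchmark measures can be sketched with JDK serialization as the baseline; POF and the other codecs plug into the same shape. The class and names below are illustrative, not part of jvm-serializers:

```java
import java.io.*;

public class SerBench {
    // One benchmark round trip: serialize, record the byte size,
    // then deserialize back to an object.
    static Object roundTrip(Serializable obj, long[] sizeOut) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        byte[] bytes = bos.toByteArray();
        sizeOut[0] = bytes.length;
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        long[] size = new long[1];
        String sample = "a sample domain object would go here";
        long t0 = System.nanoTime();
        Object back = roundTrip(sample, size);
        long t1 = System.nanoTime();
        System.out.printf("size=%d bytes, time=%d ns, equal=%b%n",
                size[0], t1 - t0, sample.equals(back));
    }
}
```

Serialized size and deserialization time are the two axes the thread discusses; swapping JDK serialization for POF or Protobuf means replacing only the body of roundTrip.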
Comparison of schemas across different database Instances
Hi All,
I need to compare around 8 schemas across 4 different instances. We expect them to be the same (at least the constraints and indexes), but we have some exceptions. We don't have any database links. I tried using PL/SQL Developer, but it only reports changes for a source DB vs. a target DB. Which is the best tool for this purpose?
Here is a script for remote schema comparison:
compareRemoteSchemaDDL.sql
This script compares the database tables and columns between
two schemas in separate databases (local schema and a remote
schema).
This script should be run from the schema of the local user.
A database link will need to be created in order to compare
the schemas.
column sid_name new_value local_schema
column this_user new_value local_user
select substr(global_name, 1, (instr(global_name, '.')) - 1) sid_name, user this_user
from global_name;
select db_link from user_db_links;
define remote_schema=&which_db_link
spool &spool_file
set verify off
column len format 999
column cons_column format a20 truncate
column object_name format a30
break on table_name nodup skip 1
set pages 999
set lines 110
prompt ********************************************************************
prompt *
prompt * Comparison of Local Data Base to Remote Data Base
prompt *
prompt * User: &local_user
prompt *
prompt * Local Data Base: &local_schema
prompt * Remote Data Base: &remote_schema
prompt *
prompt ********************************************************************
prompt **** Additional Tables on Local DB &local_schema ****
Select table_name from user_tables
MINUS
Select table_name from user_tables@&remote_schema
order by 1;
prompt **** Additional Tables on Remote DB &remote_schema ****
Select table_name from user_tables@&remote_schema
MINUS
Select table_name from user_tables
order by 1;
prompt **** Additional Columns on Local DB &local_schema ****
Select table_name, column_name from user_tab_columns
MINUS
Select table_name, column_name from user_tab_columns@&remote_schema
order by 1, 2;
prompt **** Additional Columns on Remote DB &remote_schema ****
Select table_name, column_name from user_tab_columns@&remote_schema
MINUS
Select table_name, column_name from user_tab_columns
order by 1, 2;
prompt **** Columns Changed on Local DB &local_schema ****
Select c1.table_name, c1.column_name, c1.data_type, c1.data_length len
from user_tab_columns c1, user_tab_columns@&remote_schema c2
where c1.table_name = c2.table_name and c1.column_name = c2.column_name
and ( c1.data_type <> c2.data_type or c1.data_length <> c2.data_length
or c1.nullable <> c2.nullable
or nvl(c1.data_precision,0) <> nvl(c2.data_precision,0)
or nvl(c1.data_scale,0) <> nvl(c2.data_scale,0) )
order by 1, 2;
prompt **** Additional Indexes on Local DB &local_schema ****
Select decode(substr(INDEX_NAME,1,4), 'SYS_', 'SYS_', INDEX_NAME) INDEX_NAME
from user_indexes
MINUS
Select decode(substr(INDEX_NAME,1,4), 'SYS_', 'SYS_', INDEX_NAME) INDEX_NAME
from user_indexes@&remote_schema
order by 1;
prompt **** Additional Indexes on Remote DB &remote_schema ****
Select decode(substr(INDEX_NAME,1,4), 'SYS_', 'SYS_', INDEX_NAME) INDEX_NAME
from user_indexes@&remote_schema
MINUS
Select decode(substr(INDEX_NAME,1,4), 'SYS_', 'SYS_', INDEX_NAME) INDEX_NAME
from user_indexes;
prompt **** Additional Objects on Local DB &local_schema ****
Select object_name, object_type from user_objects
where object_type in ( 'PACKAGE', 'FUNCTION', 'PROCEDURE', 'VIEW', 'SEQUENCE' )
MINUS
Select object_name, object_type from user_objects@&remote_schema
where object_type in ( 'PACKAGE', 'FUNCTION', 'PROCEDURE', 'VIEW', 'SEQUENCE' )
order by 1, 2;
prompt **** Additional Objects on Remote DB &remote_schema ****
Select object_name, object_type from user_objects@&remote_schema
where object_type in ( 'PACKAGE', 'FUNCTION', 'PROCEDURE', 'VIEW', 'SEQUENCE' )
MINUS
Select object_name, object_type from user_objects
where object_type in ( 'PACKAGE', 'FUNCTION', 'PROCEDURE', 'VIEW', 'SEQUENCE' )
order by 1, 2;
prompt **** Additional Triggers on Local DB &local_schema ****
Select trigger_name, trigger_type, table_name from user_triggers
MINUS
Select trigger_name, trigger_type, table_name from user_triggers@&remote_schema
order by 1;
prompt **** Additional Triggers on Remote DB &remote_schema ****
Select trigger_name, trigger_type, table_name from user_triggers@&remote_schema
MINUS
Select trigger_name, trigger_type, table_name from user_triggers
order by 1;
Prompt **** Additional Constraints on Local DB &local_schema ****
Select TABLE_NAME,
decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
CONSTRAINT_NAME ) CONSTRAINT_NAME,
CONSTRAINT_TYPE
from USER_CONSTRAINTS
MINUS
Select TABLE_NAME,
decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
CONSTRAINT_NAME ) CONSTRAINT_NAME,
CONSTRAINT_TYPE
from USER_CONSTRAINTS@&remote_schema
order by 1, 2, 3;
Prompt **** Additional Constraints on Remote DB &remote_schema ****
Select TABLE_NAME,
decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
CONSTRAINT_NAME ) CONSTRAINT_NAME,
CONSTRAINT_TYPE
from USER_CONSTRAINTS@&remote_schema
MINUS
Select TABLE_NAME,
decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
CONSTRAINT_NAME ) CONSTRAINT_NAME,
CONSTRAINT_TYPE
from USER_CONSTRAINTS
order by 1, 2, 3;
Prompt **** Additional Constraints Columns on Local DB &local_schema ****
Select TABLE_NAME,
decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
CONSTRAINT_NAME ) CONSTRAINT_NAME,
COLUMN_NAME cons_column
from USER_CONS_COLUMNS
MINUS
Select TABLE_NAME,
decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
CONSTRAINT_NAME ) CONSTRAINT_NAME,
COLUMN_NAME cons_column
from USER_CONS_COLUMNS@&remote_schema
order by 1, 2, 3;
Prompt **** Additional Constraints Columns on Remote DB &remote_schema ****
Select TABLE_NAME,
decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
CONSTRAINT_NAME ) CONSTRAINT_NAME,
COLUMN_NAME cons_column
from USER_CONS_COLUMNS@&remote_schema
MINUS
Select TABLE_NAME,
decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
CONSTRAINT_NAME ) CONSTRAINT_NAME,
COLUMN_NAME cons_column
from USER_CONS_COLUMNS
order by 1, 2, 3;
prompt **** Additional Public Synonyms on Local DB &local_schema ****
Select owner, synonym_name from dba_synonyms
where owner = 'PUBLIC'
and table_owner = UPPER('&local_user')
MINUS
Select owner, synonym_name from dba_synonyms@&remote_schema
where owner = 'PUBLIC'
and table_owner = UPPER('&local_user')
order by 1;
prompt **** Additional Public Synonyms on Remote DB &remote_schema ****
Select owner, synonym_name from dba_synonyms@&remote_schema
where owner = 'PUBLIC'
and table_owner = UPPER('&local_user')
MINUS
Select owner, synonym_name from dba_synonyms
where owner = 'PUBLIC'
and table_owner = UPPER('&local_user')
order by 1;
spool off
Daljit Singh -
Multi-Cam Performance Comparison of Premiere, Vegas, Media Composer and EDIUS
The video below provides a comparison of Multi-Cam performance of the following video editors
Adobe Premiere
Sony Vegas
Avid Media Composer
Grass Valley's EDIUS
The following summarizes the results. See the video for all the details. In the lists, 1 is the best.
Playback Performance
Vegas
EDIUS
Premiere
Media Composer
Ease and Flexibility of Setup
EDIUS
Vegas
Premiere
Media Composer
Full Screen Playback
EDIUS
Premiere
Media Composer
Vegas
Multi-Screen Friendliness
EDIUS
Premiere
Vegas
Media Composer
Well, I've come up with a terrible but comical solution for the second issue which doesn't involve reworking anything. It turns out that, upon launching Premiere, I can play the master sequence with nested multi-cam sequences a grand total of ONCE with minimal delay. If I stop the play-through, it then takes 3-4 minutes to get the sequence playing again. Of course, Premiere decides to hang when I try to close it after playing this master timeline, so I'm finding that I can inch my way through this project by (1) launching Premiere, (2) playing the master sequence, (3) making the edits I need, (4) forcibly crashing Premiere, and (5) relaunching Premiere and repeating. It's a terrible way to work, but the project is so close to completion that I'm willing to suffer through it.
As to issue 1, I've taken each offending clip, extended the audio track one or two frames at the beginning, and then used those frames to ramp the volume up from silence to 0 dB. This seems to avoid the brief moment of distortion at the start. Notably, I did try your suggestion of going back to the original source audio, but I still got the same issue. My work-around is the only solution I've been able to come up with.