Is MJPEG/RTP performance improved in Java 6 (Mustang)?
Hi,
I use JMF to stream video from a web cam to a client over the JPEG/RTP protocol, and I have some performance problems in terms of fluidity and image quality.
Is (M)JPEG/RTP performance improved in Java 6 (Mustang)?
I read that Mustang loads JPEG images about 50% faster thanks to optimizations in com.sun.image.codec.jpeg.JPEGImageReader (http://weblogs.java.net/blog/campbell/archive/2006/01/400_horsepower.html). If JMF uses this class for JPEG/RTP, performance should improve.
Does anyone have experience on that point?
Thanks,
Julien
Oops, it's not
com.sun.image.codec.jpeg.JPEGImageReader but
com.sun.imageio.plugins.jpeg.JPEGImageReader
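Whether JMF routes its JPEG/RTP depacketizing through this plugin is exactly the open question here, but you can time ImageIO's JPEG decoding (which does go through com.sun.imageio.plugins.jpeg.JPEGImageReader in the Sun JDK) yourself. A minimal, self-contained sketch; the class and method names are my own, and the synthetic frame stands in for a webcam capture:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class JpegRoundTrip {

    // Encode a synthetic frame so the example needs no webcam or file on disk.
    public static byte[] encode(int w, int h) {
        try {
            BufferedImage frame = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(frame, "jpeg", out);
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Decoding goes through the ImageIO JPEG plugin
    // (com.sun.imageio.plugins.jpeg.JPEGImageReader in the Sun JDK),
    // the code path that was optimized in Mustang.
    public static BufferedImage decode(byte[] jpeg) {
        try {
            return ImageIO.read(new ByteArrayInputStream(jpeg));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] jpeg = encode(640, 480);
        long t0 = System.nanoTime();
        BufferedImage img = decode(jpeg);
        long t1 = System.nanoTime();
        System.out.println("Decoded " + img.getWidth() + "x" + img.getHeight()
                + " in " + ((t1 - t0) / 1000000) + " ms");
    }
}
```

Running this on Java 5 and on Mustang with the same frame sizes would show whether the decoder speedup applies to your images, independently of JMF.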
Similar Messages
-
Performance problem with java stored procedure
hi,
I developed a Java class and stored it in Oracle 8.1.7.
This class imports several other classes that are also stored in the database.
It works, but the execution performance is disappointing: it is very slow. I guess that is because of the large number of classes that must be loaded before my class can execute.
I tried increasing the size of the Java pool (I set java_pool_size to 70 MB in init.ora), but performance is not much better.
Does anyone have an idea how to speed up the execution of my class?
In particular, is there a way to keep the Java objects used by my class permanently in memory?
Thanks in advance
bye
[email protected]
Before running Java, the database session needs to be Java-enabled; this might be the reason it is taking so long. If that is the case, you should see an improvement in subsequent calls: once a database session is Java-enabled, other users can benefit from it.
Kuassi
I have a performance issue with Java stored procedures; I hope someone can help me out. I use Java stored procedures in my application, basically to do some validation and to build XML messages from the database tables. I have noticed that when I call the PL/SQL wrapper function, it takes time to load the Java class, and once the class is loaded, execution is fast. Most of the time is spent loading the class rather than executing the function, so if I could reduce the class load time, I could improve performance drastically. Does anyone know how to reduce the class load time? The platform and Oracle version are as follows.
O/S: IBM AIX
Oracle: 8.1.7 -
Are any I/O performance improvement packs for WebLogic 10 MP3/R3 on Solaris available?
Are any I/O performance improvement packs for WebLogic 10 MP3/R3 on Solaris 5.9 available?
Currently reading files using org.apache.xpath.XPathAPI is taking time. Same thing works on Windows without any issues.
Appreciate any help on this.
Thank you,
Parshuram Juwekar
I used CachedXPathAPI.java from the same package in xalan.jar instead of XPathAPI.java, which solved my problem.
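As an aside, the same caching idea exists in the standard javax.xml.xpath API that ships with the JDK: compile the expression once and reuse the XPathExpression instead of re-parsing the expression string on every call. A minimal sketch (class name and XML are illustrative, not from the original thread; note that XPathExpression instances are not thread-safe):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class XPathReuse {

    // Compiled once and reused for every document: avoids re-parsing the
    // XPath expression on each evaluation, which is the costly repeated work.
    private static final XPathExpression TITLE_EXPR = compile("/catalog/book/title");

    private static XPathExpression compile(String expr) {
        try {
            return XPathFactory.newInstance().newXPath().compile(expr);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Parse the XML and evaluate the pre-compiled expression against it,
    // returning the string value of the first matching node.
    public static String evaluate(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            return TITLE_EXPR.evaluate(doc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<catalog><book><title>Effective Java</title></book></catalog>";
        System.out.println(evaluate(xml));
    }
}
```

Xalan's CachedXPathAPI goes further by also caching the DTM (document model) across calls, which is why it helps when evaluating many expressions against the same document.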
-
[svn] 3543: Asc front end performance improvements & bug fixes
Revision: 3543
Author: [email protected]
Date: 2008-10-09 11:54:47 -0700 (Thu, 09 Oct 2008)
Log Message:
Asc front end performance improvements & bug fixes
This set of Asc parser/scanner/inputbuffer updates contains changes that simplify the parser's lookahead/match FSM.
A method, 'shift()', has been added that replaces match() when the token to be consumed is known.
Also, a simplified version of lookahead has been added that returns the lookahead token, which allows the use of switch code when the lookahead set is large.
Simple inputbuffer changes (switching to a String, so that we can use substring instead of valueOf) seem to result in about a 2% performance improvement.
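A hypothetical, much-simplified illustration of the shift()/lookahead/match() distinction described above (this is not the actual Asc parser code, just a sketch of the pattern):

```java
import java.util.List;

public class MiniParser {
    private final List<String> tokens;
    private int pos;

    public MiniParser(List<String> tokens) {
        this.tokens = tokens;
    }

    // lookahead(): return the next token without consuming it, so callers
    // can switch over it when the lookahead set is large.
    String lookahead() {
        return pos < tokens.size() ? tokens.get(pos) : "<eof>";
    }

    // match(): consume the next token only if it is the expected one,
    // failing otherwise. This comparison is redundant when the caller
    // has already inspected the token via lookahead().
    void match(String expected) {
        if (!lookahead().equals(expected)) {
            throw new IllegalStateException(
                    "expected " + expected + " but saw " + lookahead());
        }
        pos++;
    }

    // shift(): consume the next token unconditionally - used when the
    // parser already knows which token comes next, skipping match()'s check.
    String shift() {
        return tokens.get(pos++);
    }

    public static void main(String[] args) {
        MiniParser p = new MiniParser(java.util.Arrays.asList("if", "(", "x", ")"));
        String kw = p.shift();   // known from context: consume unconditionally
        p.match("(");            // structural token: verify then consume
        System.out.println(kw + " then " + p.lookahead());
    }
}
```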
Fixes for:
ASC-3519
ASC-2292
ASC-3545
All being overlapping bugs related to regexp recognition in slightly differing contexts.
QA: Yes
Doc:
Tests: checkintests, Performance tests, tamarin, asc-tests, mx-unit
Ticket Links:
http://bugs.adobe.com/jira/browse/ASC-3519
http://bugs.adobe.com/jira/browse/ASC-2292
http://bugs.adobe.com/jira/browse/ASC-3545
Modified Paths:
flex/sdk/trunk/modules/asc/src/java/macromedia/asc/parser/InputBuffer.java
flex/sdk/trunk/modules/asc/src/java/macromedia/asc/parser/Parser.java
flex/sdk/trunk/modules/asc/src/java/macromedia/asc/parser/Scanner.java
flex/sdk/trunk/modules/asc/src/java/macromedia/asc/parser/States.java -
In reference to this change in the Custom Reports: "Better experience when exporting data - to prevent customer confusion when exporting data from Mac computers, we have removed the Export to Excel option and now export in CSV format by default."
What is the customer confusion we are trying to stop here? I've got even more confused customers at the moment, because all of a sudden they can't find the Export to Excel option, but they know it exists if they log in on a PC.
Mark -
Hi to all,
I am working on a performance improvement.
We are storing search-result data in the session, but it grows to more than 2 MB,
and now we want to remove unnecessary data from the session.
Our design is as follows:
we create a side process that runs every 10 minutes and checks whether the user has accessed the session data in the last 15 minutes; if not, we remove the data from the session.
What I want to know is how to create this side process.
If anybody knows an implementation, please reply.
Thanks in advance.
We create a side process that runs every 10 minutes and checks whether the user has accessed the session data in the last 15 minutes; if not, we remove the data from the session.
How do you know when the user has accessed the given session data?
Do you have timestamp management in your search-result data structure?
Generally speaking, a repetitive task can be achieved with the help of java.util.Timer (http://java.sun.com/j2se/1.4.2/docs/api/java/util/Timer.html).
Just out of curiosity ... I was under the impression that only some call you ... Tim. Can I call you Sam? Or Vsevolod? Or even Matabei?
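A minimal sketch of such a reaper built on java.util.Timer. The class name, field names, and bookkeeping map are illustrative; in a real web app you would drop the search results from the HttpSession inside sweep(), and the touch() call would come from your request-handling code:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentHashMap;

public class SessionDataReaper {
    static final long SWEEP_INTERVAL_MS = 10 * 60 * 1000;  // run every 10 minutes
    static final long MAX_IDLE_MS       = 15 * 60 * 1000;  // evict after 15 idle minutes

    // sessionId -> last-access timestamp (hypothetical bookkeeping structure;
    // your search-result holder would record this on every access)
    final Map<String, Long> lastAccess = new ConcurrentHashMap<String, Long>();

    // Call this whenever the user reads the cached search results.
    void touch(String sessionId) {
        lastAccess.put(sessionId, System.currentTimeMillis());
    }

    // Schedule the periodic sweep on a daemon thread so it never blocks JVM shutdown.
    void start() {
        Timer timer = new Timer("session-reaper", true);
        timer.schedule(new TimerTask() {
            public void run() {
                sweep(System.currentTimeMillis());
            }
        }, SWEEP_INTERVAL_MS, SWEEP_INTERVAL_MS);
    }

    // Remove every entry idle longer than MAX_IDLE_MS; returns the eviction count.
    int sweep(long now) {
        int evicted = 0;
        for (Iterator<Map.Entry<String, Long>> it = lastAccess.entrySet().iterator(); it.hasNext();) {
            if (now - it.next().getValue() > MAX_IDLE_MS) {
                it.remove();  // here you would also drop the data from the session
                evicted++;
            }
        }
        return evicted;
    }

    public static void main(String[] args) {
        SessionDataReaper reaper = new SessionDataReaper();
        reaper.touch("someSession");
        reaper.start();
        System.out.println("evicted now: " + reaper.sweep(System.currentTimeMillis()));
    }
}
```

In a servlet container you could alternatively let the container do this for you via session-timeout in web.xml, or react to expiry with an HttpSessionBindingListener, rather than running your own timer.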
-
Tabular Model Performance Improvements
Hi !
We have built a tabular model (inline) which has one fact table and 2 dimension tables. The performance of the SSRS report is very slow, and this is a bottleneck in choosing SSRS as our reporting tool.
Can you help us with performance improvements for the inline tabular model?
Regards,
Hi Bhadri,
As Sorna said, it is hard to give you detailed tips to improve the tabular model performance given the limited information. Here is a useful link about performance tuning of tabular models in SQL Server 2012 Analysis Services; please refer to the
link below.
http://msdn.microsoft.com/en-us/library/dn393915.aspx
If this is not what you want, please elaborate with more detail so that we can analyze further.
Regards,
Charlie Liao
TechNet Community Support -
JCAActivationAgent::load - Error while performing endpoint activation:java.
Hi,
I am getting the following error while deploying a BPEL process. This is very surprising because the process ran fine until yesterday. For some time I was also getting an odd error: JDev was not able to "read" a WSDL file on the local machine. I restarted the server and the machine many times, but it did not help. I do not have any proxies set, and the file resides on my local hard drive.
<2006-04-11 11:30:20,090> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound> JCAActivationAgent::load - Error while performing endpoint activation:java.lang.NullPointerException
<2006-04-11 11:30:20,090> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound>
java.lang.NullPointerException
at oracle.tip.adapter.fw.agent.jca.JCAActivationAgent.load(JCAActivationAgent.java:208)
at com.collaxa.cube.engine.core.BaseCubeProcess.loadActivationAgents(BaseCubeProcess.java:931)
at com.collaxa.cube.engine.core.BaseCubeProcess.load(BaseCubeProcess.java:302)
at com.collaxa.cube.engine.deployment.CubeProcessFactory.create(CubeProcessFactory.java:66)
at com.collaxa.cube.engine.deployment.CubeProcessLoader.create(CubeProcessLoader.java:391)
at com.collaxa.cube.engine.deployment.CubeProcessLoader.load(CubeProcessLoader.java:302)
at com.collaxa.cube.engine.deployment.CubeProcessHolder.loadAndBind(CubeProcessHolder.java:881)
at com.collaxa.cube.engine.deployment.CubeProcessHolder.getProcess(CubeProcessHolder.java:789)
at com.collaxa.cube.engine.deployment.CubeProcessHolder.loadAll(CubeProcessHolder.java:361)
at com.collaxa.cube.engine.CubeEngine.loadAllProcesses(CubeEngine.java:960)
at com.collaxa.cube.admin.ServerManager.loadProcesses(ServerManager.java:284)
at com.collaxa.cube.admin.ServerManager.loadProcesses(ServerManager.java:250)
at com.collaxa.cube.ejb.impl.ServerBean.loadProcesses(ServerBean.java:219)
at IServerBean_StatelessSessionBeanWrapper14.loadProcesses(IServerBean_StatelessSessionBeanWrapper14.java:2399)
at com.collaxa.cube.admin.agents.ProcessLoaderAgent$ProcessJob.execute(ProcessLoaderAgent.java:395)
at org.quartz.core.JobRunShell.run(JobRunShell.java:141)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:281)
<2006-04-11 11:30:20,152> <ERROR> <default.collaxa.cube.engine.deployment> <CubeProcessLoader::create>
<2006-04-11 11:30:20,152> <ERROR> <default.collaxa.cube.engine.deployment> Process "CallHomeBPEL" (revision "1.0") load FAILED!!
<2006-04-11 11:30:20,230> <ERROR> <default.collaxa.cube.engine.deployment> <CubeProcessHolder::loadAll> Error while loading process 'CallHomeBPEL', rev '1.0': Error while loading process.
The process domain encountered the following errors while loading the process "CallHomeBPEL" (revision "1.0"): null.
If you have installed a patch to the server, please check that the bpelcClasspath domain property includes the patch classes.
Please help.
Finally, I was able to redeploy the process. After comparing the new files with the old ones, I noticed an activationAgents entry in bpel.xml that was not present previously:
<activationAgents>
<activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="CallHomeFileUtility">
<property name="portType">Read_ptt</property>
</activationAgent>
</activationAgents>
When I removed this from bpel.xml, the process deployed successfully. I am not sure when the <activationAgents> entry gets added to bpel.xml.
Thanks for your inputs. -
DS 5.2 P4 performance improvement
We have +/- 300,000 users that regularly authenticate using our DS. The user ou is divided into ou=internal (20,000 ids) and ou=external (280,000 ids). Approximately 85-90% of the traffic happens on the internal ou. The question is: could I get any performance improvement by separating the internal branch into its own suffix/database? Or would running two databases adversely affect performance instead? We see performance impacts when big searches are performed on the ou=external branch. Would the separation isolate the issue, or will those searches most likely affect the DS as a whole?
Thanks for your help!
Enrique.
Thank you for the info. Are you a Sun guy - do you work for Sun?
Yes I am. I'm the architect for Directory Server Enterprise Edition 6.0. Previously I worked on all DS 5 releases (mostly on replication).
You are getting the Dukes!
Thanks.
Ludovic. -
Performance improvement in a function module
Hi All,
I am using SAP version 6.0. I have a function module to retrieve POs; for just 10,000 records it is taking a long time.
Can anyone suggest ways to improve the performance?
Thanks in advance.
Moderator message - Welcome to SCN.
But
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting
Just 10,000 records? The first rule in performance improvement is to reduce the amount of selected data. If you cannot do that, it's going to take time.
I wouldn't bother with a BAPI for so many records. Write some custom code to get only the data you need.
Tob -
Pls help me to modify the query for performance improvement
Hi,
I have the below initialization
DECLARE @Active bit =1 ;
Declare @id int
SELECT @Active=CASE WHEN id=@id and [Rank] ='Good' then 0 else 1 END FROM dbo.Students
I have to change this query so that the conditions id=@id and [Rank]='Good' move into the WHERE clause. In that case, how can I use a CASE statement to return 1 or 0? Can you please help me modify this initialization?
I don't understand your query... Maybe the below? Or provide us sample data and your expected output...
SELECT * FROM dbo.students
where @Active=CASE
WHEN id=@id and rank ='Good' then 0 else 1 END
But I doubt you will see a performance improvement here.
Do you have an index on id?
If you are looking to get the data for @id with rank = 'Good', then use the below. Make sure you have an index on the (id, rank) combination.
SELECT * FROM dbo.students
where id=@id
and rank ='Good' -
Performance improvement in OBIEE 11.1.1.5
Hi all,
In OBIEE 11.1.1.5, reports take a long time to load. Kindly point me to some performance improvement guides.
Thanks,
Haree.
Hi Haree,
Steps to improve the performance.
1. implement caching mechanism
2. use aggregates
3. use aggregate navigation
4. limit the number of initialisation blocks
5. turn off logging
6. carry out calculations in database
7. use materialized views if possible
8. use database hints
9. alter the NQSConfig.INI parameters
Note: calculate all the aggregates in the repository itself, and create a fast refresh for the MVs (materialized views).
You can also schedule an iBot to run the report every hour or so, so that the report data is cached; when a user runs the report, the BI Server serves the data from the cache.
This is the latest version for OBIEE11g.
http://blogs.oracle.com/pa/resource/Oracle_OBIEE_Tuning_Guide.pdf
Report level:
1. Enable the cache - in NQSConfig.INI, change the cache setting from NO to YES.
2. Go to the Physical layer, right-click the table, choose Properties, and check Cacheable.
3. Try to implement Aggregate mechanism.
4.Create Index/Partition in Database level.
There are multiple other ways to fine tune reports from OBIEE side itself:
1) Check the granularity of the measures in your reports and create level-based measures in the RPD using the OBIEE utility.
http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
This will pick your aggr tables and not detailed tables.
2) You can use Caching Seeding options. Using ibot or Using NQCMD command utility
http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
http://satyaobieesolutions.blogspot.in/2012/07/different-to-manage-cache-in-obiee-one.html
OR
http://hiteshbiblog.blogspot.com/2010/08/obiee-schedule-purge-and-re-build-of.html
Using one of the above two methods, you can fine-tune your reports and reduce the query time.
Also, on the safe side, take the physical SQL from the log and run it directly against the DB to see the time taken, and check the explain plan with the help of a DBA.
Hope this helps.
Thanks,
Satya
Edited by: Satya Ranki Reddy on Aug 12, 2012 7:39 PM -
MV Refresh Performance Improvements in 11g
Hi there,
the 11g new features guide, says in section "1.4.1.8 Refresh Performance Improvements":
"Refresh operations on materialized views are now faster with the following improvements:
1. Refresh statement combinations (merge and delete)
2. Removal of unnecessary refresh hint
3. Index creation for UNION ALL MV
4. PCT refresh possible for UNION ALL MV"
While I understand (3) and (4), I don't quite understand (1) and (2). Has there been a change in the internal implementation of the refresh (away from a single MERGE statement)? If yes, which? Is there a note or something in the knowledge base about these enhancements in 11g? I couldn't find any.
This matters for our decision on whether or not to migrate to 11g.
Thanks in advance.
I am not quite sure what you mean. Do you mean perhaps that the MV logs work correctly when you perform MERGE statements with DELETE on the detail tables of the MV?
And where is the performance improvement? What is the refresh hint?
Though I am using MVs and MV logs at the moment, our app performs deletes and inserts in the background (no merges). The MV-log-based fast refresh scales very badly: performance drops quickly as the changed data set grows.
Why GN_INVOICE_CREATE has no performance improvement even in HANA landscape?
Hi All,
We have a pricing update program which is used to update the price for a Material-Customer Combination (CMC). This update is done using the FM 'GN_INVOICE_CREATE'.
The logic loops over customers; for each customer the FM is called with all the materials valid for that customer.
This process takes days (approximately 5 days) to execute and update 100 million CMC records.
Hence we are planning to move to HANA for a better improvement in performance.
We built the same program in the HANA landscape and executed it in both systems for 1 customer and 1,000 material combinations.
Unfortunately, both systems gave the same runtime of around 27 seconds.
This is very disappointing, considering the performance improvement we expected from the HANA landscape.
Could anyone shed light on where we are missing out and why no performance improvement was obtained?
Also, are there any configuration changes to be made in the HANA landscape for better performance?
The details regarding both the systems are as below.
Suite on HANA:
SAP_BASIS : 740
SAP_APPL : 617
ECC
SAP_BASIS : 731
SAP_APPL : 606
Also see the below screenshots of the system details.
HANA:
ECC:
Thanks & regards,
Naseem
Hi,
just to fill in on Lars' already exhaustive comments:
Migrating to HANA gives you lots of options to replace your own functionality (custom ABAP code) with HANA artifacts - views or SQLScript procedures. This is where you can really gain on performance. Expecting ABAP code to automatically run faster on HANA may be unrealistic, since it depends on the functionality of the code and how well it "translates" to a HANA environment. The key to really minimizing run time is to replace DB calls with specific HANA views or procedures, then call these from your code.
I wrote a blog on this; you might find it useful as a general introduction:
A practical example of ABAP on HANA optimization
When it comes to SAP standard code, like your mentioned FM, it is true that SAP is migrating some of this functionality to HANA-optimized versions, but this doesn't mean everything will be optimized in one go. This particular FM is probably not among those being initially selected for "HANAification", so you basically have to either create your own functionality (which might not be advisable due to the fact that this might violate data integrity) or just be patient.
But again, the beauty of HANA lies in the brand new options for developers to utilize the new ways of pushing code down to the DB server. Check out the recommendations from Lars and you'll find yourself embarking on a new and exciting journey!
Also - as a good starting point - check out the HANA developer course on open.sap.com.
Regards,
Trond -
Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?
Suren,
first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
Concerning your question:
You didn't tell us what you want to do with your table or your set of tables.
As tables are not only storage units but usually carry semantics - read: data stored in one table means something different from the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
Looked at on the storage-technology level, table partitions are practically the same as tables. Each partition has its own delta store and can be loaded to and displaced from memory independently of the others.
Generally speaking, there shouldn't be many performance differences between a partitioned table and multiple tables.
However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share of your total runtime (which is unlikely), then partitioned tables could have a negative performance impact.
Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
- Lars -
DMA Performance Improvements for TIO-based Devices
Hello!
DMA Performance Improvements for TIO-based Devices
http://digital.ni.com/public.nsf/websearch/1B64310FAE9007C086256A1D006D9BBF
Can I apply the procedure to NI-DAQmx 9? These ini files don't seem to exist anymore in the newer version.
Best, Viktor
Hi Viktor,
this page is 7 years old and doesn't apply to DAQmx.
Regards, Stephan