Measuring the Performance
How do you measure the performance of an XI system, and of an individual XI interface?
What are the ways to tune an XI system for optimal performance, i.e. which areas should we look into for superior performance?
Pete,
You can check the start time and end time of a message in SXMB_MONI and find out how long the interface takes to execute.
Go to the RWB and click on Performance Monitoring. You can get figures for each interface individually or for the system as a whole.
Also go through these documents:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/489f5844-0c01-0010-79be-acc3b52250fd
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/defd5544-0c01-0010-ba88-fd38caee02f7?prtmode=navigate
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/70ada5ef-0201-0010-1f8b-c935e444b0ad
/people/prasad.illapani/blog/2007/04/27/performance-tuning-checks-in-sap-exchange-infrastructurexi-part-iii
http://help.sap.com/saphelp_nw04/helpdata/en/9e/6921e784677d4591053564a8b95e7d/frameset.htm
---Satish
Similar Messages
-
How to measure the performance of a SQL query?
Hi Experts,
How do I measure the performance, efficiency and CPU cost of a SQL query?
What measures are available for a SQL query?
How can I tell whether I am writing an optimal query?
I am using Oracle 9i...
It will be useful for me to write efficient queries....
Thanks & Regards
psram wrote:
Hi Experts,
How to measure the performance, efficiency and cpu cost of a sql query?
What are all the measures available for an sql query?
How to identify i am writing optimal query?
I am using Oracle 9i...
You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, the number of sorts, etc.
This gives you an indication of the effectiveness of your statement, so that you can check how many logical I/Os (and physical reads) had to be performed.
Note however that there are more things to consider, as you've already mentioned: the CPU cost is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is reflected only in a very limited way (number of sorts); for example, it doesn't cover writes to temporary segments caused by sort or hash operations spilling to disk.
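As a sketch, a minimal SQL*Plus session for this could look like the following (the employees query is just a placeholder; AUTOTRACE requires the PLUSTRACE role, created by plustrce.sql):

```sql
-- Enable AUTOTRACE without displaying the result rows
SET AUTOTRACE TRACEONLY

-- Run the statement you want to examine (placeholder query)
SELECT *
  FROM employees
 WHERE department_id = 10;

-- SQL*Plus now prints the execution plan and statistics such as
-- "consistent gets" (logical I/Os), "physical reads" and "sorts".

-- Switch the feature off again when done
SET AUTOTRACE OFF
```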
You can use the following approach to get a deeper understanding of the operations performed by each row source:
alter session set statistics_level=all;
alter session set timed_statistics = true;
select /* findme */ ... <your query here>
SELECT
SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
OBJECT_NAME,
CARDINALITY,
LAST_OUTPUT_ROWS,
LAST_CR_BUFFER_GETS,
LAST_DISK_READS,
LAST_DISK_WRITES
FROM V$SQL_PLAN_STATISTICS_ALL P,
(SELECT *
FROM (SELECT *
FROM V$SQL
WHERE SQL_TEXT LIKE '%findme%'
AND SQL_TEXT NOT LIKE '%V$SQL%'
AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
ORDER BY LAST_LOAD_TIME DESC)
WHERE ROWNUM < 2) S
WHERE S.HASH_VALUE = P.HASH_VALUE
AND S.CHILD_NUMBER = P.CHILD_NUMBER
ORDER BY ID
/
Check the V$SQL_PLAN_STATISTICS_ALL view for more available statistics. In 10g there is a convenient function, DBMS_XPLAN.DISPLAY_CURSOR, which can show this information with a single call, but in 9i you need to do it yourself.
Note that "statistics_level=all" adds significant overhead to the processing, so use it with care and only when required:
http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
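For completeness, on 10g the DBMS_XPLAN.DISPLAY_CURSOR approach mentioned above can be sketched like this (the gather_plan_statistics hint is a statement-level alternative to setting statistics_level=all; the query is a placeholder):

```sql
-- 10g and later: gather row-source statistics for one statement only
SELECT /*+ gather_plan_statistics */ *
  FROM employees
 WHERE department_id = 10;

-- Show the plan of the last cursor executed in this session,
-- including actual rows, buffer gets and reads per plan step
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```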
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
How to measure the performance of a SQL query?
Hello,
I want to measure the performance of a group of SQL queries to compare them, but I don't know how to do it.
Is there any application to do it?
Thanks.
You can use STATSPACK (in 10g it is called AWR - Automatic Workload Repository).
Statspack -> A set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection, automation, storage, and viewing of performance data. This feature has been replaced by the Automatic Workload Repository.
Automatic Workload Repository - Collects, processes, and maintains performance statistics for problem detection and self-tuning purposes
Oracle Database Performance Tuning Guide - Automatic Workload Repository
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/autostat.htm#PFGRF02601
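In outline, measuring a batch of queries with STATSPACK could look like the following sketch (assuming STATSPACK has been installed with spcreate.sql and you are connected as the PERFSTAT user):

```sql
-- Take a snapshot before running the queries to be measured
EXECUTE statspack.snap

-- ... run the group of SQL queries you want to compare here ...

-- Take a second snapshot afterwards
EXECUTE statspack.snap

-- Then generate a report for the interval between the two
-- snapshot IDs with the supplied script:
-- @?/rdbms/admin/spreport.sql
```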
or
you can use EXPLAIN PLAN
EXPLAIN PLAN -> A SQL statement that enables examination of the execution plan chosen by the optimizer for DML statements. EXPLAIN PLAN causes the optimizer to choose an execution plan and then to put data describing the plan into a database table.
Oracle Database Performance Tuning Guide - Using EXPLAIN PLAN
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/ex_plan.htm#PFGRF009
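A minimal EXPLAIN PLAN session, sketched with a placeholder query, might look like:

```sql
-- Store the optimizer's chosen plan in the plan table
EXPLAIN PLAN FOR
SELECT *
  FROM employees
 WHERE department_id = 10;

-- Format and display the plan just explained
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Note that EXPLAIN PLAN only shows the plan the optimizer would choose; it does not execute the statement, so it tells you nothing about actual elapsed time.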
Oracle Database SQL Reference - EXPLAIN PLAN
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9010.htm#sthref8881 -
Measuring the Performance of Web Dynpro Applications
I am trying to measure the performance of the ESS/MSS Web Dynpro applications. I am following the instructions at http://help.sap.com/saphelp_nw04/helpdata/en/bb/fdc4402418742ae10000000a155106/frameset.htm
but no performance data shows up for the ESS/MSS Web Dynpro components.
Has anyone been able to see any performance data for the ESS/MSS Web Dynpro components?
Thanks,
Tiberiu
Thanks Armin,
You are absolutely right. I have displayed all 150 rows in the table on loading. This is the requirement from my customer (displaying all rows).
If I set 40 rows to display, it takes 35 seconds to upload the file, but it takes 30 seconds to go to the next page of the table when I click the table's Page Next control.
Is there any way to reduce this display time?
Regards,
Dan.
Edited by: Dan on Mar 13, 2008 12:16 PM -
How can I use BI Technical Content for measuring the performance of queries
Hi
Recently I implemented BI 7.0 for a client. I want to know how to use the BI Administration Cockpit.
What does the technical content actually do?
And how can I use the technical content or statistics for measuring the performance of my queries? Please let me know.
kumar
Hi Ravi,
BI Admin Cockpit is an enhancement of BW Statistics.
http://help.sap.com/saphelp_nw2004s/helpdata/en/44/08a75d19e32d2fe10000000a11466f/frameset.htm
Check this thread also:
BI Statistics comparision with the old
Regarding performance, check the link below.
Re: Query - Performance
http://help.sap.com/saphelp_nw2004s/helpdata/en/43/15c54048035a39e10000000a422035/frameset.htm
Regards,
Anil -
Measuring the performance of Networking code
Lately I've had renewed interest in Java networking, and I have been doing some reading on various ways of optimizing networking code.
But then it hit me.
I don't know any way of benchmarking I/O or networking code. To take a simple example, how exactly am I supposed to know whether read(buf, i, len) is more efficient than read()? Or how do I know the performance difference between setting sendBufferSize to 8k and to 32k? etc.
1)
When people say "this networking code is faster than that", I assume they are referring to latency. Correct? Obviously these claims need to be verifiable. How do they do that?
2)
I am aware of Java profilers (http://java-source.net/open-source/profilers), but most of them measure things like CPU, memory, heap, etc. - I can't seem to find any profiler that measures networking code. Should I be looking at OS/system-level tools? If so, which ones?
I don't want to commit the cardinal sin of blindly optimizing because "people say so". I want to measure the performance and see it with my own eyes.
Appreciate the assistance.
Edited by: GizmoC on Apr 23, 2008 11:53 PM
If you're not prepared to assume they know what they're talking about, why do you assume that you know what they're talking about?
Ok, so what criteria determine whether a certain piece of "networking code" is better/faster than another? My guess is: latency, CPU usage, memory usage - that's all I can think of. Anyway, I think we are derailing here.
The rest of your problem is trivial. All you have to do is time a large download under the various conditions of interest.
1)
Hmm... well, for my purposes I am mainly interested in latency. I am writing a SOCKS server which is currently encapsulating multiplayer game data. Currently I pay a latency overhead of approximately 100 - I don't understand why, considering both the SOCKS client (my game) and the SOCKS server are on localhost. And I don't think merely reading a few bytes of SOCKS header information can possibly cause such an overhead.
2)
Let's say I make certain changes to my networking code which result in a slightly faster download - however, can I assume that this will also mean lower latency while gaming? Game traffic is extremely sporadic, unlike a regular HTTP download, which is a continuous stream of bytes.
3)
"Timing a large download" implies that I am using some kind of external mechanism to test my networking performance. Though this sounds like a pragmatic solution, I think there ought to be a formal, finely grained test harness that tests networking performance in Java, no?
How to measure the performance
Hi all,
I would like to compare the performance of my Web Dynpros on different machines. I need to measure the execution time of my Web Dynpro, including some interaction from the user (navigation between the pages, etc.). How can I do it? I cannot add any additional code; are there any transactions to measure it?
Thanks a lot,
Anna
Thank you.
I opened this transaction, but I could see only Transaction, Program or Function module to enter and analyze. How can I measure a Web Dynpro?
Regards,
Anna -
How to measure the performance of Extractor
Hi,
How do I measure the time taken by the extractor when executed from RSA3 for a given selection?
A lot of threads mention ST05... but that transaction is too granular to analyse.
How do I get the overall time taken? I need the overall time taken and the time taken by the individual SQL statements... please provide specific pointers.
Thanks,
Balaji
Maybe SE30 can help you....
Regards,
Fred -
Any parameter to measure the performance between two servers
Currently I am running more than 20 development databases on a 2-CPU, 2 GB RAM Windows 2000
server.
We are currently in the process of upgrading the infrastructure.
We are moving the DBs to a Windows 2003 Std R2 server with 4 GB RAM and 2 processors.
I have configured everything on the new server except moving the DBs from the old to the new server.
Versions: 8, 9, 10gR2.
But strangely I feel the new server performs slowly compared to the old server - not from a DB point of view, but while copying between different disks on the same new server.
It takes longer than usual across our office.
Tomorrow I will be moving a few DBs to the new machine.
Everything is going to be the same in the init.ora. No change in SGA or INIT parameters except the directory structure.
I want to run the DBs on the old and new machines and compare the response times.
Will that be sufficient to give an idea whether the new server is performing better or worse?
Any suggestions?
Message was edited by:
Maran Viswarayar
I don't think so, it depends on the way you conduct your testing environment, how you build it, and what's the goal of this test. I wouldn't call it a synthetic test, I call it a standardized test environment.
Just to be clear, I have nothing against doing synthetic tests, and I don't intend "synthetic" to be in any way derogatory. This sort of testing can be quite valuable. You just need to be careful about extrapolating the performance of this sort of testing to the performance that your application will actually achieve. Since the workload your application is performing is generally going to be quite different from the synthetic workload you're describing, the comparison may not be direct.
OS performance metrics can be gathered directly with the OS tools. But knowing exactly how your database will perform in your specific environment... you'll have to make up a testing environment.
All true. Knowing how your database will perform, particularly on I/O-intensive operations, though, doesn't tell you how a particular application running in your database may perform. Your application may, for example, be CPU bound, or may be doing very non-random I/O operations.
Given that the original poster is seeing odd disk behavior, and his primary concern is with the IO subsystem, I would suggest starting the test there.
Performance problems have always been multifactorial, and this always makes a tuning approach obscure. Unless a professional has enough practical experience, it becomes a black-box problem where interacting subsystems make it difficult to find the most meaningful performance thread and its interactions.
Very true.
I have used this test approach and it has assisted me in obtaining an environment free of subjectivities where I have been able to benchmark Oracle behaviour on different platforms.
This kind of test has also helped me in creating controlled stress situations where I can proactively plot potential bottlenecks and measure different RDBMS architectural aspects such as the transactional mechanism, sorting, undo segments, latches, networking, etc., just to name a few, at different load scenarios.
Synthetic test loads are excellent for this sort of database performance investigation, agreed.
It is sometimes difficult to find hundreds of volunteers to test the application to find the point of maximum sessions with minimum response time. This testing approach has been useful in hiring a variable number of virtual volunteers that are willing to test the environment at any time. So it has also allowed me to create useful reports, such as "user load vs. response time", which helped me in predicting my system's operational ceilings, and it has been pretty accurate.
If we're talking about testing application performance, rather than testing generic database performance, I'd maintain that you shouldn't need any volunteers. You should have scripts that replicate the key business operations the application does, and you should have a harness that can start up arbitrary numbers of concurrent sessions (admittedly, you may need a handful of volunteers to launch these scripts from a sufficient number of laptops).
It all depends on the way you define your test environment.
100% agreed.
Justin -
Measuring the Performance of NIO Selectors ?
Hi,
I am trying to build a Java messaging API using the java.nio package. This is similar to MPI (which is in C/Fortran), so obviously I need to compare it in terms of performance (how fast my library can transfer your messages).
Anyway, during a simple point-to-point communication - meaning that one process is sending a message and the other one is just receiving it - I get a latency of around 40 milliseconds, which is really unacceptable. What I have come to understand through painful debugging and analysis of my program is that when I try to write something to the other node, I copy my message into the buffer and then wake up the selector so that it may write whatever I have copied into the buffer. My send method takes nearly 40 milliseconds, and for 38 of those milliseconds I wait for the selector to wake up. Even when it wakes up, it's not ready to fire a write event, because maybe the channel is not ready or something else. So my question is: how can I control this behaviour of the selector? How can I make the channel become writable faster than this? I can't afford for the selector to be not ready to write for 38 milliseconds. It's very, very slow. Can anyone throw some light on this please...
Thanks
--Aamir
Thanks for your replies. So let me get into a little bit of detail now.
You have suggested that one should register for OP_WRITE only when you get a short write. Right, I do get short writes, and I understand what you mean by this. So now, please leave aside short writes; let's talk about the write or no-write situation.
There are two parts, as I assume every NIO program has: one which is the interface to users, like send() or recv(), and the second part is the selector itself. With regard to OP_WRITE, if I add OP_WRITE to the interestOps() during start-up, then I pay with 100 percent CPU usage. I posted a problem like this on the forums and you guys suggested: add OP_WRITE to interestOps() only when you need to write something. In my case, when someone has called send() - only then does it make sense to have an OP_WRITE event in the selector. Otherwise it just loops and loops and takes 100 percent CPU. So, on the other side, if I don't add OP_WRITE to the interestOps() initially, which is what I am doing right now, then I add it when someone calls send(), because, as I said, only then am I interested in having an OP_WRITE event in the selector. So in the send method, I do this:
SelectionKey key = tempChannel.keyFor(controlSelector);
key.interestOps(SelectionKey.OP_WRITE);
key.selector().wakeup();
to wake up the selector so that I may write. But the time taken in the transition from this code to the OP_WRITE event code is almost 39 milliseconds (out of the total 40 milliseconds) of my send(), so clearly I am missing something, and once I am clear about that, I think it can go down to 1 millisecond (which is as fast as Java can go).
Actually, let me explain the problem in a little more detail, because it's very interesting to me. I don't know what's going wrong with it, but anyway,
My send and recv methods are like this ,
Send Method _________________|_________________Recv Method
Step 1: User called Send() Step 1__|__Step 1: User called Recv() step 1
Step 2: Control Selector writes a___|__ Step 2: Control Selector reads the control message.
control message telling the length_|
and ID of message |
Step 3: Expects a reply for its ctrl __|__ Step 3: Gives an OK to the sender to send the actual data.
message from the receiver. |
Step 4: Sends the actual data_____|__Step 4: Receives the actual data
This should give an idea of the handshaking I am doing before sending the actual data; this handshaking is done by a separate selector, and the actual transmission is done by the other selector. So that was about my application, but now here's what I am trying to do.
Ping Pong Test
NODE 1_________________________________NODE2
First Part :
Send() --------------------------\
-------------------------------------- \-------------------------> Recv() //whatever it receiveed, send it back
Second Part:
--------------------------------------- /---------------------------> Send()
Recv() ----------------------------/
So you may wel imagine that there's alot of waking up the selector and every thing going on here, and now lets see the timings.
NODE1________________________________NODE2
Send() ________________________________Recv()
First Part:
Sender Step 1 (0 milliseconds)------------------------ Recv Step 1 (0 milliseconds)
Sender Step 2 (0 milliseconds)------------------------ Recv Step 2 (0 milliseconds)
Sender Step 3 (0 milliseconds)------------------------ Recv Step 3 (0 milliseconds)
Sender Step 4 (0 milliseconds)------------------------ Recv Step 4 (0 milliseconds)
Recv() ________________________________ Send()
Second Part
Recv() Step 1 (0 milliseconds)------------------------ Sender Step 1
(Problematic bit) ----Here on the send(), the transition time between step 1 and step 2 is the whole time, i.e. the whole 40 milliseconds: in step 1 I wake up the selector to write, and in step 2 I just write. So that's the whole problem.
Recv() Step 2 (40 milliseconds)----------------------- Sender Step 2 (40 milliseconds)
Recv() Step 3 (0 milliseconds)-------------------------Sender Step 3 (0 milliseconds)
Recv() Step 4 (0 milliseconds)-------------------------Sender Step 4 (0 milliseconds)
Interestingly, if I have something like a Barrier (synchronization point) in my ping-pong test, like
NODE 1______________________________ NODE2
Send() ---------------------------------\
--------------------------------------------\--------------------------------Recv()
Barrier()------------------------------------------------------------------Barrier()
--------------------------------------------/---------------------------------Send()
Recv() ---------------------------------/
Barrier()------------------------------------------------------------------Barrier()
In a scenario like this, where I have some time - you might call it sleeping time - between the send and recv methods, everything is fine.
I don't expect you all to understand, but if by any chance someone gets a clue about what could be going wrong, then please do comment.
Sorry for the long post; I had no other option, actually.
Thanks in advance
--Aamir -
How to measure the size of an object written by myself?
Hi all,
I'm going to measure the throughput performance of an ad hoc wireless network that is set up for my project. I wrote a Java class that represents a particular piece of data. In order to calculate the throughput, I'm going to send these data objects from one node to another in the network for a certain time. But I've got a problem with it - how do I measure the size, in bytes or bits, of a Java object I wrote myself? Please help me with it. Thank you very much.
LindaL22 wrote:
wrote a java class that represents a particular data. In order to calculate the throughput, I'm going to send this data
"A data" doesn't exist. So there's nothing to measure.
objects from one node to another one in the network for a certain time. But I've got a problem with it- How to measure the size of an object that was written by myself in byte or bit in Java?
Not.
How autoextend affects the performance of a big data load
I'm doing a bit of reorganization on a data warehouse, and I need to move almost 5 TB worth of tables and rebuild their indexes. I'm creating a tablespace for each month, using BIGFILE tablespaces, and assigning them 600 GB, which is approximately the size of the tables for each month. The process of just allocating the space takes a lot of time, so I decided to try a different approach: change the datafile to AUTOEXTEND ON NEXT 512M, and then run the ALTER TABLE MOVE command to move the tables. The database is Oracle 11g Release 2, and it uses ASM. I was wondering which would be the better approach between these two:
1. Create the tablespace, with AUTOEXTEND OFF, and assign 600GB to it, and then run the ALTER TABLE MOVE command. The space would be enough for all the tables.
2. Create the tablespace, with AUTOEXTEND ON, and without assigning more than 1GB, run the ALTER TABLE MOVE command. The diskgroup has enough space for the expected size of the tablespace.
With the first approach my database takes approximately 10 minutes to move each partition (there's one for each day of the month). Would this number be impacted in a big way if the database has to AUTOEXTEND every 512 MB?
If you measure the performance as the time required to allocate the initial 600 GB data file plus the time to do the load, and compare that to allocating a small file and doing the load, letting the data file autoextend, it's unlikely that you'll see a noticeable difference. You'll get far more variation just in moving 600 GB around than you'll lose waiting on the data file to extend. If there is a difference, allocating the entire file up front will be slightly more efficient.
More likely, however, is that you wouldn't count the time required to allocate the initial 600 GB data file since that is something that can be done far in advance. If you don't count that time, then allocating the entire file up front will be much more efficient.
If you may need less than 600 GB, on the other hand, allocating the entire file at once may waste some space. If that is a concern, it may make sense to compromise and allocate a 500 GB file initially (assuming that is a reasonable lower bound on the size you'll actually need) and let the file extend in 1 GB chunks. That won't be the most efficient approach and you may waste up to a GB of space but that may be a reasonable compromise.
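As a sketch, the two options being compared might look like this in DDL (tablespace, diskgroup and table names are placeholders):

```sql
-- Approach 1: pre-allocate the full 600 GB up front
CREATE BIGFILE TABLESPACE ts_2012_01
  DATAFILE '+DATA' SIZE 600G
  AUTOEXTEND OFF;

-- Approach 2: start small and let the file grow in 512 MB steps
CREATE BIGFILE TABLESPACE ts_2012_02
  DATAFILE '+DATA' SIZE 1G
  AUTOEXTEND ON NEXT 512M MAXSIZE UNLIMITED;

-- The move itself is the same either way (placeholder names)
ALTER TABLE sales MOVE PARTITION sales_d01 TABLESPACE ts_2012_01;
```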
Justin -
Measuring the Equipment Performance
Hi Experts,
May I know how I can achieve the following through the SAP system?
1. We have some machines whose performance we want to monitor, e.g. machine utilization.
2. We also want to see the current situation (real-time monitoring) of a machine, i.e. whether it is in operation or not.
3. How much time has the operator taken to set up the jobs, e.g. in the case of a slotting machine, etc.?
Please provide your valuable input in this regard.
AR
AR,
1. You have to map each machine as a Work Center of category 'Machine'.
Then the machine utilisation can be had through an IW37 query by work center.
(In the ALV report, fields such as Actual Work will give you this information. You can have the total values.)
2. This can be explored through the various transaction codes available in the Capacity Requirements Planning section of the SAP Menu --> Maintenance Processing.
3. Such activities you need to have as operations in the task list, with the operator himself as the work center.
Regards...
How to measure the rotational speed by using a rotary encoder and 1 counter?
I want to measure the rotational speed of a shaft, and I have the following hardware:
1. a rotary encoder, with A, B, Z signal outputs;
2. a PCIe-6363 card.
I do know how to use such an encoder to measure the rotational angle by using the function "DAQmxCreateCIAngEncoderChan", but this time I need to measure the speed (rpm) as well as the direction of rotation, meaning that a negative speed represents a CCW rotation direction.
More detailed information:
for the encoder, the A and B signals are 600 ppr, and the Z signal is 1 ppr;
the rotational speed is in the range -300 ~ 5000 rpm.
Someone suggested that I could use the "DAQmxCreateCIAngEncoderChan" task to measure the angle first and then differentiate the angle, but then I have to enable the Z index function, and it's hard to calculate when the shaft speed is faster than 2500 rpm.
Can anyone help me with this issue?
Thanks in advance!
RobertoBozzolo:
Thanks for your reply. You are right that measuring frequency to get the speed is the best way, but it's hard to get the direction at the same time. You suggest that I "perform two angle measurements to get the sense of rotation", but I'm not sure I caught what you mean by this. I try to understand your suggestion like this: distribute the signals to 2 counters and start 2 tasks, one for frequency, the other for the angle, which is used for deciding the direction?
And by the way, in my application, the counters are limited:
I'm using a PCIe-6363, which has 4 counters in total, and I have to measure 4 different speed sensors at the same time, so that means only 1 counter is left for me to measure both the speed and the direction.
RobertoBozzolo wrote:
To measure the speed from the encoder you can simply follow some of the frequency measurement examples that ship with DAQmx, considering that the speed (rpm) is given by frequency (Hz) on one encoder output / 600 (ppr) * 60 (s->min) = frequency / 10.
The difficult part is adding a sign to this measurement: a frequency measurement gives you no information about the sense of rotation, so I suppose you could perform two angle measurements to get the sense of rotation and then get the speed as above.
The 3D features require 'Use Graphics Processor' to be enabled in the Performance preferences. Your video card must meet the minimum requirements, and you may need to check that your driver is working correctly.
Hello, I'm also getting this error... can anybody help me out? Here's my log.
Adobe Photoshop Version: 2014.0.0 20140508.r.58 2014/05/08:23:59:59 x64
Operating System: Windows 7 64-bit
Version: 6.1 Service Pack 1
System architecture: Intel CPU Family:6, Model:5, Stepping:5 with MMX, SSE Integer, SSE FP, SSE2, SSE3
Physical processor count: 2
Processor speed: 2128 MHz
Built-in memory: 2934 MB
Free memory: 231 MB
Memory available to Photoshop: 2354 MB
Memory used by Photoshop: 70 %
3D Multitone Printing: Disabled.
Touch Gestures: Disabled.
Windows 2x UI: Disabled.
Image tile size: 1024K
Image cache levels: 4
Font Preview: Medium
TextComposer: Latin
Display: 1
Display Bounds: top=0, left=0, bottom=768, right=1366
OpenGL Drawing: Enabled.
OpenGL Allow Old GPUs: Not Detected.
OpenGL Drawing Mode: Advanced
OpenGL Allow Normal Mode: True.
OpenGL Allow Advanced Mode: True.
AIFCoreInitialized=1
AIFOGLInitialized=1
OGLContextCreated=1
glgpu[0].GLVersion="2.1"
glgpu[0].GLMemoryMB=1242
glgpu[0].GLName="Intel(R) HD Graphics"
glgpu[0].GLVendor="Intel"
glgpu[0].GLVendorID=32902
glgpu[0].GLDriverVersion="8.15.10.2993"
glgpu[0].GLRectTextureSize=8192
glgpu[0].GLRenderer="Intel(R) HD Graphics"
glgpu[0].GLRendererID=70
glgpu[0].HasGLNPOTSupport=1
glgpu[0].GLDriver="igdumd64.dll,igd10umd64.dll,igdumdx32,igd10umd32"
glgpu[0].GLDriverDate="20130130000000.000000-000"
glgpu[0].CanCompileProgramGLSL=1
glgpu[0].GLFrameBufferOK=1
glgpu[0].glGetString[GL_SHADING_LANGUAGE_VERSION]="1.20 - Intel Build 8.15.10.2993"
glgpu[0].glGetProgramivARB[GL_FRAGMENT_PROGRAM_ARB][GL_MAX_PROGRAM_INSTRUCTIONS_ARB]=[1447 ]
glgpu[0].glGetIntegerv[GL_MAX_TEXTURE_UNITS]=[8]
glgpu[0].glGetIntegerv[GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS]=[16]
glgpu[0].glGetIntegerv[GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS]=[16]
glgpu[0].glGetIntegerv[GL_MAX_TEXTURE_IMAGE_UNITS]=[16]
glgpu[0].glGetIntegerv[GL_MAX_DRAW_BUFFERS]=[8]
glgpu[0].glGetIntegerv[GL_MAX_VERTEX_UNIFORM_COMPONENTS]=[512]
glgpu[0].glGetIntegerv[GL_MAX_FRAGMENT_UNIFORM_COMPONENTS]=[1024]
glgpu[0].glGetIntegerv[GL_MAX_VARYING_FLOATS]=[41]
glgpu[0].glGetIntegerv[GL_MAX_VERTEX_ATTRIBS]=[16]
glgpu[0].extension[AIF::OGL::GL_ARB_VERTEX_PROGRAM]=1
glgpu[0].extension[AIF::OGL::GL_ARB_FRAGMENT_PROGRAM]=1
glgpu[0].extension[AIF::OGL::GL_ARB_VERTEX_SHADER]=1
glgpu[0].extension[AIF::OGL::GL_ARB_FRAGMENT_SHADER]=1
glgpu[0].extension[AIF::OGL::GL_EXT_FRAMEBUFFER_OBJECT]=1
glgpu[0].extension[AIF::OGL::GL_ARB_TEXTURE_RECTANGLE]=1
glgpu[0].extension[AIF::OGL::GL_ARB_TEXTURE_FLOAT]=1
glgpu[0].extension[AIF::OGL::GL_ARB_OCCLUSION_QUERY]=1
glgpu[0].extension[AIF::OGL::GL_ARB_VERTEX_BUFFER_OBJECT]=1
glgpu[0].extension[AIF::OGL::GL_ARB_SHADER_TEXTURE_LOD]=0
License Type: Tryout Version
Serial number: Tryout Version
Application folder: C:\Program Files\Adobe\Adobe Photoshop CC 2014\
Temporary file path: C:\Users\ANNABE~1\AppData\Local\Temp\
Photoshop scratch has async I/O enabled
Scratch volume(s):
C:\, 298.0G, 63.8G free
Required Plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CC 2014\Required\Plug-Ins\
Primary Plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CC 2014\Plug-ins\
Installed components:
A3DLIBS.dll A3DLIB Dynamic Link Library 9.2.0.112
ACE.dll ACE 2014/04/14-23:42:44 79.554120 79.554120
adbeape.dll Adobe APE 2013/02/04-09:52:32 0.1160850 0.1160850
AdbePM.dll PatchMatch 2014/04/23-10:46:55 79.554276 79.554276
AdobeLinguistic.dll Adobe Linguisitc Library 8.0.0
AdobeOwl.dll Adobe Owl 2014/03/05-14:49:37 5.0.33 79.552883
AdobePDFL.dll PDFL 2014/03/04-00:39:42 79.510482 79.510482
AdobePIP.dll Adobe Product Improvement Program 7.2.1.3399
AdobeXMP.dll Adobe XMP Core 2014/01/13-19:44:00 79.155772 79.155772
AdobeXMPFiles.dll Adobe XMP Files 2014/01/13-19:44:00 79.155772 79.155772
AdobeXMPScript.dll Adobe XMP Script 2014/01/13-19:44:00 79.155772 79.155772
adobe_caps.dll Adobe CAPS 8,0,0,7
AGM.dll AGM 2014/04/14-23:42:44 79.554120 79.554120
ahclient.dll AdobeHelp Dynamic Link Library 1,8,0,31
amtlib.dll AMTLib (64 Bit) 8.0.0.45 BuildVersion: 8.0; BuildDate: Fri Mar 28 2014 20:28:30) 1.000000
ARE.dll ARE 2014/04/14-23:42:44 79.554120 79.554120
AXE8SharedExpat.dll AXE8SharedExpat 2013/12/20-21:40:29 79.551013 79.551013
AXEDOMCore.dll AXEDOMCore 2013/12/20-21:40:29 79.551013 79.551013
Bib.dll BIB 2014/04/14-23:42:44 79.554120 79.554120
BIBUtils.dll BIBUtils 2014/04/14-23:42:44 79.554120 79.554120
boost_date_time.dll photoshopdva 8.0.0
boost_signals.dll photoshopdva 8.0.0
boost_system.dll photoshopdva 8.0.0
boost_threads.dll photoshopdva 8.0.0
cg.dll NVIDIA Cg Runtime 3.0.00007
cgGL.dll NVIDIA Cg Runtime 3.0.00007
CIT.dll Adobe CIT 2.2.6.32411 2.2.6.32411
CITThreading.dll Adobe CITThreading 2.2.6.32411 2.2.6.32411
CoolType.dll CoolType 2014/04/14-23:42:44 79.554120 79.554120
dvaaudiodevice.dll photoshopdva 8.0.0
dvacore.dll photoshopdva 8.0.0
dvamarshal.dll photoshopdva 8.0.0
dvamediatypes.dll photoshopdva 8.0.0
dvametadata.dll photoshopdva 8.0.0
dvametadataapi.dll photoshopdva 8.0.0
dvametadataui.dll photoshopdva 8.0.0
dvaplayer.dll photoshopdva 8.0.0
dvatransport.dll photoshopdva 8.0.0
dvaui.dll photoshopdva 8.0.0
dvaunittesting.dll photoshopdva 8.0.0
dynamiclink.dll photoshopdva 8.0.0
ExtendScript.dll ExtendScript 2014/01/21-23:58:55 79.551519 79.551519
icucnv40.dll International Components for Unicode 2013/02/25-15:59:15 Build gtlib_4.0.19090
icudt40.dll International Components for Unicode 2013/02/25-15:59:15 Build gtlib_4.0.19090
imslib.dll IMSLib DLL 7.0.0.145
JP2KLib.dll JP2KLib 2014/03/12-08:53:44 79.252744 79.252744
libifcoremd.dll Intel(r) Visual Fortran Compiler 10.0 (Update A)
libiomp5md.dll Intel(R) OpenMP* Runtime Library 5.0
libmmd.dll Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler 12.0
LogSession.dll LogSession 7.2.1.3399
mediacoreif.dll photoshopdva 8.0.0
MPS.dll MPS 2014/03/25-23:41:34 79.553444 79.553444
pdfsettings.dll Adobe PDFSettings 1.04
Photoshop.dll Adobe Photoshop CC 2014 15.0
Plugin.dll Adobe Photoshop CC 2014 15.0
PlugPlugExternalObject.dll Adobe(R) CEP PlugPlugExternalObject Standard Dll (64 bit) 5.0.0
PlugPlugOwl.dll Adobe(R) CSXS PlugPlugOwl Standard Dll (64 bit) 5.0.0.74
PSArt.dll Adobe Photoshop CC 2014 15.0
PSViews.dll Adobe Photoshop CC 2014 15.0
SCCore.dll ScCore 2014/01/21-23:58:55 79.551519 79.551519
ScriptUIFlex.dll ScriptUIFlex 2014/01/20-22:42:05 79.550992 79.550992
svml_dispmd.dll Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler 12.0
tbb.dll Intel(R) Threading Building Blocks for Windows 4, 2, 2013, 1114
tbbmalloc.dll Intel(R) Threading Building Blocks for Windows 4, 2, 2013, 1114
TfFontMgr.dll FontMgr 9.3.0.113
TfKernel.dll Kernel 9.3.0.113
TFKGEOM.dll Kernel Geom 9.3.0.113
TFUGEOM.dll Adobe, UGeom© 9.3.0.113
updaternotifications.dll Adobe Updater Notifications Library 7.0.1.102 (BuildVersion: 1.0; BuildDate: BUILDDATETIME) 7.0.1.102
VulcanControl.dll Vulcan Application Control Library 5.0.0.82
VulcanMessage5.dll Vulcan Message Library 5.0.0.82
WRServices.dll WRServices Fri Mar 07 2014 15:33:10 Build 0.20204 0.20204
wu3d.dll U3D Writer 9.3.0.113
Required plug-ins:
3D Studio 15.0 (2014.0.0 x001)
Accented Edges 15.0
Adaptive Wide Angle 15.0
Angled Strokes 15.0
Average 15.0 (2014.0.0 x001)
Bas Relief 15.0
BMP 15.0
Camera Raw 8.0
Camera Raw Filter 8.0
Chalk & Charcoal 15.0
Charcoal 15.0
Chrome 15.0
Cineon 15.0 (2014.0.0 x001)
Clouds 15.0 (2014.0.0 x001)
Collada 15.0 (2014.0.0 x001)
Color Halftone 15.0
Colored Pencil 15.0
CompuServe GIF 15.0
Conté Crayon 15.0
Craquelure 15.0
Crop and Straighten Photos 15.0 (2014.0.0 x001)
Crop and Straighten Photos Filter 15.0
Crosshatch 15.0
Crystallize 15.0
Cutout 15.0
Dark Strokes 15.0
De-Interlace 15.0
Dicom 15.0
Difference Clouds 15.0 (2014.0.0 x001)
Diffuse Glow 15.0
Displace 15.0
Dry Brush 15.0
Eazel Acquire 15.0 (2014.0.0 x001)
Embed Watermark 4.0
Entropy 15.0 (2014.0.0 x001)
Export Color Lookup NO VERSION
Extrude 15.0
FastCore Routines 15.0 (2014.0.0 x001)
Fibers 15.0
Film Grain 15.0
Filter Gallery 15.0
Flash 3D 15.0 (2014.0.0 x001)
Fresco 15.0
Glass 15.0
Glowing Edges 15.0
Google Earth 4 15.0 (2014.0.0 x001)
Grain 15.0
Graphic Pen 15.0
Halftone Pattern 15.0
HDRMergeUI 15.0
IFF Format 15.0
Ink Outlines 15.0
JPEG 2000 15.0
Kurtosis 15.0 (2014.0.0 x001)
Lens Blur 15.0
Lens Correction 15.0
Lens Flare 15.0
Liquify 15.0
Matlab Operation 15.0 (2014.0.0 x001)
Maximum 15.0 (2014.0.0 x001)
Mean 15.0 (2014.0.0 x001)
Measurement Core 15.0 (2014.0.0 x001)
Median 15.0 (2014.0.0 x001)
Mezzotint 15.0
Minimum 15.0 (2014.0.0 x001)
MMXCore Routines 15.0 (2014.0.0 x001)
Mosaic Tiles 15.0
Multiprocessor Support 15.0 (2014.0.0 x001)
Neon Glow 15.0
Note Paper 15.0
NTSC Colors 15.0 (2014.0.0 x001)
Ocean Ripple 15.0
OpenEXR 15.0
Paint Daubs 15.0
Palette Knife 15.0
Patchwork 15.0
Paths to Illustrator 15.0
PCX 15.0 (2014.0.0 x001)
Photocopy 15.0
Photoshop 3D Engine 15.0 (2014.0.0 x001)
Photoshop Touch 14.0
Picture Package Filter 15.0 (2014.0.0 x001)
Pinch 15.0
Pixar 15.0 (2014.0.0 x001)
Plaster 15.0
Plastic Wrap 15.0
PNG 15.0
Pointillize 15.0
Polar Coordinates 15.0
Portable Bit Map 15.0 (2014.0.0 x001)
Poster Edges 15.0
Radial Blur 15.0
Radiance 15.0 (2014.0.0 x001)
Range 15.0 (2014.0.0 x001)
Read Watermark 4.0
Render Color Lookup Grid NO VERSION
Reticulation 15.0
Ripple 15.0
Rough Pastels 15.0
Save for Web 15.0
ScriptingSupport 15.0
Shake Reduction 15.0
Shear 15.0
Skewness 15.0 (2014.0.0 x001)
Smart Blur 15.0
Smudge Stick 15.0
Solarize 15.0 (2014.0.0 x001)
Spatter 15.0
Spherize 15.0
Sponge 15.0
Sprayed Strokes 15.0
Stained Glass 15.0
Stamp 15.0
Standard Deviation 15.0 (2014.0.0 x001)
STL 15.0 (2014.0.0 x001)
Sumi-e 15.0
Summation 15.0 (2014.0.0 x001)
Targa 15.0
Texturizer 15.0
Tiles 15.0
Torn Edges 15.0
Twirl 15.0
Underpainting 15.0
Vanishing Point 15.0
Variance 15.0 (2014.0.0 x001)
Water Paper 15.0
Watercolor 15.0
Wave 15.0
Wavefront|OBJ 15.0 (2014.0.0 x001)
WIA Support 15.0 (2014.0.0 x001)
Wind 15.0
Wireless Bitmap 15.0 (2014.0.0 x001)
ZigZag 15.0
Optional and third party plug-ins: NONE
Plug-ins that failed to load: NONE
Flash:
Installed TWAIN devices: NONE

What Intel video card do you have? The only Intel HD graphics cards officially supported by Photoshop CC are:
Intel HD Graphics P3000
Intel HD Graphics P4000
Intel(R) HD Graphics P4600/P4700
Intel HD Graphics 5000
Go here to read more about the requirements: Photoshop CC and CC 2014 GPU FAQ