Question about Load Average in the AWR report

Hi,
I have some 11.2 RAC databases on AIX.
I was analyzing the root cause of a node eviction.
Looking at the AWR report taken before the reboot, I see:
DB1
Host CPU (CPUs:    6 Cores:    3 Sockets: )
~~~~~~~~         Load Average
               Begin       End     %User   %System      %WIO     %Idle
                4.18     12.33     60.9      12.6       1.6      26.5
Instance CPU
~~~~~~~~~~~~
              % of total CPU for Instance:      27.4
              % of busy  CPU for Instance:      37.3
%DB time waiting for CPU - Resource Mgr:      10.6
DB2
Host CPU (CPUs:    6 Cores:    3 Sockets: )
~~~~~~~~         Load Average
               Begin       End     %User   %System      %WIO     %Idle
                3.77     13.93     60.7      12.5       1.6      26.7
Instance CPU
~~~~~~~~~~~~
              % of total CPU for Instance:       6.9
              % of busy  CPU for Instance:       9.5
  %DB time waiting for CPU - Resource Mgr:       0.0
Do you think these values are high?
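For reference, the OS load history around the eviction can also be pulled from the AWR repository with a query along these lines (only a sketch against DBA_HIST_OSSTAT; the snapshot range is a placeholder):

    SELECT s.snap_id,
           s.end_interval_time,
           MAX(CASE WHEN o.stat_name = 'LOAD'          THEN o.value END) AS os_load,
           MAX(CASE WHEN o.stat_name = 'NUM_CPUS'      THEN o.value END) AS num_cpus,
           MAX(CASE WHEN o.stat_name = 'NUM_CPU_CORES' THEN o.value END) AS num_cores
    FROM   dba_hist_osstat o
           JOIN dba_hist_snapshot s
             ON  s.snap_id         = o.snap_id
             AND s.dbid            = o.dbid
             AND s.instance_number = o.instance_number
    WHERE  o.stat_name IN ('LOAD', 'NUM_CPUS', 'NUM_CPU_CORES')
    AND    s.snap_id BETWEEN 100 AND 110   -- placeholder snapshot range
    GROUP  BY s.snap_id, s.end_interval_time
    ORDER  BY s.snap_id;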
This is vmstat at the time of the reboot:
DATE        TIME      RUN  BCK  AVM        FRE      PRE  PPI  PPO  PFR  PSR  PCY  FIN    FSY      FCS     CUS  CSY  CID  CWA
07/21/2013  00:08:17   31    0  7.400.345  579.923    0   81    0    0    0    0  3.292  187.010  19.560   84   16    0    0
07/21/2013  00:08:17   17    1  7.390.187  589.884    0  176    0    0    0    0  3.681  169.994  21.482   81   19    0    0
07/21/2013  00:08:17   27    1  7.402.121  577.816    0  115    0    0    0    0  3.150  157.210  18.503   84   16    0    0
07/21/2013  00:08:48   19    1  7.422.966  564.179    0  211    0    0    0    0  2.396  152.667  19.368   84   16    0    0
07/21/2013  00:08:48   19    1  7.427.693  559.268    0  162    0    0    0    0  2.990  154.733  19.843   85   15    0    0
07/21/2013  00:08:48   23    1  7.441.204  545.530    0  204    0    0    0    0  2.137  171.501  18.151   84   16    0    0
This is mpstat:
DATE        TIME      CPU  MIN    MAJ  MPC  INT   CS    ICS   RQ  MIG  LPA  SYSC   US  SY  WT  ID  PC
07/21/2013  00:08:48    0  12896   44    0  1279  3030  1362   2  367  100  27313  86  14   0   0  0.49
07/21/2013  00:08:48    1  11055   93    0  1123  3137  1315   1  222  100  31860  85  15   0   0  0.51
07/21/2013  00:08:48    2   5938   51    0  1465  3840  1294   2  532  100  29992  85  15   0   0  0.49
07/21/2013  00:08:48    3   6266   57    0  1247  3177  1046   2  511  100  22793  85  15   0   0  0.51
07/21/2013  00:08:48    4   2661   18    0  1729  4087  1707   4  264  100  24647  85  15   0   0  0.49
07/21/2013  00:08:48    5   4211   10    0  1395  2709  1101   2  209  100  21019  86  14   0   0  0.51
07/21/2013  00:08:49    0   9372   27    0  1150  2583  1219   0  245  100  47745  82  18   0   0  0.47
07/21/2013  00:08:49    1  11327   13    0   726  1803   794   1  130  100  25239  87  13   0   0  0.52
07/21/2013  00:08:49    2   8970  118    0  1459  4396  1517   0  602  100  24833  81  19   0   0  0.49
07/21/2013  00:08:49    3   7328  267    0  1329  4136  1273   2  586  100  25385  81  19   0   0  0.51
07/21/2013  00:08:49    4   8793   19    0  1133  2583  1036   1  235  100  24327  86  14   0   0  0.50
07/21/2013  00:08:49    5   8239   12    0  1309  2846  1165   1  277  100  18513  86  14   0   0  0.50
Thank you

Thank you Jonathan,
I'm looking at ASH for the 15 minutes before the crash.
I have 13% buffer busy waits and 13% resmgr:cpu quantum:
                                                               Avg Active
Event                               Event Class        % Event   Sessions
CPU + Wait for CPU                  CPU                  59.09       0.15
buffer busy waits                   Concurrency          13.64       0.04
resmgr:cpu quantum                  Scheduler            13.64       0.04
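A drill-down of this kind against the ASH history shows which object the buffer busy waits were on (just a sketch; the timestamp range is a placeholder for the 15-minute window before the crash):

    SELECT o.owner, o.object_name, o.object_type,
           h.sql_id,
           COUNT(*) AS samples
    FROM   dba_hist_active_sess_history h
           LEFT JOIN dba_objects o
                  ON o.object_id = h.current_obj#
    WHERE  h.event = 'buffer busy waits'
    AND    h.sample_time BETWEEN TIMESTAMP '2013-07-20 23:53:00'
                             AND TIMESTAMP '2013-07-21 00:08:00'   -- placeholder window
    GROUP  BY o.owner, o.object_name, o.object_type, h.sql_id
    ORDER  BY samples DESC;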
The buffer busy waits were caused by an update of a table.
There are ETL jobs that run every night.
Looking at the I/O stats, I noticed a change in the use of the swap disk:
before the crash:
hdisk66        xfer:  %tm_act      bps      tps      bread      bwrtn   
                         1.0      8.2K     2.0        8.2K       0.0
               read:      rps  avgserv  minserv  maxserv   timeouts      fails
                         2.0      6.7      3.8      9.6           0          0
              write:      wps  avgserv  minserv  maxserv   timeouts      fails
                         0.0      0.0      0.0      0.0           0          0
              queue:  avgtime  mintime  maxtime  avgwqsz    avgsqsz     sqfull
                         0.0      0.0      0.0      0.0        0.0         0.0
near the crash:
hdisk66        xfer:  %tm_act      bps      tps      bread      bwrtn   
                        71.0    241.7K    59.0      241.7K       0.0
               read:      rps  avgserv  minserv  maxserv   timeouts      fails
                        59.0     12.1      0.2    183.5           0          0
              write:      wps  avgserv  minserv  maxserv   timeouts      fails
                         0.0      0.0      0.0      0.0           0          0
              queue:  avgtime  mintime  maxtime  avgwqsz    avgsqsz     sqfull
                         0.0      0.0      0.0      0.0        0.0         0.0

Similar Messages

  • Understanding the AWR report

    Hello,
    Just to start off on the right path, I would like you to know that I am a Java developer trying to understand the AWR report. To give a quick overview of my problem:
    I have built a load test framework using JMeter and am trying to send SOAP requests to my WebLogic server. Each of these requests is converted into multiple Insert, Update, and Merge statements that are executed on the Oracle 10g production-grade DB server. When I run the AWR report, under "SQL ordered by Executions (Global)" I see statements that have run 2 billion times. The JDBC connection to the database is configured for a maximum of 40 connections, and I do not see all of them being used up. The issue is that I am NOT generating that kind of load yet. I am creating around 15,000 SOAP requests an hour and I am expecting around 1 million records to hit the database. The test runs fine for a couple of hours and then the server starts failing because the database is not responding properly. When I run the statistics query on "gv$session s, gv$sqlarea t, gv$process p" to get the pending sessions in the database, I have seen anywhere between 30 and 62 pending sessions with an activity time of more than 300 minutes.
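    A sketch of the kind of check described above (not the exact query; the 300-minute threshold matches the figure mentioned and the column list is only illustrative):

        SELECT s.inst_id, s.sid, s.serial#, s.username, s.status,
               ROUND(s.last_call_et / 60) AS active_minutes,
               t.sql_text
        FROM   gv$session s
               JOIN gv$sqlarea t
                 ON  t.sql_id  = s.sql_id
                 AND t.inst_id = s.inst_id
               JOIN gv$process p
                 ON  p.addr    = s.paddr
                 AND p.inst_id = s.inst_id
        WHERE  s.type   = 'USER'
        AND    s.status = 'ACTIVE'
        AND    s.last_call_et > 300 * 60   -- active for more than 300 minutes
        ORDER  BY s.last_call_et DESC;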
    I am sure I am not sending 2 billion requests from the load test environment I have developed, but the AWR report says so. I want to know if there is a possible reason for this behavior. The stuck threads start occurring on the WebLogic server 30 minutes after I start the test. Below is the exception I got on WebLogic, just in case it helps:
    2014-10-06 19:26:04,960[[STUCK] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)']ERROR DAOUtil -- DAOUtil@SQLException > weblogic.jdbc.extensions.ConnectionDeadSQLException: weblogic.common.resourcepool.ResourceDeadException: Could not create pool connection. The DBMS driver exception was: Closed Connection
        at weblogic.jdbc.common.internal.JDBCUtil.wrapAndThrowResourceException(JDBCUtil.java:249)
        at weblogic.jdbc.pool.Driver.connect(Driver.java:160)
        at weblogic.jdbc.jts.Driver.getNonTxConnection(Driver.java:642)
        at weblogic.jdbc.jts.Driver.connect(Driver.java:124)
        at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:338)
        at com.bci.rms.ea.common.eautil.dao.DAOUtil.getConnectionFromDataSource(DAOUtil.java:222)
    Looking forward to replies/questions...
    Thanks in Advance,
    Sameer.

  • Newbie question about loading servlets on tomcat

    I have what is probably a very basic question about loading simple servlets onto Tomcat to test its installation. I have followed instructions from numerous tutorials to the letter, but still I can't get it to work.
    I have installed Tomcat on Win2k in c:\tomcat. I set up the JDK and the environment vars (JAVA_HOME, CATALINA_HOME, TOMCAT_HOME), which all point at the correct dirs. I can compile a servlet without errors. I can also place a test JSP and HTML file into the root directory and they both work fine.
    However, now I am trying a test servlet and no matter what I do it gives me a 404. I have a servlet class file called "HelloServlet.class", which I placed into the %install_dir%\webapps\ROOT\WEB-INF\classes directory. I try to reference it using this URL:
    http://localhost/servlet/HelloServlet
    Tomcat is configured to use port 80 and has been restarted after adding the servlet class file. Does anyone have a clue why this is not working for me?
    Many thanks
    Marc

    You have to add the information about your servlet to the web.xml file that is in the WEB-INF dir. An example:
    <web-app>
      <servlet>
        <servlet-name>HelloServlet</servlet-name>
        <servlet-class>HelloServlet</servlet-class>
      </servlet>
      <servlet-mapping>
        <servlet-name>HelloServlet</servlet-name>
        <url-pattern>/HelloServlet</url-pattern>
      </servlet-mapping>
    </web-app>

  • Question about printing an image/control Using Report Generator

    The attached VI is an example of creating a report from a control or image. My problem is that when the control (XY graph) gets too large in the vertical direction it clips the image, but the horizontal is scaled to the width of the report.
    First question: why does it scale in the horizontal and not the vertical? See Test.pdf for what I am talking about.
    Second, is it possible to control this behavior (edit/create an alternate for the Append Control Image to Report or the Report VIs)?
    I have found many workarounds, but ultimately it comes back to: why automatically scale one direction but not the other?
    For this application I don't mind that the graph is scaled down in the horizontal; I just wish it would do the same thing in the vertical axis (I don't even want a 1 to 1 zoom ratio, just fill the available report space).
    Thoughts?
    Attachments:
    Print XY Graph TEST.vi ‏55 KB
    Test.pdf ‏217 KB

    I agree that all of the workarounds have limitations. One workaround that I'm not sure you have tried yet is to change the numeric constant from 50 to 20 (I attached a picture showing which constant I am talking about). This seems to fix the issue because the image never becomes larger than the report page itself.
    Let me know if that option creates any problems for you as well.
    Kevin S
    Applications Engineer
    National Instruments
    Attachments:
    Block Diagram.PNG ‏29 KB

  • Question about loading hierarchy

    Hello experts,
           I have some questions about hierarchy loading:
             (1) Are hierarchy DataSources all BW 3.x? Is a DTP needed for this load?
             (2) In a process chain, are both the "DTP process" and the "save hierarchy process" needed, or just one of them?
    Many Thanks,

    You do not need a DTP to load a hierarchy.
    In the process chain, you need to add the InfoPackage and, after that, an attribute change run to load the hierarchy from the hierarchy DataSource.
    Regards,
    Gaurav

  • Question about loading and using Images.

    I know how to load an image and draw it onto a canvas, etc. However, my question has to do with the efficiency of loading multiple images (possibly the same image!) and using them together. I have multiple objects, all instances of the same class, which all draw themselves onto the screen in different positions. However, they all use the same 3 images (depending on an internal state).
    So if I create 5 of these objects and each object calls to load the same image using something along the lines of:
    url = this.getClass().getResource("MyImage.png");
    normalImage = Toolkit.getDefaultToolkit().getImage(url);
    Does this mean that in memory there are 5 exactly the same copies of this image? Or does Java do something clever and use one image that they would all reference? I suspect I will have 5 in memory, but I wanted to ask about this before going and making an imageManager class where I load all the images and just use get methods in that when drawing. I will not be performing any transforms or anything on any of the images, so they can all literally be the same image but simply drawn in multiple positions on the one canvas.
    (note: I am using active rendering to draw these images myself and then blit it to the screen ...).

    Use ImageIO rather than Toolkit--with Toolkit you need to use a MediaTracker to ensure you have your image loaded before you try to use it.
    If you load an image and then do this:
    MyImage1 = MyImage;
    MyImage2 = MyImage;
    MyImage3 = MyImage;
    There are 4 references to the Image in MyImage. If you don't specifically use a method that gives you a new Image, usually you are getting a reference to an image already loaded in memory.

  • Question about Removing Permissions from the System Folder with chmod

    Hi
    I have a question about the removal of permissions from the System folder (and sub directories and files).
    Background
    Since installing a new HD, clean install of 10.6, application of updates and moving over backed up user directories I have had several issues with permissions.
    I have read several threads on this and using disk warrior and other tools I have been able to fix most of the issues.
    The Problem
    The issue that remains is a permissions check using Disk Utility keeps reporting
    ACL found but not expected on "System".
    followed by an extensive list of sub directories and all.
    Attempts to repair take hours and the same errors are reported.
    Found Solutions
    I have read about changing and/or completely removing the ACLs from the System folder's permissions using two different commands:
    sudo chmod -R -N ./System/* ( to remove all ACLs)
    or
    sudo chmod -R -E ./System/* ( to replace all ACLs )
    My question (to the UNIX gurus) is:
    What is the difference between using -N and -E, and which is the best approach for the System directory (and its subordinates)?
    Many thanks!!

    OK
    So I misread your instructions about the PW reset; I did it, and no harm in that. I did also select the options to reset all the permissions for all the accounts, and the ACL issues were not resolved. My bad, I forgot to note that.
    You do suggest getting an expert opinion, but alas these are rather elusive. In most cases the Apple solution is to do a complete reinstall... I have found that unless you completely wipe a drive and rebuild everything, there are often artifacts left behind. Since I have full and redundant back-ups, I would rather explore and hack a little instead of doing a dull old system reinstall. The irony is that the system issue I had was the result of a reinstall and combo update on a new drive. I recognize the risks of entering the realms of the System folders, but I am willing to explore, knowing full well that I have a path to recovery.
    Thanks again for your insights. I come to the forums looking for insights and ideas but not a lecture...

  • Question about Kurt's comments discussing the separation of AIA & CDP - Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy - Kurt L Hudson MSFT

    Question about the sentence in bold. What is the meaning behind this comment?
    How would you separate the role of the AIA and CDP from a subordinate CA server? I can see where I would add a CES and CEP server, which has those as well, but I don't completely understand his comment, because in this second step (http://technet.microsoft.com/en-us/library/tlg-key-based-renewal.aspx) he shows how to implement CES and CEP.
    This is from the guide located at: http://technet.microsoft.com/library/hh831348.aspx
    Step 3: Configure APP1 to distribute certificates and CRLs
    In the extensions of the root CA, it was stated that the CRL from the root CA would be available via http://www.contoso.com/pki. Currently, there is not a PKI virtual directory on APP1, so one must be created.
    In a production environment, you would typically separate the issuing CA role from the role of hosting the AIA and CDP.
    However, this lab combines both in order to reduce the number of resources needed to complete the lab.
    Thanks,
    James

    My concern is that they have a 2-3k base of XP systems, and over this year they are migrating them to Windows 7. During this time they will also be upgrading hardware for the existing Windows 7 machines. The turnover of certificates is going to be high, which, from what I've read here, worries me.
    http://blogs.technet.com/b/askds/archive/2009/06/24/implementing-an-ocsp-responder-part-i-introducing-ocsp.aspx
    The application then can go to those locations to download the CRL. There are, however, some potential issues with this scenario. CRLs over time can get rather large depending on the number of certificates issued and revoked. If CRLs grow to a large size, and many clients have to download CRLs, this can have a negative impact on network performance. More importantly, by default Windows clients will time out after 15 seconds while trying to download a CRL. Additionally, CRLs have information about every currently valid certificate that has been revoked, which is an excessive amount of data given the fact that an application may only need the revocation status for a few certificates. So, aside from downloading the CRL, the application or the OS has to parse the CRL and find a match for the serial number of the certificate that has been revoked.
    With the above limitations, which mostly revolve around scalability, it is clear that there are some drawbacks to using CRLs. Hence the introduction of the Online Certificate Status Protocol (OCSP). OCSP reduces the overhead associated with CRLs. There are server and client components to OCSP: the OCSP Responder, which is the server component, and the OCSP Client. The OCSP Responder accepts status requests from OCSP Clients. When the OCSP Responder receives a request from a client, it needs to determine the status of the certificate using the serial number presented by the client. First the OCSP Responder determines if it has any cached responses for the same request. If it does, it can send that response to the client. If there is no cached response, the OCSP Responder checks to see if it has the CRL issued by the CA cached locally on the OCSP. If it does, it can check the revocation status locally and send a response to the client stating whether the certificate is valid or revoked. The response is signed by the OCSP Signing Certificate that is selected during installation. If the OCSP does not have the CRL cached locally, the OCSP Responder can retrieve the CRL from the CDP locations listed in the certificate. The OCSP Responder then can parse the CRL to determine the revocation status and send the appropriate response to the client.

  • Question about X-series with the bundled 65W adapter, gets very hot

    Hi all,
     My company purchased many X230 notebooks last year.
     Some are the standard configuration (i3 CPU) and some are i5 CPU.
     The i3 models seem to have no problem.
     However, the i5 model has an issue: it can't fully run all 4 cores under high load when connected to the bundled 65W AC adapter, and the adapter gets hot. After investigation, we changed to a 90W AC adapter and the problem was gone. All 4 cores run correctly and the 90W AC adapter stays cool.
     The reseller said that the 65W adapter is the factory package and they will not provide any warranty if we "incorrectly" use the 90W adapter (but the Lenovo homepage says the adapter is supported...).
     Now, we are considering the X230i model with the same upgrade to an i5 CPU.
     Can someone provide feedback on whether the X230 / X230i with an i5 CPU runs normally on the 65W AC adapter?
    Regards,
         Donald

    Could this be the same or related issue?
    I have a T430s with integrated graphics and dual core i5 CPU. According to Lenovo sales literature this system will work with a 65W AC adapter. Also when I ordered the system online from Lenovo's site I was also given the option to select either a 65W or 90W adapter.
    At the office I use a 90W adapter. The system idles at under 40°C with the fan usually not spinning or spinning at slowest setting (according to TPFanControl) even when the system is under load. It may get as hot as 50°C when the system is under heavy load. But I've never seen it overheat or had the fan spin continuously at maximum speed, etc.
    When travelling I use a 65W adapter in order to minimize size and weight. In that situation the fan is always running and temperatures are usually in the 50°C to 60°C range. On more than one occasion system temperature has "run away" getting increasingly hotter, going to 80°C or 90°C.
    In one case the system shut itself down. I let it cool down overnight. When I rebooted I got a fan error. After that incident I sent the T430s to Lenovo for repair. The technicians found nothing wrong. From their report it's not clear if they replaced the fan or not. Nevertheless this "run away" temperature situation has recurred since then.
    From reading posts on this thread and based on my experiences with the T430s it seems to me that the 65W adapter doesn't have enough power for even a basic T430s despite what Lenovo's marketing literature may say.
    I wonder if the same conclusion doesn't also apply to your systems? Do your temperature issues go away when your system runs on a 90W adapter?
    Cheers... Dorian Hausman
    X1C2, TPT2, T430s, SL500, X61s, T60p, A21p, 770, 760ED... 5160, 5150... S360/30

  • How to sort an average in an interactive report

    I have an interactive report. The columns are candidate_name, reviewer_name, and score.
    Each reviewer has a score for each candidate, and there are many candidates. My boss wants to know which candidate gets the highest average score. I built an interactive report.
    I can average the score for each candidate_name and break on this column. I don't know if we can sort by the average score for each candidate_name. If we can, how do I do this?
    Thanks a lot!

    Not sure if you got an answer for this, but the only way I know how to do it would be to use an analytic function in the query so that the average of the scores is listed as a column. When you have that, you can sort by it. As of APEX 3.2.1, you will not be able to create this new column using the interactive report's "Compute" feature; you'll need to build it into the SQL.
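    For example, something along these lines (the table name here is a placeholder; the column names are taken from your description):

        SELECT candidate_name,
               reviewer_name,
               score,
               AVG(score) OVER (PARTITION BY candidate_name) AS avg_candidate_score
        FROM   candidate_scores   -- placeholder table name
        ORDER  BY avg_candidate_score DESC, candidate_name;

    The interactive report can then sort (or break) on avg_candidate_score like any other column.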
    Shane.

  • Question about Setting Window Title/the use of AVWindowSetTitle()

    Hi everyone,
    I have a question about setting the title of the window in which the Acrobat viewer normally opens a PDF file.  The documentation states that AVWindowSetTitle() cannot be used in this case.  However, using it has worked with versions of Acrobat/Reader up until and including version 8.
    Everything breaks down starting with Acrobat 9.
    According to the documentation I am supposed to do the following: "To set the title of a window in which the Acrobat viewer opens a PDF file, you must replace AVDocOpenFromASFileWithParams() and pass the window title in tempTitle."
    Unfortunately, there are 2 problems I have with this approach:
         I do need to be able to change the document title on document Save, not only on document Open
         I do not know what the AVDocOpenFromASFileWithParams() implementation has to look like if I have to replace it using HFTReplaceEntry().
    Is there a sample customized implementation of AVDocOpenFromASFileWithParams() somewhere that I could take a look at?
    Is there a way to change a document Title inside Acrobat/Reader window after a Save operation?
    Thanks a million,
      Lana2010K

         I am sorry. I don't know how you tested this and came to the conclusion that this works correctly in Acrobat X.
         I just tested our Acrobat plug-in with a trial version of Acrobat X and this did not work.
         When we open a file in Acrobat (doing it ourselves in the plug-in by adding another specialized open) we set the window title to something different from the default file name.  Then if a file gets edited and saved (File->Save), the window title gets reset to the file name. In the PDDocDidSave callback we call AVWindowSetTitle() to set it back to a more descriptive window title we need.  This has worked up until and including Acrobat 8, but does not work in either version 9.0 or 10.0.
         Also, I just modified the plug-in code to always change the Window Title of every document (even if opened through native File->Open) on document Save.  It does not work.
         Please help,
              Lana2010K

  • Last data-load date in the WebI report

    Hey,
    I have made a WebI report on the InfoCube. Now the client is interested in seeing the date on which the data was last loaded into the InfoCube. Can someone help with this scenario?
    Thanks.

    Hi,
    Take the max of the record load date in the report and show it.
    Let me know if this does not work.
    Cheers,
    Ravichandra K

  • Question about loading Premiere CS2

    I have Premiere Pro 1.5 on a dying computer with the XP operating system. I couldn't get 1.5 reauthorized, so Adobe gave me a CS2 download with a serial number. I tried to load CS2 on the newer computer with Microsoft Vista as the operating system. At first it hung on "Error 1311 - Source file not found. Data1.cab." I moved Data1.cab to the directory and then I was able to get past that error. CS2 continued installation until it got to the help files, which didn't load. The rest of CS2 seemed to load OK.
    I then tried to open CS2.  It bombs after 3 seconds of the opening page.  Any ideas?

    I don't have a CD of CS2. I downloaded a second copy of CS2 from Adobe and had the same result. However, I was repairing the original CS2 with the second copy. I will uninstall CS2 and try to install the newer copy of CS2 later.
    I am wondering... could it be that CS2 doesn't like Vista?

  • Question about loading time determination in OVLZ

    Hello,
    In t-code OVLZ, field "Determine load. time", the F1 explanation is below:
    -No loading time determination
    No loading time is determined
    -A Route-dependent
    A route-dependent loading time is determined.
    Influencing factors are the shipping point, the route, and the loading group
    -B Route-independent
    A route-independent loading time is determined.
      Influencing factors are the shipping point, the route, and the loading group of the material.
    Can somebody tell me what the difference is between A and B? They seem to be the same according to the second line of the explanation of B.
    Thanks,
    Gunadi

    Hi,
    In OVLZ (shipping point, working hours), for "Determine load. time":
    Option A stands for route-dependent, meaning the loading time is determined by the combination of shipping point + route + loading group.
    Option B stands for route-independent, meaning the loading time is determined by the combination of shipping point + loading group.
    The loading time affects delivery scheduling, i.e. it is taken into account for forward scheduling, or else backward scheduling.
    Hope this helps.
    Regards,
    Arun Prasad

  • Question about versioning (history) in the latest release

    Hello,
    Some questions:
    1. Will versioning ever be integrated with locked pages, locked scenes, colored pages, revision marks functionality, etc., so that at a particular moment you can optionally augment the current versioning with the more traditional production revision cycles?
    2. If the above will not be the case (or is never activated): what currently happens with scene numbers & manual breakdown info that has already been added, but then the scene gets deleted? Is all this info gone (& scenes renumbered)? Can we save a version to keep this info? Please clarify.
    3. Regarding breakdown of elements: will there be a way that we can visualize breakdown sheets on screen.  It's much easier for me to look at it that way, e.g. the Scenechronize way (I guess I'm old).
    4. Regarding breakdown and making the script 'production ready', will there be a way to combine scenes that belong together and part scenes (that are not in the same shooting location but where the writer chose to combine the action in one scene, eg like telephone conversation etc).
    I guess all my questions boil down to this: will Story also aim to offer serious production breakdown features that could be used by production managers & assistant directors to prep for physical production, or will it mainly be a collaborative writing tool (even if it does allow you to transfer text into On Location shotlists and then eventually into your clips). 
    I guess there's a conceptual difference (at least when talking about narrative film with a lot of cast, locations, props etc): do you support mainly the writing effort and than transfer into a tool like Scenechronize once production starts - or do you go further? For  (corporate) interview type shoots this matters less.
    I would be interested to hear your views on the matter.
    Regards,
    Bavo

    Yes. Versioning will be integrated with locked pages and colored pages, but we are not planning to lock scenes yet. We will also show asterisks against changed lines in future updates.
    You can always explicitly save a version (file->save as) to mark a checkpoint draft. You can add your own comments while saving this draft so that you can find it later.
    We'd like to support production related features because we want to leverage script metadata for all it can be leveraged for. That said, "full fledged" is a loaded term. We will take small steps and enable workflows related to production as we go along. Dynamic breakdown reports, that can be viewed and edited in Story are on our roadmap.
    --Anubhav
