Performance degradation when running offline
Hi,
I'm noticing that when I run Java Web Start applications, there is a significant difference in performance depending on whether I'm offline or have access to the application host. If I click a Swing button to launch a dialog, for example, the first time I do this it is really, really slow; on subsequent calls it speeds up. When I have an internet connection to the host, however, performance is acceptable even on the first button press. None of the application functionality actually uses the internet connection/host other than to download the application.
I'm running 1.6 update 4.
So, why the delay when offline? Is it to do with the Java cache? A lookup related to JAR signing? Something else?
Thanks,
-Paul
Edited by: PaulDBrown on Sep 26, 2008 10:41 AM
[made description more precise]
I find that if I put an entry in my hosts file corresponding to the IP address of the Web Start hosts, then everything works as I would expect.
So the issue is that the host has to be resolvable - there has to be access to either a DNS server or an entry in the hosts file that identifies the host so that performance is not affected when the application is run offline. However, if the user really is offline, then it should be expected that they do not have access to the DNS servers either. In my view, this is a bug.
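The resolvability hypothesis above is easy to test outside Web Start. The Python sketch below is a diagnostic aid, not part of Web Start itself; the hostname to check would be whatever your JNLP codebase points at. It times a name lookup, which is exactly the step that stalls when no DNS server is reachable:

```python
import socket
import time

def resolution_time(hostname):
    """Return (seconds, resolved) for a single name lookup.

    A lookup that has to wait on an unreachable DNS server can block
    for many seconds, which would surface as the first-use delay
    described above.
    """
    start = time.monotonic()
    try:
        socket.gethostbyname(hostname)
        resolved = True
    except socket.gaierror:
        resolved = False
    return time.monotonic() - start, resolved

elapsed, ok = resolution_time("localhost")
print(f"localhost resolved={ok} in {elapsed:.3f}s")
```

On a machine with no DNS access, running this against the Web Start host's name should show the multi-second stall; with a matching hosts-file entry it returns almost immediately.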
Thanks,
-Paul
Similar Messages
-
Performance problem when running a personalization rule
We have a serious performance problem when running a personalization rule.
The rule is defined like this:
Definition
Rule Type: Content
Content Type: LoadedData
Name: allAnnouncements
Description: all announcements of types: announcement, deal, new release,
tip of the day
If the user has the following characteristics:
And when:
Then display content based on:
(CONTENT.RessourceType == announcement) or (CONTENT.RessourceType == deal)
or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType == tip
of the week)
and CONTENT.endDate > now
and CONTENT.startDate <= now
END---------------------------------
and is invoked in a JSP page like this:
<%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl'
|| CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) &&
(CONTENT.userType ='retailer')"%>
<pz:contentselector
id="cdocs"
ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitio
nHome/b2boost"
rule="allAnnouncements"
sortBy="startDate DESC"
query="<%=customQuery%>"
contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
The customQuery is constructed at runtime from user information and cannot be constructed with the rules administration interface.
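To illustrate the runtime-construction point, here is a sketch of how such a query string could be assembled from user data. The helper function and its parameters are hypothetical; only the expression syntax is taken from the JSP above:

```python
def build_custom_query(language, country, exclude_id, user_type):
    # Field names (CONTENT.language, CONTENT.Country, ...) follow the
    # content-selector expression syntax shown in the JSP; the helper
    # itself is an illustrative stand-in for the real page logic.
    return (
        f"(CONTENT.language='{language}') && "
        f"(CONTENT.Country='{country}' || CONTENT.Country='*' ) && "
        f"(!(CONTENT.excludeIds like '*#{exclude_id}#*')) && "
        f"(CONTENT.userType ='{user_type}')"
    )

print(build_custom_query("en", "nl", 7, "retailer"))
```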
When I turn on debugging mode, I can see that the rule is parsed and a SQL
query is generated, with the correct parameters.
This is the generated query (with the substitutions):
select
WLCS_DOCUMENT.ID,
WLCS_DOCUMENT.DOCUMENT_SIZE,
WLCS_DOCUMENT.VERSION,
WLCS_DOCUMENT.AUTHOR,
WLCS_DOCUMENT.CREATION_DATE,
WLCS_DOCUMENT.LOCKED_BY,
WLCS_DOCUMENT.MODIFIED_DATE,
WLCS_DOCUMENT.MODIFIED_BY,
WLCS_DOCUMENT.DESCRIPTION,
WLCS_DOCUMENT.COMMENTS,
WLCS_DOCUMENT.MIME_TYPE
FROM
WLCS_DOCUMENT
WHERE
((((WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = ''
AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
AND (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
AND (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
AND ((WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'language'
AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
AND ((WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
AND WLCS_DOCUMENT_METADATA.VALUE = '*'
AND NOT (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
AND (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
At this moment, the server makes the user wait more than 10 min for the
query to execute.
This is what I found out about the problem:
1) When I run the query in an Oracle SQL client (we are using Oracle 8.1.7.0), it takes 5-10 seconds.
2) If I remove the second term of (CONTENT.Country='nl' || CONTENT.Country='*') in the custom query, thus restricting to CONTENT.Country='nl', the performance is OK.
3) There are currently more or less 130 records in the DB that have Country='*'.
4) When I run the page on our QA server (Solaris), which is at the same time our Oracle server, the response time is OK, but if I run it on our development server (W2K), response time is ridiculously long.
5) The problem also happens if I add the term (CONTENT.Country='nl' || CONTENT.Country='*') to the rule definition and remove this part from the custom query.
Am I missing something? Am I using the personalization server correctly?
Is this performance difference between QA and DEV due to differences in the
OS?
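For comparison: the generated SQL probes WLCS_DOCUMENT_METADATA once per OR'd RessourceType value. On such a name/value layout, one classic way to cut the work is to fold the values into a single IN list. This is a sketch on a toy SQLite schema, illustrative only and not the vendor's fix; table and column names are simplified stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE doc  (id INTEGER PRIMARY KEY);
CREATE TABLE meta (id INTEGER, name TEXT, value TEXT);
INSERT INTO doc VALUES (1), (2), (3);
INSERT INTO meta VALUES
  (1, 'RessourceType', 'announcement'),
  (2, 'RessourceType', 'deal'),
  (3, 'RessourceType', 'press release');
""")

# One subquery per value, mirroring the shape of the generated SQL above.
per_value = conn.execute("""
  SELECT id FROM doc
  WHERE id IN (SELECT id FROM meta
               WHERE name = 'RessourceType' AND value = 'announcement')
     OR id IN (SELECT id FROM meta
               WHERE name = 'RessourceType' AND value = 'deal')
  ORDER BY id
""").fetchall()

# The same predicate folded into a single metadata probe with an IN list.
folded = conn.execute("""
  SELECT id FROM doc
  WHERE id IN (SELECT id FROM meta
               WHERE name = 'RessourceType'
                 AND value IN ('announcement', 'deal'))
  ORDER BY id
""").fetchall()

assert per_value == folded    # identical result set, one probe instead of two
print(folded)
```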
Thank you,
Luis Muñiz
Luis,
I think you are working through Support on this one, so hopefully you are in good
shape.
For others who are seeing this same performance issue with the reference CM implementation,
there is a patch available via Support for the 3.2 and 3.5 releases that solves
this problem.
This issue is being tracked internally as CR060645 for WLPS 3.2 and CR055594 for
WLPS 3.5.
Regards,
PJL
-
Very slow JClient performance when running with a remote server
We have performance problems when running a JClient application if the application server is on a different machine in the same 100 Mbit network. In our application we open 6 panels with about 15 TextFieldBindings each, on a tabbed pane. Each panel has its own view object on the server. It takes the panel almost two minutes to start up. Our own code seems to perform reasonably, but between the last line of code and the actual visibility of the panel there is a long period of low-intensity network traffic between the client and the server machine, while both machines have low CPU usage. We tried setting the syncMode of the ApplicationModule to SYNC_LAZY and SYNC_IMMEDIATE, but this does not seem to make any difference.
It seems as if the server starts throwing a lot of events after our code is executed, which are caught by the BC4J controlbinding listeners. The performance is a lot better if we have the server and the client on the same machine, and the database on a different one.
This kind of performance is not acceptable for this application. Are we doing something that should not be done with BC4J, or are we missing something?
You must be hitting a performance issue regarding download of all property metadata for setting labels etc. on the UI (in case of remote-tier deployment).
This issue has been resolved for our next release of JDeveloper. Basically, a new API has been added that allows 3-tier apps to "download" the set of "used" VO definitions, attribute definitions, etc. on the client, so that the UI comes up quickly.
Also, the application/UI/binding-load code generation has been modified to allow for "lazy" loading of controls, lazy binding, etc., quite like what's done in the JClient Control-bindings sample on OTN.
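That lazy-loading idea can be illustrated with a language-neutral sketch. This Python example is hypothetical (not BC4J API); it defers each panel's expensive binding setup until the tab is first shown, so startup only pays for the visible panel:

```python
class LazyPanel:
    """Defers an expensive load until the panel is first shown."""

    def __init__(self, name, loader):
        self.name = name
        self._loader = loader      # e.g. fetches binding metadata remotely
        self._bindings = None

    def show(self):
        if self._bindings is None:          # runs on first display only
            self._bindings = self._loader(self.name)
        return self._bindings

calls = []
def fake_loader(name):
    calls.append(name)                      # stands in for a server round trip
    return [f"{name}-field-{i}" for i in range(3)]

panel = LazyPanel("orders", fake_loader)
panel.show()
panel.show()                                # cached: no second round trip
print(calls)                                # the loader ran exactly once
```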
For 9.0.2, you may shorten the "load" time by loading only the UI that's first displayed and pre-loading the ViewObject definition. However it'll still be slower than what the above-mentioned method would do in one round trip. -
Performance hit when running in ARCHIVELOG mode.
What is the performance hit when running in ARCHIVELOG mode?
Thank you,
David
I am not one to disagree with Tom Kyte (unless I think he is wrong :) ), and I am not going to disagree here. I do caution against the simplistic answer that the hit is negligible, and I commend the respondent who qualified that answer with a discussion of I/O.
I have come across more than one situation where archive logging was a performance hit because of the associated I/O. Many want to put archive logs on cheaper storage and do not recognize that not only can this slow a system but that it could become a major issue resulting in a system that hangs until the logs are written. A better solution for these folks is to write to fast storage and have a secondary process that offloads those logs to the slower storage.
Let us also not assume that the archive location is local disk. It might be that an archive location is remote, such as with log shipping or NFS. Network latency can become an issue.
There are many things to consider as there always are. I suppose with any answer, even if simple, one could spin it with some obscure situation that makes the simple answer inappropriate. Having seen some burned by this issue, I chose to elaborate, and I appreciate your indulgence.
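The fast-storage-plus-offload approach described above can be sketched as a small out-of-band mover process. The paths and the .arc suffix are illustrative, and this is not an Oracle-provided utility:

```python
import shutil
from pathlib import Path

def offload_logs(fast_dir, slow_dir, suffix=".arc"):
    """Move completed archive logs from fast to slow storage.

    The database keeps archiving to fast_dir at full speed; this mover
    runs out of band, so slow storage latency never stalls archiving.
    """
    fast, slow = Path(fast_dir), Path(slow_dir)
    slow.mkdir(parents=True, exist_ok=True)
    moved = []
    for log in sorted(fast.glob(f"*{suffix}")):
        shutil.move(str(log), str(slow / log.name))
        moved.append(log.name)
    return moved
```

In practice a mover like this would run on a timer and skip the file currently being written; the sketch just shows the division of labor.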
Chris -
Performance slowdown when running through report server
There is a considerable performance difference when running a particular report through the report server: it is slower than when run directly from Reports Builder.
Each points to the same data/application server. There is very little formatting or other work required of the client.
The difference is about 10-fold, i.e. a report taking 1 minute to run via ReportBuilder takes about 10 minutes when launched through the report server.
I have no idea how to begin diagnosing this. Our DBA says it is possible for the 'explain plan' to differ because the SQL is contained within the .REP file (and not as a stored procedure in the database). Even so, how/where would one begin looking to discover the reason for the dramatic performance difference?
Performance monitors on both the app/database server and the Oracle engine show that the slow running report consumes about the same amount of resources as the faster report. The slower report simply takes 10 times as long to run.
Perhaps we're not looking at the 'best' performance monitor parameters?
Anything else we can use to trace execution differences?
When we get this we try the following.
The DBAs have a tool which can view the SQL running under the server and under Reports Builder; you can then see if the plan is different.
Ordering: if there is a lot of ordering, we get the SQL to pre-sort using the ORDER BY clause, which means the iAS server has less work to do (depending on your PC, we find a PC is always quicker at formatting than the server).
User logon: we use a different user logon to that used by the server, which sometimes gives rise to different plans. You can change this in Reports using ALTER SESSION. E.g. we find these can speed up reports if put in the before-report trigger (n.b. the values may differ depending on your environment):
srw.do_sql('ALTER SESSION SET optimizer_index_cost_adj=50');
srw.do_sql('ALTER SESSION SET optimizer_index_caching=90'); -
DB2 instance shutdown when running offline backup
Dear Expert,
After doing some maintenance (a firmware upgrade and a db2 relocate), the DB2 processes shut down while running an offline backup. Please see the message captured in db2diag.log:
2009-06-14-08.06.20.905317+420 E363402871A510 LEVEL: Severe
PID : 426016 TID : 1 PROC : db2gds 0
INSTANCE: db2p01 NODE : 000
FUNCTION: DB2 UDB, oper system services, sqloEDUCodeTrapHandler, probe:10
MESSAGE : ADM0503C An unexpected internal processing error has occurred. ALL
DB2 PROCESSES ASSOCIATED WITH THIS INSTANCE HAVE BEEN SHUTDOWN.
Diagnostic information has been recorded. Contact IBM Support for
further assistance.
2009-06-14-08.06.20.905881+420 E363403382A642 LEVEL: Severe
PID : 426016 TID : 1 PROC : db2gds 0
INSTANCE: db2p01 NODE : 000
FUNCTION: DB2 UDB, oper system services, sqloEDUCodeTrapHandler, probe:20
DATA #1 : Signal Number Recieved, 4 bytes
6
DATA #2 : Siginfo, 64 bytes
0x0FFFFFFFFFFFD270 : 0000 0006 0000 0000 0000 0009 0000 0000 ................
0x0FFFFFFFFFFFD280 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0FFFFFFFFFFFD290 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0FFFFFFFFFFFD2A0 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
2009-06-14-08.06.20.922545+420 I363404025A371 LEVEL: Severe
PID : 647618 TID : 1 PROC : db2sysc 0
INSTANCE: db2p01 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleChildCrashHandler, probe:15
MESSAGE : DiagData
DATA #1 : Hexdump, 15 bytes
0x0000000100009834 : 416E 2045 4455 2063 7261 7368 6564 2E An EDU crashed.
2009-06-14-08.06.20.923399+420 I363404397A359 LEVEL: Severe
PID : 647618 TID : 1 PROC : db2sysc 0
INSTANCE: db2p01 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleChildCrashHandler, probe:16
MESSAGE : DiagData
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFD9E4 : 0006 8020 ...
Error report (ERRPT):
LABEL: CORE_DUMP
IDENTIFIER: C69F5C9B
Date/Time: Sun Jun 14 08:06:20 THAIST 2009
Sequence Number: 1621
Machine Id: 0006A03FD600
Node Id: tdsprd01
Class: S
Type: PERM
Resource Name: SYSPROC
Description
SOFTWARE PROGRAM ABNORMALLY TERMINATED
Probable Causes
SOFTWARE PROGRAM
User Causes
USER GENERATED SIGNAL
Recommended Actions
CORRECT THEN RETRY
Failure Causes
SOFTWARE PROGRAM
Recommended Actions
RERUN THE APPLICATION PROGRAM
IF PROBLEM PERSISTS THEN DO THE FOLLOWING
CONTACT APPROPRIATE SERVICE REPRESENTATIVE
Detail Data
SIGNAL NUMBER
6
USER'S PROCESS ID:
426016
FILE SYSTEM SERIAL NUMBER
20
INODE NUMBER
4098
CORE FILE NAME
/db2/P01/db2dump/c426016.000/core
PROGRAM NAME
db2sysc
STACK EXECUTION DISABLED
0
COME FROM ADDRESS REGISTER
PROCESSOR ID
hw_fru_id: N/A
hw_cpu_id: N/A
ADDITIONAL INFORMATION
pthread_k 88
praise 6C
raise 38
abort B8
sqloEDUSI 260
Symptom Data
REPORTABLE
1
INTERNAL ERROR
0
SYMPTOM CODE
PCSS/SPI2 FLDS/db2sysc SIG/6 FLDS/sqloEDUSI VALU/260
LABEL: CORE_DUMP
IDENTIFIER: C69F5C9B
Date/Time: Sun Jun 14 08:06:20 THAIST 2009
Sequence Number: 1620
Machine Id: 0006A03FD600
Node Id: tdsprd01
Class: S
Type: PERM
Resource Name: SYSPROC
Please advise: what's going on with the database?
Your feed back is greatly appreciated.
Thanks and Regards,
Rudi -
Performance Issues when running 1.5.0_9 with servers
Hi. We have an application in Java. We run it using 1.5.0_9 on Win2003 and have no problems running it on single Intel CPU PCs and servers. However, we are finding that some higher-spec servers run the application significantly slower... around 30%. It seems that it's either a problem with the Enterprise version of Win2k3 or perhaps with AMD Opteron CPUs.
Does anyone know of any known issues with Win2k3 Enterprise or AMD Opteron CPUs or simply any dual CPU technology?
Thanks
Edited by: gingerdazza on Jul 2, 2008 1:21 AM
I'm able to recreate the problem where a section of a screen goes black with the following config.
JDK 1.7.0_06
2 graphics cards
3 monitors
Create a new JavaFX application project (Hello World) in NetBeans 7.2. Once executed, position the window so it spans two monitors, ensure the monitors are on separate graphics cards, and then hover the mouse over the Hello World button: one half of the window goes black. Positioning the window so it doesn't span a monitor draws the window correctly.
Has anyone experienced this issue?
Thanks. -
Performance problems when running PostgreSQL on ZFS and tomcat
Hi all,
I need help with some analysis and problem solution related to the below case.
The long story:
I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
Within a non-global zone I'm running a Tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
NPROC USERNAME SWAP RSS MEMORY TIME CPU
49 postgres 749M 669M 4,7% 7:14:38 13%
1 jboss 2519M 2536M 18% 50:36:40 5,9%
We are not 100% sure why we run into performance problems, but when it happens the application slows down and swaps out (see below). When it settles, everything seems to return to normal. When the problem is acute the application is totally unresponsive.
NPROC USERNAME SWAP RSS MEMORY TIME CPU
1 jboss 3104M 913M 6,4% 0:22:48 0,1%
#sar -g 5 5
SunOS vbn-back 5.10 Generic_142901-03 i86pc 05/28/2010
07:49:08 pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
07:49:13 27.67 316.01 318.58 14854.15 0.00
07:49:18 61.58 664.75 668.51 43377.43 0.00
07:49:23 122.02 1214.09 1222.22 32618.65 0.00
07:49:28 121.19 1052.28 1065.94 5000.59 0.00
07:49:33 54.37 572.82 583.33 2553.77 0.00
Average 77.34 763.71 771.43 19680.67 0.00
Making more memory available to Tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
An unofficial performance evaluation on the database with vacuum analyze took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific considering the hardware.
The short story:
I'm trying different steps but running out of ideas. We've read that the database block size and file system block size should match. PostgreSQL's is 8 KB and ZFS's is 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change.
Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
Any help appreciated and I will try to provide additional information on request if needed
Thanks in advance,
Kasper
raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
You can change the record size by "zfs set recordsize=8k <dataset>"
It will only take effect for newly written data. Not existing data. -
AP invoice is not offsetting in the AP system when we perform a check run
Hi Everyone,
The invoices are not being offset in the AP system when we perform a check run; they remain outstanding in AP. Can you please tell us why? There are several of these in the system and I need them cleared out.
Please help me as early as possible.
Thanks,
Hi,
The status is not changing to "Paid". In this case I am trying to apply a credit memo (dated 01-dec-09, GL date 15-dec-09); the invoice date was 01-jul-2007, and the invoice and credit memo transaction amounts were 500 each.
Invoice amount = 500 USD
Credit memo = <500> USD
The transactions should knock off.
Please suggest a solution.
Thanks for your early response. -
ZBook 17 g2 - poor DPC Latency performance when running from z Turbo Drive PCIe SSD
I'm setting up a new zBook 17 g2 and am getting very poor DPC latency performance (> 6000 us) when running from the PCIe SSD. I've re-installed the OS (Win 7 64 bit) on both the PCIe SSD and a SATA HDD and the DPC latency performance is fine when running from the HDD (50 - 100 us) but horrible when running from the PCIe SSD (> 6000 us). I've updated the BIOS and tried every combination of driver and component enabling/disabling I can think of. The DPC latency is extremely high from the initial Windows install with no drivers installed. Adding drivers seems to have no effect on the DPC latency. Before purchasing the laptop I found this review: http://www.notebookcheck.net/Review-HP-ZBook-17-E9X11AA-ABA-Workstation.106222.0.html where the DPC latency measurement (middle of the page) looks OK. Of course, this is the prior version of the laptop and I believe it does not have the PCIe SSD. Combining that with the fact that I get fine performance when running from the HDD I am led to believe that the PCIe SSD is the cause of the problem. Has anyone found a solution to this problem? As it stands right now my zBook is not usable for digital audio work when running from the PCIe SSD. But it cost me a lot of money so I'd sure like to use it...! Thanks, rgames
Hi mooktank, No solution yet but, as of about six weeks ago, HP at least acknowledged that it's a problem (finally). I reproduced it perfectly on another zBook 17 g2 and another PCIe SSD in the same laptop and HP was able to reproduce the problem as well. So the problem is clearly in the BIOS or with some driver related to the PCIe SSD. It could also be with the firmware in the drive itself, but I can't find any other PCIe drives in the 60 mm form factor. So there's no way to see if a different type of drive would fix the problem. My suspicion is that it's related to the PCIe sleep states - those are known to cause exactly these types of problems because the drive takes quick "naps" to save power and there's a delay when it is told to wake back up. That delay causes a delay in the audio buffer that results in pops/crackles/stutters that would never be noticed doing other tasks like video editing or CAD work. So it's a problem specific to folks who need low-latency audio performance (very few apps require low latency audio - video editing, for example, uses huge buffers with relatively high latency). A lot of desktops offer a BIOS option to disable those sleep states but no such option exists in HP's BIOS for that laptop. In theory you can do it from within Windows but it doesn't have an effect on my system. That might be one of those options that Windows allows you to change but that actually has no effect. One workaround is to disable CPU throttling. That makes the CPU run at full speed all the time and, I believe, also disables the PCIe and other sleep states. When I disable CPU throttling, DPC latency goes back to normal. However, the CPU is then running full-speed all the time so your battery life basically goes to nothing and the laptop gets *very* hot. Clearly that is not necessary because the laptop runs fine from the SATA SSD. HP needs to fix the latency problem associated with the PCIe drive.
The next logical step is to provide a BIOS update that provides a way to disable the PCIe sleep states without disabling CPU throttling, like on many desktop systems. The bad news is that HP tech support is not very technical, so it takes forever for them to figure out what I'm talking about. It took a couple months for them to start using the DPC Latency checker. Hopefully there will be a fix at some point... in the meantime, I hope that HP sends me a check for spending so much time educating their techs on how computers work. And for countless hours lost re-installing different OSes only to show that the performance is exactly the same as shown in the DPC Latency checker. rgames
-
Performance issue when using the same query in a different way
Hello,
I have a performance problem with the statement below when running it with an insert or with execute immediate.
n.b.: This statement could be more optimized, but it is a generated statement.
When I run this statement I get one row back within one second, so there is no performance problem.
select sysdate
,5
,'testje'
,count (1)
,'NL' groupby
from (select 'different (target)' compare_type
,t.id_org_addr id_org_addr -- ID_ORG_ADDR
,t.vpd_country vpd_country -- CTL_COUNTRY
,t.addr_type addr_type -- ADDRESSTYP_COD
from (select *
from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m
on m.vpd_country = t.vpd_country
and m.key_type = 'ORGADDR2'
and m.target_value = t.id_org_addr
where coalesce (t.end_date, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where vpd_country = 'NL' /*EGRB*/
) t
where exists
(select null
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s
where t.id_org_addr = s.id_org_addr)
minus
select 'different (target)' compare_type
,s.id_org_addr id_org_addr -- ID_ORG_ADDR
,s.ctl_country vpd_country -- CTL_COUNTRY
, (select to_number (l.target_value)
from okc_code_foreign l
where l.source_code_type = 'TYS'
and l.target_code_type = 'ADDRLINKTYPE'
and l.source_value = upper (s.addresstyp_cod)
and l.vpd_country = s.ctl_country)
addr_type -- ADDRESSTYP_COD
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s)
When I run this statement using an insert by placing
insert into okc_compare_results (
datetime
,compare_tables_id
,compare_target
,record_count
,groupby
) before the statement, then the statement runs for about *3 to 4 minutes*. The same happens when running the select part only using execute immediate.
Below the execution plans of the insert with the select and the select only.
Could somebody tell me what causes the different behavior of the "same" statement, and what I could do to avoid this behavior?
The database version is: 11.1.0.7.0
Regards,
Fred.
SQL Statement which produced this data:
select * from table(dbms_xplan.display_cursor ('cuk3uwnxx344q',0 /*3431532430 */))
union all
select * from table(dbms_xplan.display_cursor ('862aq599gfd6n',0/*3531428851 */))
plan_table_output
SQL_ID cuk3uwnxx344q, child number 0
select sysdate ,:"SYS_B_00" ,:"SYS_B_01"
,count (:"SYS_B_02") ,:"SYS_B_03" groupby from ( (select
:"SYS_B_04" compare_type ,t.id_org_addr id_org_addr
-- ID_ORG_ADDR ,t.vpd_country vpd_country --
CTL_COUNTRY ,t.addr_type addr_type -- ADDRESSTYP_COD
from (select * from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m on
m.vpd_country = t.vpd_country ; and
m.key_type = :"SYS_B_05" and
m.target_value = t.id_org_addr ; where
coalesce (t.end_date, to_date (:"SYS_B_06", :"SYS_B_07")) >= sysdate)
/*SGRB*/ where vpd_country = :"SYS_B_08" /*EGRB*/
Plan hash value: 3431532430
Id Operation Name Rows Bytes Cost (%CPU) Time Pstart Pstop
0 SELECT STATEMENT 1772 (100)
1 SORT AGGREGATE 1
2 VIEW 3 1772 (1) 00:00:22
3 MINUS
4 SORT UNIQUE 3 492 1146 (1) 00:00:14
* 5 HASH JOIN OUTER 3 492 1145 (1) 00:00:14
6 NESTED LOOPS
7 NESTED LOOPS 3 408 675 (1) 00:00:09
* 8 HASH JOIN 42 4242 625 (1) 00:00:08
9 PARTITION LIST SINGLE 3375 148K 155 (2) 00:00:02 KEY KEY
* 10 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 3375 148K 155 (2) 00:00:02 KEY KEY
* 11 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
* 12 INDEX UNIQUE SCAN UK_ODS_ORG_ADDR 1 1 (0) 00:00:01
* 13 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_ORG_ADDR 1 35 2 (0) 00:00:01 ROWID ROWID
* 14 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 1354K 469 (1) 00:00:06
15 NESTED LOOPS
16 NESTED LOOPS 1 67 9 (12) 00:00:01
17 NESTED LOOPS 1 48 8 (13) 00:00:01
* 18 HASH JOIN 1 23 6 (17) 00:00:01
* 19 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_COUNTRY_SYSTEM 1 11 2 (0) 00:00:01 ROWID ROWID
* 20 INDEX RANGE SCAN PK_ODS_DIVISION_SYSTEM 1 1 (0) 00:00:01
* 21 TABLE ACCESS FULL SY_SOURCE_CODE 8 96 3 (0) 00:00:01
22 TABLE ACCESS BY INDEX ROWID SY_FOREIGN_CODE 1 25 2 (0) 00:00:01
* 23 INDEX RANGE SCAN PK_SY_FOREIGN_CODE 1 1 (0) 00:00:01
* 24 INDEX UNIQUE SCAN PK_SY_TARGET_CODE 1 0 (0)
* 25 TABLE ACCESS BY INDEX ROWID SY_TARGET_CODE 1 19 1 (0) 00:00:01
26 SORT UNIQUE 3375 332K 626 (1) 00:00:08
* 27 HASH JOIN OUTER 3375 332K 625 (1) 00:00:08
28 PARTITION LIST SINGLE 3375 148K 155 (2) 00:00:02 KEY KEY
* 29 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 3375 148K 155 (2) 00:00:02 KEY KEY
* 30 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
Predicate Information (identified by operation id):
5 - access("M"."TARGET_VALUE"="T"."ID_ORG_ADDR" AND "M"."VPD_COUNTRY"="T"."VPD_COUNTRY")
8 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_12 S."ADR_ID_CEGEDIM" :SYS_B_13 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
10 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_14,:SYS_B_15))>=SYSDATE@!)
11 - access("M"."KEY_TYPE"=:SYS_B_11 AND "M"."VPD_COUNTRY"=:SYS_B_16)
12 - access("T"."ID_ORG_ADDR"="M"."TARGET_VALUE")
13 - filter(("T"."VPD_COUNTRY"=:SYS_B_08 AND COALESCE("T"."END_DATE",TO_DATE(:SYS_B_06,:SYS_B_07))>=SYSDATE@!))
14 - access("M"."KEY_TYPE"=:SYS_B_05 AND "M"."VPD_COUNTRY"=:SYS_B_08)
18 - access("CS"."ID_SYSTEM"="SK"."ID_SOURCE_SYSTEM")
19 - filter("CS"."SYSTEM_TYPE"=1)
20 - access("CS"."VPD_COUNTRY"=:B1 AND "CS"."EXP_IMP_TYPE"='I')
filter("CS"."EXP_IMP_TYPE"='I')
21 - filter("SK"."CODE_TYPE"=:SYS_B_18)
23 - access("FK"."ID_SOURCE_CODE"="SK"."ID_SOURCE_CODE" AND "FK"."SOURCE_VALUE"=UPPER(:B1) AND
"CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY")
filter(("FK"."VPD_COUNTRY"=:B1 AND "FK"."SOURCE_VALUE"=UPPER(:B2) AND "CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY"))
24 - access("FK"."ID_TARGET_CODE"="TK"."ID_TARGET_CODE")
25 - filter("TK"."CODE_TYPE"=:SYS_B_19)
27 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_23 S."ADR_ID_CEGEDIM" :SYS_B_24 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
29 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_25,:SYS_B_26))>=SYSDATE@!)
30 - access("M"."KEY_TYPE"=:SYS_B_22 AND "M"."VPD_COUNTRY"=:SYS_B_27)
SQL_ID 862aq599gfd6n, child number 0
insert into okc_compare_results ( datetime
,compare_tables_id ,compare_target
,record_count ,groupby )
select sysdate ,:"SYS_B_00" ,:"SYS_B_01"
,count (:"SYS_B_02") ,:"SYS_B_03" groupby from ( (select
:"SYS_B_04" compare_type ,t.id_org_addr id_org_addr
-- ID_ORG_ADDR ,t.vpd_country vpd_country --
CTL_COUNTRY ,t.addr_type addr_type -- ADDRESSTYP_COD
from (select * from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m on
m.vpd_country = t.vpd_country ; and
m.key_type = :"SYS_B_05" and
m.target_value = t.id_org_addr
Plan hash value: 3531428851
Id Operation Name Rows Bytes Cost (%CPU) Time Pstart Pstop
0 INSERT STATEMENT 1646 (100)
1 LOAD TABLE CONVENTIONAL
2 SORT AGGREGATE 1
3 VIEW 1 1646 (1) 00:00:20
4 MINUS
5 SORT UNIQUE 1 163
6 NESTED LOOPS OUTER 1 163 1067 (1) 00:00:13
7 NESTED LOOPS 1 135 599 (1) 00:00:08
* 8 HASH JOIN 19 1919 577 (2) 00:00:07
9 PARTITION LIST SINGLE 1535 69075 107 (4) 00:00:02 KEY KEY
* 10 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 1535 69075 107 (4) 00:00:02 KEY KEY
* 11 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
* 12 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_ORG_ADDR 1 34 2 (0) 00:00:01 ROWID ROWID
* 13 INDEX UNIQUE SCAN UK_ODS_ORG_ADDR 25 1 (0) 00:00:01
* 14 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 1 28 468 (1) 00:00:06
15 NESTED LOOPS
16 NESTED LOOPS 1 67 8 (0) 00:00:01
17 NESTED LOOPS 1 48 7 (0) 00:00:01
18 NESTED LOOPS 1 23 5 (0) 00:00:01
* 19 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_COUNTRY_SYSTEM 1 11 2 (0) 00:00:01 ROWID ROWID
* 20 INDEX RANGE SCAN PK_ODS_DIVISION_SYSTEM 1 1 (0) 00:00:01
* 21 TABLE ACCESS FULL SY_SOURCE_CODE 1 12 3 (0) 00:00:01
22 TABLE ACCESS BY INDEX ROWID SY_FOREIGN_CODE 1 25 2 (0) 00:00:01
* 23 INDEX RANGE SCAN PK_SY_FOREIGN_CODE 1 1 (0) 00:00:01
* 24 INDEX UNIQUE SCAN PK_SY_TARGET_CODE 1 0 (0)
* 25 TABLE ACCESS BY INDEX ROWID SY_TARGET_CODE 1 19 1 (0) 00:00:01
26 SORT UNIQUE 1535 151K
* 27 HASH JOIN OUTER 1535 151K 577 (2) 00:00:07
28 PARTITION LIST SINGLE 1535 69075 107 (4) 00:00:02 KEY KEY
* 29 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 1535 69075 107 (4) 00:00:02 KEY KEY
* 30 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
Predicate Information (identified by operation id):
8 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_12 S."ADR_ID_CEGEDIM" :SYS_B_13 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
10 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_14,:SYS_B_15))>=SYSDATE@!)
11 - access("M"."KEY_TYPE"=:SYS_B_11 AND "M"."VPD_COUNTRY"=:SYS_B_16)
12 - filter((COALESCE("T"."END_DATE",TO_DATE(:SYS_B_06,:SYS_B_07))>=SYSDATE@! AND "T"."VPD_COUNTRY"=:SYS_B_08))
13 - access("T"."ID_ORG_ADDR"="M"."TARGET_VALUE")
14 - access("M"."KEY_TYPE"=:SYS_B_05 AND "M"."VPD_COUNTRY"=:SYS_B_08 AND "M"."TARGET_VALUE"="T"."ID_ORG_ADDR")
filter("M"."TARGET_VALUE"="T"."ID_ORG_ADDR")
19 - filter("CS"."SYSTEM_TYPE"=1)
20 - access("CS"."VPD_COUNTRY"=:B1 AND "CS"."EXP_IMP_TYPE"='I')
filter("CS"."EXP_IMP_TYPE"='I')
21 - filter(("SK"."CODE_TYPE"=:SYS_B_18 AND "CS"."ID_SYSTEM"="SK"."ID_SOURCE_SYSTEM"))
23 - access("FK"."ID_SOURCE_CODE"="SK"."ID_SOURCE_CODE" AND "FK"."SOURCE_VALUE"=UPPER(:B1) AND
"CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY")
filter(("FK"."VPD_COUNTRY"=:B1 AND "FK"."SOURCE_VALUE"=UPPER(:B2) AND "CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY"))
24 - access("FK"."ID_TARGET_CODE"="TK"."ID_TARGET_CODE")
25 - filter("TK"."CODE_TYPE"=:SYS_B_19)
27 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_23 S."ADR_ID_CEGEDIM" :SYS_B_24 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
29 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_25,:SYS_B_26))>=SYSDATE@!)
30 - access("M"."KEY_TYPE"=:SYS_B_22 AND "M"."VPD_COUNTRY"=:SYS_B_27)
Edited by: BluShadow on 20-Jun-2012 10:30
added {noformat}{noformat} tags for readability. Please read {message:id=9360002} and learn to do this yourself.
Yes, all the used tables are analyzed.
Thanks for pointing to the Metalink bug; I have also searched in Metalink, but didn't find this bug.
I have a little bit more information about the problem.
I use the following select (now in a readable format)
select count (1)
from ( (select 'different (target)' compare_type
,t.id_org_addr id_org_addr -- ID_ORG_ADDR
,t.vpd_country vpd_country -- CTL_COUNTRY
,t.addr_type addr_type -- ADDRESSTYP_COD
from (select *
from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m
on m.vpd_country = t.vpd_country
and m.key_type = 'ORGADDR2'
and m.target_value = t.id_org_addr
where coalesce (t.end_date, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where vpd_country = 'NL' /*EGRB*/
) t
where exists
(select null
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s
where t.id_org_addr = s.id_org_addr)
minus
select 'different (target)' compare_type
,s.id_org_addr id_org_addr -- ID_ORG_ADDR
,s.ctl_country vpd_country -- CTL_COUNTRY
, (select to_number (l.target_value)
from okc_code_foreign l
where l.source_code_type = 'TYS'
and l.target_code_type = 'ADDRLINKTYPE'
and l.source_value = upper (s.addresstyp_cod)
and l.vpd_country = s.ctl_country)
addr_type -- ADDRESSTYP_COD
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s))
The select is executed in 813 msecs.
When I execute the same select using execute immediate like:
declare
ln_count number;
begin
execute immediate q'[<select statement>]' into ln_count;
end;
This takes 3:56 minutes to complete.
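One detail worth noting from the cursor dumps above: the :"SYS_B_nn" placeholders mean CURSOR_SHARING is set to FORCE or SIMILAR on this database, so the literals ('NL', 'ORGADDR2', the date masks) are replaced with binds before the optimizer sees them, and the cardinality estimates then depend on bind peeking rather than on the literal values. A quick, session-scoped way to test whether literal replacement explains the 813 ms vs. ~4 min gap (this assumes you have the ALTER SESSION privilege):

```sql
-- check whether literal replacement is active
SHOW PARAMETER cursor_sharing;

-- keep literals intact for this session only, then re-run the
-- EXECUTE IMMEDIATE test; if the runtime drops back to ~1 second,
-- literal replacement is the likely culprit
ALTER SESSION SET cursor_sharing = EXACT;
```

This is a diagnostic sketch, not a recommendation to change the parameter system-wide.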
When I change the second coalesce part (the one within the exists) in the following way:
the part
coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate
is replaced by
s.end_val_dat >= sysdate or s.end_val_dat is null
then the execution time is even faster (560 msecs) in both the plain select and the select using execute immediate.
-
Opening Excel Workbook Fails when run from Scheduled Task on Windows Server 2008 R2
Hi,
I have a little vbs script that instantiates the Excel.Application object and then opens a work book to perform some tasks on it. The script runs fine when run from the command line. When I attempt to run it as a scheduled task (it is supposed to update
data that is pulled from a SQL Server at regular intervals), it fails with the following error:
Microsoft Office Excel cannot access the file 'c:\test\SampleWorkbook.xlsm'. There are several possible reasons: .....
The file does exist. The path reported in the error is correct. The account under which the task is running is the same account I use to run it from the command line. User Account Control is not enabled, and the task is set up to run with highest privileges.
When I run the same script through the Task Scheduler from a Windows Server 2003 machine, it works without issue.
I was just wondering if somebody on this forum has run into a similar issue in connection with Windows Server 2008 R2 and figured out what the magic trick is to make it work. I'm sure it is rights related, but I haven't quite figured out which rights
are missing.
Thanks in advance for any advice you may have.
This is truly killing me ... trying to get it working on Windows Server 2012 without success.
I desperately need to automate running Excel macros in a "headless" environment, that is non-interactive, non-GUI, etc.
I can get it to work using Excel.Application COM, either via VBScript or Powershell, successfully on many other Windows systems in our environment - Windows Server 2008 R2, Windows 7 (32-bit), etc., -BUT-
The two servers we built out for running our automation process are Windows Server 2012 (SE) - and it just refuses to run on the 2012 servers - it gives the messages below from VBScript and PowerShell, respectively-
I have tried uninstalling and re-installing several different versions of Microsoft Excel (2007 Standard, 2010 Standard, 2010 Professional Plus, 32-bit vs. 64-bit, etc.), but it makes no difference.
Would be extremely grateful if any one out there has had any success in running Excel automation on Server 2012 in a non-interactive environment that they could share.
( I have tried adding the "%windir%\syswow64\config\systemprofile\desktop"
folder, which did fix the issue for me when testing on Windows Server 2008 R2, but sadly did not resolve it on Windows Server 2012 )
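For what it's worth, the systemprofile Desktop workaround described just above generally has to exist under both system directories on a 64-bit OS, because which path Excel's COM server checks depends on the Office bitness. A sketch, run from an elevated command prompt (standard paths assumed; adjust if %windir% differs):

```shell
:: desktop folder checked by 64-bit Office
mkdir "%windir%\System32\config\systemprofile\Desktop"
:: desktop folder checked by 32-bit Office on a 64-bit OS
mkdir "%windir%\SysWOW64\config\systemprofile\Desktop"
```

Note that Microsoft considers server-side Office automation unsupported in non-interactive sessions, so even with the folders in place behavior can differ between Windows versions, as seen here.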
[VBScript error msg]
Z:\TestExcelMacro.vbs(35, 1) Microsoft Office Excel: Microsoft Office Excel cannot
access the file 'Z:\TestExcelMacro.xlsm'. There are several possible reasons:
• The file name or path does not exist.
• The file is being used by another program.
• The workbook you are trying to save has the same name as a currently open work
[Powershell error msg]
Exception calling "Add" with "0" argument(s): "Microsoft Office Excel cannot open or save any more documents because there is not enough available memory or disk space.
To make more memory available, close workbooks or programs you no longer need.
To free disk space, delete files you no longer need from the disk you are saving to."
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : ComMethodTargetInvocation
You cannot call a method on a null-valued expression.
+ CategoryInfo : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : InvokeMethodOnNull
-
Performance issues when creating a Report / Query in Discoverer
Hi forum,
Hope you can help; it involves a performance issue when creating a Report / Query.
I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back batches whose Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
Please see attached the SQL Inspector Plan:
Before Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
AND-EQUAL
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_N1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
After Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
TABLE ACCESS FULL GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
Many thanks,
Lance
Hi Rod,
I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 mins. To test, I changed it to (Batch Status||'' = 'Unposted') and the query was returned within seconds again.
I've been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
I think the problem is with the column using DECODE. When querying the column in TOAD, the value of P is returned. But in Discoverer the condition is done on the value Posted. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
Lance
DECODE( JOURNAL_BATCH1.STATUS,
'+', 'Unable to validate or create CTA',
'+*', 'Was unable to validate or create CTA',
'-','Invalid or inactive rounding differences account in journal entry',
'-*', 'Modified invalid or inactive rounding differences account in journal entry',
'<', 'Showing sequence assignment failure',
'<*', 'Was showing sequence assignment failure',
'>', 'Showing cutoff rule violation',
'>*', 'Was showing cutoff rule violation',
'A', 'Journal batch failed funds reservation',
'A*', 'Journal batch previously failed funds reservation',
'AU', 'Showing batch with unopened period',
'B', 'Showing batch control total violation',
'B*', 'Was showing batch control total violation',
'BF', 'Showing batch with frozen or inactive budget',
'BU', 'Showing batch with unopened budget year',
'C', 'Showing unopened reporting period',
'C*', 'Was showing unopened reporting period',
'D', 'Selected for posting to an unopened period',
'D*', 'Was selected for posting to an unopened period',
'E', 'Showing no journal entries for this batch',
'E*', 'Was showing no journal entries for this batch',
'EU', 'Showing batch with unopened encumbrance year',
'F', 'Showing unopened reporting encumbrance year',
'F*', 'Was showing unopened reporting encumbrance year',
'G', 'Showing journal entry with invalid or inactive suspense account',
'G*', 'Was showing journal entry with invalid or inactive suspense account',
'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
'I', 'In the process of being posted',
'J', 'Showing journal control total violation',
'J*', 'Was showing journal control total violation',
'K', 'Showing unbalanced intercompany journal entry',
'K*', 'Was showing unbalanced intercompany journal entry',
'L', 'Showing unbalanced journal entry by account category',
'L*', 'Was showing unbalanced journal entry by account category',
'M', 'Showing multiple problems preventing posting of batch',
'M*', 'Was showing multiple problems preventing posting of batch',
'N', 'Journal produced error during intercompany balance processing',
'N*', 'Journal produced error during intercompany balance processing',
'O', 'Unable to convert amounts into reporting currency',
'O*', 'Was unable to convert amounts into reporting currency',
'P', 'Posted',
'Q', 'Showing untaxed journal entry',
'Q*', 'Was showing untaxed journal entry',
'R', 'Showing unbalanced encumbrance entry without reserve account',
'R*', 'Was showing unbalanced encumbrance entry without reserve account',
'S', 'Already selected for posting',
'T', 'Showing invalid period and conversion information for this batch',
'T*', 'Was showing invalid period and conversion information for this batch',
'U', 'Unposted',
'V', 'Journal batch is unapproved',
'V*', 'Journal batch was unapproved',
'W', 'Showing an encumbrance journal entry with no encumbrance type',
'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
'X', 'Showing an unbalanced journal entry but suspense not allowed',
'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
'Z', 'Showing invalid journal entry lines or no journal entry lines',
'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ), -
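One workaround worth trying for the DECODE'd status column in the Discoverer thread above: filter on the raw STATUS code rather than the decoded label, so the database can use an index on STATUS instead of evaluating the DECODE for every row. Per the mapping above, 'Posted' corresponds to code 'P'. A sketch against the underlying table (table and column names taken from the execution plans and DECODE list above; adapt to the actual Discoverer folder item):

```sql
-- count posted batches by filtering on the raw code;
-- 'P' decodes to 'Posted' in the mapping above
SELECT COUNT(*)
FROM gl.gl_je_batches b
WHERE b.status = 'P';
```

In Discoverer terms this would mean exposing the raw STATUS column as a folder item and conditioning on that, rather than on the DECODE expression.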
Error when running Stellarium with Intel GPU
I'm trying to run Stellarium on a 2009 desktop computer with an Intel GPU, but I get the following error message:
Your OpenGL subsystem has problems. See log for details. Ignore and suppress this notice in the future and try to continue in degraded mode anyway?
This is the log file:
2015-02-02T20:03:51
Linux version 3.18.4-1-ARCH (builduser@tobias) (gcc version 4.9.2 20141224 (prerelease) (GCC) ) #1 SMP PREEMPT Tue Jan 27 20:45:02 CET 2015
Compiled using GCC 4.9.2
Qt runtime version: 5.4.0
Qt compilation version: 5.4.0
Addressing mode: 64-bit
MemTotal: 8101092 kB
MemFree: 154528 kB
MemAvailable: 4217808 kB
SwapTotal: 0 kB
model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
cpu MHz : 2997.000
model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
cpu MHz : 2997.000
00:02.0 VGA compatible controller: Intel Corporation 82Q35 Express Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller])
Kernel driver in use: i915
stellarium
[ This is Stellarium 0.13.2 - http://www.stellarium.org ]
[ Copyright (C) 2000-2014 Fabien Chereau et al ]
Writing log file to: "/home/fturco/.stellarium/log.txt"
File search paths:
0 . "/home/fturco/.stellarium"
1 . "/usr/share/stellarium"
Config file "/home/fturco/.stellarium/config.ini" does not exist. Copying the default file.
Config file is: "/home/fturco/.stellarium/config.ini"
Detected: OpenGL "2.1"
Driver version string: "2.1 Mesa 10.4.3"
GL vendor is "Intel Open Source Technology Center"
GL renderer is "Mesa DRI Intel(R) Q35 "
GL Shading Language version is "1.20"
MESA Version Number after parsing: 10.4
Mesa version is fine, we should not see a graphics problem.
GLSL Version Number after parsing: 1.2
This is not enough: we need GLSL1.30 or later.
You should update graphics drivers, graphics hardware, or use the MESA version.
Else, please try to use an older version like 0.12.4, and try there with --safe-mode
You can try to run in an unsupported degraded mode by ignoring the warning and continuing.
But more than likely problems will persist.
Aborting due to OpenGL/GLSL version problems.
I'm using xf86-video-intel version 2.99.917-1 at the moment.
$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 82Q35 Express Integrated Graphics Controller (rev 02)
I also get extremely slow performance when running the 3D game Xonotic, but I don't know if it's due to the same problem. Any suggestions?
It seems that my GPU (Intel GMA 3100) is not supported anymore. See https://answers.launchpad.net/stellarium/+faq/2570: "Graphics cards which are no longer supported by Stellarium 0.13 and later include early ATI/AMD Radeon cards up to and including the Xxxx series (built 2004/05), NVidia up to GeForce FXxxx (2003/04), and Intel GMA before X3000, unfortunately also including the popular Atom-based netbooks of 2010."
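For anyone hitting the same abort, the GLSL ceiling can be confirmed without starting Stellarium using the Mesa demo tools (glxinfo is in the mesa-demos package on Arch; the package name is an assumption on other distros):

```shell
# query the GLSL version the driver actually exposes
glxinfo | grep -i "shading language"
# Stellarium 0.13+ requires GLSL 1.30 or later; per the log above,
# the Q35 / GMA 3100 driver reports only 1.20
```

If the reported version is below 1.30, the only options are the ones the log itself suggests: a newer GPU/driver, a software (llvmpipe) Mesa renderer, or an older Stellarium release such as 0.12.4.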
-
Missing Sub VI error when running Application.exe in development PC
Hi,
I have created an application (exe) on my computer, which is also where I designed the different blocks of code for this application.
All the blocks of code work just fine when run through the LabVIEW 2013 development software. After adding all the VI files associated with the project in the project explorer of a new project and configuring the build specifications, I was able to create the executable. But when I try running this executable/application (.exe) file, I get the missing sub VI message. I found that some of these missing sub VIs are the instrument driver VIs, which work absolutely fine when run through the LabVIEW development software.
How can I resolve this issue?
Attachment "Error Screenshot" shows the Missing sub VI message.
Attachment "Project & Error Screenshot" shows the Missing sub VI message along with the list of dependencies in the project, which shows the error being displayed even though the sub VI is present in the dependency list.
Any comments in this regard are highly appreciated.
Regards,
Vivek
Attachments:
Error Screenshot.jpg 257 KB
Project & Error Screenshot.jpg 360 KB
vivek.madhavan.13 wrote:
Thanks for your comment Jeff,
But what about 'subBuildXYGraph.vi', 'Waveform Array To Dynamic.vi' and other such VIs in the attached error message that are not part of the Rohde and Schwarz drivers but are included in the vi.lib folder? Ideally, LabVIEW must be able to find/trace at least these VIs while performing the build procedure, right?
Vivek
If they are (non-dynamically called) dependencies of always-included dynamic calls, yes, the build will include them.
I hate to say it but, start with the RS dynamic VIs and keep adding if that merely helps reduce the list of the missing VIs.
Jeff