Configure number of sessions handled for the same SQL
Hi experts,
There is one report that fires only one SQL query. When I run this report in a single Internet Explorer window, it takes 3 minutes to return.
I then ran the same report from 3 Internet Explorer windows, but not at the same time:
first, I ran the report in the first window;
then, after 10 seconds, I ran the report in the second window;
then, after another 10 seconds, I ran the report in the third window.
After the results appeared in all three windows, I checked "Manage Sessions" and saw only 2 sessions for the same report.
How can I configure the number of sessions handled for the same SQL in OBIEE?
Thank you very much!
shipon_97 wrote:
Here I see that at a particular point in time, the number of sessions in v$session is 271, whereas the number of sessions in v$resource_limit is 280, so the counts differ.
Would anybody please tell me why the number of sessions differs between v$session and v$resource_limit?
The v$session view shows current sessions (which change rapidly), while v$resource_limit shows the maximum resource utilization, like a high-water mark.
http://download-east.oracle.com/docs/cd/A97630_01/server.920/a96536/ch3155.htm
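The distinction can be seen directly with the two views (column names as in the Oracle reference linked above):

```sql
-- Point-in-time count: changes from moment to moment
SELECT COUNT(*) AS current_sessions FROM v$session;

-- High-water mark since instance startup, plus the configured limit
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name = 'sessions';
```

max_utilization can exceed the current v$session count because sessions that have since disconnected still count toward the high-water mark.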
Similar Messages
-
Different 'execution plans' for same sql in 10R2
DB=10.2.0.5
OS=RHEL 3
I'm not sure about this, but I am seeing different plans for the same SQL.
select sql_text from v$sqlarea where sql_id='92mb4z83fg4st'; <---TOP SQL from AWR
SELECT /*+ OPAQUE_TRANSFORM */ "ENDUSERID","LASTLOGINATTEMPTTIMESTAMP","LOGINSOURCECD","LOGINSUCCESSFLG",
"ENDUSERLOGINATTEMPTHISTORYID","VERSION_NUM","CREATEDATE"
FROM "BOMB"."ENDUSERLOGINATTEMPTHISTORY" "ENDUSERLOGINATTEMPTHISTORY";
SQL> set autotrace traceonly
SQL> SELECT /*+ OPAQUE_TRANSFORM */ "ENDUSERID","LASTLOGINATTEMPTTIMESTAMP","LOGINSOURCECD","LOGINSUCCESSFLG",
"ENDUSERLOGINATTEMPTHISTORYID","VERSION_NUM","CREATEDATE"
FROM "BOMB"."ENDUSERLOGINATTEMPTHISTORY" "ENDUSERLOGINATTEMPTHISTORY";
1822203 rows selected.
Execution Plan
Plan hash value: 568996432
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1803K| 75M| 2919 (2)| 00:00:36 |
| 1 | TABLE ACCESS FULL| ENDUSERLOGINATTEMPTHISTORY | 1803K| 75M| 2919 (2)| 00:00:36 |
Statistics
0 recursive calls
0 db block gets
133793 consistent gets
0 physical reads
0 redo size
76637183 bytes sent via SQL*Net to client
1336772 bytes received via SQL*Net from client
121482 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1822203 rows processed
===================================== another plan ===============
SQL> select * from TABLE(dbms_xplan.display_awr('92mb4z83fg4st'));
15 rows selected.
Execution Plan
Plan hash value: 3015018810
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | COLLECTION ITERATOR PICKLER FETCH| DISPLAY_AWR |
Note
- rule based optimizer used (consider using cbo)
Statistics
24 recursive calls
24 db block gets
49 consistent gets
0 physical reads
0 redo size
1529 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
15 rows processed
========= The second one shows only 15 rows...
Which one is correct? Understood: the second plan is for the dbms_xplan query itself.
Anyhow, I opened a new session where I did NOT turn autotrace on, but the plan is somewhat different from the original.
SQL> /
PLAN_TABLE_OUTPUT
SQL_ID 92mb4z83fg4st
SELECT /*+ OPAQUE_TRANSFORM */ "ENDUSERID","LASTLOGINATTEMPTTIMESTAMP","LOGINSOURCECD","
LOGINSUCCESSFLG","ENDUSERLOGINATTEMPTHISTORYID","VERSION_NUM","CREATEDATE" FROM
"BOMB"."ENDUSERLOGINATTEMPTHISTORY" "ENDUSERLOGINATTEMPTHISTORY"
Plan hash value: 568996432
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | | | 2919 (100)| |
| 1 | TABLE ACCESS FULL| ENDUSERLOGINATTEMPTHISTORY | 1803K| 75M| 2919 (2)| 00:00:36 |
15 rows selected.
I am just wondering which plan is accurate and which one I should believe. -
Implementation of session handling for using web services
Hi,
I would like to use session handling in web services on the ABAP stack, in order to start a session with a user-login function call, followed by other RFC calls, until a user logout. So far, I have found only the following help note in the SAP online help:
Interface Profile
In the interface profile, choose the required processing type: Stateful or Stateless.
A stateful service retains its state within the framework of an HTTP session throughout several calls from the same service consumer. The standard value for services is Stateless. If you require stateful communication, you can choose Stateful instead.
[http://help.sap.com/saphelp_nwpi71/helpdata/de/45/25291b5a2657c0e10000000a1553f7/content.htm |http://help.sap.com/saphelp_nwpi71/helpdata/en/45/25291b5a2657c0e10000000a1553f7/content.htm]
Please, could someone explain the further steps required for SAP's session-handling concept? Just setting the status to Stateful is not the whole solution...
Regards,
Jens
Now I found the possible scenarios suggested by SAP Help regarding security for Web Services ([http://help.sap.com/saphelp_nw73/helpdata/en/48/8ebbba66be06b2e10000000a42189b/content.htm|http://help.sap.com/saphelp_nw73/helpdata/en/48/8ebbba66be06b2e10000000a42189b/content.htm]):
- SAML & WS SecureConversation -> SSO
- WS Security UsernameToken & WS SecureConversation
- User ID and Password in HTTP Header & HTTPS
- SAP Authentication Assertion Ticket & HTTPS -> SSO
- X.509 SSL Client Certificate through HTTPS
- WS Security: X.509 Certificate Authentication at Message Level
Are the scenarios with SSO the solution for creating sessions? -
Different plan for same SQL in different version of DB
Hello,
At the client site I upgraded the database from 11.2.0.1.0 to 11.2.0.2.0, and since then I have been facing a performance issue.
Could this be caused by the upgrade?
For testing, I created two databases of the different versions (i.e. 11.2.0.1.0 and 11.2.0.2.0) with the same data volume and the same parameters. When I execute the same query, I get different plans.
Please help me to resolve this issue.
Which issue? The CBO is upgraded with every new release! So yes, execution plans may change because of that.
And sensible DBA's first upgrade a test database.
Also no statement is to be seen.
Secondly, you can always set optimizer_features_enable to the lower release.
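Setting that parameter can be sketched as follows (the release value shown matches the pre-upgrade version mentioned above):

```sql
-- Make the 11.2.0.2 optimizer behave like the 11.2.0.1 optimizer,
-- either instance-wide or for a single test session:
ALTER SYSTEM  SET optimizer_features_enable = '11.2.0.1' SCOPE = BOTH;
ALTER SESSION SET optimizer_features_enable = '11.2.0.1';
```

Testing the session-level setting first is the safer way to confirm the plan regression is optimizer-related before changing it system-wide.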
Finally, please look up 'upgradation' in an English dictionary. You won't find it!!!
Sybrand Bakker
Senior Oracle DBA -
Multiple environment handles for same file directory in DPL
Hi,
I have a standalone Berkeley DB created with the DPL. When one application is already running and accessing the DB, and I try to start another application accessing the same DB, I get an EnvironmentLockedException. I went through the articles on FileManager and EnvironmentImpl but could not understand how to handle this.
Please suggest.
This is not related to the DPL. Please see the link Linda mentioned.
--mark -
Many sessions simultaneously executing a sql statement ?
Hi gurus,
I'm in a situation which affects system performance. When I monitor current activity, I notice that many sessions are executing the same SQL statement.
SPID SQL_ID
16133 5ptuft7h7y8jk
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
21385 9yayt7t5wsv07
It's up to 70-80 sessions executing the same SQL. Are there any ideas or suggestions for my case?
So thanks
Ch
You could look at the SQL (from V$SQLSTATS, V$SQL or V$SQLAREA), determine what it is doing, and then get some clues as to why one database has multiple executions of that SQL. Most developer leads would be able to say "ah ... this SQL is executed by this module in this part of the application when the user chooses this option". Then you figure out why you have multiple users choosing that option.
OR there could be dependencies between different statements or enqueue waits -- and multiple sessions waiting on the same lock !
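Those look-ups might be sketched like this, using the sql_id from the listing above and the standard V$ views:

```sql
-- What is the statement, and how often has it run?
SELECT sql_id, executions, sql_text
FROM   v$sqlstats
WHERE  sql_id = '9yayt7t5wsv07';

-- Who is running it right now, and is anyone blocked by another session?
SELECT sid, username, event, blocking_session
FROM   v$session
WHERE  sql_id = '9yayt7t5wsv07';
```

A non-null blocking_session across many rows would point at the enqueue-wait scenario rather than genuinely independent executions.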
Hemant K Chitale -
Revision: 4258
Author: [email protected]
Date: 2008-12-08 16:33:17 -0800 (Mon, 08 Dec 2008)
Log Message:
Bug: LCDS-522 - Add more configurable reconnect handling for connecting up again over the same channel when there is a connection failure/outage.
QA: Yes
Doc: No
Checkintests Passes: Yes
Details:
* Updates to configuration handling code and MXMLC code-gen to support new long-duration reliable reconnect setting.
Ticket Links:
http://bugs.adobe.com/jira/browse/LCDS-522
Modified Paths:
blazeds/trunk/modules/common/src/flex/messaging/config/ClientConfiguration.java
blazeds/trunk/modules/common/src/flex/messaging/config/ClientConfigurationParser.java
blazeds/trunk/modules/common/src/flex/messaging/config/ConfigurationConstants.java
blazeds/trunk/modules/common/src/flex/messaging/config/ServicesDependencies.java
blazeds/trunk/modules/common/src/flex/messaging/errors.properties
Added Paths:
blazeds/trunk/modules/common/src/flex/messaging/config/FlexClientSettings.java
Removed Paths:
blazeds/trunk/modules/core/src/flex/messaging/config/FlexClientSettings.java
-
Possibility to register Pre-/Post-Procedures for an SQL Template Handler
I would appreciate the possibility to register pre-/post-procedures for an SQL template handler in ORDS 3.0.
Why:
We use Oracle VPD/Row-Level-Security to secure data access. Hence a trigger sets a couple of attributes in the database session context at login time which are then used in static RLS predicates to limit which records the user can see/modify.
With ORDS 3.0 all sessions are opened under the same technical user (e.g. APEX_REST_PUBLIC_USER), hence all users have the same/no attributes in the session context and could see/modify all data.
To avoid this situation, I need to set the attributes (e.g. the authenticated user) in the database session context before the actual query/plsql handler is executed.
Also, resetting the session context after the handler is executed would be good.
This scenario is in line with scenarios 'One Big Application User' and 'Web-based applications' in http://docs.oracle.com/cd/B28359_01/network.111/b28531/vpd.htm#DBSEG98291.
Different solution approach:
Kris suggested to write a PL/SQL handler where the pre-procedure is called before the business logic procedure/query. This is ok for me as long as I modify data and only need to return no or little data.
As soon as I need to return a lot of data (e.g. select c1, c19, c30 from t1), this approach will force me to write a lot of code in the PL/SQL handler in order to marshal the c1, c19 and c30 to JSON and put it in the HTTP response.
But this is the beauty of ORDS: I can simply define a template, write a GET SQL handler 'select c1, c19, c30 from t1', and have the data available as a REST service, without writing any code to put JSON-marshaled data in the HTTP response.
I tried to log the request at Oracle REST Data Services (ORDS) but I could only start a new discussion: Possibility to register Pre-/Post-Procedures for an SQL Template Handler
As I mentioned there, the PL/SQL handler approach works for me as long as I have no or only little data to send back to the client (e.g. put/post/delete succeeded or an error message why the call failed).
If I need to return a lot of data from the PL/SQL handler I would need to, as far as I understand, to marshal the data to JSON and write it to the response body in the PL/SQL handler.
I don't want to do the marshaling, because ORDS does it better.
However, this works for me:
I write a pipelined table function that takes as input the attributes I need to set in the session context. I can then reference it in the SQL handler:
select * from table(my_pipelined_function(:USER, ....))
Now the JSON/HTTP response is created by ORDS again.
I still needed to code a couple of lines, but it is way better than duplicating the functionality already existing in ORDS.
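A minimal sketch of that pipelined-function workaround could look like the following. All names here (t1, t1_row, t1_tab, my_pipelined_function, my_ctx_pkg) are illustrative, not from ORDS, and in a real VPD setup the session-context attribute must be set through the trusted package associated with the context:

```sql
CREATE OR REPLACE TYPE t1_row AS OBJECT (c1 NUMBER, c19 VARCHAR2(100), c30 DATE);
/
CREATE OR REPLACE TYPE t1_tab AS TABLE OF t1_row;
/
CREATE OR REPLACE FUNCTION my_pipelined_function (p_user IN VARCHAR2)
  RETURN t1_tab PIPELINED
AS
BEGIN
  -- set the session-context attribute that the static RLS predicates rely on
  -- (my_ctx_pkg is the hypothetical trusted package for the VPD context)
  my_ctx_pkg.set_authenticated_user(p_user);
  -- the query now only sees rows the RLS policy allows for p_user
  FOR r IN (SELECT c1, c19, c30 FROM t1) LOOP
    PIPE ROW (t1_row(r.c1, r.c19, r.c30));
  END LOOP;
  RETURN;
END;
/
```

With this in place, ORDS marshals the result of the SQL handler to JSON as usual, and no hand-written response code is needed.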
With the hooks it would be perfect because I would not have to write any code (apart from the procedure to set the session context attributes), just configure the REST services in ORDS. -
How to configure Send Handler for BizTalk 2013 Dynamic Send Port on deployment?
Hi,
I do know how to manually configure a send handler for a dynamic send port in BizTalk 2013 Administration console. Though, once you export your application's configuration to a binding file, the dynamic send port's configuration does not
contain any information regarding the send handler. When you try to use this binding file when deploying your application your dynamic port's send handler falls back to the default host instance.
So my question is, how could we automate this process to avoid manual step in a dynamic port configuration during deployment?
Thank you,
--Vlad
Hey Vlad,
As discussed at work in the office, I would take the PowerShell approach for now as a workaround. Here's a trivial script for my local dev box (all-in-one BizTalk & SQL); it must be run in an x86 PowerShell session:
param(
    [string] $bizTalkDbServer = ".",
    [string] $bizTalkDbName = "BizTalkMgmtDb",
    [string] $fileHostInstance = "SendingHost",
    [string] $sendPortName = "sm_dynamic_sp_test"
)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.BizTalk.ExplorerOM") | Out-Null
$catalog = New-Object Microsoft.BizTalk.ExplorerOM.BtsCatalogExplorer
$catalog.ConnectionString = "SERVER=$bizTalkDbServer;DATABASE=$bizTalkDbName;Integrated Security=SSPI"
foreach ($sp in $catalog.SendPorts) {
    if ($sp.Name -eq $sendPortName) {
        "Found send port $($sp.Name), analyzing send handler"
        foreach ($sh in $sp.DynamicSendHandlers) {
            if ($sh.SendHandler.TransportType.Name -eq "FILE") {
                if ($sh.SendHandler.Host.Name -ne $fileHostInstance) {
                    "Changing $($sh.Name) send handler to '$fileHostInstance' from '$($sh.SendHandler.Host.Name)'"
                    $sp.SetSendHandler("FILE", $fileHostInstance)
                } else {
                    "Send handler for $($sp.Name) is already '$fileHostInstance', ignoring .."
                }
            }
        }
    }
}
$catalog.SaveChanges() -
Multiple Executions Plans for the same SQL statement
Dear experts,
awrsqrpt.sql is showing multiple execution plans for a single SQL statement. How is it possible that one SQL statement has multiple execution plans within the same AWR report?
Below is the awrsqrpt's output for your reference.
WORKLOAD REPOSITORY SQL Report
Snapshot Period Summary
DB Name DB Id Instance Inst Num Release RAC Host
TESTDB 2157605839 TESTDB1 1 10.2.0.3.0 YES testhost1
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 32541 11-Oct-08 21:00:13 248 141.1
End Snap: 32542 11-Oct-08 21:15:06 245 143.4
Elapsed: 14.88 (mins)
DB Time: 12.18 (mins)
SQL Summary DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
Elapsed
SQL Id Time (ms)
51szt7b736bmg 25,131
Module: SQL*Plus
UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL,0) + NVL(ACCT_DR_BAL,
0)) FROM ACCT WHERE ACCT_TRN_DT = (:B1 ) AND TEST_ACC_NB = ACCT_ACC_NB(+)) WHERE
TEST_BATCH_DT = (:B1 )
SQL ID: 51szt7b736bmg DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
-> 1st Capture and Last Capture Snap IDs
refer to Snapshot IDs within the snapshot range
-> UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL,0) + NVL(AC...
Plan Hash Total Elapsed 1st Capture Last Capture
# Value Time(ms) Executions Snap ID Snap ID
1 2960830398 25,131 1 32542 32542
2 3834848140 0 0 32542 32542
Plan 1(PHV: 2960830398)
Plan Statistics DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 25,131 25,130.7 3.4
CPU Time (ms) 23,270 23,270.2 3.9
Executions 1 N/A N/A
Buffer Gets 2,626,166 2,626,166.0 14.6
Disk Reads 305 305.0 0.3
Parse Calls 1 1.0 0.0
Rows 371,735 371,735.0 N/A
User I/O Wait Time (ms) 564 N/A N/A
Cluster Wait Time (ms) 0 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 0 N/A N/A
Invalidations 0 N/A N/A
Version Count 2 N/A N/A
Sharable Mem(KB) 26 N/A N/A
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | | | 1110 (100)| |
| 1 | UPDATE | TEST | | | | |
| 2 | TABLE ACCESS FULL | TEST | 116K| 2740K| 1110 (2)| 00:00:14 |
| 3 | TABLE ACCESS BY INDEX ROWID| ACCT | 1 | 26 | 5 (0)| 00:00:01 |
| 4 | INDEX RANGE SCAN | ACCT_DT_ACC_IDX | 1 | | 4 (0)| 00:00:01 |
Plan 2(PHV: 3834848140)
Plan Statistics DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 0 N/A 0.0
CPU Time (ms) 0 N/A 0.0
Executions 0 N/A N/A
Buffer Gets 0 N/A 0.0
Disk Reads 0 N/A 0.0
Parse Calls 0 N/A 0.0
Rows 0 N/A N/A
User I/O Wait Time (ms) 0 N/A N/A
Cluster Wait Time (ms) 0 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 0 N/A N/A
Invalidations 0 N/A N/A
Version Count 2 N/A N/A
Sharable Mem(KB) 26 N/A N/A
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | | | 2 (100)| |
| 1 | UPDATE | TEST | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| TEST | 1 | 28 | 2 (0)| 00:00:01 |
| 3 | INDEX RANGE SCAN | TEST_DT_IND | 1 | | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| ACCT | 1 | 26 | 4 (0)| 00:00:01 |
| 5 | INDEX RANGE SCAN | INDX_ACCT_DT | 1 | | 3 (0)| 00:00:01 |
Full SQL Text
SQL ID SQL Text
51szt7b736bm UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL, 0) +
NVL(ACCT_DR_BAL, 0)) FROM ACCT WHERE ACCT_TRN_DT = (:B1 ) AND PB
RN_ACC_NB = ACCT_ACC_NB(+)) WHERE TEST_BATCH_DT = (:B1 )
Your input is highly appreciated.
Thanks for taking the time to answer my question.
Regards
Oracle Lover3 wrote:
Dear experts,
awrsqrpt.sql is showing multiple execution plans for a single SQL statement. How is it possible that one SQL statement has multiple execution plans within the same AWR report?
If you're using bind variables and you have histograms on your columns, which can be created by default in 10g due to the "SIZE AUTO" default "method_opt" parameter of DBMS_STATS.GATHER__STATS, it is quite normal to get different execution plans for the same SQL statement. Depending on the values passed when the statement is hard parsed (this feature is called "bind variable peeking" and has been enabled by default since 9i), an execution plan is determined and re-used for all further executions of the same "shared" SQL statement.
If your statement now ages out of the shared pool, or is invalidated by some DDL or statistics-gathering activity, it will be re-parsed, and again the values passed at that particular moment will determine the execution plan. If you have a skewed data distribution and a histogram in place that reflects that skewness, you may get different execution plans depending on the actual values used.
Since this "flip-flop" behaviour can sometimes be counter-productive, if you're unlucky and the values used to hard parse the statement lead to a plan that is unsuitable for the majority of values used afterwards, 11g introduced "adaptive" cursor sharing, which attempts to detect such a situation and can automatically re-evaluate the execution plan of the statement.
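One way to see this in the library cache, using the sql_id from the report above and the standard V$SQL view:

```sql
-- Each child cursor of a statement may carry a different plan_hash_value;
-- invalidations shows how often the cursor had to be re-parsed
SELECT sql_id, child_number, plan_hash_value, executions, invalidations
FROM   v$sql
WHERE  sql_id = '51szt7b736bmg';
```

Two rows with different plan_hash_value entries would correspond to the two plans the AWR SQL report captured.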
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Problems in configuring iAS DAD for PL/SQL-oranl8.dll
I configured the DAD to use the PL/SQL toolkit. I tried the SQL connection and it seems to work correctly. But when I try to browse the objects in our schema, the Apache server pops up an error, "Entry point not found", saying: The procedure entry point snlpcgtsrvbynm could not be located in the dll oranl8.dll.
The web server iAS9 is running on Win2000 and the DB is Oracle8i Release 3, also running on Win2000. This seems to be a problem of a missing "Get Server by Name" function call in the dynamic library. Has anyone faced the same problem, or can anyone show me how to fix this? Is there any patch available for this? If so, please let me know.
Thanks
Nat!
Exactly the same here. It happened after I upgraded the database from 8.1.7.1.4 to 8.1.7.3.
The exact message I'm getting (on the server -> Apache.exe)is
'The procedure entry point nlepeget could not be located in the dynamic link library oranl8.dll'.
In the browser I get the folowing text:
'Proxy log On failed.
Please verify that you have specified correct connectivity information i.e.username, password & connect-string in the Database Access Descriptor
Error-Code:12538
Error TimeStamp:Wed, 13 Mar 2002 16:42:59 GMT
Database Log In Failed
TNS is unable to connect to destination. Invalid TNS address supplied or destination is not listening. This error can also occur because of underlying network transport problems.
Verify that the TNS name in the connectstring entry of the DAD for this URL is valid and the database listener is running. ' -
Error - session handle nor valid for ivi
Hi All
I have posted this question in an existing thread as well, I guess.
I am trying to control a TDK-Lambda power supply through LabVIEW using IVI drivers.
The logical name set in MAX is the same as what I am giving to InitializeWithOptions.vi in my VI. However, I am ending up with this error: "IviDCPwr Initialize With Options.vi<ERR>
Primary Error: (Hex 0xBFFA1190) The session handle is not valid."
I am not sure why this error occurs.
Can anyone help me with this error?
You might be using an IVI-COM driver and trying to use IVI-C class driver VIs without having installed the IVI-COM adapters.
As mentioned in the knowledge base below The IVI-COM adapters are included with IVI Compliance Package 3.2 and later, but must be enabled in the installer's feature tree.
IVI-C Class Driver Support for IVI-COM Specific Drivers
http://digital.ni.com/public.nsf/allkb/5499F9DBD07522F686256F260066BA86?OpenDocument
duplicate post
http://forums.ni.com/t5/Instrument-Control-GPIB-Serial/TDK-Lambda-Power-Supply-error/m-p/3118149#M68... -
Help Configuring Transparent Gateway for Ms Sql Server
I have Installed Oracle 9.2.0.1.0 with Transparent Gateway for Ms Sql Server.
Followed the configuration furnished therein the Documents for Tnsnames.ora & Listener.ora.
The connection to SQL Server 2000 is NOT successful. The trace file contents from Tg4sql are furnished below:
Oracle Corporation --- WEDNESDAY DEC 18 2002 22:32:50.625
Heterogeneous Agent Release
9.2.0.1.0
HS Agent diagnosed error on initial communication,
probable cause is an error in network administration
Network error 2: NCR-00002: NCR: Invalid usage
Note: SQL Server and Oracle Server are on the same machine, running on Windows 2000 Server.
Am I missing something?
TIA
Please guide me; I would appreciate your suggestions to solve this...
TIA
Error in "niDMM Configure Measurement Digits.vi" - The session handle is not valid.
I get the error message below when I run my LabVIEW program as an EXE, but there is no problem when running in the LabVIEW development environment. Why is there a difference between the built EXE and the development environment (it uses the same VIs), and how can I debug this problem?
Running LabVIEW 2010 SP1.
Error message:
Error -1074130544 occurred at niDMM Configure Measurement Digits.vi
Possible reason(s):
The session handle is not valid.
TIA
Bent
Attachments:
ErrorMessage.png 7 KB
Dear All,
I also got this problem while using GPIB to control my function generator (HP33120A).
Here's my error message:
Mode.vi<ERR>Driver Status: (Hex 0xBFFA1190) The session handle is not valid.
And my code is from the modified driver example, as follows:
please help me to figure it out. Thanks a lot!! -
Session handle is not valid in niDMM Configure Measurement Digits
I am trying to communicate with an NI PXI-4065 DMM and I am getting error -1074130544 (niDMM Configure Measurement Digits.vi<ERR>The session handle is not valid) for some reason. Does it have to do with the instrument handle input going into niDMM Configure Measurement Digits.vi, or not?
I obtained the instrument handle after initializing the DMM with niDMM Initialize.vi, so I believe the instrument handle I am using is correct.
Thanks a lot to anyone who can help.
Hi Gjoraas,
I'm curious about how you're assigning the instrument handle: you'll want to make sure that it matches the name in Measurement and Automation Explorer (that name will be found under Devices and Interfaces»DAQmx Devices; you'll want to use the name in quotation marks). Otherwise, is it possible that you are running two programs or projects that both use your DMM? If so, try running only one program at a time. My last suggestion would be to reset the device in Measurement and Automation Explorer, by right-clicking on it and selecting 'reset device'. Hope this helps; let me know how it goes. Have a great day!
aNItaB
Applications Engineer
National Instruments
Digital Multimeters