Performance with SQL?
I have a table named "abc" which has the following columns:
fname varchar2(30)
lname varchar2(30)
salary number(10,2)
and it has ONLY one index, on the "fname" column.
There are about 100,000 rows in it.
SQL1:
select * from abc
where salary>2000 and fname='PETER';
SQL2:
select * from abc
where fname='PETER' and salary>2000;
Question: Which is faster?
a. same performance
b. SQL2 is faster.
c. SQL1 is faster.
Here are two explain plans. The only difference is the order in the where clause. Notice the two plans are identical.
SQL> explain plan for
2 select *
3 from prd.account_master
4 where account_id = 1234567890
5 and principal_balance_amt = 12345.67;
Explained.
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 948 | 2 |
|* 1 | TABLE ACCESS BY INDEX ROWID| ACCOUNT_MASTER | 1 | 948 | 2 |
|* 2 | INDEX UNIQUE SCAN | PK_ACCOUNT_MASTER_ACCOUNT_ID | 3657K| | 1 |
Predicate Information (identified by operation id):
1 - filter("ACCOUNT_MASTER"."PRINCIPAL_BALANCE_AMT"=12345.67)
2 - access("ACCOUNT_MASTER"."ACCOUNT_ID"=1234567890)
SQL> explain plan for
2 select *
3 from prd.account_master
4 where principal_balance_amt = 12345.67
5 and account_id = 1234567890
6 ;
Explained.
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 948 | 2 |
|* 1 | TABLE ACCESS BY INDEX ROWID| ACCOUNT_MASTER | 1 | 948 | 2 |
|* 2 | INDEX UNIQUE SCAN | PK_ACCOUNT_MASTER_ACCOUNT_ID | 3657K| | 1 |
Predicate Information (identified by operation id):
1 - filter("ACCOUNT_MASTER"."PRINCIPAL_BALANCE_AMT"=12345.67)
2 - access("ACCOUNT_MASTER"."ACCOUNT_ID"=1234567890)
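The same experiment can be repeated on the "abc" table from the question; a minimal sketch (assuming the single index on fname exists as described):

```sql
-- Predicate order in the WHERE clause does not influence the cost-based optimizer:
-- both statements should produce the identical plan (answer a).
EXPLAIN PLAN FOR
  SELECT * FROM abc WHERE salary > 2000 AND fname = 'PETER';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR
  SELECT * FROM abc WHERE fname = 'PETER' AND salary > 2000;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Expected shape of both plans: INDEX RANGE SCAN on the fname index
-- (access predicate FNAME='PETER') plus a filter predicate on SALARY.
```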
Similar Messages
-
B1 and SQL Server 2005 performance with 3.000.000 invoice-lines per year
I would like to know if SAP Business One with SQL Server 2005 could work with the following information:
- 40.000 Business Partners
- 40.000 Invoices per month
- Aprox. 3.000.000 invoice-lines per year
Of course it will be necessary to change some forms in B1.
What do you think?
Do you know any B1 customer working with that amount of data?
> Hi,
>
> I think a good SQL2005 tuning (done by a good DBA)
> will improve performance. Number of records like that
> shouldn't hurt that kind of DB engine...
Hi,
I'm sure that MSSQL 2005 can handle the amount of records & transactions in question. Even MSSQL 2000 can do it. However, any DB engine can be brought to its knees by the combination of a 2-tier application architecture and badly designed queries. B1 is a case in point. I wouldn't go into such a project without decent preliminary load testing and an explicit commitment of support from the SAP B1 dev team.
I have heard from implementation projects where B1 simply couldn't handle the amount of data. I've also participated in some presales cases for B1 where we decided not to take a project because we saw that B1 couldn't handle the amount of data (while the other features of B1 would have been more than enough for the customer). The one you're currently looking at seems like one of those.
Henry -
Performance issue with sql query. Please explain
I have a sql query
A.column1 and B.column1 are indexed.
Query1: select A.column1,A.column3, B.column2 from tableA , tableB where A.column1=B.column1;
query2: select A.column1,A.column3,B.column2,B.column4 from tableA , tableB where A.column1=B.column1;
1. Do both queries take the same time? If not, why?
Both go for the same rows, just with a different number of columns, and since the complete data block is loaded into the database buffer cache, there should be no extra time taken up to this point.
Please tell me if I am wrong.
For me, apart from the bytes actually required, the excess bytes sent via SQL*Net to the client (as well as bytes received via SQL*Net from the client) will make for a chatty network, which will degrade performance.
SQL> COLUMN plan_plus_exp FORMAT A100
SQL> SET LINESIZE 1000
SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT *
2 FROM emp
3 /
14 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=616)
1 0 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=616)
Statistics
1 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
1631 bytes sent via SQL*Net to client
423 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
SQL> SELECT ename
2 FROM emp
3 /
14 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=154)
1 0 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=154)
Statistics
1 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
456 bytes sent via SQL*Net to client
423 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
Khurram -
Performance with dates in the where clause
Performance with dates in the where clause
CREATE TABLE TEST_DATA
FNUMBER NUMBER,
FSTRING VARCHAR2(4000 BYTE),
FDATE DATE
create index t_indx on test_data(fdata);
query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
My questions:
1) Why isn't the index t_indx used in Execution plan 1?
2) From the execution plan, I see that query 2 & 3 is better than query 1. I do not see any difference between execution plan 2 & 3. Which one is better?
3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
Is that true for Execution plan 2 & 3?
3) Could some one explain what the filter & access predicate mean here?
Thanks in advance.
Execution Plan 1:
SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
COUNT(*)
283
Execution Plan
Plan hash value: 1486387033
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | 9 | | |
|* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
Predicate Information (identified by operation id):
2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
Note
- dynamic sampling used for this statement
Statistics
4 recursive calls
0 db block gets
1610 consistent gets
0 physical reads
0 redo size
412 bytes sent via SQL*Net to client
380 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Execution Plan 2:
SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
COUNT(*)
283
Execution Plan
Plan hash value: 1687886199
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 9 | | |
|* 2 | FILTER | | | | | |
|* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259
259259259259259259)
3 - access("FDATE">=TRUNC(SYSDATE@!) AND
"FDATE"<=TRUNC(SYSDATE@!)+.999988425925925925925925925925925925925
9)
Note
- dynamic sampling used for this statement
Statistics
7 recursive calls
0 db block gets
76 consistent gets
0 physical reads
0 redo size
412 bytes sent via SQL*Net to client
380 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Execution Plan 3:
SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
COUNT(*)
283
Execution Plan
Plan hash value: 1687886199
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 9 | | |
|* 2 | FILTER | | | | | |
|* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10
23:59:59','DD-MON-YY hh24:mi:ss'))
3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND
"FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
Note
- dynamic sampling used for this statement
Statistics
7 recursive calls
0 db block gets
76 consistent gets
0 physical reads
0 redo size
412 bytes sent via SQL*Net to client
380 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Hi,
user10541890 wrote:
Performance with dates in the where clause
CREATE TABLE TEST_DATA
FNUMBER NUMBER,
FSTRING VARCHAR2(4000 BYTE),
FDATE DATE
create index t_indx on test_data(fdata);
Did you mean fdate (ending in e)?
Be careful; post the code you're actually running.
query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
My questions:
1) Why isn't the index t_indx used in Execution plan 1?
To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC (fdate), then it might be used in Query 1, because the left operand of = is TRUNC (fdate).
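A minimal sketch of that function-based-index alternative (the index name t_trunc_indx is made up):

```sql
-- Hypothetical: with a function-based index on TRUNC(fdate),
-- the TRUNC(fdate) = TRUNC(SYSDATE) predicate can use an index range scan.
CREATE INDEX t_trunc_indx ON test_data (TRUNC(fdate));

-- Gather statistics so the optimizer can cost the new index.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TEST_DATA', cascade => TRUE);

SELECT COUNT(*) FROM test_data WHERE TRUNC(fdate) = TRUNC(SYSDATE);
```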
2) From the execution plan, I see that query 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
That depends on what you mean by "better".
If "better" means faster, you've already shown that one is about as good as the other.
Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
For clarity, I prefer:
WHERE fdate >= TRUNC (SYSDATE)
AND fdate < TRUNC (SYSDATE) + 1
(or replace SYSDATE with a TO_DATE expression, depending on the requirements).
3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
Is that true for Execution plan 2 & 3?
3) Could someone explain what the filter & access predicates mean here?
Sorry, I can't. -
How to measure the performance of sql query?
Hi Experts,
How to measure the performance, efficiency and cpu cost of a sql query?
What are all the measures available for an sql query?
How to identify i am writing optimal query?
I am using Oracle 9i...
It'll be useful for me to write efficient queries....
Thanks & Regards
psram wrote:
Hi Experts,
How to measure the performance, efficiency and cpu cost of a sql query?
What are all the measures available for an sql query?
How to identify i am writing optimal query?
I am using Oracle 9i...
You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, number of sorts, etc.
This gives you an indication of the effectiveness of your statement, so that you can check how many logical I/Os (and physical reads) had to be performed.
Note however that there are more things to consider, as you've already mentioned: the CPU time is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is covered only in a very limited way (number of sorts); for example, it doesn't cover any writes to temporary segments due to sort or hash operations spilling to disk.
You can use the following approach to get a deeper understanding of the operations performed by each row source:
alter session set statistics_level=all;
alter session set timed_statistics = true;
select /* findme */ ... <your query here>
SELECT
SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
OBJECT_NAME,
CARDINALITY,
LAST_OUTPUT_ROWS,
LAST_CR_BUFFER_GETS,
LAST_DISK_READS,
LAST_DISK_WRITES
FROM V$SQL_PLAN_STATISTICS_ALL P,
(SELECT *
FROM (SELECT *
FROM V$SQL
WHERE SQL_TEXT LIKE '%findme%'
AND SQL_TEXT NOT LIKE '%V$SQL%'
AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
ORDER BY LAST_LOAD_TIME DESC)
WHERE ROWNUM < 2) S
WHERE S.HASH_VALUE = P.HASH_VALUE
AND S.CHILD_NUMBER = P.CHILD_NUMBER
ORDER BY ID
/
Check the V$SQL_PLAN_STATISTICS_ALL view for more of the statistics available. In 10g there is a convenient function, DBMS_XPLAN.DISPLAY_CURSOR, which can show this information with a single call, but in 9i you need to do it yourself.
Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
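For comparison, the 10g one-call variant mentioned above would look roughly like this (sketch; the demo query against emp is made up):

```sql
-- 10g+: enable rowsource statistics for this statement only via the hint,
-- then dump actual vs. estimated rows for the last execution.
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM emp;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```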
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Loading huge file with Sql-Loader from Java
Hi,
I have a csv file with aprox. 3 and a half million records.
I load this data with sqlldr from within java like this:
String command = "sqlldr userid=" + user + "/" + pass
        + "@" + service + " control='" + ctlFile + "'";
System.out.println(command);
// Pass the command to the shell as an argument array; exec("sh -c " + command)
// would split the whole string on whitespace and the shell would only see the first token.
if (System.getProperty("os.name").contains("Windows")) {
    p = Runtime.getRuntime().exec(new String[] {"cmd", "/C", command});
} else {
    p = Runtime.getRuntime().exec(new String[] {"sh", "-c", command});
}
It does what I want (it loads the data into a certain table), BUT it takes too much time. Is there a faster way to load data into an Oracle DB from within Java?
Thanks, any advice is very welcome
Have your DBA work on this issue - they can monitor and check the performance of SQL*Loader:
SQL*Loader performance tips [Document 28631.1]
SQL*LOADER SLOW PERFORMANCE [Document 1026145.6]
Master Note for SQL*Loader [Document 1264730.1]
HTH
Srini -
Dispatcher stopped and not able to connect with sql server
Hi ,
In one of our test systems disp+work starts and then within no time it stops.
I am able to connect to the SQL Server 2005 database, but while starting SAP the dispatcher is stopping.
Here is the log of dev_w0:
trc file: "dev_w0", trc level: 1, release: "700"
ACTIVE TRACE LEVEL 1
ACTIVE TRACE COMPONENTS all, MJ
B
B Thu Jan 05 07:24:02 2012
B create_con (con_name=R/3)
B Loading DB library 'C:\usr\sap\DE1\SYS\exe\uc\NTI386\dbmssslib.dll' ...
B Library 'C:\usr\sap\DE1\SYS\exe\uc\NTI386\dbmssslib.dll' loaded
B Version of 'C:\usr\sap\DE1\SYS\exe\uc\NTI386\dbmssslib.dll' is "700.08", patchlevel (0.72)
B New connection 0 created
M sysno 11
M sid DE1
M systemid 560 (PC with Windows NT)
M relno 7000
M patchlevel 0
M patchno 75
M intno 20050900
M make: multithreaded, Unicode, optimized
M pid 988
M
M kernel runs with dp version 217000(ext=109000) (@(#) DPLIB-INT-VERSION-217000-UC)
M length of sys_adm_ext is 572 bytes
M ***LOG Q0Q=> tskh_init, WPStart (Workproc 0 988) [dpxxdisp.c 1299]
I MtxInit: 30000 0 0
M DpSysAdmExtCreate: ABAP is active
M DpSysAdmExtCreate: VMC (JAVA VM in WP) is not active
M DpShMCreate: sizeof(wp_adm) 23936 (1408)
M DpShMCreate: sizeof(tm_adm) 3994272 (19872)
M DpShMCreate: sizeof(wp_ca_adm) 24000 (80)
M DpShMCreate: sizeof(appc_ca_adm) 8000 (80)
M DpCommTableSize: max/headSize/ftSize/tableSize=500/8/528056/528064
M DpShMCreate: sizeof(comm_adm) 528064 (1048)
M DpFileTableSize: max/headSize/ftSize/tableSize=0/0/0/0
M DpShMCreate: sizeof(file_adm) 0 (72)
M DpShMCreate: sizeof(vmc_adm) 0 (1440)
M DpShMCreate: sizeof(wall_adm) (38456/34360/64/184)
M DpShMCreate: sizeof(gw_adm) 48
M DpShMCreate: SHM_DP_ADM_KEY (addr: 07F90040, size: 4659000)
M DpShMCreate: allocated sys_adm at 07F90040
M DpShMCreate: allocated wp_adm at 07F91E40
M DpShMCreate: allocated tm_adm_list at 07F97BC0
M DpShMCreate: allocated tm_adm at 07F97BF0
M DpShMCreate: allocated wp_ca_adm at 08366E90
M DpShMCreate: allocated appc_ca_adm at 0836CC50
M DpShMCreate: allocated comm_adm at 0836EB90
M DpShMCreate: system runs without file table
M DpShMCreate: allocated vmc_adm_list at 083EFA50
M DpShMCreate: allocated gw_adm at 083EFA90
M DpShMCreate: system runs without vmc_adm
M DpShMCreate: allocated ca_info at 083EFAC0
M DpShMCreate: allocated wall_adm at 083EFAC8
X EmInit: MmSetImplementation( 2 ).
X MM global diagnostic options set: 0
X <ES> client 0 initializing ....
X Using implementation flat
M <EsNT> Memory Reset disabled as NT default
X ES initialized.
M
M Thu Jan 05 07:24:03 2012
M ThInit: running on host sugarland
M
M Thu Jan 05 07:24:04 2012
M calling db_connect ...
C Warning: Env(MSSQL_SERVER) [SUGARLAND\DE1] <> Prof(dbs/mss/server) [SUGARLAND]. Profile value will be used.
C Thread ID:708
C Thank You for using the SLOLEDB-interface
C Using dynamic link library 'C:\usr\sap\DE1\SYS\exe\uc\NTI386\dbmssslib.dll'
C dbmssslib.dll patch info
C patchlevel 0
C patchno 72
C patchcomment MSSQL: Thread check in DbSlDisconnect (969143)
C np:(local) connection used on SUGARLAND
C CopyLocalParameters: dbuser is 'de1'
C Using Provider SQLNCLI
C OpenOledbConnection: MARS property was set successfully.
C
C Thu Jan 05 07:24:19 2012
C OpenOledbConnection: line 23391. hr: 0x8000ffff Login timeout expired
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Login timeout expired
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), Named Pipes Provider: Could not open a connection to SQL Server [2].
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C Procname: [OpenOledbConnection - no proc]
C Using Provider SQLNCLI
C OpenOledbConnection: MARS property was set successfully.
C
C Thu Jan 05 07:24:34 2012
C OpenOledbConnection: line 23391. hr: 0x8000ffff Login timeout expired
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Login timeout expired
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), Named Pipes Provider: Could not open a connection to SQL Server [2].
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C Procname: [OpenOledbConnection - no proc]
C Using Provider SQLNCLI
C OpenOledbConnection: MARS property was set successfully.
C
C Thu Jan 05 07:24:49 2012
C OpenOledbConnection: line 23391. hr: 0x8000ffff Login timeout expired
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Login timeout expired
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), Named Pipes Provider: Could not open a connection to SQL Server [2].
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C Procname: [OpenOledbConnection - no proc]
C failed to establish conn to np:(local).
C Retrying without protocol specifier: (local)
C Using Provider SQLNCLI
C OpenOledbConnection: MARS property was set successfully.
C
C Thu Jan 05 07:25:05 2012
C OpenOledbConnection: line 23391. hr: 0x8000ffff Login timeout expired
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Login timeout expired
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), Named Pipes Provider: Could not open a connection to SQL Server [2].
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C Procname: [OpenOledbConnection - no proc]
C Using Provider SQLNCLI
C OpenOledbConnection: MARS property was set successfully.
C
C Thu Jan 05 07:25:21 2012
C OpenOledbConnection: line 23391. hr: 0x8000ffff Login timeout expired
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Login timeout expired
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), Named Pipes Provider: Could not open a connection to SQL Server [2].
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C Procname: [OpenOledbConnection - no proc]
C Using Provider SQLNCLI
C OpenOledbConnection: MARS property was set successfully.
C
C Thu Jan 05 07:25:37 2012
C OpenOledbConnection: line 23391. hr: 0x8000ffff Login timeout expired
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Login timeout expired
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), Named Pipes Provider: Could not open a connection to SQL Server [2].
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C Procname: [OpenOledbConnection - no proc]
C failed to establish conn. 0
B ***LOG BY2=> sql error 0 performing CON [dbsh#2 @ 1204] [dbsh 1204 ]
B ***LOG BY0=> <message text not available> [dbsh#2 @ 1204] [dbsh 1204 ]
B ***LOG BY2=> sql error 0 performing CON [dblink#3 @ 431] [dblink 0431 ]
B ***LOG BY0=> <message text not available> [dblink#3 @ 431] [dblink 0431 ]
M ***LOG R19=> ThInit, db_connect ( DB-Connect 000256) [thxxhead.c 1411]
M in_ThErrHandle: 1
M *** ERROR => ThInit: db_connect (step 1, th_errno 13, action 3, level 1) [thxxhead.c 10156]
M
M Info for wp 0
M
M stat = 4
M reqtype = 1
M act_reqtype = -1
M rq_info = 0
M tid = -1
M mode = 255
M len = -1
M rq_id = 65535
M rq_source = 255
M last_tid = 0
M last_mode = 0
M semaphore = 0
M act_cs_count = 0
M control_flag = 0
M int_checked_resource(RFC) = 0
M ext_checked_resource(RFC) = 0
M int_checked_resource(HTTP) = 0
M ext_checked_resource(HTTP) = 0
M report = > <
M action = 0
M tab_name = > <
M vm = V-1
M
M *****************************************************************************
M *
M * LOCATION SAP-Server sugarland_DE1_11 on host sugarland (wp 0)
M * ERROR ThInit: db_connect
M *
M * TIME Thu Jan 05 07:25:37 2012
M * RELEASE 700
M * COMPONENT Taskhandler
M * VERSION 1
M * RC 13
M * MODULE thxxhead.c
M * LINE 10354
M * COUNTER 1
M *
M *****************************************************************************
M
M PfStatDisconnect: disconnect statistics
M Entering TH_CALLHOOKS
M ThCallHooks: call hook >ThrSaveSPAFields< for event BEFORE_DUMP
M *** ERROR => ThrSaveSPAFields: no valid thr_wpadm [thxxrun1.c 720]
M *** ERROR => ThCallHooks: event handler ThrSaveSPAFields for event BEFORE_DUMP failed [thxxtool3.c 260]
M Entering ThSetStatError
M ThIErrHandle: do not call ThrCoreInfo (no_core_info=0, in_dynp_env=0)
M Entering ThReadDetachMode
M call ThrShutDown (1)...
M ***LOG Q02=> wp_halt, WPStop (Workproc 0 988) [dpnttool.c 327]
Please help me on this
Thanks
Srikanth
Hi Amit,
I restarted the system but the dispatcher is still in the same state.
Here is the log for dev_w0
========================================
Fri Jan 06 03:41:06 2012
C OpenOledbConnection: line 23391. hr: 0x8000ffff Login timeout expired
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Login timeout expired
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 2, sev 0), Named Pipes Provider: Could not open a connection to SQL Server [2].
C Procname: [OpenOledbConnection - no proc]
C sloledb.cpp [OpenOledbConnection,line 23391]: Error/Message: (err 0, sev 0), Invalid connection string attribute
C Procname: [OpenOledbConnection - no proc]
C failed to establish conn. 0
B ***LOG BY2=> sql error 0 performing CON [dbsh#2 @ 1204] [dbsh 1204 ]
B ***LOG BY0=> <message text not available> [dbsh#2 @ 1204] [dbsh 1204 ]
B ***LOG BY2=> sql error 0 performing CON [dblink#3 @ 431] [dblink 0431 ]
B ***LOG BY0=> <message text not available> [dblink#3 @ 431] [dblink 0431 ]
M ***LOG R19=> ThInit, db_connect ( DB-Connect 000256) [thxxhead.c 1411]
M in_ThErrHandle: 1
M *** ERROR => ThInit: db_connect (step 1, th_errno 13, action 3, level 1) [thxxhead.c 10156]
M
M Info for wp 0
M
M stat = 4
M reqtype = 1
M act_reqtype = -1
M rq_info = 0
M tid = -1
M mode = 255
M len = -1
M rq_id = 65535
M rq_source = 255
M last_tid = 0
M last_mode = 0
M semaphore = 0
M act_cs_count = 0
M control_flag = 0
M int_checked_resource(RFC) = 0
M ext_checked_resource(RFC) = 0
M int_checked_resource(HTTP) = 0
M ext_checked_resource(HTTP) = 0
M report = > <
M action = 0
M tab_name = > <
M vm = V-1
M
M *****************************************************************************
M *
M * LOCATION SAP-Server sugarland_DE1_11 on host sugarland (wp 0)
M * ERROR ThInit: db_connect
M *
M * TIME Fri Jan 06 03:41:06 2012
M * RELEASE 700
M * COMPONENT Taskhandler
M * VERSION 1
M * RC 13
M * MODULE thxxhead.c
M * LINE 10354
M * COUNTER 1
M *
M *****************************************************************************
M
M PfStatDisconnect: disconnect statistics
M Entering TH_CALLHOOKS
M ThCallHooks: call hook >ThrSaveSPAFields< for event BEFORE_DUMP
M *** ERROR => ThrSaveSPAFields: no valid thr_wpadm [thxxrun1.c 720]
M *** ERROR => ThCallHooks: event handler ThrSaveSPAFields for event BEFORE_DUMP failed [thxxtool3.c 260]
M Entering ThSetStatError
M ThIErrHandle: do not call ThrCoreInfo (no_core_info=0, in_dynp_env=0)
M Entering ThReadDetachMode
M call ThrShutDown (1)...
M ***LOG Q02=> wp_halt, WPStop (Workproc 0 3632) [dpnttool.c 327] -
Is it possible to use MERGE with sql*plus and if yes how?
Hello everybody,
I have an xls file and I have to load its data with SQL*Loader, but instead of the "APPEND" operation stated in the control file I need a MERGE operation. Here is what my control file contains:
LOAD DATA
INFILE 'C:\WORK\DSK_WH\LOAD_FIRST\sqlldr\data.csv'
BADFILE 'C:\WORK\DSK_WH\LOAD_FIRST\sqlldr\p_badfile.txt'
APPEND
INTO TABLE D_ACCOUNT_NAMES_TMP
FIELDS TERMINATED BY ";"
TRAILING NULLCOLS
(account_number , consignment, sub_consignment, consign_sub_consign, account_name_bg, account_number_2, consign_parent_2, account_name_bg_2, account_number_3, account_name_bg_3, account_number_4, account_name_bg_4, account_number_5 , ACCOUNT_NAME_BG_5 ).
How can I use merge in this case , instead of insert ?
Regards,
Maria
I'm not sure if there is any MERGE feature in SQL*Loader, but you can have a backup (staging) table which gets loaded by SQL*Loader and then use a MERGE statement from that table into the original table, if your data is not so huge that it becomes a performance barrier. This is just a suggestion; maybe you can try some reading on SQL*Loader here:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96652/part2.htm#436160
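A sketch of that two-step approach, assuming the control file is repointed at a staging table (here called D_ACCOUNT_NAMES_STG, a made-up name) and that account_number is the key:

```sql
-- Step 1: SQL*Loader APPENDs the file into the staging table D_ACCOUNT_NAMES_STG.
-- Step 2: MERGE the staging rows into the real table (9i requires both WHEN clauses).
MERGE INTO d_account_names_tmp t
USING d_account_names_stg s
   ON (t.account_number = s.account_number)
WHEN MATCHED THEN UPDATE SET
     t.account_name_bg = s.account_name_bg,
     t.consignment     = s.consignment
WHEN NOT MATCHED THEN INSERT
     (account_number, account_name_bg, consignment)
     VALUES (s.account_number, s.account_name_bg, s.consignment);
```

Extend the SET and INSERT column lists to cover the remaining columns from the control file as needed.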
Cheers
Sarma. -
Which is better for performance Azure SQL Database or SQL Server in Azure VM?
Hi,
We are building an ASP.NET app that will be running on Microsoft Cloud which I think is the new name for Windows Azure. We're expecting this app to have many simultaneous users and want to make sure that we provide excellent performance to end users.
Here are our main concerns/desires:
Performance is paramount. Fast response times are very very important
We want to have as little to do with platform maintenance as possible e.g. managing OS or SQL Server updates, etc.
We are trying to use "out-of-the-box" standard features.
With that said, which option would give us the best possible database performance: a SQL Server instance running in a VM on Azure or SQL Server Database as a fully managed service?
Thanks, Sam
hello,
SQL Database uses shared resources in the Microsoft data centre. Microsoft balances the resource usage of SQL Database so that no one application continuously dominates any resource. You can try the
Premium Preview
for Windows Azure SQL Database, which offers better performance by guaranteeing a fixed amount of dedicated resources for a database.
If you use a SQL Server instance running in a VM, you control the operating system and database configuration. And the
performance of the database depends on many factors, such as the size of the virtual machine and the configuration of the data disks.
Reference:
Choosing between SQL Server in Windows Azure VM & Windows Azure SQL Database
Regards,
Fanny Liu
If you have any feedback on our support, please click here.
Fanny Liu
TechNet Community Support -
Hi All,
I would like to know what are the Drawbacks of using BizTalk Server Enterprise Edition 2009/2013 with SQL Server Standard Edition vs Enterprise Edition.
We are currently using Enterprise Edition.
1) Are there any adverse performance constraints? Microsoft mentions to use SQL Server Enterprise Edition for "Optimal Performance" but does not expand on the topic.
2) Any features missing?
3) Is there a supported way to downgrade SQL Enterprise to Standard for an existing BTS environment?
Thanks!
1) SQL Standard is limited to 4 CPUs and 64GB of memory, while Enterprise supports 8 CPUs and 2TB of memory.
2) The only BizTalk feature dependency on a SQL Enterprise feature is BAM Real-Time Aggregations, which requires Analysis Services RTA.
3) Yes, but it's not a SQL downgrade really, you would move the BizTalk databases:
http://msdn.microsoft.com/en-us/library/aa559835.aspx -
Essbase Studio - Failed to Establish Connection With SQL Database Server
Hi all,
I am new to Hyperion and am having trouble deploying what seems a simple cube in Essbase Studio.
My environment is Windows 2003, EPM 11.1.1.2, SQL Server 2000.
I have the following two issues which may be related.
1. The EPM System Diagnostic tool says that Hyperion Foundation cannot connect to the SQL database.
Error:
Failed: Connection to database
Error: java.net.UnknownHostException: <server name>
Recommended Action:
Every other EPM application is able to connect to the database. I have tried re-configuring Foundation Services and checking the config files and nothing looks wrong. I would appreciate advice on how to fix this.
2. In Essbase Studio, I was able to connect to the database where the source data is, build the minischema and create dimensions and measures. But when I run the cube deployment wizard I get the error:
Message: Failed to deploy Essbase cube
Caused By: Failed to build Essbase cube dimension: (Time)
Caused By: Cannot incremental build. Essbase Error(1021001): Failed to Establish Connection With SQL Database Server. See log for more information
…ODBC Layer Error: [08001] ==> [[Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]SQL Server does not exist or access denied.]
I have checked the DSN as well as the connections set up within Studio, and they are all able to connect to the database. I am using the admin user for the Essbase server and have created a different user for the databases where the data resides and the shared services database. I tried using the same user all the way through and this didn’t help.
Can you please advise what else I can check or change to resolve this issue.
Also, can you work in EAS with cubes that are created in Studio? I tried looking at it and I got an error that the rule file couldn’t be accessed because it was created in Studio.
After the above failures, I tried creating a cube in EIS, with mixed success. I have been able to load the members and data without errors. But when I try to view the data in the cube, I am seeing field names for the measures instead of the data itself.
...Definitely more questions than answers at this stage of my learning.
Regards
Michelle
Michelle,
I don't know if you found an answer to you question after so many months but I was hoping I could be helpful.
The issue you are experiencing often happens when the dimension you are getting an error on, in this case TIME, is built from a snowflake lineage and there is a bad foreign key reference. This dimension is most likely high up in your outline build process for Essbase Studio, and this prevents the build from completing, usually early on.
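A quick way to confirm a bad reference is an anti-join between the child table and the parent it points to in the snowflake. This is only a sketch; the table and column names (TIME_DIM, MONTH_DIM, MONTH_ID) are hypothetical placeholders, so substitute your own lineage tables:

```sql
-- Sketch: list child rows whose foreign key has no matching row in the
-- parent dimension table (all names here are hypothetical placeholders).
select t.month_id
from   time_dim t
where  not exists (select 1
                   from   month_dim m
                   where  m.month_id = t.month_id);
```

Any rows returned are the orphaned keys that would break the dimension build.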
Check your logs also. They are in Hyperion > Products > logs > essbase > essbasestudio.
You can also make your logs more verbose by setting a configuration variable in the essbase studio server.properties file but that should only be used for debugging as it really saps performance.
And, yes, you can of course edit an Essbase Studio deployed cube in EAS. However, any changes you make to the cube in EAS are subject to being wiped out upon the next Essbase Studio deployment of that Application/Database combo.
If you want to provide more detail, screenshots, etc. I would be glad to help where I can.
Cheers,
Christian
http://www.artofbi.com -
*** ERROR = CONNECT failed with SQL error '12154' after Oracle Upgrade
Hello Experts,
Recently I upgraded my Oracle database from 11.2.0.3 to 11.2.0.4. The upgrade completed successfully, but I am unable to start the instance.
R3trans -d is failing with the error:
OCIServerAttach(OCI_DEFAULT) failed with -1=OCI_ERROR
OCIServerAttach(OCI_DEFAULT) failed with SQL error 12154:
ORA-12154: TNS:could not resolve the connect identifier specified
OCIServerAttach(con=0, svc=069C1630): Error 12154 attaching new srv=069C17D0
OCIHandleFree(con=0): Server handle srv=069C17D0 freed.
server_detach(con=0, svc=069C1630; srv=NULL, stale=2)
OCIHandleFree(con=0): Service svc=069C1630 freed (i=1).
*** ERROR => CONNECT failed with SQL error '12154'
-->oci_get_errmsg (con=0, rc=12154)
OCIErrorGet() -> SQL error code: 12154; buflen=66
OCIErrorGet(): error text ->
ORA-12154: TNS:could not resolve the connect identifier specified
ocica() -> SQL error code 12154,12154
DbSlConnect(con=0) -> orc=12154, rc=99=DBSL_ERR_DB
***LOG BY2=>sql error 12154 performing CON
***LOG BY0=>ORA-12154: TNS:could not resolve the connect identifier specified
I have checked that the listener is up and running properly. The entry is maintained in tnsnames.ora.
Also, the kernel is upgraded with the latest patch. Current kernel version: 742.
Please advise on the mentioned error. I am attaching the logs here. Thanks in advance.
Best Regards,
Debaditya
Hi Debaditya,
Kindly check these SAP Notes:
1204916 - Error: "ORA-12154" when connecting to Oracle from a Windows 64-bit platform
2153975 - Database connectivity fails with ORA-12154
556232 - Environment settings for R/3/Oracle on Windows
443867 - ORA-12154 Collective SAP note
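Beyond the notes, it is worth re-checking that the connect identifier R3trans uses actually resolves. A minimal tnsnames.ora entry has the shape sketched below; the alias, host and service name here are placeholders, not your real values:

```
# Hypothetical entry: alias PRD.WORLD, host dbhost, listener port 1521
PRD.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = PRD)
    )
  )
```

ORA-12154 means this lookup failed, so the alias must match the connect identifier your environment actually passes, character for character.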
BR
SS -
Slow Performance with Windows 7
I have a user who just moved to a Windows 7 desktop. Performance of the desktop itself is adequate, but a large report that queries the SQL server takes a very long time to display its results (10-20 minutes to complete). Smaller reports seem to run fine. It did not take this long on Windows XP. I am by no means a Crystal Reports guy (I know very little about it). Just wondering if anyone has experienced this issue or has any suggestions on settings to tweak to optimize performance. We're using Crystal Reports XI with SQL Server 2005. Thanks.
Hello,
Crystal Reports XI is not supported on Windows 7.
You can upgrade to CR XI R2 for free, then upgrade to Service Pack 6, and then it will be supported.
Use your XI keycode when you install.
This is the link to get the full build:
https://smpdl.sap-ag.de/~sapidp/012002523100011802732008E/crxir2_sp4_full_build.exe
Then get SP 5 and SP 6:
https://smpdl.sap-ag.de/~sapidp/012002523100013876392008E/crxir2win_sp5.exe
https://smpdl.sap-ag.de/~sapidp/012002523100015859952009E/crxir2win_sp6.exe
Thank you
Don -
High invalidations in v$sqlarea for 1 query tag with "SQL Analyze"
Hi All,
Hopefully post to the right forum, if not please do let me know. Thanks
I have one pre-production issue and still don't have any clue how to move forward.
This is a 2-node RAC on the Linux platform with Oracle 11.2.0.2.
In the beginning this environment had a lot of performance issues, such as huge "cursor: pin S wait on X", "latch: shared pool"
and "library cache: mutex X" waits causing very bad performance. After Oracle support suggested disabling a few hidden parameters
and adjusting some parameters, the environment stabilized (according to Oracle, the initial issue was caused by a high version count).
But we can still find minimal "latch: shared pool" and "library cache: mutex X" waits in the top 5 wait event list of the hourly AWR report.
This time Oracle is saying it might be caused by high reloads and high invalidations in the SQL area (not sure how true that is). Luckily the event
has not caused a performance issue at this moment, but we are asking support how we can get rid of the mutex/latch events.
They gave me a query to check the invalidations in v$sqlarea, and they suspect the high invalidation count is caused by the application.
select *
from v$sqlarea
order by invalidations DESC;
The weird thing is, there is one SQL statement tagged with "SQL Analyze" that always causes high invalidations. But we are not able to get more detail (based on SQL_ID)
in the v$sql or v$session views. This SQL appears in v$sqlarea and is removed within 1 or 2 seconds, so it is hard to get more information.
And the statement is exactly the same every time, but we don't know why SQL Analyze keeps checking it.
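To narrow the support query down to just these cursors, one variant is to filter on the comment in SQL_TEXT. The v$sqlarea columns used here are standard; the LIKE pattern is an assumption about how the "SQL Analyze" comment appears in your shared pool:

```sql
-- Sketch: show only the "SQL Analyze" cursors, highest invalidations
-- first (the LIKE pattern is an assumed match for the comment prefix).
select sql_id, loads, invalidations, substr(sql_text, 1, 60) as sql_text
from   v$sqlarea
where  sql_text like '/* SQL Analyze%'
order  by invalidations desc;
```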
This SQL is triggered by the SYS user, and it inserts into an MLOG$ table (one of the application's materialized view logs):
insert into "test"."MLOG$_test1" select * from "test"."MLOG$_test1"
The v$sqlarea information is as below; sometimes the invalidations can hit more than 10,000:
SQL_ID SQL_TEXT LOADS INVALIDATIONS
0m6dhq90rg82x /* SQL Analyze(632,0) */ insert into "test"."MLOG$_test" select * from "test"."MLOG$_test 7981 7981
Does anyone have any idea how I can move forward on this issue? Oracle is asking me to use SQLTXPLAIN to get the details.
Please share with me if you have any idea. Thanks in advance.
Regards,
Klng
Hi Dom,
We have checked that there is no SQL tuning enabled for this SQL_ID. Below are the optimizer parameters in this environment; the hidden parameters were changed as suggested by Oracle support.
NAME TYPE VALUE
_optimizer_adaptive_cursor_sharing boolean FALSE
_optimizer_extended_cursor_sharing_rel string NONE
object_cache_optimal_size integer 102400
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.2
optimizer_index_caching integer 90
optimizer_index_cost_adj integer 10
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
plsql_optimize_level integer 2
SQL> select * from dba_sql_plan_baselines;
no rows selected
SQL>
Yeah, we did run ASH, but the high invalidations were not captured in the report. Actually, this SQL tagged with SQL Analyze disappears from v$sqlarea very fast (within only 1 or 2 seconds).
Thanks.
Regards,
Klng -
DB2 8.1 to Oracle 11g with SQL Developer 3.0
Hi,
I started migrating my DB2 8.1 to Oracle 11g, with SQL Developer 3.0
Basically, I need to migrate TABLES.
I followed these steps:
1) I created a new Oracle database, tablespaces, users, etc.
2) Then, I created both (DB2 and Oracle) connections into SQL Developer 3.0. All works fine.
3) I captured one table with the "Copy to Oracle" feature. Done with no errors.
But when I compare the table structure, I see the following problems in Oracle:
a) All fields end up with NULLABLE = YES. SQL Developer shows this field property correctly in DB2 (NULLABLE = NO), but does not migrate it the same way!
SOLUTION: All I want is for SQL Developer to simply migrate the same value that I have in DB2. But how?
b) In DB2 I have a field property called COLUMN DEFAULT. In Oracle this property is called DATA_DEFAULT. SQL Developer shows this field property correctly for the DB2 tables (for example: 0.0, ' ', etc.), but does not migrate the value! All fields end up with DATA_DEFAULT = NULL.
SOLUTION: I think this occurs because NULLABLE is migrated with the value YES. Well, all I need is the same as above...
NOTE: I tested the SWISSQL DATA MIGRATION software, and it works fine. All tables, field properties and data are migrated successfully. But that program is a trial version!
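As an interim workaround (not a fix for the migration itself), the missing properties can be reapplied on the Oracle side after each copy. A sketch, with a hypothetical table and column:

```sql
-- Sketch: restore the column default and NOT NULL constraint that the
-- copy dropped (EMP/SALARY are placeholder names; repeat per column).
alter table emp modify (salary default 0.0 not null);
```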
Well, I think all of these are BUGS in SQL Developer.
Please, anyone to help me?
Regards,
Ylram
Welcome to the forum!
>
I just did right click in the procedure body and found [Debug, Compile for Debug, Compile, Run].
>
You listed a bunch of things but you didn't say what steps you actually performed and what the result was.
Did you 'Compile' the procedure? Until you compile the procedure you can't debug it.
I just created a new procedure and when I select the body it displays in the 'Code' window on the right. But the 'Debug' icon is not enabled because the procedure was not compiled for debug.
When I compile it for debug the 'Debug' icon is now enabled.