SQL access for large table
Hi,
If a table has more than 6,000,000 records, is there any way to optimise access so that it is faster?
I try not to use SQL statements a lot; I manipulate data in an internal table instead, but I still need an initial SELECT statement to copy the data into the internal table.
Any advice?
Thanks
Tips
1) In the SELECT, include all primary key fields in the WHERE condition to fetch the data.
2) Declare the internal table without a header line and without the OCCURS statement, and use a work area to handle it.
Ex:-
TYPES: BEGIN OF gty_kna1,        " General Data in Customer Master
         kunnr TYPE kna1-kunnr,  " Payer Number
         name1 TYPE kna1-name1,  " Name 1
         telf1 TYPE kna1-telf1,  " Communication
         konzs TYPE kna1-konzs,  " Corporate Group
       END OF gty_kna1.

DATA: gs_kna1 TYPE gty_kna1,          " Work area
      gt_kna1 TYPE TABLE OF gty_kna1. " Internal table without header line
Note:
In a SELECT statement, select only the fields (field list) that are needed, in the order in which they reside on the database; the network load is then considerably lower. The number of fields can be restricted in two ways: with a field list in the SELECT clause of the statement, or with a view defined in the ABAP/4 Dictionary. The view has the advantage of better reusability.
SELECT SINGLE is used instead of a SELECT-ENDSELECT loop when the entire key is available. SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two.
Always specify the conditions in the WHERE clause instead of checking them with CHECK statements; the database system can then use an index (if possible) and the network load is considerably lower. If you filter with CHECK statements instead, the contents of the whole table must be read from the database files into the DBMS cache and transferred over the network; if the conditions are specified in the WHERE clause, the DBMS reads exactly the data that is needed.
Do not embed complex code within a SELECT ... ENDSELECT loop.
Avoid complex WHERE clauses; they are poison for the statement optimizer in any database system.
For all frequently used SELECT statements, try to use an index. An index is used if you specify (a generic part of) the index fields, concatenated with logical ANDs, in the SELECT statement's WHERE clause.
When loading data into an internal table, use INTO TABLE or APPENDING TABLE instead of a SELECT/APPEND combination. The INTO TABLE version of a SELECT statement is always faster than using APPEND statements.
Use a select list with aggregate functions instead of fetching rows and computing in the program when you need the maximum, minimum, sum, average, or count of a database column.
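To illustrate that last tip: a single aggregate query returns min/max/sum/avg/count in one round trip, instead of fetching every row and looping in the application. A minimal sketch using Python's bundled sqlite3 (the table name and values are invented for illustration):

```python
import sqlite3

# Hypothetical table standing in for a large database column to summarize.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?)", [(10.0,), (25.5,), (4.5,)])

# One statement computes all five aggregates on the database side.
row = conn.execute(
    "SELECT MIN(amount), MAX(amount), SUM(amount), AVG(amount), COUNT(*) "
    "FROM orders"
).fetchone()
print(row)
```

The equivalent loop in application code would transfer every row over the network first, which is exactly what the tip advises against.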
Reward points if useful.
Minal
Similar Messages
-
HS connection to MySQL fails for large table
Hello,
I have set up an HS connection to a MySQL database through the MySQL ODBC 3.51 driver, using an ODBC DSN. My Oracle box runs version 10.2.0.1 on Windows 2003 R2. The MySQL version is 4.1.22, running on a different machine with the same OS.
I completed the connection through a database link, which works fine in SQL*Plus when selecting small MySQL tables. However, I keep getting an out-of-memory error when selecting certain large tables from the MySQL database. Previously, I had tested the DSN and run the same SELECT in Access, and it didn't give any error. This is the error thrown by SQL*Plus:
SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
select * from progressnotes@mysql_rmg where "encounterID" = 224720
ERROR at line 1:
ORA-00942: table or view does not exist
[Generic Connectivity Using ODBC][MySQL][ODBC 3.51
Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL server during query
(SQL State: S1T00; SQL Code: 2013)
ORA-02063: preceding 2 lines from MYSQL_RMG
I traced the HS connection and here is the result from the .trc file:
Oracle Corporation --- THURSDAY JUN 12 2008 11:19:51.809
Heterogeneous Agent Release
10.2.0.1.0
(0) [Generic Connectivity Using ODBC] version: 4.6.1.0.0070
(0) connect string is: defTdpName=MYSQL_RMG;SYNTAX=(ORACLE8_HOA, BASED_ON=ORACLE8,
(0) IDENTIFIER_QUOTE_CHAR="",
(0) CASE_SENSITIVE=CASE_SENSITIVE_QUOTE);BINDING=<navobj><binding><datasources><da-
(0) tasource name='MYSQL_RMG' type='ODBC'
(0) connect='MYSQL_RMG'><driverProperties/></datasource></datasources><remoteMachi-
(0) nes/><environment><optimizer noFlattener='true'/><misc year2000Policy='-1'
(0) consumerApi='1' sessionBehavior='4'/><queryProcessor parserDepth='2000'
(0) tokenSize='1000' noInsertParameterization='true'
noThreadedReadAhead='true'
(0) noCommandReuse='true'/></environment></binding></navobj>
(0) ORACLE GENERIC GATEWAY Log File Started at 2008-06-12T11:19:51
(0) hoadtab(26); Entered.
(0) Table 1 - PROGRESSNOTES
(0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]MySQL client ran out of
(0) memory (SQL State: S1T00; SQL Code: 2008)
(0) (Last message occurred 2 times)
(0)
(0) hoapars(15); Entered.
(0) Sql Text is:
(0) SELECT * FROM "PROGRESSNOTES"
(0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL
(0) server during query (SQL State: S1T00; SQL Code: 2013)
(0) (Last message occurred 2 times)
(0)
(0) [A00D] Failed to open table MYSQL_RMG:PROGRESSNOTES
(0)
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [S1000] [9013]General error in nvITrans_Commit - rc = -1. Please refer to the
(0) log file for details.
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [S1000] [9013]General error in nvITrans_Rollback - rc = -1. Please refer to
(0) the log file for details.
(0) Closing log file at THU JUN 12 11:20:38 2008.
I have read the MySQL documentation and apparently there's a "Don't Cache Result (forward only cursors)" parameter in the ODBC DSN that needs to be checked in order to cache the results on the MySQL server side instead of the driver side, but checking that parameter doesn't work for the HS connection. Instead, the SQL*Plus session throws the following message when selecting the same large table:
SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
select * from progressnotes@mysql_rmg where "encounterID" = 224720
ERROR at line 1:
ORA-02068: following severe error from MYSQL_RMG
ORA-28511: lost RPC connection to heterogeneous remote agent using
SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.120)(PORT=1521))(CONNECT_DATA=(SID=MYSQL_RMG)))
Curiously enough, after checking the parameter, the Access connection through the ODBC DSN seems to improve!
Is there an additional parameter that needs to be set up in inithsodbc.ora, perhaps? These are the current HS parameters:
# HS init parameters
HS_FDS_CONNECT_INFO = MYSQL_RMG
HS_FDS_TRACE_LEVEL = ON
My SID_LIST_LISTENER entry is:
(SID_DESC =
(PROGRAM = HSODBC)
(SID_NAME = MYSQL_RMG)
(ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
Finally, here is my TNSNAMES.ORA entry for the HS connection:
MYSQL_RMG =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.120)(PORT = 1521))
(CONNECT_DATA =
(SID = MYSQL_RMG)
(HS = OK)
Your advice will be greatly appreciated,
Thanks,
Luis
Message was edited by:
lmconsite
First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
The root cause of the problem you describe could be related to a timeout of the ODBC driver (especially taking into account your comment that it happens only for larger tables):
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
This indicates that the driver or the database aborts the connection due to a timeout.
Check out the wait_timeout MySQL variable on the server and increase it. -
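A hedged sketch of that check, run directly against the MySQL server (the value shown is only an example; choose one appropriate for your workload):

```sql
-- Inspect the current timeout, then raise it (value in seconds, illustrative)
SHOW VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 28800;
```

Note that SET GLOBAL only affects sessions opened after the change; existing connections keep their old value.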
Gather table stats taking longer for Large tables
Version : 11.2
I've noticed that gathering stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (running SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats. Apart from the row count and index info, what other information is gathered by gather table stats?
Does table size actually matter for stats collection?
Max wrote:
Version : 11.2
I've noticed that gathers stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
Since row count needs to be calculated, a big table's stats collection would be understandably slightly longer (Running SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats ? Apart from row count and index info what other information is gathered for gather table stats ?
09:40:05 SQL> desc user_tables
Name Null? Type
TABLE_NAME NOT NULL VARCHAR2(30)
TABLESPACE_NAME VARCHAR2(30)
CLUSTER_NAME VARCHAR2(30)
IOT_NAME VARCHAR2(30)
STATUS VARCHAR2(8)
PCT_FREE NUMBER
PCT_USED NUMBER
INI_TRANS NUMBER
MAX_TRANS NUMBER
INITIAL_EXTENT NUMBER
NEXT_EXTENT NUMBER
MIN_EXTENTS NUMBER
MAX_EXTENTS NUMBER
PCT_INCREASE NUMBER
FREELISTS NUMBER
FREELIST_GROUPS NUMBER
LOGGING VARCHAR2(3)
BACKED_UP VARCHAR2(1)
NUM_ROWS NUMBER
BLOCKS NUMBER
EMPTY_BLOCKS NUMBER
AVG_SPACE NUMBER
CHAIN_CNT NUMBER
AVG_ROW_LEN NUMBER
AVG_SPACE_FREELIST_BLOCKS NUMBER
NUM_FREELIST_BLOCKS NUMBER
DEGREE VARCHAR2(10)
INSTANCES VARCHAR2(10)
CACHE VARCHAR2(5)
TABLE_LOCK VARCHAR2(8)
SAMPLE_SIZE NUMBER
LAST_ANALYZED DATE
PARTITIONED VARCHAR2(3)
IOT_TYPE VARCHAR2(12)
TEMPORARY VARCHAR2(1)
SECONDARY VARCHAR2(1)
NESTED VARCHAR2(3)
BUFFER_POOL VARCHAR2(7)
FLASH_CACHE VARCHAR2(7)
CELL_FLASH_CACHE VARCHAR2(7)
ROW_MOVEMENT VARCHAR2(8)
GLOBAL_STATS VARCHAR2(3)
USER_STATS VARCHAR2(3)
DURATION VARCHAR2(15)
SKIP_CORRUPT VARCHAR2(8)
MONITORING VARCHAR2(3)
CLUSTER_OWNER VARCHAR2(30)
DEPENDENCIES VARCHAR2(8)
COMPRESSION VARCHAR2(8)
COMPRESS_FOR VARCHAR2(12)
DROPPED VARCHAR2(3)
READ_ONLY VARCHAR2(3)
SEGMENT_CREATED VARCHAR2(3)
RESULT_CACHE VARCHAR2(7)
09:40:10 SQL>
Does table size actually matter for stats collection?
Yes.
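If the runtime itself is the concern, one common lever is to let DBMS_STATS sample rather than scan the whole table. A hedged sketch (the table name is hypothetical, and defaults and behaviour vary by version):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'BIG_TABLE',                 -- hypothetical table name
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -- sample instead of full scan
    cascade          => TRUE);                       -- also gather index stats
END;
/
```

Histogram settings (method_opt) and the cascade to indexes both add time as well, which is part of why stats collection is more than an internal COUNT(*).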
Handle: Max
Status Level: Newbie
Registered: Nov 10, 2008
Total Posts: 155
Total Questions: 80 (49 unresolved)
why so many unanswered questions? -
How to get SQL script for generating table, constraint, indexes?
I'd like to get from somewhere Oracle tool for generating simple SQL script for generating table, indexes, constraint (like Toad) and it has to be Oracle tool but not Designer.
Can someone give me some advice?
Thanks!
m.
I'd like to get from somewhere Oracle tool for
generating simple SQL script for generating table,
indexes, constraint (like Toad) and it has to be
Oracle tool but not Designer.
SQL Developer is similar to Toad and is an Oracle tool.
http://www.oracle.com/technology/products/database/sql_developer/index.html -
Pagination query help needed for large table - force a different index
I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
SELECT members.*
FROM members,
     ( SELECT RID, rownum rnum
       FROM ( SELECT rowid as RID
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
  AND RID = members.rowid
The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
The problem I have is this:
SELECT rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan). It is much slower this way on a large table. So I can hint it using either of the following methods:
SELECT /*+ index(members, joindate_idx) */ rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
SELECT /*+ first_rows(100) */ rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
SELECT members.*  -- Select all data from members table
FROM members,     -- members table added to FROM clause
     ( SELECT RID, rownum rnum
       FROM ( SELECT /*+ index(members, joindate_idx) */ rowid as RID  -- Hint is ignored now that I am joining in the outer query
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
  AND RID = members.rowid  -- Merge the members table on the rowid we pulled from the inner queries
Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
Thanks!
Lakmal Rajapakse wrote:
OK here is an example to illustrate the advantage:
SQL> set autot traceonly
SQL> select * from (
2 select a.*, rownum x from
3 (
4 select a.* from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 )
9 where x >= 1100
10 /
101 rows selected.
Execution Plan
Plan hash value: 3711662397
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 1 | VIEW | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1200 | 506K| 192 (0)| 00:00:03 |
| 4 | TABLE ACCESS BY INDEX ROWID| EVENTS | 253M| 34G| 192 (0)| 00:00:03 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 1200 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("X">=1100)
2 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
443 consistent gets
0 physical reads
0 redo size
25203 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
SQL>
SQL>
SQL> select * from aoswf.events a, (
2 select rid, rownum x from
3 (
4 select rowid rid from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 ) b
9 where x >= 1100
10 and a.rowid = rid
11 /
101 rows selected.
Execution Plan
Plan hash value: 2308864810
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 201K| 261K (1)| 00:52:21 |
| 1 | NESTED LOOPS | | 1200 | 201K| 261K (1)| 00:52:21 |
|* 2 | VIEW | | 1200 | 30000 | 260K (1)| 00:52:06 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 253M| 2895M| 260K (1)| 00:52:06 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 253M| 4826M| 260K (1)| 00:52:06 |
| 6 | TABLE ACCESS BY USER ROWID| EVENTS | 1 | 147 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("X">=1100)
3 - filter(ROWNUM<=1200)
Statistics
8 recursive calls
0 db block gets
117 consistent gets
0 physical reads
0 redo size
27539 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
Lakmal (and OP),
Not sure what advantage you are trying to show here. But considering that we are talking about pagination query here and order of records is important, your 2 queries will not always generate output in same order. Here is the test case:
SQL> select * from v$version ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter pga
NAME TYPE VALUE
pga_aggregate_target big integer 103M
SQL> create table t nologging as select * from all_objects where 1 = 2 ;
Table created.
SQL> create index t_idx on t(last_ddl_time) nologging ;
Index created.
SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
40617 rows created.
SQL> commit ;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
PL/SQL procedure successfully completed.
SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME CREATED
47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
47672 ALL$OLAP2_CUBE_DIM_USES 28-JUL-2009 08:08:39
47681 ALL$OLAP2_CUBE_MEASURE_MAPS 28-JUL-2009 08:08:39
47682 ALL$OLAP2_FACT_LEVEL_USES 28-JUL-2009 08:08:39
47685 ALL$OLAP2_AGGREGATION_USES 28-JUL-2009 08:08:39
47692 ALL$OLAP2_CATALOGS 28-JUL-2009 08:08:39
47665 ALL$OLAPMR_FACTTBLKEYMAPS 28-JUL-2009 08:08:39
47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS 28-JUL-2009 08:08:39
47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS 28-JUL-2009 08:08:39
47669 ALL$OLAP9I2_HIER_DIMENSIONS 28-JUL-2009 08:08:39
47666 ALL$OLAP9I1_HIER_DIMENSIONS 28-JUL-2009 08:08:39
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> set autotrace traceonly
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
2 ;
11 rows selected.
Execution Plan
Plan hash value: 44968669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 180 (2)| 00:00:03 |
| 1 | SORT ORDER BY | | 1200 | 91200 | 180 (2)| 00:00:03 |
|* 2 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 3 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 4 | COUNT STOPKEY | | | | | |
| 5 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 6 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 7 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("T".ROWID="T1"."RID")
3 - filter("RN">=1190)
4 - filter(ROWNUM<=1200)
Statistics
1 recursive calls
0 db block gets
348 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
343 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
11 rows selected.
Execution Plan
Plan hash value: 168880862
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 1 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 2 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 5 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 6 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("T".ROWID="T1"."RID")
2 - filter("RN">=1190)
3 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
349 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
175 recursive calls
0 db block gets
388 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> set autotrace off
SQL> spool off
As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
The query that does not involve join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using join but does not affect the other query. -
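The closing point, that the rowid-join variant needs its own ORDER BY at the end, can be sketched outside Oracle too. A minimal analogue using Python's sqlite3 (table and column names are invented, and ROW_NUMBER stands in for Oracle's rownum; sqlite's rowid plays the part of Oracle's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (last_name TEXT, joindate INTEGER)")
conn.executemany(
    "INSERT INTO members VALUES (?, ?)",
    [("Smith", d) for d in (5, 1, 4, 2, 3)],  # deliberately unsorted
)

# Join back on rowid, as in the thread's pagination pattern; the join itself
# gives no ordering guarantee, so the outer query must repeat the ORDER BY.
rows = conn.execute(
    """
    SELECT m.joindate
    FROM members m,
         (SELECT rid, rn FROM
            (SELECT rowid AS rid,
                    ROW_NUMBER() OVER (ORDER BY joindate) AS rn
             FROM members WHERE last_name = 'Smith')
          WHERE rn <= 3) t
    WHERE t.rn >= 1 AND m.rowid = t.rid
    ORDER BY m.joindate   -- without this, row order is unspecified
    """
).fetchall()
print(rows)  # [(1,), (2,), (3,)]
```

Dropping the final ORDER BY may still happen to return sorted rows on a tiny table, but as the hash-join plan above shows, nothing guarantees it.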
Tuning SQL Query for base tables
Hi,
Can anyone please help me tune the query below? There is a full table scan on the po_action_history table.
SELECT DISTINCT
DECODE(hr.full_name,'Default Buyer, Purchasing','PCH',
'Default Buyer, Travel','TRV',
'Purchase Order, Rapid','RPO',
'Data Interchange, Electronic','EDI',fuser1.user_name) BUYER_ID,
poa.attribute1 BUYER_GROUP,
poa.attribute2 BUYER_SUBGROUP
FROM po.po_action_history ac1,
po.po_headers_all ph,
po.po_agents poa,
hr.per_all_people_f hr,
applsys.fnd_user fuser1
WHERE ph.po_header_id = ac1.object_id
AND ac1.object_type_code = 'PO'
AND ac1.action_code = 'APPROVE'
AND sequence_num =(SELECT MIN(ac2.sequence_num)
FROM po.po_action_history ac2,
applsys.fnd_user fuser2
WHERE ph.po_header_id = ac2.object_id
AND ac2.object_type_code = 'PO'
AND ac2.action_code = 'APPROVE'
AND ac2.last_updated_by = fuser2.user_id)
--AND ac1.last_updated_by = fuser1.user_id
AND ph.agent_id = poa.agent_id
AND ph.agent_id = hr.person_id
AND hr.person_id = fuser1.employee_id
AND hr.effective_end_date > sysdate
AND trunc(ac1.last_update_date) between '01-JAN-08' and '07-JAN-08'
ORDER BY poa.attribute1
Explain Plan
PLAN_TABLE_OUTPUT
Plan hash value: 3325319312
Id Operation Name Rows
Bytes Cost (%CPU) Time
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | 27 |
2754 | 38963 (3) | 00:07:48 |
| 1 | SORT UNIQUE | | 27 |
2754 | 38962 (3) | 00:07:48 |
|* 2 | FILTER | | |
| | |
|* 3 | HASH JOIN | | 37 |
3774 | 36308 (3) | 00:07:16 |
PLAN_TABLE_OUTPUT
| 4 | TABLE ACCESS BY INDEX ROWID | FND_USER | 1 |
13 | 2 (0) | 00:00:01 |
| 5 | NESTED LOOPS | | 37 |
3293 | 36303 (3) | 00:07:16 |
| 6 | NESTED LOOPS | | 88 |
6688 | 36180 (3) | 00:07:15 |
| 7 | NESTED LOOPS | | 88 |
PLAN_TABLE_OUTPUT
3960 | 35916 (3) | 00:07:11 |
|* 8 | TABLE ACCESS FULL | PO_ACTION_HISTORY | 2110 |
71740 | 32478 (3) | 00:06:30 |
| 9 | TABLE ACCESS BY INDEX ROWID | PO_HEADERS_ALL | 1 |
11 | 2 (0) | 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | PO_HEADERS_U1 | 1 |
| 1 (0) | 00:00:01 |
PLAN_TABLE_OUTPUT
| 11 | SORT AGGREGATE | | 1 |
36 | | |
| 12 | NESTED LOOPS | | 1 |
36 | 6 (0) | 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID| PO_ACTION_HISTORY | 1 |
31 | 6 (0) | 00:00:01 |
|* 14 | INDEX RANGE SCAN | PO_ACTION_HISTORY_N1 | 5 |
| 3 (0) | 00:00:01 |
PLAN_TABLE_OUTPUT
|* 15 | INDEX UNIQUE SCAN | FND_USER_U1 | 1 |
5 | 0 (0) | 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID | PER_ALL_PEOPLE_F | 1 |
31 | 3 (0) | 00:00:01 |
|* 17 | INDEX RANGE SCAN | PER_PEOPLE_F_PK | 1 |
| 2 (0) | 00:00:01 |
|* 18 | INDEX RANGE SCAN | FND_USER_N1 | 1 |
PLAN_TABLE_OUTPUT
| 1 (0) | 00:00:01 |
| 19 | TABLE ACCESS FULL | PO_AGENTS | 183 |
2379 | 4 (0) | 00:00:01 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - filter(TO_DATE('01-JAN-08')<=TO_DATE('07-JAN-08'))
3 - access("PH"."AGENT_ID"="POA"."AGENT_ID")
8 - filter("AC1"."ACTION_CODE"='APPROVE' AND "AC1"."OBJECT_TYPE_CODE"='PO'AND
TRUNC(INTERNAL_FUNCTION("AC1"."LAST_UPDATE_DATE"))>='01-JAN-08' AND
TRUNC(INTERNAL_FUNCTION("AC1"."LAST_UPDATE_DATE"))<='07-JAN-08')
10 - access("PH"."PO_HEADER_ID"="AC1"."OBJECT_ID")
PLAN_TABLE_OUTPUT
filter("SEQUENCE_NUM"= (SELECT MIN("AC2"."SEQUENCE_NUM") FROM "APPLSYS"."FND_USER"
"FUSER2","PO"."PO_ACTION_HISTORY" "AC2" WHERE "AC2"."OBJECT_ID"=:B1 AND "AC2"."ACTION_CODE"='APPROVE'
AND "AC2"."OBJECT_TYPE_CODE"='PO' AND "AC2"."LAST_UPDATED_BY"="FUSER2"."USER_ID"))
13 - filter("AC2"."ACTION_CODE"='APPROVE' AND "AC2"."OBJECT_TYPE_CODE"='PO')
14 - access("AC2"."OBJECT_ID"=:B1)
PLAN_TABLE_OUTPUT
15 - access("AC2"."LAST_UPDATED_BY"="FUSER2"."USER_ID")
17 - access("PH"."AGENT_ID"="PERSON_ID" AND "EFFECTIVE_END_DATE">SYSDATE@!)
filter("EFFECTIVE_END_DATE">SYSDATE@!)
18 - access("PERSON_ID"="FUSER1"."EMPLOYEE_ID")
Thanks
Hi,
Any help for the above issue? -
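One thing the plan above makes visible: the TRUNC(...) around last_update_date (predicate 8) turns the date filter into a function of the column, which prevents an index on that column from being used, whereas a plain range predicate on the raw column does not. The effect can be reproduced in miniature with Python's sqlite3 (table and column names are made up; sqlite's EXPLAIN QUERY PLAN stands in for Oracle's explain plan):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE action_history (last_update_date TEXT)")
conn.execute("CREATE INDEX ah_idx ON action_history (last_update_date)")

# Wrapping the indexed column in a function (like TRUNC above) forces a scan.
p1 = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM action_history "
    "WHERE date(last_update_date) = '2008-01-01'"
).fetchall()

# An open-ended range on the raw column can seek through the index instead.
p2 = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM action_history "
    "WHERE last_update_date >= '2008-01-01' "
    "AND last_update_date < '2008-01-08'"
).fetchall()

print(p1)  # plan reports a SCAN
print(p2)  # plan reports a SEARCH using the index
```

The Oracle equivalent of the fix would be replacing trunc(ac1.last_update_date) BETWEEN ... with a half-open range on last_update_date itself.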
Hi there... I need your help.
Here are my tables n column name for each tables
a) rep_arrngt
Name
REP_ARRNGT_ID
REP_ARRNGT_DESC
REP_ARRNGT_TYPE
ACCT_CAT
DEF_INDUSTRY_CODE
MEDIA_ID
LANGUAGE_ID
CURRENCY
PAGE_ID
b) bci_rep_arrng
Name
REP_ARRNGT_ID
BCI
SUB_SUM_CODE
DP_SUB_SUM_CODE
c) acct_rep_arrngt_link
Name
ACCT_NO
REP_ARRNGT_ID
DOC_TYPE
LAST_BILL_DATE
BILL_FREQUENCY
SCALING
EFF_FROM_DATE
EFF_TO_DATE
MEDIA_ID
Actually, I want to get the unique value for sub_sum_code according to the bci.
I already joined the tables, and here is my SQL statement:
SELECT T1.SUB_SUM_CODE, T1.BCI, T1.REP_ARRNGT_ID, T2.REP_ARRNGT_ID, T3.REP_ARRNGT_ID FROM BCI_REP_ARRNG T1, REP_ARRNGT T2, ACCT_REP_ARRNGT_LINK T3 WHERE T1.REP_ARRNGT_ID = T2.REP_ARRNGT_ID AND T2.REP_ARRNGT_ID = T3.REP_ARRNGT_ID AND BCI = 'TTA00F06'
and my result is:
SUB_SUM_CODE BCI REP_ARRNGT_ID
TBGSR TTA00F06 R1
TBGSR TTA00F06 R1
TBGSR TTA00F06 R1
TBGSR TTA00F06 R1
I get repeated results for sub_sum_code.
So, what do I need to do if I want only one row in the result, like this:
SUB_SUM_CODE BCI REP_ARRNGT_ID
TBGSR TTA00F06 R1
I tried to use GROUP BY, but I got an error. Please help me.
If you only want "to get the unique value for sub_sum_code according to the bci" then why are you joining the tables in the first place? Not knowing PKs etc. you could just use DISTINCT in your select statement.
SELECT DISTINCT T1.SUB_SUM_CODE,
                T1.BCI
FROM BCI_REP_ARRNG T1
WHERE BCI = 'TTA00F06' -
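The fan-out the poster saw, and why DISTINCT on the single table collapses it, can be demonstrated with Python's sqlite3 (the data is invented to mirror the thread: one arrangement row referenced by four account rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bci_rep_arrng (rep_arrngt_id TEXT, bci TEXT, sub_sum_code TEXT)"
)
conn.execute(
    "CREATE TABLE acct_rep_arrngt_link (acct_no INTEGER, rep_arrngt_id TEXT)"
)
conn.execute("INSERT INTO bci_rep_arrng VALUES ('R1', 'TTA00F06', 'TBGSR')")
# Four accounts reference the same arrangement, so a join fans out to 4 rows.
conn.executemany(
    "INSERT INTO acct_rep_arrngt_link VALUES (?, 'R1')", [(i,) for i in range(4)]
)

joined = conn.execute(
    "SELECT t1.sub_sum_code FROM bci_rep_arrng t1, acct_rep_arrngt_link t3 "
    "WHERE t1.rep_arrngt_id = t3.rep_arrngt_id AND t1.bci = 'TTA00F06'"
).fetchall()

# No join needed at all when only sub_sum_code per bci is wanted.
distinct = conn.execute(
    "SELECT DISTINCT sub_sum_code FROM bci_rep_arrng WHERE bci = 'TTA00F06'"
).fetchall()
print(len(joined), distinct)  # 4 [('TBGSR',)]
```

Each row in the link table duplicates the matched arrangement row, which is exactly the repetition in the posted output.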
Postgres' LIMIT .. OFFSET for large table
Hi!
I have a really large table (some millions of rows) which I'd like to present on a web page. I let the user choose a limit, say 25 lines per page, and present some buttons to go one page forward or backwards.
Some years ago, I have done this using PostgreSQL. There's an easy way to do it using LIMIT .. OFFSET. In Oracle, there's no such functionality.
Currently, my 'workaround' looks like this (a bit more complex in reality):
1 SELECT * FROM (
2 SELECT
3 ROW_NUMBER() OVER ( ORDER BY MSG_RCV_TIME DESC) AS ROWNO,
4 TO_CHAR(MSG_RCV_TIME) MSG_RCV
5 FROM MSG_TABLE
6* ORDER BY MSG_RCV_TIME DESC) WHERE ROWNO BETWEEN 1 AND 10
This gives back 10 rows, which does the job. The problem is: it takes AGES! The web server runs into a timeout before even printing one line. First, Oracle has to read all x*1,000,000 lines just to sort out the ones it doesn't need. That can't be the solution, can it?
In this forum, I have read a few notes about PARTITION, CURSOR and such things, but I didn't really get what the use of it is.
Any hints on that? This forum is based on Oracle, too (I hope), and it's fast. There must be a solution for this.
Btw, the table I am talking about is being filled by syslog-ng, and it currently grows by 200MB per day (and it's still in the testing phase). I expect some hundred million lines to be present later.
Thanks a lot in advance
André
See Tom Kyte's site for this.
Cool. Didn't know this one. How is he checking the performance of the queries?
The one comment in there that I entirely agree with is that such large result sets are meaningless to the human eye, so I would question exactly what you are trying to achieve. As Tom rightly says, nobody is ever going to scroll down to rows 999001 - 999010, even if they could.
Of course not. But you see, as an example: if you type just one word into Google's search mask, it returns loads of pages. As soon as you see that your query was not really a good one, you try more specific words, and it returns fewer pages. That's exactly what my GUI is going to do. First, it gives you an overview; then it lets you refine the search.
Anyway: as soon as I limit the output in the innermost query, I doubt it's useful. Say I limit the number of rows to browse through to 1000, but syslog-ng is producing 2000 rows per minute - you'd miss the rows you were maybe looking for.
It's essential to be able to see all the records. I don't mind if nobody ever looks at pages 200'000 to 1'000'000.
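One pattern that avoids reading x million rows per page is keyset ("seek") pagination: remember the sort key of the last row shown and seek past it with an indexed WHERE clause, instead of numbering every row. A rough sketch in SQLite (table name borrowed from above, data invented; the same idea works in Oracle with a composite index on the sort columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE msg_table (id INTEGER PRIMARY KEY, msg_rcv_time INTEGER)")
cur.executemany("INSERT INTO msg_table (msg_rcv_time) VALUES (?)",
                [(t,) for t in range(1000)])

# Keyset pagination: seek past the last (time, id) pair of the previous
# page instead of counting rows with OFFSET / ROW_NUMBER().
def next_page(last_time, last_id, page_size=10):
    return cur.execute("""
        SELECT id, msg_rcv_time FROM msg_table
        WHERE (msg_rcv_time, id) < (?, ?)
        ORDER BY msg_rcv_time DESC, id DESC
        LIMIT ?
    """, (last_time, last_id, page_size)).fetchall()

page1 = next_page(10**9, 10**9)        # sentinel "infinity" for the first page
last_id, last_time = page1[-1]
page2 = next_page(last_time, last_id)  # continues exactly where page 1 ended
```

The trade-off: you can only step page by page (no random jump to page 200,000), but each page is a cheap indexed range scan, and newly arriving syslog-ng rows do not shift the pages you have already seen past.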
Thanks again for the great link.
André (who really starts to like Oracle and its community) -
ALV: how to save context space for large tables ?
Dear collegues,
We are displaying an ALV table that is quite large. Therefore, the corresponding DDIC structure and the WD context are large. This has an impact on performance and the load size of the program. Now we will enhance the ALV table again.
Example: for an icon and its explanatory tooltip displayed in the ALV, context fields like "SOURCE_FIELDNAME" are required for both the tooltip and the icon. They need a lot of characters for each tooltip and icon.
Question: do you have an idea, how to save context space for those ALV fields ?
Best regards,
Christian
>We are displaying an ALV table that is quite large.
Do you mean quite large as in a large number of columns or as in a large number of rows (or both)? I assume that the problem is probably more related to a large number of rows. For very large tables, you should consider using the plain Table UI element instead of the ALV. For very large tables you can even use a technique called context paging to keep only a subset of the data in the context memory at a time. Here is a recent blog that I created on the topic, with demonstrations of different techniques for table sharing, shared memory, and context paging when dealing with large tables in Web Dynpro ABAP:
Web Dynpro ABAP: How Fast Can You Consume 1 Million Rows? -
Please send a query for an update trigger for these tables:
1: ap_suppliers
2: ap_supplier_sites_all
3: ap_supplier_contacts.
Thanks,
Chaitanya.
Hi,
Actually, ID and DATA have different names in my API; I've just referred to them like that here.
I am using Oracle 10g.
Yes, I am going to get multiple values using this ref cursor. The same ref cursor is used in another API of my package.
In this API, the user enters APPLICATION_ID and DATA. The API should insert/update the VERSION_MASTER table with these inputs. APPLICATION_ID is a foreign key to the APP_MASTER table.
The requirement: whenever they enter APPLICATION_ID and DATA, if the APPLICATION_ID is already present in VERSION_MASTER, the data should be updated and we should get a status flag through an OUT variable (i.e., we show the status using the ref cursor only). If the APPLICATION_ID is not present in VERSION_MASTER, a new record should be inserted. A sequence generates the VERSION_ID for VERSION_MASTER.
The MERGE statement is working fine for both update and insert. The problem is that I am unable to see the success or failure through the ref cursor. I don't know exactly how I can use this.
If DATA is NULL, the operation should fail, i.e., I should get a failure status here. But I am not getting this.
Please remove the comments here and then check. If I use the NVL2 function, I am able to get the status flag, i.e., 'S' or 'F'.
OPEN resultset_o FOR
SELECT NVL2 (data, 'S', 'F')
FROM version_master
WHERE application_id = application_id_i;
1. How does the value of DATA being not null determine success of the API, and null determine failure?
2. If the above SELECT statement falls into a WHEN OTHERS exception, how will I know about the failure?
I have to achieve the above scenarios.
Please advise me. -
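The insert-or-update-plus-status requirement can be sketched outside Oracle as well. A minimal SQLite version (table simplified, names assumed): the upsert stands in for MERGE, and the 'S'/'F' flag mirrors the NVL2 idea of failing when DATA is NULL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE version_master (
    application_id INTEGER PRIMARY KEY,
    data TEXT)""")

# Hypothetical sketch of the API: upsert the row, then return a status
# flag -- 'F' when data is NULL (operation rejected), 'S' otherwise.
def upsert_version(application_id, data):
    if data is None:
        return "F"                      # NULL data: fail before touching the table
    cur.execute("""
        INSERT INTO version_master (application_id, data)
        VALUES (?, ?)
        ON CONFLICT(application_id) DO UPDATE SET data = excluded.data
    """, (application_id, data))
    return "S"

print(upsert_version(1, "v1"))   # S  (insert)
print(upsert_version(1, "v2"))   # S  (update of the existing row)
print(upsert_version(2, None))   # F  (rejected)
```

Checking the NULL condition before the DML, and returning the flag from the same routine, sidesteps the problem of reading the status back through a separate ref cursor query.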
Create user with read access for all tables SAP SID .*
Hello all,
could you please help me? I would like to grant the SELECT privilege on all tables SAP<SID>.* to a newly created user.
I have created a standard database user (not exclusive).
I'm able to grant SELECT on individual tables, but I would like to grant SELECT for this user on the whole SAP<SID>
schema in a simpler way.
But as far as I know, the schema owner's name must be different from the schema name.
Any idea please ?
Thank you.
Pavol
create user <user_name> identified by <password> <options>;
Grant SELECT on all tables:
CREATE OR REPLACE PROCEDURE grant_select AS
CURSOR ut_cur IS
SELECT table_name
FROM user_tables;
RetVal NUMBER;
sCursor INT;
sqlstr VARCHAR2(250);
BEGIN
FOR ut_rec IN ut_cur -- loop over the declared cursor; note: no semicolon here
LOOP
sqlstr := 'GRANT SELECT ON ' || ut_rec.table_name
|| ' TO <user_name>';
sCursor := dbms_sql.open_cursor;
dbms_sql.parse(sCursor, sqlstr, dbms_sql.native);
RetVal := dbms_sql.execute(sCursor);
dbms_sql.close_cursor(sCursor);
END LOOP;
END grant_select;
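The same build-dynamic-SQL-in-a-loop idea, sketched in Python against SQLite's catalog (the grantee name is made up, and since SQLite has no GRANT the statements are only collected, not executed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (x INTEGER)")
cur.execute("CREATE TABLE t2 (y INTEGER)")

# Build one GRANT statement per table found in the catalog, just as
# the PL/SQL procedure above loops over user_tables.
tables = [r[0] for r in cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
grants = [f"GRANT SELECT ON {t} TO some_user" for t in tables]
print(grants)
```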
Edited by: varun4dba on Jan 18, 2011 4:13 PM -
How to create a DSN-less connection to SQL Server for linked tables in Access
hey
I can't understand how to use that function.
Here is the information you need:
stLocalTableName: dbo_user_name
stRemoteTableName: user_name
stServer: sedo2015.mssql.somee.com
stDatabase: sedo2015
stUsername: sedo_menf_SQLLogin_1
stPassword: 123456789
How should the function call look?
Please write that function call for me.
'//Name : AttachDSNLessTable
'//Purpose : Create a linked table to SQL Server without using a DSN
'//Parameters
'// stLocalTableName: Name of the table that you are creating in the current database
'// stRemoteTableName: Name of the table that you are linking to on the SQL Server database
'// stServer: Name of the SQL Server that you are linking to
'// stDatabase: Name of the SQL Server database that you are linking to
'// stUsername: Name of the SQL Server user who can connect to SQL Server, leave blank to use a Trusted Connection
'// stPassword: SQL Server user password
Function AttachDSNLessTable(stLocalTableName As String, stRemoteTableName As String, stServer As String, stDatabase As String, Optional stUsername As String, Optional stPassword As String)
On Error GoTo AttachDSNLessTable_Err
Dim td As TableDef
Dim stConnect As String
For Each td In CurrentDb.TableDefs
If td.Name = stLocalTableName Then
CurrentDb.TableDefs.Delete stLocalTableName
End If
Next
If Len(stUsername) = 0 Then
'//Use trusted authentication if stUsername is not supplied.
stConnect = "ODBC;DRIVER=SQL Server;SERVER=" & stServer & ";DATABASE=" & stDatabase & ";Trusted_Connection=Yes"
Else
'//WARNING: This will save the username and the password with the linked table information.
stConnect = "ODBC;DRIVER=SQL Server;SERVER=" & stServer & ";DATABASE=" & stDatabase & ";UID=" & stUsername & ";PWD=" & stPassword
End If
Set td = CurrentDb.CreateTableDef(stLocalTableName, dbAttachSavePWD, stRemoteTableName, stConnect)
CurrentDb.TableDefs.Append td
AttachDSNLessTable = True
Exit Function
AttachDSNLessTable_Err:
AttachDSNLessTable = False
MsgBox "AttachDSNLessTable encountered an unexpected error: " & Err.Description
End Function
Thanks, many thanks to you.
Look, I added that code in a form.
It worked, but I can't add a record. Why?
Private Sub Form_Open(Cancel As Integer)
Call AttachDSNLessTable("dbo_user_name", "user_name", "sedo2015.mssql.somee.com", "sedo2015", "sedo_menf_SQLLogin_1", "123456789")
End Sub -
Sql statement for a table name with a space in between
Hi,
I just noticed that one of my tables for Access consists of two words. It is called "CURRENT CPL". How would I put this table name into an SQL statement? When I did what I normally do, it only reads CURRENT and thinks that's the table name.
Thanks
Feng
>I just noticed that one of my tables for Access
>consists of two words. It is called "CURRENT CPL".
>How would I put this table name into an SQL
>statement? When I did what I normally do, it only
>reads CURRENT and thinks that's the table name.
That is called a quoted identifier. The SQL (not Java) for this would look like this:
select "my field" from "CURRENT CPL"
The double quote is the correct character to use. Note that quoted identifiers are case sensitive, normal SQL is not. -
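The quoted-identifier rule is easy to try in any engine that follows the standard. A quick SQLite sketch with the same table name (the column and row are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Double quotes allow spaces in table and column names (standard SQL).
cur.execute('CREATE TABLE "CURRENT CPL" ("my field" TEXT)')
cur.execute('INSERT INTO "CURRENT CPL" ("my field") VALUES (?)', ("hello",))
rows = cur.execute('SELECT "my field" FROM "CURRENT CPL"').fetchall()
print(rows)   # [('hello',)]
```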
Generating SQL Script for Existing Tables and DBs
Hello,
is it possible to automatically generate a SQL script from an existing table or Oracle database?
I want to export an existing table from an Oracle DB (11g), if possible with the data.
Perhaps somebody could explain to me how to do this.
I am using SQL Developer 2.1 and the Enterprise Manager console.
I'm a rookie in using these tools.
Thank you for any information.
N. Wylutzki
If you want to export data, you should use the export utility. This is documented:
http://tinyurl.com/23b7on -
UCCX SQL access for third party application
We have a requirement from a customer to allow read access to the UCCX database. The account would be used to query the UCCX database, and it would serve as a service account for another system.
Please let me know how to create a DB account on UCCX.
UCCX version: 7.1
UCCX database: MSDE
Here you go:
Open the command line on your UCCX server and do the following:
List the available sql instances you can connect to (and their syntax)
osql -L
From that list you should see one that looks like:
SERVERNAME\CRSSQL
Connect to MSDE using your Windows local admin account (has admin rights in SQL)
osql -E -S SERVERNAME\CRSSQL
List the current logins in SQL
exec sp_helplogins
go
Create a user account and grant access to db_cra database:
use db_cra
go
exec sp_addlogin 'username','password'
go
exec sp_grantdbaccess 'username'
go
exec sp_addrolemember 'db_wallboard_read','username'
go
* Sorry, I don't have a 7x system anymore and cannot tell you what the role name is for admin rights. You could issue the following to get a list of other roles:
exec sp_helprole
go
Enabling Mixed Mode
Stop the Node Manager
Open Regedit and navigate to:
HKLM\Software\Microsoft\Microsoft SQL Server\CRSSQL\MSSQLServer\
Change LoginMode from 1 to 2
Start the Node Manager
*Remember to fix all HRC clients to support mixed mode (this is a manual task). See here for help on that: https://supportforums.cisco.com/message/3116762