Manual statistics in BW 7.0
Hi expert,
I'm working on a BW 7.0 system.
I'd like to know if it is possible to execute the ODS statistics manually, as an alternative to Tools --> Settings for BI Statistics.
I need to execute the statistics only for a single ODS, and only when I choose.
Thanks in advance.
Claudia
There are two elements to your question:
1) You need to implement the relevant standard statistics cubes and process chains to load the statistics data from the SAP internal tables into these data targets. Then you can activate/create reports on top of them to analyse the statistics.
2) The customizing you are talking about is needed to tell the system which InfoProviders should be flagged for BI statistics. If you select your DSO in the list, the system will record the statistics, and you'll then be able to extract the data into the statistics cubes and run reports.
So, to answer your question as a whole: simply flag your DSO in Tools > Settings for BI Statistics, and execute the statistics process chain whenever you want.
Similar Messages
-
Hi Experts,
If Auto Update Statistics is enabled in the database design, why do we need to update statistics in a daily/weekly maintenance plan?
Vinai Kumar Gandla
Hi Vikki,
Many systems rely solely on SQL Server to update statistics automatically (AUTO UPDATE STATISTICS enabled). However, based on my research, large tables, tables with uneven data distributions, tables with ever-increasing keys, and tables with significant changes in distribution often require manual statistics updates, for the following reasons.
1. If a table is very big, then waiting for 20% of rows to change before SQL Server automatically updates the statistics could mean that millions of rows are modified, added or removed before that happens. Depending on the workload patterns and the data, this could mean the optimizer chooses substandard execution plans long before SQL Server reaches the threshold at which it invalidates statistics for a table and starts to update them automatically. In such cases, you might consider updating statistics manually for those tables on a defined schedule (while leaving AUTO UPDATE STATISTICS enabled so that SQL Server continues to maintain statistics for other tables).
2. In cases where you know the data distribution in a column is "skewed", it may be necessary to update statistics manually with a full sample, or to create a set of filtered statistics, in order to generate query plans of good quality. Remember, however, that sampling with FULLSCAN can be costly for larger tables, and must be scheduled so as not to affect production performance.
3. It is quite common to see an ascending key, such as an IDENTITY column or a date/time data type, used as the leading column in an index. In such cases, the statistics for the key rarely match the actual data unless we update the statistics manually after every insert.
So, in the cases above, we could perform manual statistics updates by creating a maintenance plan that runs the UPDATE STATISTICS command, updating statistics on a regular schedule. For more information about the process, please refer to this article:
https://www.simple-talk.com/sql/performance/managing-sql-server-statistics/
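A maintenance-plan step could issue the command directly; a minimal T-SQL sketch (table and index names are hypothetical):

```sql
-- Manually refresh statistics on a large, frequently modified table,
-- while AUTO UPDATE STATISTICS stays on for everything else.
UPDATE STATISTICS dbo.BigSalesTable WITH FULLSCAN;  -- full sample; costly on big tables

-- Or refresh only one statistics object, with a reduced sample:
UPDATE STATISTICS dbo.BigSalesTable IX_BigSalesTable_OrderDate
    WITH SAMPLE 25 PERCENT;
```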
Regards,
Michelle Li -
MaxDB UpdAllStats - missing optimizer statistics for one name space
Hi experts,
every weekend the job UpdAllStats runs in the SAP systems hosted by us (on weekdays just PrepUpdStats+UpdStats). Now we're facing the issue that in one system there are no optimizer statistics for any of the tables in one particular namespace - let's call it /XYZ/TABLE1 etc.
We randomly checked tables in that namespace via DB20/DB50 and no optimizer statistics could be found. So we randomly checked other tables like MARA, VBAK etc. - all optimizer statistics were up to date for those tables.
We even started the statistics refresh via DB20 manually for one of the tables - still no optimizer statistics appearing for this table.
I mean, it's an update of all optimizer statistics - I rechecked note 927882 - FAQ: SAP MaxDB UPDATE STATISTICS and some others, but couldn't find any reason for these tables being excluded. I especially don't understand why the manual statistics refresh wouldn't work...
Does anybody have an idea why this could happen?
Thanks for your ideas in advance!
Regards
Marie
Hi again,
well, it seems to be more of a visualisation problem, I guess.
We figured out that in MaxDB Database Studio you can see the optimizer statistics, but not in the SAP system itself.
We'll keep you up to date.
Best
Marie
Edit: it was really just a visualisation problem... DB Studio shows the right values -
Automatic Optimizer Statistics Collection Enabled still tables not analyzed
Hello,
We have an Oracle 11g R1 database. Our automatic Optimizer Statistics Collection is enabled, yet I don't see the tables being analyzed. Any suggestions on whether I am missing a setting? All tables do get analyzed if I gather statistics manually.
SQL> select CLIENT_NAME ,STATUS from DBA_AUTOTASK_CLIENT;
CLIENT_NAME STATUS
auto optimizer stats collection ENABLED
auto space advisor ENABLED
sql tuning advisor ENABLED
Thanks,
SKuser599845 wrote:
still I don't see the tables being analyzed - post the SQL & results that lead you to this conclusion.
Realize that statistics can be "collected" without updating the LAST_ANALYZED column.
If the data within a table does not change, then nothing would be gained by "updating" statistics to the same values as before.
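To verify this, query the data dictionary rather than relying on LAST_ANALYZED alone; a sketch (the schema name is hypothetical):

```sql
-- Check when optimizer statistics were last gathered, and whether
-- Oracle considers them stale (requires table monitoring, the default).
SELECT table_name, last_analyzed, stale_stats
  FROM dba_tab_statistics
 WHERE owner = 'APPUSER'
 ORDER BY last_analyzed NULLS FIRST;
```

Tables that never change may legitimately keep an old LAST_ANALYZED with STALE_STATS = 'NO'.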
Conditions in Purchase Order - No scroll bar/Sales Tax value not in mmr
Hello,
I am stuck on a weird problem: I have created a condition calculation schema, assigned it to a schema group, and assigned that schema to the vendor. But when I open my conditions in the PO, no scroll bar appears. My calculation schema has 10+ conditions, but when I try to enter all the conditions in the PO at one time, the scroll bar does not come up, and without it I cannot view all my conditions in the PO. It does save the conditions, as seen in reporting, but cannot display them in the PO without the scroll bar. Any ideas?
The other requirement is that the sales tax value and % should not increase the material master record (MMR) value at the time of GR. However, when I do a GR, it adds the sales tax value to the MMR value; my requirement is that only the gross price be added to the MMR, not the sales tax value/%. I tried the statistical checks in the calculation schema, but to no avail; at GR the sales tax value is still being added to the MMR. Any ideas?
Lots of points awarded for answer/answers.
Any clue will help
Thank you/Afshad
Edited by: Afshad Irani on Jan 14, 2009 10:27 AM
Edited by: Afshad Irani on Jan 14, 2009 2:32 PM
Edited by: Afshad Irani on Jan 15, 2009 6:19 AM
Q: The sales tax value and % should not increase the MMR value at time of GR.
Ans:
Dear Afshad,
With reference to your question: you need to make a few settings in your condition type and pricing schema if you want the sales tax value & % not to be included in your material value.
1 - In SPRO, Check that in your condition type, Control data 2 tab, Accruals check box should not be selected.
2 - In your Calculation Schema, against your condition types for Sales Tax % and Sales Tax Value, select the check box for Manual & Statistics.
3 - Also, in your calculation schema, you should not select any account key in the AccKey (Account Key) and Accruals columns.
If any one of these settings is not defined, the valuation price of your material will be increased, because your condition type and calculation schema settings allow the tax to hit the value of your material directly.
Hope it works for you.
Regards
Jibran -
I have a problem when deleting rows from a table. When I created the database, the following delete statement worked fine, but now it takes a long time to complete: deleting 600 rows takes more than 10 minutes. Maybe the transaction log is full, but I don't know how to check the space assigned to it, or where it is.
The delete operation is the following one.
DELETE FROM event_TTT WHERE event_TTT.event_id IN
(SELECT event.event_id FROM event WHERE event_TTT.event_id=event.event_id and event.MM_id=1328)
Thank you for your information about statistics. After reading some papers, and after checking that automatic statistics are being gathered on my system, I think that my tables do not fulfill the requirements for gathering statistics manually ("statistics on tables which are significantly modified during the day"). Maybe I'm wrong; I'll continue reading about this.
Anyway, I've new data about my delete operation. I think it is very slow, but I'll give you the details:
DELETE FROM event_track e
WHERE EXISTS
  (SELECT NULL
     FROM event e1
    WHERE e1.event_id = e.event_id
      AND e1.pdep_mission_id = 1328
      AND ROWNUM = 1)
There is an index on e1.pdep_mission_id.
e1.event_id and e.event_id are primary keys.
Number of rows in e = 117000.
Number of rows in e1 = 120000.
Number of rows to delete = 3500.
This is the execution plan (with autotrace):
Plan hash value: 660928614
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | DELETE STATEMENT | | 1 | 254 | 217K (1)| 00:31:20 |
| 1 | DELETE | EVENT_TRACK | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS FULL | EVENT_TRACK | 119K| 28M| 667 (3)| 00:00:06 |
|* 4 | COUNT STOPKEY | | | | | |
|* 5 | TABLE ACCESS BY INDEX ROWID| EVENT | 1 | 9 | 2 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | PK_EVENT271 | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter( EXISTS (SELECT 0 FROM "PDEP"."EVENT" "E1" WHERE ROWNUM=1 AND
"E1"."EVENT_ID"=:B1 AND "E1"."PDEP_MISSION_ID"=1305))
4 - filter(ROWNUM=1)
5 - filter("E1"."PDEP_MISSION_ID"=1305)
6 - access("E1"."EVENT_ID"=:B1)
Statistics
11207 recursive calls
26273 db block gets
24517966 consistent gets
23721872 physical reads
4316076 redo size
932 bytes sent via SQL*Net to client
1105 bytes received via SQL*Net from client
6 SQL*Net roundtrips to/from client
12 sorts (memory)
0 sorts (disk)
3518 rows processed
This operation took 2 hours and 5 minutes to delete 3500 rows. Should I assume this is normal, or am I doing something wrong?
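For comparison: the correlated EXISTS above forces one probe of EVENT per EVENT_TRACK row (the FILTER in the plan). A set-based rewrite, sketched against the same tables, usually lets the optimizer pick a single hash semi-join instead:

```sql
-- Delete the matching EVENT_TRACK rows in one set operation; the
-- optimizer can join EVENT_TRACK to EVENT once, rather than probing
-- EVENT for every EVENT_TRACK row.
DELETE FROM event_track
 WHERE event_id IN (SELECT event_id
                      FROM event
                     WHERE pdep_mission_id = 1328);
```

The ROWNUM = 1 stopkey is unnecessary here, since event_id is the primary key of EVENT.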
Thank you in advance. -
Gathering statistics manually in 10g !!
Hi, all.
The database is 2 node RAC 10.2.0.2.0.
By default, the 'GATHER_STATS_JOB' job is enabled, and it gathers statistics for
all objects in the database at 22:00 on a daily basis
(objects that have had changes to more than 10% of their rows).
I found that the default job is causing library cache lock wait events by
invalidating objects too frequently in the shared pool
(especially in a RAC environment).
Therefore, I am considering taking statistics only for application schema
on a weekly basis by using "DBMS_STATS.GATHER_SCHEMA_STATS" procedure.
"EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('NMSUSER',DBMS_STATS.AUTO_SAMPLE_SIZE);"
gathers statistics for all objects of the NMSUSER schema.
I would like to gather statistics only for objects that have had more than "10% changes"
to their rows.
What is the option for that?
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#sthref8115
Thanks and Regards.
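For gathering only objects whose statistics Oracle considers stale (roughly, more than 10% of rows changed since the last gather), DBMS_STATS supports the GATHER STALE option; a sketch, assuming table monitoring is enabled (the 10g default):

```sql
-- Gather statistics only for NMSUSER objects flagged as stale,
-- instead of the whole schema every time.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'NMSUSER',
    options          => 'GATHER STALE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
```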
Message was edited by:
user507290
Be very careful taking statistics with the default setting of method_opt => 'FOR ALL COLUMNS SIZE AUTO' in 10g.
Check the number of histograms you have (dba_tab_columns and dba_tab_histograms).
The results may not be what you expect ...
And they may be playing a part in your latch issues.
Cheers
Richard Foote -
Last.fm - manually generate & upload statistics?
Hi!
There are quite a few tools in the repos that seem to offer support for uploading statistical data to a last.fm account while playing music located in the filesystem, but...
- Does anyone know, how that's transmitted? Like how the "file" looks or what protocol/methods are available?
- How to manually upload "artist - title - played"?
- How to have whole directories of mp3's uploaded as "played" (or text files / lists)?
- Where to start figuring all that out?
( To put that into a context: I'm trying to improve my recommended channel here. )
Thanks!
Thanks, so far so good...
write header:
printf '#AUDIOSCROBBLER/1.1\n#TZ/TC\n#CLIENT/<lastmsg0.1>\n' > .scrobbler.log
add lines for all mp3 files in directory / subdirectory:
while read file; do mp3info -p "%a\t%l\t%t\t%n\t%s\tL\t$TIMESTAMP\t$ID\t\n" "$file" >> .scrobbler.log; done < <(find ./ -name '*mp3')
... now:
1) Strange: the second line works from the command line, but not inside a bash script. I'm totally new to writing/using scripts, so maybe it's obvious to anyone else why it says: "genera.sh: line 4: syntax error near unexpected token `<'"?
2) I'm wondering, if small differences in play time (seconds) have a big impact on recommendations quality (maybe half of the files result in an own entry noone else shares as played/loved because of +- 3-5 seconds)? Does lastfm correct that or am I supposed to?
3) Also the timestamp... is this used for anything or just "eyecandy"? I'm thinking of maybe just setting it to generate timestamps with a delay of random 22-33 minutes between each "play" backwards from the actual timestamp or something... something?
4) Less of a problem I guess, but nonetheless: is there an easy AND basic approach to uploading these things? Like with some post/http/ftp/whatever tool? I didn't look into that yet, but "qtscrob" sounds like "GUI for KDE with 100 dependencies included", and "Perl script to submit these" sounds like "a quick look into that will make me accidentally learn the basics of Perl for the next 10 hours"...
5) Unimportant: This "MusicBrainz Track ID" actually IS unimportant, isn't it? Or is there an easy way to get it? (like a Program that can optionally put it into the id3 tag of all mp3's or a way to generate it from info that's already in there)
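On question 1: process substitution (`< <(...)`) is a bash-only feature, so the script fails when it is invoked with plain `sh` (or has a `#!/bin/sh` shebang); that produces exactly the reported syntax error. A sketch of a version that works as a script (the mp3info invocation is kept as in the post above, and assumes mp3info is installed):

```shell
#!/bin/bash
# Run this with:  bash genera.sh   (or chmod +x genera.sh && ./genera.sh)
# Running it with `sh genera.sh` fails: process substitution needs bash.
log=.scrobbler.log
# printf interprets \n portably; plain `echo` may print the \n literally.
printf '#AUDIOSCROBBLER/1.1\n#TZ/TC\n#CLIENT/<lastmsg0.1>\n' > "$log"
while IFS= read -r file; do
    # One tab-separated log line per track, as in the original post
    mp3info -p "%a\t%l\t%t\t%n\t%s\tL\t$TIMESTAMP\t$ID\t\n" "$file" >> "$log"
done < <(find . -name '*.mp3')
```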
Last edited by whoops (2009-06-04 16:28:15) -
Performance Problems - Index and Statistics
Dear Gurus,
I am having problems with indexes and statistics on cubes. The system reports that my indexes are too old, which in fact they are not - they were created just a month ago. We check the indexes daily, and the Manage tab returns RED.
Please help.
Dear Mr Syed,
The solution steps I mentioned in my previous reply already explain the so-called re-org of tables; however, to clarify the issue further:
Occasionally, the Oracle <b>Cost-Based Optimizer</b> may calculate a lower estimated cost for a Full Table Scan than for an Index Scan, although the actual runtime of an access via the index would be considerably lower than the runtime of the Full Table Scan. Some steps are imperative in order to improve performance in problem areas such as long running times for change runs and aggregate activation and fill-up.
Performance problems based on a wrong optimizer decision indicate that something serious is wrong at the database level, and we need to re-organize the degenerated indexes in order to improve overall performance and avoid daily manual (RSRV+RSNAORA) activities on almost identical indexes.
For <b>Re-organizing</b> degenerated indexes 3 options are available-
<b>1) DROP INDEX ..., and CREATE INDEX </b>
<b>2)ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)</b>
<b>3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]</b>
Each option has its pros & cons; option <b>2</b> has the most advantages.
<b>Advantages - option 2</b>
1) Fast storage in a different tablespace is possible
2)Creates a new index tree
3)Gives the option to change storage parameters without deleting the index
4)As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
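As a concrete form of option 2 (the index name below is purely illustrative, standing in for an actual cube index):

```sql
-- Rebuild a degenerated index online, in parallel, without redo logging.
-- ONLINE (Oracle 8i and later) avoids holding a long lock on the table.
ALTER INDEX "/BIC/FMYCUBE~010" REBUILD ONLINE PARALLEL 4 NOLOGGING;
```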
I would still leave it to the database tech team, as the best judges, to take a call on these.
This procedure can be standardized for all affected cubes and their indexes as well.
However,I leave the thoughts with you.
Hope it Helps
Chetan
@CP.. -
Error in creating oracle 10g EE db manually in Win XP SP2 OS
First of all, sorry if my English is not good...
My operating system is Windows XP SP2.
I created the script files below for creating an Oracle database manually:
dbcamin.bat
mkdir C:\oracle\product\10.2.0\admin\dbcamin\adump
mkdir C:\oracle\product\10.2.0\admin\dbcamin\bdump
mkdir C:\oracle\product\10.2.0\admin\dbcamin\cdump
mkdir C:\oracle\product\10.2.0\admin\dbcamin\udump
mkdir C:\oracle\product\10.2.0\admin\dbcamin\dpdump
mkdir C:\oracle\product\10.2.0\flash_recovery_area\dbcamin
mkdir C:\oracle\product\10.2.0\admin\dbcamin\pfile
mkdir C:\oracle\product\10.2.0\cfgtoollogs\emca\dbcamin
mkdir C:\oracle\product\10.2.0\flash_recovery_area
mkdir C:\oracle\product\10.2.0\oradata\dbcamin
set ORACLE_SID=dbcamin
C:\oracle\product\10.2.0\db_1\bin\oradim.exe -new -sid DBCAMIN -startmode manual -spfile
C:\oracle\product\10.2.0\db_1\bin\oradim.exe -edit -sid DBCAMIN -startmode auto -srvcstart system
C:\oracle\product\10.2.0\db_1\bin\sqlplus /nolog @C:\oracle\product\10.2.0\admin\dbcamin\scripts\dbcamin.sql
CreateDB.sql
connect SYS/dbcamin as SYSDBA
set echo on
spool C:\oracle\product\10.2.0\admin\dbcamin\scripts\CreateDB.log
startup nomount pfile="C:\oracle\product\10.2.0\db_1\database\initdbcamin.ora";
CREATE DATABASE dbcamin
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
DATAFILE 'C:\oracle\product\10.2.0\oradata\dbcamin\system01.dbf' SIZE 300M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE 'C:\oracle\product\10.2.0\oradata\dbcamin\sysaux01.dbf' SIZE 120M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE 'C:\oracle\product\10.2.0\oradata\dbcamin\temp01.dbf' SIZE 20M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE 'C:\oracle\product\10.2.0\oradata\dbcamin\undotbs01.dbf' SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('C:\oracle\product\10.2.0\oradata\dbcamin\redo01.log') SIZE 51200K,
GROUP 2 ('C:\oracle\product\10.2.0\oradata\dbcamin\redo02.log') SIZE 51200K,
GROUP 3 ('C:\oracle\product\10.2.0\oradata\dbcamin\redo03.log') SIZE 51200K
USER SYS IDENTIFIED BY dbcamin USER SYSTEM IDENTIFIED BY dbcamin;
spool off
CreateDBCatalog.sql
connect SYS/dbcamin as SYSDBA
set echo on
spool C:\oracle\product\10.2.0\admin\dbcamin\scripts\CreateDBCatalog.log
@C:\oracle\product\10.2.0\db_1\rdbms\admin\catalog.sql;
@C:\oracle\product\10.2.0\db_1\rdbms\admin\catblock.sql;
@C:\oracle\product\10.2.0\db_1\rdbms\admin\catproc.sql;
@C:\oracle\product\10.2.0\db_1\rdbms\admin\catoctk.sql;
@C:\oracle\product\10.2.0\db_1\rdbms\admin\owminst.plb;
connect SYSTEM/dbcamin
@C:\oracle\product\10.2.0\db_1\sqlplus\admin\pupbld.sql;
connect SYSTEM/dbcamin
set echo on
spool C:\oracle\product\10.2.0\admin\dbcamin\scripts\sqlPlusHelp.log
@C:\oracle\product\10.2.0\db_1\sqlplus\admin\help\hlpbld.sql helpus.sql;
spool off
spool off
CreateDBFiles.sql
connect SYS/dbcamin as SYSDBA
set echo on
spool C:\oracle\product\10.2.0\admin\dbcamin\scripts\CreateDBFiles.log
CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE 'C:\oracle\product\10.2.0\oradata\dbcamin\users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
ALTER DATABASE DEFAULT TABLESPACE "USERS";
spool off
dbcamin.sql
set verify off
PROMPT specify a password for sys as parameter 1;
DEFINE sysPassword = dbcamin
PROMPT specify a password for system as parameter 2;
DEFINE systemPassword = dbcamin
host C:\oracle\product\10.2.0\db_1\bin\orapwd.exe file=C:\oracle\product\10.2.0\db_1\database\PWDdbcamin.ora password=dbcamin force=y
@C:\oracle\product\10.2.0\admin\dbcamin\scripts\CreateDB.sql
@C:\oracle\product\10.2.0\admin\dbcamin\scripts\CreateDBFiles.sql
@C:\oracle\product\10.2.0\admin\dbcamin\scripts\CreateDBCatalog.sql
@C:\oracle\product\10.2.0\admin\dbcamin\scripts\lockAccount.sql
@C:\oracle\product\10.2.0\admin\dbcamin\scripts\postDBCreation.sql
lockAccount.sql
set echo on
spool C:\oracle\product\10.2.0\admin\dbcamin\scripts\lockAccount.log
BEGIN
FOR item IN ( SELECT USERNAME FROM DBA_USERS WHERE USERNAME NOT IN ('SYS','SYSTEM') )
LOOP
dbms_output.put_line('Locking and Expiring: ' || item.USERNAME);
execute immediate 'alter user ' || item.USERNAME || ' password expire account lock' ;
END LOOP;
END;
/
spool off
postDBCreation.sql
connect SYS/dbcamin as SYSDBA
set echo on
spool C:\oracle\product\10.2.0\admin\dbcamin\scripts\postDBCreation.log
connect SYS/dbcamin as SYSDBA
set echo on
create spfile='C:\oracle\product\10.2.0\db_1\dbs\spfiledbcamin.ora' FROM pfile='C:\oracle\product\10.2.0\db_1\database\initdbcamin.ora';
shutdown immediate;
connect SYS/dbcamin as SYSDBA
startup ;
select 'utl_recomp_begin: ' || to_char(sysdate, 'HH:MI:SS') from dual;
execute utl_recomp.recomp_serial();
select 'utl_recomp_end: ' || to_char(sysdate, 'HH:MI:SS') from dual;
connect SYS/dbcamin as SYSDBA
spool C:\oracle\product\10.2.0\admin\dbcamin\scripts\postDBCreation.log
initdbcamin.ora
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
# NLS
nls_language="ENGLISH"
nls_territory="AMERICA"
# Miscellaneous
compatible=10.2.0.1.0
# Cursors and Library Cache
cursor_sharing=similar
open_cursors=300
# Archive
LOG_ARCHIVE_DEST_1='LOCATION=C:\oracle\product\10.2.0\flash_recovery_area\dbcamin\ARCHIVELOG'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_START=TRUE
# Diagnostics and Statistics
BACKGROUND_DUMP_DEST=C:\oracle\product\10.2.0\admin\dbcamin\bdump
CORE_DUMP_DEST=C:\oracle\product\10.2.0\admin\dbcamin\cdump
TIMED_STATISTICS=TRUE
USER_DUMP_DEST=C:\oracle\product\10.2.0\admin\dbcamin\udump
# Cache and I/O
db_block_size=4096
db_cache_size=25165824
db_file_multiblock_read_count=16
# System Managed Undo and Rollback Segments
undo_management=auto
undo_retention=120
undo_tablespace=UNDOTBS1
# Security and Auditing
audit_file_dest=C:\oracle\product\10.2.0\admin\dbcamin\adump
audit_trail=db
remote_login_passwordfile=EXCLUSIVE
# Database Identification
db_domain=""
db_name=dbcamin
instance_name=dbcamin
# File Configuration
control_files=("C:\oracle\product\10.2.0\oradata\dbcamin\control01.ctl", "C:\oracle\product\10.2.0\oradata\dbcamin\control02.ctl")
db_recovery_file_dest=C:\oracle\product\10.2.0\flash_recovery_area
db_recovery_file_dest_size=2147483648
# Processes and Sessions
processes=60
sessions=71
# Distributed, Replication and Snapshot
DB_DOMAIN=us.oracle.com
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
# Redo Log and Recovery
FAST_START_MTTR_TARGET=300
But I got some errors when I tried to start up or alter database open the database...
This is the message:
Oracle instance terminated. Disconnection forced.
When I checked the process in the command prompt, I saw some of the errors listed below...
SQL> create or replace view v_$_lock as select * from v$_lock;
create or replace view v_$_lock as select * from v$_lock;
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 84 bytes of shared memory ("shared pool","select inst_id,addr,ksqlkadr...","Typecheck","opndef:qkexrAddMatching1")
SQL> grant select on v_$_lock to select_catalog_role;
grant select on v_$_lock to select_catalog_role;
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> grant select on v_$flashback_database_logfile to select_catalog_role;
grant select on v_$flashback_database_logfile to select_catalog_role
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 212 bytes of shared memory ("shared pool","select i.obj#,i.ts#,i.file#,...","sql area","KGHSC_ALLOC_BUF:buf")
SQL> create or replace public synonym gv$dlm_all_locks;
create or replace public synonym gv$dlm_all_locks
ERROR at line 1:
ORA-04031: unable to allocate 3904 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","kglsim object batch")
SQL> grant select on gv$dlm_all_locks to select_catalog_role;
grant select on gv$dlm_all_locks to select_catalog_role;
ERROR at line 1:
ORA-04031: unable to allocate 3904 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","kglsim object batch")
CREATE OR REPLACE PACKAGE dbms_registry_server IS
ERROR at line 1:
ORA-06554: package DBMS_STANDARD must be created before using PL/SQL
CREATE OR REPLACE PACKAGE BODY dbms_registry
ERROR at line 1:
ORA-06554: package DBMS_STANDARD must be created before using PL/SQL
SQL> BEGIN
2 dbms_registry.loading('CATALOG', 'Oracle Database Catalog Views',
3 'dbms_registry_sys.validate_catalog');
4 END;
5 /
BEGIN
*ERROR at line 1:
ORA-06553: PLS-213: package STANDARD not accessible
At last... what's wrong??? What should I do???
TQ before and after.
GBU
Try adding
SGA_TARGET=300m
to your initdbcamin.ora file.
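The ORA-04031 errors during catalog creation mean the shared pool ran out of memory; SGA_TARGET lets 10g size the SGA components automatically. A sketch of the relevant initdbcamin.ora lines (values are illustrative; note that the posted pfile's db_cache_size=25165824 is only 24 MB, far too small to run catproc.sql):

```
# Let Oracle manage SGA components automatically (10g ASMM); any fixed
# db_cache_size / shared_pool_size then acts only as a minimum.
sga_target=300m
# Alternative without ASMM: size the shared pool explicitly, e.g.
# shared_pool_size=150m
```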
Bartek -
Could anyone provide the complete manual of SAP Query Creation
Hi,
Now I am learning to create SAP Query.
I viewed many questions and answers about SAP Query.
Some mentioned this,and some did that...
Could anyone provide a complete manual of SAP Query Creation for developers.
I can create SAP Queries step by step with it.
http://help.sap.com/saphelp_46c/helpdata/en/35/26b413afab52b9e10000009b38f974/content.htm
http://www.thespot4sap.com/Articles/SAP_ABAP_Queries_Introduction.asp
Step-by-step guide for creating ABAP query
http://www.sappoint.com/abap/ab4query.pdf
ABAP query is mostly used by functional consultants.
SAP Query
Purpose
The SAP Query application is used to create lists not already contained in the SAP standard system. It has been designed for users with little or no knowledge of the SAP programming language ABAP. SAP Query offers users a broad range of ways to define reporting programs and create different types of reports such as basic lists, statistics, and ranked lists.
Features
SAP Query's range of functions corresponds to the classical reporting functions available in the system. Requirements in this area such as list, statistic, or ranked list creation can be met using queries.
All the data required by users for their lists can be selected from any SAP table created by the customer.
To define a report, you first have to enter individual texts, such as titles, and select the fields and options which determine the report layout. Then you can edit list display in WYSIWYG mode whenever you want using drag and drop and the other toolbox functions available.
ABAP Query, as far as I believe, is the use of SELECT statements in ABAP programming. This needs knowledge of Open SQL commands like Select, Update, Modify etc., and has to be done by someone with at least a little ABAP experience.
To sum up, SAP queries are ready-made programs provided by SAP, which the user can use with slight modifications such as the selection texts, the tables from which the data is to be retrieved, and the format in which the data is to be displayed. ABAP queries become imperative when no suitable SAP query exists, and also when a lot of customizing would be involved in using an SAP Query directly.
Use either SQ02 and SQ01,
or the SQVI transaction code.
for more information please go thru this url:
http://www.thespot4sap.com/Articles/SAP_ABAP_Queries_Create_The_Query.asp
http://goldenink.com/abap/sap_query.html
Please check this PDF document (starting at page 352); perhaps it will help you.
http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCSRVQUE/BCSRVQUE.pdf
The link below will also be helpful for you:
Tutorial on SQVI
Once you create a query, the system generates a report starting with AQZZ/SAPQUERY/ABAGENCY2=======; assign this report to a transaction code.
useful
http://www.erpgenie.com/abap/code/abap47.htm
regards,
Prabhu
reward if it is helpful -
Manual Standby Database (10.2.0.2.0) on Windows 2003 R2
Hi,
We are setting up a standby database on a remote site for a simple oracle DB. As we already have a standby/master for another Oracle DB (from SAP) we want to stay as close as possible as what already exist.
For the SAP Oracle standby, we copy all archives manually to the standby and apply them with brarchive. All is working fine.
For the new standby, we cannot use brarchive as there is no SAP installation on the standby, but we keep the "manual" copy of the archives from the master to the standby (using robocopy). This means all archives are on the standby (K:\oracle\oradata\archive).
The creation of the standby DB seems to be OK, as I can open it, but I can't manage to apply the redo logs.
I'm quite new to Oracle, so it's maybe a very basic issue, but I've already spent 3 days on it...
To start the DB, we launch a bat script:
sqlplus /nolog @c:\backup\standby.sql
pause
the standby.sql:
connect /@TECDB01 as sysdba
startup nomount;
alter database mount standby database;
exit;
Then i connect to sqlplus and enter:
alter database recover managed standby database;
In another sqlplus session :
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
which gives me:
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
MR(fg) WAIT_FOR_GAP 1 45400 0 0
RFS IDLE 0 0 0 0
The sequence 45400 seems to be ok regarding the time of the backup restored on the standby.
The archive is indeed on the server, but it won't be applied.
The Alert_TECDB01.log :
Fri Oct 29 11:03:43 2010
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
IMODE=BR
ILAT =121
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.2.0.
System parameters with non-default values:
processes = 999
sga_target = 7214202880
control_files = I:\ORACLE\ORADATA\CNTRL\STANDBY.CTL, J:\ORACLE\ORADATA\CNTRL\STANDBY.CTL, K:\ORACLE\ORADATA\CNTRL\STANDBY.CTL
db_block_size = 8192
compatible = 10.2.0.2.0
log_archive_dest_1 = LOCATION=K:\oracle\oradata\archive
log_archive_dest_2 = SERVICE=TECDB01
log_archive_dest_state_1 = enable
log_archive_dest_state_2 = enable
standby_archive_dest = K:\oracle\oradata\archive
archive_lag_target = 1800
db_file_multiblock_read_count= 16
undo_management = AUTO
undo_tablespace = RBS
undo_retention = 10800
recyclebin = OFF
remote_login_passwordfile= EXCLUSIVE
db_domain = WORLD
dispatchers = (ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.92))(DISPATCHERS=4)(CONNECTIONS=1000)
shared_servers = 100
local_listener = (ADDRESS=(PROTOCOL=TCP)(HOST=xxx.xxx.xxx.92)(PORT=1521))
session_cached_cursors = 300
utl_file_dir = \\srvuniway.vrithoff.srwt.tec-wl.be\hotspots
job_queue_processes = 10
audit_file_dest = I:\ORACLE\ADMIN\TECDB01\ADUMP
background_dump_dest = I:\ORACLE\ADMIN\TECDB01\BDUMP
user_dump_dest = I:\ORACLE\ADMIN\TECDB01\UDUMP
core_dump_dest = I:\ORACLE\ADMIN\TECDB01\CDUMP
db_name = TECDB01
open_cursors = 3000
pga_aggregate_target = 1086324736
PMON started with pid=2, OS id=4012
PSP0 started with pid=3, OS id=3856
MMAN started with pid=4, OS id=3580
DBW0 started with pid=5, OS id=1084
LGWR started with pid=6, OS id=576
CKPT started with pid=7, OS id=3516
SMON started with pid=8, OS id=508
RECO started with pid=9, OS id=3068
CJQ0 started with pid=10, OS id=2448
MMON started with pid=11, OS id=2840
MMNL started with pid=12, OS id=3024
Fri Oct 29 11:03:44 2010
starting up 4 dispatcher(s) for network address '(ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.92))'...
starting up 100 shared server(s) ...
Fri Oct 29 11:03:45 2010
alter database mount standby database
Fri Oct 29 11:03:51 2010
Setting recovery target incarnation to 2
ARCH: STARTING ARCH PROCESSES
ARC0 started with pid=118, OS id=3584
Fri Oct 29 11:03:51 2010
ARC0: Archival started
ARC1 started with pid=119, OS id=3688
Fri Oct 29 11:03:51 2010
ARC1: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
Fri Oct 29 11:03:51 2010
ARC0: Becoming the 'no FAL' ARCH
Fri Oct 29 11:03:51 2010
Successful mount of redo thread 1, with mount id 3987142355
Fri Oct 29 11:03:51 2010
ARC0: Becoming the 'no SRL' ARCH
Fri Oct 29 11:03:51 2010
ARC1: Becoming the heartbeat ARCH
Fri Oct 29 11:03:51 2010
Physical Standby Database mounted.
Completed: alter database mount standby database
Fri Oct 29 11:04:06 2010
alter database recover managed standby database
Fri Oct 29 11:04:06 2010
Managed Standby Recovery not using Real Time Apply
parallel recovery started with 7 processes
Media Recovery Waiting for thread 1 sequence 45400
Fetching gap sequence in thread 1, gap sequence 45400-45499
FAL[client]: Error fetching gap sequence, no FAL server specified
Fri Oct 29 11:04:37 2010
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 45400-45499
DBID 3776455083 branch 670241032
FAL[client]: All defined FAL servers have been attempted.
Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
parameter is defined to a value that is sufficiently large
enough to maintain adequate log switch information to resolve
archivelog gaps.
Fri Oct 29 11:04:51 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 3452
RFS[1]: Identified database type as 'physical standby'
Fri Oct 29 11:04:51 2010
RFS LogMiner: Client disabled from further notification
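The gap the log complains about can be confirmed from the standby side; a quick check using the standard gap view (available in 10g):

```sql
-- Run on the mounted standby: lists any missing archived-log ranges
SELECT thread#, low_sequence#, high_sequence#
  FROM v$archive_gap;
```

If this returns the 45400-45499 range, those archives have to be restored to the standby_archive_dest manually (or fetched via FAL once a FAL server is configured).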
The tecdb01_arc1_3688.trc :
Dump file i:\oracle\admin\tecdb01\bdump\tecdb01_arc1_3688.trc
Fri Oct 29 11:03:51 2010
ORACLE V10.2.0.2.0 - 64bit Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Windows NT Version V5.2 Service Pack 2
CPU : 8 - type 8664, 2 Physical Cores
Process Affinity : 0x0000000000000000
Memory (Avail/Total): Ph:7467M/9215M, PhPgF:2454M/10796M
Instance name: tecdb01
Redo thread mounted by this instance: 1
Oracle process number: 119
Windows thread id: 3688, image: ORACLE.EXE (ARC1)
*** SERVICE NAME:() 2010-10-29 11:03:51.177
*** SESSION ID:(1088.1) 2010-10-29 11:03:51.177
kcrrwkx: nothing to do (start)
*** 2010-10-29 11:04:51.129
Redo shipping client performing standby login
*** 2010-10-29 11:04:51.176 64529 kcrr.c
Logged on to standby successfully
Client logon and security negotiation successful!
kcrrwkx: nothing to do (end)
*** 2010-10-29 11:05:51.285
kcrrwkx: nothing to do (end)
*** 2010-10-29 11:06:51.300
kcrrwkx: nothing to do (end)
The initTECDB01.ora :
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
# Archive
archive_lag_target=1800
log_archive_dest_1='LOCATION=K:\oracle\oradata\archive'
# Cache and I/O
db_block_size=8192
db_file_multiblock_read_count=16
# Cursors and Library Cache
open_cursors=3000
session_cached_cursors=300
# Database Identification
db_domain=WORLD
db_name=TECDB01
# Diagnostics and Statistics
background_dump_dest=I:\oracle\admin\TECDB01\bdump
core_dump_dest=I:\oracle\admin\TECDB01\cdump
user_dump_dest=I:\oracle\admin\TECDB01\udump
# File Configuration
control_files=("I:\oracle\oradata\cntrl\standby.ctl", "J:\oracle\oradata\cntrl\standby.ctl", "K:\oracle\oradata\cntrl\standby.ctl")
# Job Queues
job_queue_processes=10
# Miscellaneous
compatible=10.2.0.2.0
recyclebin=OFF
# Processes and Sessions
processes=999
# SGA Memory
sga_target=6880M
# Pools
#java_pool_size=150M
# Security and Auditing
audit_file_dest=I:\oracle\admin\TECDB01\adump
remote_login_passwordfile=EXCLUSIVE
# Shared Server
shared_servers=100
dispatchers="(ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.92))(DISPATCHERS=4)(CONNECTIONS=1000)"
#dispatchers="(PROTOCOL=TCP) (SERVICE=TECDB01XDB)"
# Sort, Hash Joins, Bitmap Indexes
pga_aggregate_target=1036M
# System Managed Undo and Rollback Segments
undo_management=AUTO
undo_retention=10800
undo_tablespace=RBS
local_listener="(ADDRESS=(PROTOCOL=TCP)(HOST=xxx.xxx.xxx.92)(PORT=1521))"
# NIDA - 28.10.2010 - redo apply
log_archive_dest_state_1=enable
log_archive_dest_2 = 'SERVICE=TECDB01'
log_archive_dest_state_2=enable
#standby_file_management=auto
standby_archive_dest=K:\oracle\oradata\archive
And the TNSNAMES.ora :
# tnsnames.ora Network Configuration File: C:\oracle\102\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.
#this is the standby
TECDB01.VRITHOFF.SRWT.TEC-WL.BE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xxx.xxx.xxx.92)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = TECDB01)
    )
  )
# This file is written by Oracle Services For MSCS
# on Sat Nov 08 10:44:27 2008
#this is the master
PRIMARY.VRITHOFF.SRWT.TEC-WL.BE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xxx.xxx.xxx.246)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = TECDB01)
    )
  )
EXTPROC_CONNECTION_DATA.VRITHOFF.SRWT.TEC-WL.BE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = TECDB01))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = TECDB01)
    )
  )
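One thing stands out when comparing the alert log with this configuration: the log reports "no FAL server specified", and indeed neither fal_server nor fal_client appears in initTECDB01.ora. A sketch of how they are typically set on a physical standby (the TNS aliases are taken from the tnsnames.ora above and are assumptions; verify they resolve from the standby host first):

```sql
-- On the standby instance (both parameters are dynamic):
ALTER SYSTEM SET fal_server = 'PRIMARY.VRITHOFF.SRWT.TEC-WL.BE' SCOPE=MEMORY;
ALTER SYSTEM SET fal_client = 'TECDB01.VRITHOFF.SRWT.TEC-WL.BE' SCOPE=MEMORY;
```

Since the instance starts from a pfile, the same two parameters should also be added to initTECDB01.ora so the settings survive a restart.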
I hope this gives you all the information you need to point me in the right direction.
Regards,
Nicolas
Hi,
Automatic recovery is working fine, but I still have problems with managed recovery.
Here is the alert log (sequence 46626 was already there at 11:30):
Mon Nov 15 11:31:13 2010
alter database recover managed standby database using current logfile
Managed Standby Recovery starting Real Time Apply
parallel recovery started with 7 processes
Media Recovery Waiting for thread 1 sequence 46626
Mon Nov 15 16:36:01 2010
alter database recover managed standby database cancel
Mon Nov 15 16:36:05 2010
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Mon Nov 15 16:36:06 2010
Media Recovery user canceled with status 16037
ORA-16043 signalled during: alter database recover managed standby database using current logfile...
Mon Nov 15 16:36:07 2010
Completed: alter database recover managed standby database cancel
Mon Nov 15 16:36:37 2010
ALTER DATABASE RECOVER automatic standby database until time'2010-11-15:15:50:00'
Mon Nov 15 16:36:37 2010
Media Recovery Start
Managed Standby Recovery not using Real Time Apply
parallel recovery started with 7 processes
Mon Nov 15 16:36:39 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46626_0670241032.001
Mon Nov 15 16:36:45 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46627_0670241032.001
Mon Nov 15 16:37:11 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46628_0670241032.001
Mon Nov 15 16:37:30 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46629_0670241032.001
Mon Nov 15 16:37:48 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46630_0670241032.001
Mon Nov 15 16:37:59 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46631_0670241032.001
Mon Nov 15 16:38:15 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46632_0670241032.001
Mon Nov 15 16:38:28 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46633_0670241032.001
Mon Nov 15 16:38:47 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46634_0670241032.001
Mon Nov 15 16:39:34 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46635_0670241032.001
Mon Nov 15 16:40:43 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46636_0670241032.001
Mon Nov 15 16:42:03 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46637_0670241032.001
Mon Nov 15 16:43:18 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46638_0670241032.001
Mon Nov 15 16:44:38 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46639_0670241032.001
Mon Nov 15 16:45:45 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46640_0670241032.001
Mon Nov 15 16:46:37 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46641_0670241032.001
Mon Nov 15 16:47:48 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46642_0670241032.001
Mon Nov 15 16:49:07 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46643_0670241032.001
Mon Nov 15 16:50:04 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46644_0670241032.001
Mon Nov 15 16:51:13 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46645_0670241032.001
Mon Nov 15 16:52:16 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46646_0670241032.001
Mon Nov 15 16:53:07 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46647_0670241032.001
Mon Nov 15 16:54:28 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46648_0670241032.001
Mon Nov 15 16:55:47 2010
Media Recovery Log K:\ORACLE\ORADATA\ARCHIVE\ARC46649_0670241032.001
Mon Nov 15 16:56:35 2010
Incomplete Recovery applied until change 4037420604
Completed: ALTER DATABASE RECOVER automatic standby database until time'2010-11-15:15:50:00'
I don't understand why the system waits for a sequence that is already available...
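One detail from the first log may explain this: ARC0 became the "no SRL" ARCH, which suggests no standby redo logs exist, and real-time apply ("using current logfile") requires them. A quick check, plus a hypothetical fix (the file path and size below are assumptions; the size must match the online redo logs exactly):

```sql
-- Any rows here? If not, no standby redo logs are configured:
SELECT group#, thread#, bytes, status FROM v$standby_log;

-- Hypothetical: add a standby redo log group (repeat for n+1 groups
-- per thread, sized exactly like the online redo logs):
ALTER DATABASE ADD STANDBY LOGFILE ('K:\ORACLE\ORADATA\SRL\srl01.log') SIZE 50M;
```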
Regards,
Nico -
Question on 11g Manual Upgrade (Step #33)
In Note 837570.1 - Complete Checklist for Manual Upgrades to 11gR2, under Step #33:
Step 33
Upgrade Statistics Tables Created by the DBMS_STATS Package
If you created statistics tables using the DBMS_STATS.CREATE_STAT_TABLE procedure, then upgrade these tables by executing the following procedure:
EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('SYS','dictstattab');
In the example, 'SYS' is the owner of the statistics table and 'dictstattab' is the name of the statistics table. Execute this procedure for each statistics table.
How do we know which tables were previously created with DBMS_STATS.CREATE_STAT_TABLE prior to the upgrade?
Thank you very much.
Hello, and welcome to the forums.
The statistics tables created by DBMS_STATS are used for exporting/importing statistics; they are not used by the optimizer. So if you are not planning to import statistics, you don't need to find (and upgrade) these tables.
Anyway, stats tables have a well-known basic structure and can be found with the following query:
SELECT OWNER, TABLE_NAME
  FROM DBA_TAB_COLUMNS
 WHERE COLUMN_NAME = 'STATID'
   AND DATA_TYPE = 'VARCHAR2'
   AND DATA_LENGTH = 30;
Check whether the listed tables also have these columns:
Name Null? Type
STATID VARCHAR2(90)
TYPE CHAR(3)
VERSION NUMBER
FLAGS NUMBER
C1 VARCHAR2(90)
C2 VARCHAR2(90)
C3 VARCHAR2(90)
C4 VARCHAR2(90)
C5 VARCHAR2(90)
N1 NUMBER
N2 NUMBER
N3 NUMBER
N4 NUMBER
N5 NUMBER
N6 NUMBER
N7 NUMBER
N8 NUMBER
N9 NUMBER
N10 NUMBER
N11 NUMBER
N12 NUMBER
D1 DATE
R1 RAW(32)
R2 RAW(32)
CH1 VARCHAR2(3000)Regards
Gokhan Atil -
Doubt regarding automatic statistics collection in Oracle 10g
I am using Oracle 10g in Linux
Does statistics collection for tables throughout the database happen automatically, or should we manually analyze the tables using the ANALYZE command or the DBMS_STATS package?
AWR collects statistics (snapshots) every 1 hr, but does it mean it collects only session- and database-related statistics, and not table-related statistics?
I am using Oracle 10g in Linux
Which Oracle version, and which OS name and version?
AWR collects statistics (snapshots) every 1 hr, but
Those are performance-related statistics. Read about data gathering and AWR.
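For the table-statistics side of the question: in 10g, the automatic GATHER_STATS_JOB collects optimizer statistics during the nightly maintenance window, so routine manual ANALYZE is usually unnecessary. If you do need to gather statistics for a specific table on demand, the standard call looks like this (schema and table names below are placeholders):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',                      -- example schema
    tabname          => 'EMP',                        -- example table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle choose the sample
    cascade          => TRUE);                        -- include index statistics
END;
/
```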
Note that AWR is an extra licensable feature through the Management Packs. -
Updating the statistics information on database
Hello,
It takes a long time to get reports from the MKPF table on the database. I did some research through OSS and DB02 > Detailed Analysis, which suggests updating the table statistics manually. I did that and it was OK. It also advised that I could run the update periodically as a SQL job. When I tried it with SQL Query, the following message occurred:
The query I used : UPDATE STATISTICS MKPF [_WA_Sys_BUDAT_...]
Reply : Server: Msg 2706, Level 11, State 6, Line 1 Table 'MKPF' does not exist.
But when I tried it manually using the DB02 transaction, there was no problem. Is this related to database authentication? The SQL Enterprise user has a full admin account. There is no problem apart from this.
Note: this happens for all tables, not only MKPF.
Best Regards
ismail Karayaka
Hello,
In newer releases of R/3, the tables reside in a special schema named <sid>, so you have to switch into that schema before you can run these statements successfully. The schema name is the lowercase of the SAP system ID; if your system is PRD, the schema name will be prd:
setuser 'prd'
update statistics ......
With
setuser
(without an argument) you switch back to the original schema (dbo).
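Putting the pieces together, a scheduled job step for the MKPF case from the question might look like this (the prd schema name follows the lowercase-SID rule above; the elided statistics name from the original query is omitted, so the command updates all statistics on the table):

```sql
setuser 'prd'            -- switch into the SAP schema (lowercase system ID)
UPDATE STATISTICS MKPF   -- now resolves to prd.MKPF instead of dbo.MKPF
setuser                  -- no argument: back to the original schema (dbo)
```

Alternatively, you can skip the switch and qualify the table with the schema directly: UPDATE STATISTICS prd.MKPF.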
Regards
Clas