Setting for the PSA Data
Hi,
I would like to know where the setting is made that controls how long data resides in the PSA (the defined period of time).
If we delete a request from the InfoProvider, it can be reconstructed provided the request is still present in the PSA.
I think by default the request is retained for 30 days.
Is there a setting by which the number of days can be increased or decreased?
If so, where is that setting made?
Thanks,
Rajesh Janardanan
Create a weekly process chain for deletion of PSA data.
Now, in the process chain variant for PSA deletion, enter these settings:
1. For delta loads, keep the number of days = 14 (requests of only the last two weeks will be available for reload/reconstruction).
2. For full loads, keep the number of days = 2 (requests of the last two days will still reside in the PSA when the chain is run).
To create the PSA deletion variant you have to mention the PSA table name, which you can get by entering the name of the concerned DataSource.
regards,
rocks
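The "number of days" setting in the deletion variant amounts to a simple age filter on requests; here is a minimal sketch in Python (hypothetical names, not an SAP API) of the retention rule being described:

```python
from datetime import date, timedelta

def requests_to_delete(requests, older_than_days):
    """Return the PSA requests whose load date falls outside the retention
    window - mirroring the 'number of days' setting in the PSA-deletion
    variant of a process chain. 'requests' is a list of dicts with a
    'load_date' key (hypothetical structure for illustration)."""
    cutoff = date.today() - timedelta(days=older_than_days)
    return [r for r in requests if r["load_date"] < cutoff]
```

With `older_than_days=14`, only requests loaded in the last two weeks survive and remain available for reconstruction.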
Similar Messages
-
What is the iPad2 VPN setting equivalent to "transmit ALL data stream" in a PC VPN?
What is the iPad2 VPN setting equivalent to the "transmit ALL data stream" setting in a PC VPN?
Thanks.
Or is there no equivalent?
- namlow
Great question, Aerogoob.
The XY graph can be bound to a 1d array of "points", where each point is a cluster of two numerics (X and Y). To create a shared variable of this type, you can set the data type to "From Custom Control..." in the shared variable properties dialog. Of course, first you'll have to build the custom control of the correct type: array of cluster of two numerics.
If any of that doesn't make sense, please post back and we can walk you through it in more detail.
Also, just for completeness, the chart indicator can be bound to a scalar numeric or to an array of numerics. The graph indicator can only be bound to an array of numerics. -
SAP-BW Oracle tablespace for ODS/PSA data
Hi Experts,
Could you please give some knowledge on the below " Releasing table space " issue:
We are on BW 3.5 with Oracle 9.2.0.8.0 as the database system.
We've deleted all ODS/PSA data (/BIC/B0000590000 - Transfer Structure Application 8ZSDDLITM; it's an ODS), and even after refreshing the DB02 statistics it's still the same: space is not getting released.
The partitioning is set up with the field PARTNO. But this field has only one value:
SQL> select partno, count(*) from sapr3."/BIC/B0000590000"
2 group by partno
3 having count(*) > 1000000;
0 67616209
SQL> select count(*) from sapr3."/BIC/B0000590000";
67616209
All the records have a value 0 for the partno field, so each record is placed in the same partition. In this way partitioning is useless.
So, can the space be regained by reorganization? Will this reorganization affect anything?
I have also found some SAP Notes on this: 565176, 733371.
Please suggest ...
Regards,
KANTH
Dear Oliver,
Thanks for response and sorry for the late reply.
I have checked our ODS/PSA DDIC table (/BIC/B0000590000). It has a huge number of records (more than 67 million), although I have already deleted the PSA data from BW administration.
I ran the RSRV consistency check, but found no inconsistency between the PSA partitions and the SAP administration information.
I think I will approach Basis on this, but what exactly should I request from them? I am afraid any reorganization from their side will result in problems.
Thanks a lot..
Regards,
KANTH -
DTP error: Lock NOT set for: Loading master data attributes
Hi,
I have a custom DataSource from ECC which loads into 0CUST_SALES. I'm using a DTP and transformation which have worked for loading data for this InfoObject in the past. The InfoPackage loads with a green status, but when I try to load the data, the DTP fails with the error message "Updating attributes for InfoObject 0CUST_SALES Processing Terminated", and the job log says: "Lock NOT set for: Loading master data attributes". I've tried reactivating everything, but it didn't help. Does anyone know what this error means? We're on SP7. Thanks in advance!
Hello Catherine
I have had this problem in the past (3.0B) --> the reason was that our system was too slow and could not crunch the data fast enough, therefore packets were locking each other.
The fix: load the data into the PSA only, and then send it in the background from the PSA to the InfoObject. By doing this, only a background process will run, therefore locks cannot happen.
Fix #2: buy a faster server (by faster, I mean more CPU power).
Now, maybe you have another issue with NW2004s, this was only my 2 cents quick thought
Good luck!
Ioan -
Using a Variable in SSIS - Error - "Command text was not set for the command object.".
Hi All,
I am using an OLE DB Source in my data flow component and want to select the SQL query from a master table. I have created a package-level String variable, v_Archivequery, to store the query:
<Variable Name="V_Archivequery" DataType="String">
SELECT a.*, b.BBxKey as Archive_BBxKey, b.RowChecksum as Archive_RowChecksum
FROM dbo.ImportBBxFbcci a LEFT OUTER JOIN Archive.dbo.ArchiveBBxFbcci b
ON (SUBSTRING(a.Col001,1,4) + SUBSTRING(a.Col002,1,10)) = b.BBxKey
Where (b.LatestVersion = 1 OR b.LatestVersion IS NULL)
</Variable>
I am assigning this query to the v_Archivequery variable, "SELECT a.*, b.BBxKey as Archive_BBxKey, b.RowChecksum as Archive_RowChecksum
FROM dbo.ImportBBxFbcci a LEFT OUTER JOIN Archive.dbo.ArchiveBBxFbcci b
ON (SUBSTRING(a.Col001,1,4) + SUBSTRING(a.Col002,1,10)) = b.BBxKey
Where (b.LatestVersion = 1 OR b.LatestVersion IS NULL)"
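The LEFT OUTER JOIN above effectively classifies each import row against the archive via the composite BBxKey and the RowChecksum; the same idea in a small Python sketch (helper names are hypothetical, for illustration only):

```python
def bbx_key(col001, col002):
    # Mirrors SUBSTRING(a.Col001,1,4) + SUBSTRING(a.Col002,1,10) in the query
    return col001[:4] + col002[:10]

def classify_row(row, archive):
    """Classify an import row the way the LEFT OUTER JOIN plus checksum
    comparison does: no archive match means a new row, a differing checksum
    means a changed row. 'archive' maps BBxKey -> RowChecksum of the latest
    archived version (LatestVersion = 1)."""
    key = bbx_key(row["Col001"], row["Col002"])
    if key not in archive:
        return "new"
    return "unchanged" if archive[key] == row["RowChecksum"] else "changed"
```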
Now in the OLE DB Source I have selected "SQL command from variable", and it picks up the variable v_Archivequery.
But when I build the package and run it, I get the error below:
Error at Data Flow Task [OLE DB Source [1]]: An OLE DB error has occurred. Error code: 0x80040E0C.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E0C Description: "Command text was not set for the command object.".
Can someone guide me on where I am going wrong?
Thanks in advance.
Thanks & regards, Vipin Jha MCP
What happens if you hit the Preview button in the OLE DB Source Editor? You can also test the same query by selecting the SQL Command option.
Could you try setting DelayValidation = True on the package and re-running?
If the query is set in the variable's expression (not in its value), then set EvaluateAsExpression = True.
-Vaibhav Chaudhari -
This is a variation on the issue mentioned in this post.
We are using SP 2010 Content Hub to manage our content types. On the content hub we've created a couple of external lists, and then created some site columns as lookups against these lists. We then added the columns to one of our content types and set it to publish.
After the publishing job executed, I tried adding the content type (which now appears on the subscriber sites) to one of the document libraries on one of the subscriber sites. When I did that it threw the following error:
Microsoft.SharePoint.WebControls.BusinessDataListConfigurationException: Id field is not set on the external data field
at Microsoft.SharePoint.SPBusinessDataField.CreateIdField(SPAddFieldOptions op)
at Microsoft.SharePoint.SPBusinessDataField.OnAdded(SPAddFieldOptions op)
at Microsoft.SharePoint.SPFieldCollection.AddFieldAsXmlInternal(String schemaXml, Boolean addToDefaultView, SPAddFieldOptions op, Boolean isMigration, Boolean fResetCTCol)
at Microsoft.SharePoint.SPContentType.ProvisionFieldOnList(SPField field, Boolean bRecurAllowed)
at Microsoft.SharePoint.SPContentType.ProvisionFieldsOnList()
at Microsoft.SharePoint.SPContentType.DeriveContentType(SPContentTypeCollection cts, SPContentType& ctNew)
at Microsoft.SharePoint.SPContentTypeCollection.AddContentTypeToList(SPContentType contentType)
at Microsoft.SharePoint.SPContentTypeCollection.AddContentType(SPContentType contentType, Boolean updateResourceFileProperty, Boolean checkName, Boolean setNextChildByte)
at Microsoft.SharePoint.SPContentTypeCollection.Add(SPContentType contentType)
at Microsoft.SharePoint.ApplicationPages.AddContentTypeToListPage.Update(Object o, EventArgs e)
at System.Web.UI.WebControls.Button.OnClick(EventArgs e)
at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument)
at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument)
at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) b55297ed-717f-466d-8bdc-297b20344d3f
I checked the external content type configuration and it did specify an "id column". Anyone know if what I am attempting to do is possible and if so, what special configurations are required?
Thanks
The issue is not the external content type or the external list but the lookup column.
It's not possible to publish a lookup column via the Content Type Hub.
If you need to do this, an alternate way is to use a Managed Metadata column instead; otherwise you will have to implement this via a feature.
Varun Malhotra
=================
If my post solves your problem, could you mark the post as Answered, or Vote As Helpful if it has been helpful for you. -
Multiple MM schedule lines for the same date
Hello All Gurus,
I am facing a problem where the MM schedule lines are divided into various quantities for the same date. For example, on 02/11/2010 the total quantity is 120; it appears in the following form in ME38:
02/11/2010 - 40
02/11/2010 - 40
02/11/2010 - 40
Although this is not an error, my client is not happy with it. Can someone tell me why I am getting this behavior? Is it because of the rounding value that we maintain in the MRP 1 view of the material master record? Or is there any other setting that I am missing?
Please let me know. I would be very much thankful to you.
Thanks and Regards,
Umakanth
Hi,
Before changing the setup, note that sometimes it is beneficial to split one demand into multiple delivery schedules, especially if the demand is bigger than one equipment type (container/truck, etc.). If you want to manage inbound transportation for such delivery schedules, it is much simpler to do so when each delivery schedule is not bigger than one equipment.
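That kind of capacity-based split can be sketched as follows (a hypothetical illustration only; the real split in ME38 comes from customizing such as rounding values or delivery schedule settings, not from this code):

```python
def split_demand(quantity, equipment_capacity):
    """Split one demand into delivery schedules no bigger than one piece of
    equipment (container/truck). E.g. a demand of 120 with a capacity of 40
    yields the three schedule lines of 40 seen for the same date."""
    full, rest = divmod(quantity, equipment_capacity)
    return [equipment_capacity] * full + ([rest] if rest else [])
```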
Regards,
Dominik Modrzejewski -
[HELP] Error: "JDBC theme based FOI support is disabled for the given data"
Hi all,
I have already set up MapViewer version 10.1.3.3 on the BISE1 10g OC4J server. I am currently using JDK 1.6. I created an mvdemo/mvdemo user for the demo data.
The MapViewer demos without a CHART run fine, but some maps that have a CHART, like "Dynamic theme, BI data and Pie chart style" and "Dynamic theme and dynamic Bar chart style", give this error:
----------ERROR------------
Cannot process the FOI response from MapViewer server. Server message is: <?xml version=\"1.0\" encoding=\"UTF-8\" ?> <oms_error> MAPVIEWER-06009: Error processing an FOI request.\nRoot cause:FOIServlet:MAPVIEWER-06016: JDBC theme based FOI support is disabled for the given data source. [mvdemo]</oms_error>.
----------END ERROR------
I searched many threads on this forum; some point to setting allow_jdbc_theme_based_foi="true" in mapViewerConfig.xml and restarting MapViewer.
<map_data_source name="mvdemo"
jdbc_host="localhost"
jdbc_sid="bise1db"
jdbc_port="1521"
jdbc_user="mvdemo"
jdbc_password="mvdemo"
jdbc_mode="thin"
number_of_mappers="3"
allow_jdbc_theme_based_foi="true"
/>
Error image: http://i264.photobucket.com/albums/ii176/necrombi/misc/jdbcerror.png
I have this configuration, but no luck. Could anyone show me how to resolve this problem?
Rgds,
Dung Nguyen
Oops, I managed to get this problem resolved!
My problem may have come from the fact that I used both the scott and mvdemo schemas for keeping the demo data (imported demo data).
The steps I took to resolve it were:
1) Undeploy MapViewer from Application Server Control (http://localhost:9704/em in my case)
2) Drop user mvdemo
3) Download mapviewer kit from Oracle Fusion Middleware MapViewer & Map Builder Version 11.1.1.2
4) Deploy MapViewer again
5) Recreate mvdemo and import demo data
6) Run mcsdefinition.sql, mvdemo.sql with mvdemo user (granted dba already)
7) Edit mapViewerConfig.xml
<map_data_source name="mvdemo"
jdbc_host="dungnguyen-pc"
jdbc_sid="bise1db"
jdbc_port="1521"
jdbc_user="mvdemo"
jdbc_password="!mvdemo"
jdbc_mode="thin"
number_of_mappers="3"
allow_jdbc_theme_based_foi="true"
/>
Save & Restart MapViewer
And now, all demos run fine. I hope this is helpful for whoever meets a problem like mine. -
My iPod touch 4 may have crashed while trying to update to the new iOS 4.3.4. I have not been successful in attempting to restore back to the original settings for the past 10 hours. What do I do next?
Fix iOS Apps Stuck on “Waiting…” During Download & Install
Fix iPhone Apps Stuck "Waiting" During Installation -
JCo connection for the application data
Dear All,
Now, we configure BP for MSS. When we set JCo connection for the application data, from this link http://help.sap.com/saphelp_erp2005/helpdata/en/29/d7844205625551e10000000a1550b0/frameset.htm we selected the method "Ticket".
When we run the page in "Team" area, we found some errors as follow:
- You do not have the authorization to start service sap.com/pcui_gp~isr/IsrStatusoverview
- You do not have the authorization to start service sap.com/mssreccand/CandidateStatusOverview
- Pages about "Employee Information" display Blank iViews.
But if we set the JCo connection for application data using the "User/Password" method (with a user ID that has the "SAP_ALL" profile in the back-end system), the applications work fine.
Kindly advise us which method should be used to set the JCo connection (Ticket or User/Password).
If the method should be "User/Password", please advise on the authorizations and user type for the user ID that should be used for the JCo connection.
Thank you very much,
Anek.
Hi,
Go through the contents in,
http://help.sap.com/saphelp_ep60sp2/helpdata/en/8f/67d27676ace84080964d4c4223bb3c/content.htm
Hope this helps !
Regards
Srinivasan T -
BPS You have no authorization for the requested data
We are implementing Hierarchy node based security for our BPS.
When the user tries to display the planning layout, they get the error message "You have no authorization for the requested data "
I have given authorization to the relevant InfoCubes, checked all the authorization-relevant InfoObjects, and added these InfoObjects to the custom authorization created in RSECADMIN.
I also added the InfoObjects 0TCAACTVT, 0TCAIPROV, and 0TCAVALID to the custom authorization.
In PFCG, this authorization has been added to S_RS_AUTH. I have also given the activity values 02, 03, 16 and a * for planning areas, functions, packages, groups, levels, folders, ... to the objects R_AREA, R_BUNDLE, R_METHOD, R_PACKAGE, R_PARAM, R_PLEVEL, R_PM_NAME, and R_PROFILE.
But still we get the same error.
Has anyone encountered this problem? Can you please provide me some clues to resolve this issue?
Thank you very much, Grevaz, but that template does not help.
I ran both an ST01 trace and a BI RSECADMIN trace. The RSECADMIN trace shows the authorization failure below:
Subselection (Technical SUBNR) 1
Supplementation of Selection for Aggregated Characteristics
No Check for Aggregation Authorization Required
Following Set Is Checked Comparison with Following Authorized Set Result Remaining Quantity
Characteristic Contents
0FUNDS_CTR
0TCAACTVT
SQL Format:
FUNDS_CTR BETWEEN '4012001000'
AND '4012001999'
AND TCAACTVT = '03'
Characteristic Contents
0FUNDS_CTR Node 1 I EQ #
I EQ :
0TCAACTVT I EQ 02
I EQ 03
Partially Authorized (Average) Characteristic Contents
0FUNDS_CTR
0TCAACTVT
SQL Format:
FUNDS_CTR > '4012001000'
AND FUNDS_CTR <= '4012001999'
AND NOT FUNDS_CTR IN ('4012001001','4012001002','4012001003','4012001004','4012001005','4012001006','4012001007','4012001008','4012001009','4012001010')
AND TCAACTVT = '03'
Value selection partially authorized. Check of remainder at end
Following Set Is Checked Comparison with Following Authorized Set Result Remaining Quantity
Characteristic Contents
0FUNDS_CTR
0TCAACTVT
SQL Format:
FUNDS_CTR > '4012001000'
AND FUNDS_CTR <= '4012001999'
AND NOT FUNDS_CTR IN ('4012001001','4012001002','4012001003','4012001004','4012001005','4012001006','4012001007','4012001008','4012001009','4012001010')
AND TCAACTVT = '03'
Characteristic Contents
0FUNDS_CTR Node 1 I EQ #
I EQ :
0TCAACTVT I EQ 02
I EQ 03
Not Authorized
All Authorizations Tested
Message EYE007: You do not have sufficient authorization
No Sufficient Authorization for This Subselection (SUBNR)
Following CHANMIDs Are Affected:
206 ( 0FUNDS_CTR )
Authorization Check Complete
We have created custom authorization and trying to restrict based on hierarchy node.
One point I observed is that when I give access to all nodes with a wildcard * in the custom authorization, the error disappears and the layout is visible. But our point here is to restrict based on the nodes, and we cannot give display access to all nodes. -
Quota is displayed twice in 2006 for the same date
When I enter a leave quota through PA30, PA61, or PT_QTA00 for a particular employee, it displays the quota twice.
Is there any way we can stop this?
Is there any way we can put a cap on double entries for the same date?
The quota is deducting normally, but it is displayed twice.
Please help, experts.
Thank you so much.
I changed it in the infotype:
SPRO>> Time Management>> Time Data Recording and Administration>> Specify System Reaction to Overlapping Time Infotypes
Infotype 2006
Time Cstr. Class 01
Reaction indicator: select according to the requirement.
Right now the reaction indicator is set to W.
Make the reaction indicator E and save the entry.
It worked for now.
One thing I am not sure about is how the V_T554Y table works.
Please let me know.
Reaction indicator
This indicator determines how the system reacts when you enter new infotype records which overlap with existing ones.
The specifications are as follows:
A - The old record is delimited; all collisions are displayed.
E - The system does not allow you to create the new record, and displays all collisions.
W - You can create the new record, but the old record remains unchanged. All collisions are displayed.
N - As for W, but collisions are not displayed.
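The reaction indicators amount to a small collision-handling rule; a minimal Python sketch (hypothetical names; indicator A, which delimits the old record, is omitted for brevity):

```python
def overlaps(a, b):
    # Closed-interval overlap test on (start, end) date pairs
    return a[0] <= b[1] and b[0] <= a[1]

def try_insert(existing, new, reaction):
    """Sketch of the V_T554Y reaction indicators for overlapping time
    infotype records. Returns (records, reported_collisions):
    'E' rejects the new record on collision and reports all collisions,
    'W' keeps both (old record unchanged) and reports collisions,
    'N' behaves like W but reports nothing."""
    collisions = [old for old in existing if overlaps(old, new)]
    if reaction == "E" and collisions:
        return existing, collisions          # new record rejected
    reported = [] if reaction == "N" else collisions
    return existing + [new], reported
```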
Edited by: jkhose on Jan 25, 2012 1:23 AM -
Looking for a manual for the Historical Data Viewer Lookout 6.1
Is there a manual on the Historical Data Viewer (Measurement and Automation Explorer)?
I have never logged any analog data, but want to log (to historical data) the battery voltage at a remote site during a power outage.
I have the object established and have a display showing the real-time battery voltage. I have selected logging and have set the resolution and the deviation.
I am stuck when it comes to the Historical Data Viewer. I believe that I need to set up a trace to view the data, but I cannot find a manual for the Historical Data Viewer (Measurement and Automation Explorer) so that I can read up on it.
Does anyone have a link to a manual?
Thanks, Alan
You have to configure the Scale parameter of the Pot object in Lookout. The Historical Data Viewer just reads data from the database and shows it; it can't scale the data.
But I can't find a way to add a reference line to the view. Maybe this can be a new feature.
Ryan Shi
National Instruments -
Could not start cache agent for the requested data store
Hi,
This is my first attempt at TimesTen. I am running TimesTen on the same Linux host (RHEL 5.2) that is running Oracle 11g R2. The version of TimesTen is:
TimesTen Release 11.2.1.4.0
Trying to create a simple cache.
The DSN entry for ttdemo1 in .odbc.ini is as follows:
[ttdemo1]
Driver=/home/oracle/TimesTen/timesten/lib/libtten.so
DataStore=/work/oracle/TimesTen_store/ttdemo1
PermSize=128
TempSize=128
UID=hr
OracleId=MYDB
DatabaseCharacterSet=WE8MSWIN1252
ConnectionCharacterSet=WE8MSWIN1252
Using ttisql I connect
Command> connect "dsn=ttdemo1;pwd=oracle;oraclepwd=oracle";
Connection successful: DSN=ttdemo1;UID=hr;DataStore=/work/oracle/TimesTen_store/ttdemo1;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=WE8MSWIN1252;DRIVER=/home/oracle/TimesTen/timesten/lib/libtten.so;OracleId=MYDB;PermSize=128;TempSize=128;TypeMode=0;OracleNetServiceName=MYDB;
(Default setting AutoCommit=1)
Command> call ttcacheuidpwdset('ttsys','oracle');
Command> call ttcachestart;
10024: Could not start cache agent for the requested data store. Could not initialize Oracle Environment Handle.
The command failed.
The following is shown in the tterrors.log:
15:41:21.82 Err : ORA: 9143: ora-9143--1252549744-xxagent03356: Datastore: TTDEMO1 OCIEnvCreate failed. Return code -1
15:41:21.82 Err : : 7140: oraagent says it has failed to start: Could not initialize Oracle Environment Handle.
15:41:22.36 Err : : 7140: TT14004: TimesTen daemon creation failed: Could not spawn oraagent for '/work/oracle/TimesTen_store/ttdemo1': Could not initialize Oracle Environment Handl
What are the reasons that the daemon cannot spawn another agent? FYI the environment variables are set as:
ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
ANT_HOME=/home/oracle/TimesTen/ttdemo1/3rdparty/ant
CLASSPATH=/home/oracle/TimesTen/ttdemo1/lib/ttjdbc5.jar:/home/oracle/TimesTen/ttdemo1/lib/orai18n.jar:/home/oracle/TimesTen/ttdemo1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/ttdemo1/3rdparty/jms1.1/lib/jms.jar:.
oracle@rhes5:/home/oracle/TimesTen/ttdemo1/info% echo $LD_LIBRARY_PATH
/home/oracle/TimesTen/ttdemo1/lib:/home/oracle/TimesTen/ttdemo1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib
Cheers
Sure, thanks.
Here you go:
Daemon environment:
_=/bin/csh
DISABLE_HUGETLBFS=1
SYSTEM=TEST
INIT_FILE=/u01/app/oracle/product/10.1.0/db_1/dbs/init+ASM.ora
GEN_APPSDIR=/home/oracle/dba/bin
LD_LIBRARY_PATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib
HOME=/home/oracle
SPFILE_DIR=/u01/app/oracle/backup/+ASM/initfile_dir
TNS_ADMIN=/u01/app/oracle/product/11.2.0/db_1/network/admin
INITFILE_DIR=/u01/app/oracle/backup/+ASM/initfile_dir
HTMLDIR=/home/oracle/+ASM/dba/html
HOSTNAME=rhes5
TEMP=/oradata1/tmp
PWD=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/bin
HISTSIZE=1000
PATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/bin:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/oci:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc/xla:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/jdbc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/odbc_drivermgr:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/proc:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/ttclasses:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/quickstart/sample_code/ttclasses/xla:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1/sdk:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/ant/bin:/usr/kerberos/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/bin/X11:/usr/X11R6/bin:/usr/platform/SUNW,Ultra-2/sbin:/u01/app/oracle/product/11.2.0/db_1:/u01/app/oracle/product/11.2.0/db_1/bin:.
GEN_ADMINDIR=/home/oracle/dba/admin
CONTROLFILE_DIR=/u01/app/oracle/backup/+ASM/controlfile_dir
ETCDIR=/home/oracle/+ASM/dba/etc
GEN_ENVDIR=/home/oracle/dba/env
DATAFILE_DIR=/u01/app/oracle/backup/+ASM/datafile_dir
BACKUPDIR=/u01/app/oracle/backup/+ASM
RESTORE_ARCFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_arcfiles.txt
TMPDIR=/oradata1/tmp
CVS_RSH=ssh
ARCLOG_DIR=/u01/app/oracle/backup/+ASM/arclog_dir
REDOLOG_DIR=/u01/app/oracle/backup/+ASM/redolog_dir
INPUTRC=/etc/inputrc
LOGDIR=/home/oracle/+ASM/dba/log
DATAFILE_LIST=/u01/app/oracle/backup/+ASM/datafile_dir/datafile.list
LS_COLORS=no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:
PS1=rhes5:($ORACLE_SID)$
G_BROKEN_FILENAMES=1
SHELL=/bin/ksh
PASSFILE=/home/oracle/dba/env/.ora_accounts
LOGNAME=oracle
ORA_NLS10=/u01/app/oracle/product/11.2.0/db_1/nls/data
ORACLE_SID=mydb
APPSDIR=/home/oracle/+ASM/dba/bin
ORACLE_OWNER=oracle
RESTOREFILE_DIR=/u01/app/oracle/backup/+ASM/restorefile_dir
SQLPATH=/home/oracle/dba/bin
TRANDUMPDIR=/tran
RESTORE_SPFILE=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_spfile.txt
RESTORE_DATAFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_datafiles.txt
ENV=/home/oracle/.kshrc
SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
SSH_CONNECTION=50.140.197.215 62742 50.140.197.216 22
LESSOPEN=|/usr/bin/lesspipe.sh %s
TERM=xterm
GEN_ETCDIR=/home/oracle/dba/etc
SP_FILE=/u01/app/oracle/product/10.1.0/db_1/dbs/spfile+ASM.ora
ORACLE_BASE=/u01/app/oracle
ASTFEATURES=UNIVERSE - ucb
ADMINDIR=/home/oracle/+ASM/dba/admin
SSH_CLIENT=50.140.197.215 62742 22
TZ=GB
SUPPORT=oracle@linux
ARCHIVE_LOG_LIST=/u01/app/oracle/backup/+ASM/arclog_dir/archive_log.list
USER=oracle
RESTORE_TEMPFILES=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_tempfiles.txt
MAIL=/var/spool/mail/oracle
EXCLUDE=/home/oracle/+ASM/dba/bin/exclude.lst
GEN_LOGDIR=/home/oracle/dba/log
SSH_TTY=/dev/pts/2
RESTORE_INITFILE=/u01/app/oracle/backup/+ASM/restorefile_dir/restore_initfile.txt
HOSTTYPE=i386-linux
VENDOR=intel
OSTYPE=linux
MACHTYPE=i386
SHLVL=1
GROUP=dba
HOST=rhes5
REMOTEHOST=vista
EDITOR=vi
ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
ODBCINI=/home/oracle/.odbc.ini
TT=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/
SHLIB_PATH=/u01/app/oracle/product/11.2.0/db_1/lib:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1//lib
ANT_HOME=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/ant
CLASSPATH=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/ttjdbc5.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/orai18n.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/3rdparty/jms1.1/lib/jms.jar:.
TT_AWT_PLSQL=0
NLS_LANG=AMERICAN_AMERICA
NLS_COMP=ANSI
NLS_SORT=BINARY
NLS_LENGTH_SEMANTICS=BYTE
NLS_NCHAR_CONV_EXCP=FALSE
NLS_CALENDAR=GREGORIAN
NLS_TIME_FORMAT=hh24:mi:ss
NLS_DATE_FORMAT=syyyy-mm-dd hh24:mi:ss
NLS_TIMESTAMP_FORMAT=syyyy-mm-dd hh24:mi:ss.ff9
ORACLE_HOME=
DaemonCWD = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info
DaemonLog = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/tterrors.log
DaemonOptionsFile = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/ttendaemon.options
Platform = Linux/x86/32bit
SupportLog = /home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/info/ttmesg.log
Uptime = 136177 seconds
Backcompat = no
Group = 'dba'
Daemon pid 8111 port 53384 instance ttimdb1
End of report -
CBO generating different plans for the same data in similar Environments
Hi All
I have been trying to compare an SQL statement from two different but similar environments built to the same hardware specs. The issue I am facing is that in environment A the query executes in less than 2 minutes, with a plan mostly showing full table scans and hash joins, whereas in environment B (the problematic one) it times out after 2 hours with an "unable to extend tablespace" error. The statistics are up to date in both environments for both tables and indexes. System parameters are exactly similar (Oracle defaults, except for db_file_multiblock_read_count).
Both environments have the same DB parameters for db_file_multiblock_read_count (16), the optimizer (refer below), hash_area_size (131072), pga_aggregate_target (1G), db_block_size (8192), etc. SREADTIM, MREADTIM, CPUSPEED, and MBRC are all null in aux_stats in both environments because workload statistics were never collected, I believe.
Attached are details about the SQL with table stats, the SQL itself, and index stats. My main concern is the CBO generating different plans for similar data and statistics on the same hardware and software specs. Is there anything else I should consider? I generally see environment B being very slow, and its plans always tend toward nested loops and index scans, whereas what we really need is a sensible FTS in many cases. One very surprising thing is that METER_CONFIG_HEADER below, which has just 80 blocks of data, is being asked for an index scan.
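For reference, the trade-off being discussed can be approximated with the classic simplified CBO cost estimates (a sketch of the well-known textbook formula, not Oracle's exact internal arithmetic). With 80 blocks and db_file_multiblock_read_count = 16, a full scan of METER_CONFIG_HEADER costs only about 5 multiblock reads, which is why an index plan there looks surprising unless the estimated selectivity is tiny:

```python
def index_access_cost(blevel, leaf_blocks, clustering_factor,
                      effective_selectivity, index_cost_adj=100):
    """Simplified index-scan cost: branch levels + leaf blocks visited +
    table blocks visited (via the clustering factor), scaled by
    optimizer_index_cost_adj. A rough sketch for intuition only."""
    cost = (blevel
            + leaf_blocks * effective_selectivity
            + clustering_factor * effective_selectivity)
    return cost * index_cost_adj / 100.0

def full_scan_cost(blocks, mbrc=16):
    # Rough I/O cost of a full table scan with db_file_multiblock_read_count
    return blocks / float(mbrc)
```

Plugging in the environment B stats for SYS_C00116570 (blevel 1, 12 leaf blocks, clustering factor 3734) shows the index only wins over the 5-read full scan when very few rows are expected back.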
show parameter optimizer
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
**Environment**
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Note: There are slight differences in the number of records in the attached sheet. However, I want to mention that I have tested with the exact same data and was getting similar results, but I couldn't retain that data until collecting the details in the attachment.
TEST ITEM 1 COMPARE TABLE LEVEL STATS used by CBO
ENVIRONMENT A
TABLE_NAME NUM_ROWS BLOCKS LAST_ANALYZED
ASSET 3607425 167760 5/02/2013 22:11
METER_CONFIG_HEADER 3658 80 5/01/2013 0:07
METER_CONFIG_ITEM 32310 496 5/01/2013 0:07
NMI 1899024 33557 18/02/2013 10:55
REGISTER 4830153 101504 18/02/2013 9:57
SDP_LOGICAL_ASSET 1607456 19137 18/02/2013 15:48
SDP_LOGICAL_REGISTER 5110781 78691 18/02/2013 9:56
SERVICE_DELIVERY_POINT 1425890 42468 18/02/2013 13:54
ENVIRONMENT B
TABLE_NAME NUM_ROWS BLOCKS LAST_ANALYZED
ASSET 4133939 198570 16/02/2013 10:02
METER_CONFIG_HEADER 3779 80 16/02/2013 10:55
METER_CONFIG_ITEM 33720 510 16/02/2013 10:55
NMI 1969000 33113 16/02/2013 10:58
REGISTER 5837874 120104 16/02/2013 11:05
SDP_LOGICAL_ASSET 1788152 22325 16/02/2013 11:06
SDP_LOGICAL_REGISTER 6101934 91088 16/02/2013 11:07
SERVICE_DELIVERY_POINT 1447589 43804 16/02/2013 11:11
TEST ITEM 2 COMPARE INDEX STATS used by CBO
ENVIRONMENT A
TABLE_NAME INDEX_NAME UNIQUENESS BLEVEL LEAF_BLOCKS DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR NUM_ROWS
ASSET IDX_AST_DEVICE_CATEGORY_SK NONUNIQUE 2 9878 67 147 12982 869801 3553095
ASSET IDX_A_SAPINTLOGDEV_SK NONUNIQUE 2 7291 2747 2 639 1755977 3597916
ASSET SYS_C00102592 UNIQUE 2 12488 3733831 1 1 3726639 3733831
METER_CONFIG_HEADER SYS_C0092052 UNIQUE 1 12 3670 1 1 3590 3670
METER_CONFIG_ITEM SYS_C0092074 UNIQUE 1 104 32310 1 1 32132 32310
NMI IDX_NMI_ID NONUNIQUE 2 6298 844853 1 2 1964769 1965029
NMI IDX_NMI_ID_NK NONUNIQUE 2 6701 1923072 1 1 1922831 1923084
NMI IDX_NMI_STATS NONUNIQUE 1 106 4 26 52 211 211
REGISTER REG_EFFECTIVE_DTM NONUNIQUE 2 12498 795 15 2899 2304831 4711808
REGISTER SYS_C00102653 UNIQUE 2 16942 5065660 1 1 5056855 5065660
SDP_LOGICAL_ASSET IDX_SLA_SAPINTLOGDEV_SK NONUNIQUE 2 3667 1607968 1 1 1607689 1607982
SDP_LOGICAL_ASSET IDX_SLA_SDP_SK NONUNIQUE 2 3811 668727 1 2 1606204 1607982
SDP_LOGICAL_ASSET SYS_C00102665 UNIQUE 2 5116 1529606 1 1 1528136 1529606
SDP_LOGICAL_REGISTER SYS_C00102677 UNIQUE 2 17370 5193638 1 1 5193623 5193638
SERVICE_DELIVERY_POINT IDX_SDP_NMI_SK NONUNIQUE 2 4406 676523 1 2 1423247 1425890
SERVICE_DELIVERY_POINT IDX_SDP_SAP_INT_NMI_SK NONUNIQUE 2 7374 676523 1 2 1458238 1461108
SERVICE_DELIVERY_POINT SYS_C00102687 UNIQUE 2 4737 1416207 1 1 1415022 1416207
ENVIRONMENT B
TABLE_NAME INDEX_NAME UNIQUENESS BLEVEL LEAF_BLOCKS DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR NUM_ROWS
ASSET IDX_AST_DEVICE_CATEGORY_SK NONUNIQUE 2 8606 121 71 16428 1987833 4162257
ASSET IDX_A_SAPINTLOGDEV_SK NONUNIQUE 2 8432 1780146 1 1 2048170 4162257
ASSET SYS_C00116157 UNIQUE 2 13597 4162263 1 1 4158759 4162263
METER_CONFIG_HEADER SYS_C00116570 UNIQUE 1 12 3779 1 1 3734 3779
METER_CONFIG_ITEM SYS_C00116592 UNIQUE 1 107 33720 1 1 33459 33720
NMI IDX_NMI_ID NONUNIQUE 2 6319 683370 1 2 1970460 1971313
NMI IDX_NMI_ID_NK NONUNIQUE 2 6597 1971293 1 1 1970771 1971313
NMI IDX_NMI_STATS NONUNIQUE 1 98 48 2 4 196 196
REGISTER REG_EFFECTIVE_DTM NONUNIQUE 2 15615 1273 12 2109 2685924 5886582
REGISTER SYS_C00116748 UNIQUE 2 19533 5886582 1 1 5845565 5886582
SDP_LOGICAL_ASSET IDX_SLA_SAPINTLOGDEV_SK NONUNIQUE 2 4111 1795084 1 1 1758441 1795130
SDP_LOGICAL_ASSET IDX_SLA_SDP_SK NONUNIQUE 2 4003 674249 1 2 1787987 1795130
SDP_LOGICAL_ASSET SYS_C004520 UNIQUE 2 5864 1795130 1 1 1782147 1795130
SDP_LOGICAL_REGISTER SYS_C004539 UNIQUE 2 20413 6152850 1 1 6073059 6152850
SERVICE_DELIVERY_POINT IDX_SDP_NMI_SK NONUNIQUE 2 3227 660649 1 2 1422572 1447803
SERVICE_DELIVERY_POINT IDX_SDP_SAP_INT_NMI_SK NONUNIQUE 2 6399 646257 1 2 1346948 1349993
SERVICE_DELIVERY_POINT SYS_C00128706 UNIQUE 2 4643 1447946 1 1 1442796 1447946
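The index-stat columns above come straight from the dictionary as well; a query along these lines (again a sketch, with APP_OWNER as a placeholder schema) would regenerate the comparison in each environment:

```sql
-- Hypothetical check: index statistics used by the CBO for the tables above.
SELECT table_name, index_name, uniqueness, blevel, leaf_blocks,
       distinct_keys, avg_leaf_blocks_per_key, avg_data_blocks_per_key,
       clustering_factor, num_rows
FROM   dba_indexes
WHERE  owner = 'APP_OWNER'
AND    table_name IN ('ASSET', 'METER_CONFIG_HEADER', 'METER_CONFIG_ITEM',
                      'NMI', 'REGISTER', 'SDP_LOGICAL_ASSET',
                      'SDP_LOGICAL_REGISTER', 'SERVICE_DELIVERY_POINT')
ORDER  BY table_name, index_name;
```

Differences in CLUSTERING_FACTOR in particular can push the optimizer toward or away from an index, which is one plausible reason the two plans below diverge.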
TEST ITEM 3 COMPARE PLANS
ENVIRONMENT A
Plan hash value: 4109575732
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 13 | 2067 | | 135K (2)| 00:27:05 |
| 1 | HASH UNIQUE | | 13 | 2067 | | 135K (2)| 00:27:05 |
|* 2 | HASH JOIN | | 13 | 2067 | | 135K (2)| 00:27:05 |
|* 3 | HASH JOIN | | 6 | 900 | | 135K (2)| 00:27:04 |
|* 4 | HASH JOIN ANTI | | 1 | 137 | | 135K (2)| 00:27:03 |
|* 5 | TABLE ACCESS BY INDEX ROWID| NMI | 1 | 22 | | 5 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 131 | | 95137 (2)| 00:19:02 |
|* 7 | HASH JOIN | | 1 | 109 | | 95132 (2)| 00:19:02 |
|* 8 | TABLE ACCESS FULL | ASSET | 36074 | 1021K| | 38553 (2)| 00:07:43 |
|* 9 | HASH JOIN | | 90361 | 7059K| 4040K| 56578 (2)| 00:11:19 |
|* 10 | HASH JOIN | | 52977 | 3414K| 2248K| 50654 (2)| 00:10:08 |
|* 11 | HASH JOIN | | 39674 | 1782K| | 40101 (2)| 00:08:02 |
|* 12 | TABLE ACCESS FULL | REGISTER | 39439 | 1232K| | 22584 (2)| 00:04:32 |
|* 13 | TABLE ACCESS FULL | SDP_LOGICAL_REGISTER | 4206K| 56M| | 17490 (2)| 00:03:30 |
|* 14 | TABLE ACCESS FULL | SERVICE_DELIVERY_POINT | 675K| 12M| | 9412 (2)| 00:01:53 |
|* 15 | TABLE ACCESS FULL | SDP_LOGICAL_ASSET | 1178K| 15M| | 4262 (2)| 00:00:52 |
|* 16 | INDEX RANGE SCAN | IDX_NMI_ID_NK | 2 | | | 2 (0)| 00:00:01 |
| 17 | VIEW | | 39674 | 232K| | 40101 (2)| 00:08:02 |
|* 18 | HASH JOIN | | 39674 | 1046K| | 40101 (2)| 00:08:02 |
|* 19 | TABLE ACCESS FULL | REGISTER | 39439 | 500K| | 22584 (2)| 00:04:32 |
|* 20 | TABLE ACCESS FULL | SDP_LOGICAL_REGISTER | 4206K| 56M| | 17490 (2)| 00:03:30 |
|* 21 | TABLE ACCESS FULL | METER_CONFIG_HEADER | 3658 | 47554 | | 19 (0)| 00:00:01 |
|* 22 | TABLE ACCESS FULL | METER_CONFIG_ITEM | 7590 | 68310 | | 112 (2)| 00:00:02 |
Predicate Information (identified by operation id):
2 - access("METER_CONFIG_HEADER_SK"="METER_CONFIG_HEADER_SK")
3 - access("NETWORK_TARIFF_CD"="NETWORK_TARIFF_CD")
4 - access("SERVICE_DELIVERY_POINT_SK"="TMP"."SERVICE_DELIVERY_POINT_SK")
5 - filter("ROW_CURRENT_IND"='Y' AND ("NMI_STATUS_CD"='A' OR "NMI_STATUS_CD"='D'))
7 - access("ASSET_CD"="EQUIP_CD" AND "SAP_INT_LOG_DEVICE_SK"="SAP_INT_LOG_DEVICE_SK")
8 - filter("ROW_CURRENT_IND"='Y')
9 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK")
10 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK")
11 - access("SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")
12 - filter("REGISTER_TYPE_CD"='C' AND (SUBSTR("REGISTER_ID_CD",1,1)='4' OR
SUBSTR("REGISTER_ID_CD",1,1)='5' OR SUBSTR("REGISTER_ID_CD",1,1)='6') AND "ROW_CURRENT_IND"='Y')
13 - filter("ROW_CURRENT_IND"='Y')
14 - filter("ROW_CURRENT_IND"='Y')
15 - filter("ROW_CURRENT_IND"='Y')
16 - access("NMI_SK"="NMI_SK")
18 - access("SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")
19 - filter("REGISTER_TYPE_CD"='C' AND (SUBSTR("REGISTER_ID_CD",1,1)='1' OR
SUBSTR("REGISTER_ID_CD",1,1)='2' OR SUBSTR("REGISTER_ID_CD",1,1)='3') AND "ROW_CURRENT_IND"='Y')
20 - filter("ROW_CURRENT_IND"='Y')
21 - filter("ROW_CURRENT_IND"='Y')
22 - filter("ROW_CURRENT_IND"='Y' AND "CONROL_REGISTER"='X')
ENVIRONMENT B
Plan hash value: 2826260434
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 181 | 103K (2)| 00:20:47 |
| 1 | HASH UNIQUE | | 1 | 181 | 103K (2)| 00:20:47 |
|* 2 | HASH JOIN ANTI | | 1 | 181 | 103K (2)| 00:20:47 |
|* 3 | HASH JOIN | | 1 | 176 | 56855 (2)| 00:11:23 |
|* 4 | HASH JOIN | | 1 | 163 | 36577 (2)| 00:07:19 |
|* 5 | TABLE ACCESS BY INDEX ROWID | ASSET | 1 | 44 | 4 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 131 | 9834 (2)| 00:01:59 |
| 7 | NESTED LOOPS | | 1 | 87 | 9830 (2)| 00:01:58 |
| 8 | NESTED LOOPS | | 1 | 74 | 9825 (2)| 00:01:58 |
|* 9 | HASH JOIN | | 1 | 52 | 9820 (2)| 00:01:58 |
|* 10 | TABLE ACCESS BY INDEX ROWID| METER_CONFIG_HEADER | 1 | 14 | 1 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 33 | 116 (2)| 00:00:02 |
|* 12 | TABLE ACCESS FULL | METER_CONFIG_ITEM | 1 | 19 | 115 (2)| 00:00:02 |
|* 13 | INDEX RANGE SCAN | SYS_C00116570 | 1 | | 1 (0)| 00:00:01 |
|* 14 | TABLE ACCESS FULL | SERVICE_DELIVERY_POINT | 723K| 13M| 9699 (2)| 00:01:57 |
|* 15 | TABLE ACCESS BY INDEX ROWID | NMI | 1 | 22 | 5 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | IDX_NMI_ID_NK | 2 | | 2 (0)| 00:00:01 |
|* 17 | TABLE ACCESS BY INDEX ROWID | SDP_LOGICAL_ASSET | 1 | 13 | 5 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | IDX_SLA_SDP_SK | 2 | | 2 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | IDX_A_SAPINTLOGDEV_SK | 2 | | 2 (0)| 00:00:01 |
|* 20 | TABLE ACCESS FULL | REGISTER | 76113 | 2378K| 26743 (2)| 00:05:21 |
|* 21 | TABLE ACCESS FULL | SDP_LOGICAL_REGISTER | 5095K| 63M| 20245 (2)| 00:04:03 |
| 22 | VIEW | | 90889 | 443K| 47021 (2)| 00:09:25 |
|* 23 | HASH JOIN | | 90889 | 2307K| 47021 (2)| 00:09:25 |
|* 24 | TABLE ACCESS FULL | REGISTER | 76113 | 966K| 26743 (2)| 00:05:21 |
|* 25 | TABLE ACCESS FULL | SDP_LOGICAL_REGISTER | 5095K| 63M| 20245 (2)| 00:04:03 |
Predicate Information (identified by operation id):
2 - access("SERVICE_DELIVERY_POINT_SK"="TMP"."SERVICE_DELIVERY_POINT_SK")
3 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK" AND
"SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")
4 - access("ASSET_CD"="EQUIP_CD")
5 - filter("ROW_CURRENT_IND"='Y')
9 - access("NETWORK_TARIFF_CD"="NETWORK_TARIFF_CD")
10 - filter("ROW_CURRENT_IND"='Y')
12 - filter("ROW_CURRENT_IND"='Y' AND "CONROL_REGISTER"='X')
13 - access("METER_CONFIG_HEADER_SK"="METER_CONFIG_HEADER_SK")
14 - filter("ROW_CURRENT_IND"='Y')
15 - filter("ROW_CURRENT_IND"='Y' AND ("NMI_STATUS_CD"='A' OR "NMI_STATUS_CD"='D'))
16 - access("NMI_SK"="NMI_SK")
17 - filter("ROW_CURRENT_IND"='Y')
18 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK")
19 - access("SAP_INT_LOG_DEVICE_SK"="SAP_INT_LOG_DEVICE_SK")
20 - filter((SUBSTR("REGISTER_ID_CD",1,1)='4' OR SUBSTR("REGISTER_ID_CD",1,1)='5' OR
SUBSTR("REGISTER_ID_CD",1,1)='6') AND "REGISTER_TYPE_CD"='C' AND "ROW_CURRENT_IND"='Y')
21 - filter("ROW_CURRENT_IND"='Y')
23 - access("SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")
24 - filter((SUBSTR("REGISTER_ID_CD",1,1)='1' OR SUBSTR("REGISTER_ID_CD",1,1)='2' OR
SUBSTR("REGISTER_ID_CD",1,1)='3') AND "REGISTER_TYPE_CD"='C' AND "ROW_CURRENT_IND"='Y')
25 - filter("ROW_CURRENT_IND"='Y')
Edited by: abhilash173 on Feb 24, 2013 9:16 PM
Edited by: abhilash173 on Feb 24, 2013 9:18 PM
Hi Paul,
I misread your question initially. The system stats are outdated in both environments (the same result is visible in aux_stats$). I am not a DBA and do not have access to gather fresh system stats.
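For whoever does have DBA access: refreshing the system stats would be along these lines. This is a sketch only, and it needs appropriate privileges; workload mode should be run while a representative load is on the system.

```sql
-- Sketch: refresh workload system statistics (requires DBA privileges).
-- Start capture, let a representative workload run, then stop:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');
-- ... representative workload runs here ...
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');

-- Alternatively, capture over a fixed window (minutes):
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('INTERVAL', interval => 60);
```

After that, SREADTIM, MREADTIM, CPUSPEED and MBRC in sys.aux_stats$ should be populated instead of NULL.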
select * from sys.aux_stats$
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS NULL COMPLETED
SYSSTATS_INFO DSTART NULL 02-16-2011 15:24
SYSSTATS_INFO DSTOP NULL 02-16-2011 15:24
SYSSTATS_INFO FLAGS 1 NULL
SYSSTATS_MAIN CPUSPEEDNW 1321.20523 NULL
SYSSTATS_MAIN IOSEEKTIM 10 NULL
SYSSTATS_MAIN IOTFRSPEED 4096 NULL
SYSSTATS_MAIN SREADTIM NULL NULL
SYSSTATS_MAIN MREADTIM NULL NULL
SYSSTATS_MAIN CPUSPEED NULL NULL
SYSSTATS_MAIN MBRC NULL NULL
SYSSTATS_MAIN MAXTHR NULL NULL
SYSSTATS_MAIN SLAVETHR NULL NULL
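Since SREADTIM/MREADTIM are NULL above, the optimizer is on noworkload stats and synthesizes the read times from IOSEEKTIM and IOTFRSPEED. A small sketch of the commonly documented noworkload model; note the 8 KB block size and MBRC of 8 are assumptions here, not values shown in the post:

```python
# Sketch: how the CBO derives read times from noworkload system stats
# when SREADTIM/MREADTIM are NULL (as in the aux_stats$ output above).
# The 8 KB block size and MBRC of 8 are assumed defaults, not posted values.

def noworkload_read_times(ioseektim_ms, iotfrspeed_bytes_per_ms,
                          block_size_bytes=8192, mbrc=8):
    """Return (sreadtim, mreadtim) in milliseconds."""
    sreadtim = ioseektim_ms + block_size_bytes / iotfrspeed_bytes_per_ms
    mreadtim = ioseektim_ms + mbrc * block_size_bytes / iotfrspeed_bytes_per_ms
    return sreadtim, mreadtim

# Values from the aux_stats$ output: IOSEEKTIM=10, IOTFRSPEED=4096
sreadtim, mreadtim = noworkload_read_times(10, 4096)
print(sreadtim, mreadtim)  # 12.0 26.0
```

With identical noworkload values in both environments, the costing model itself is the same, so the plan difference is more likely driven by the object statistics above than by system stats.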