Lots of WAITS on database
My database is 10g, and when we run SQL statements we see lots of waits.
We are selecting records from partitioned tables, and we are using ASM (normal redundancy). I have collected some data.
it shows
Metrics "Database Time Spent Waiting (%)" is at 71.60994 for event class "Configuration"
When I look at the graph it is very high, and the top event is "PX Deq Credit: send blkd".
I did not understand what this event means.
We are using the default degree of parallelism on the partitioned tables.
What should I do?
1) ASM_POWER_LIMIT is currently 1. Should I change it to another value such as 5 or 8?
2) We have created local indexes on the partitioned tables, and I have noticed that it often uses full table scans. Should I also turn on a degree of parallelism on the indexes?
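For reference, the degree-of-parallelism attribute currently recorded on the tables and indexes can be checked with queries along these lines (a sketch; run in the owning schema):

```sql
-- DEGREE of 'DEFAULT' means the instance-level default DOP applies;
-- '1' means no parallelism is set for that object.
SELECT table_name, degree
FROM   user_tables
WHERE  partitioned = 'YES';

SELECT index_name, table_name, degree
FROM   user_indexes;
```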
please reply
thanks
With such limited information it's impossible to give a useful suggestion.
"10g" is not a version; it's a marketing label. Post your full four-part version number (e.g. 10.2.0.4).
You might also want to run a Statspack or AWR report and post the result here.
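For reference, an AWR report between two snapshots can be generated from SQL*Plus with the supplied script (you are prompted for the snapshot range and report name; note that AWR requires the Diagnostics Pack license):

```sql
-- Run as a DBA-privileged user from SQL*Plus.
@?/rdbms/admin/awrrpt.sql
```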
Similar Messages
-
Database performing Very slow - Lots of wait events
My database is Oracle 10g on Sun Solaris 5.10.
The users are complaining that the database is very slow.
I analyzed the indexes and later rebuilt them; that gave barely a 5% performance improvement.
http://i812.photobucket.com/albums/zz43/sadeel00/untitled1.jpg
http://i812.photobucket.com/albums/zz43/sadeel00/untitled2.jpg
ADDM has no recommendations.
Duplicate post - Database performing Very slow - Lots of wait events
Srini -
Hi,
Can anyone help me with this warning? I got it in Oracle Enterprise Manager:
Waits by Wait Class
Database Time Spent Waiting (%)
Metrics "Database Time Spent Waiting (%)" is at 69.64818 for event class "Commit"
Below are our environment details:
RDBMS: 10.2.0.4
Oracle E-Business suite: 12.0.6
OS: Oracle Sun 10
Please suggest.
Thank you.
Please see the following docs.
How to Disable Alerts for The Database Time Spent Waiting (%) Metric? (Doc ID 1500074.1)
"Database Time Spent Waiting (%)" at 100 for Event Class "Other" when Database is not Under Load (Doc ID 1526552.1)
Thanks,
Hussein -
Application wait for database (auto_start problem)
RAC CRS version 10.2.0.4
2 Linux nodes
On both nodes Oracle RAC database instances are starting automatically.
We also need the application to start automatically on the second node.
So at least one DB instance (and listener) should be started before the APP.
We have not been able to configure dependencies to accomplish this.
The DB and APP try to start at the same time; the DB is still down, so the APP gets lots of errors.
A timeout is not an option either, as it would cause a delay at failover.
We tried manual APP startup, but...
When auto_start = never, another problem appears: resources do not get started in case of node failure.
Ideas?
Gasha
Hello,
Is this APP a custom Application? Assuming the application is called myapp, you can use the steps documented in
http://www.oracle.com/technology/products/database/clusterware/pdf/TWP-Oracle-Clusterware-3rd-party.pdf
You can replace the Appserver with your application. If this is not what you meant, kindly rephrase your question so we can understand the requirement
Thanks,
Anil Nair
Global Technical Lead/ Oracle Clusterware and RAC
Oracle Support -
We have a web application which manages digital documents.
Its main table is named "DOC_ADJUNTO"; this table has a column "DOCUMENTO" of type BLOB, plus some other columns.
There is also an Oracle Text index on the BLOB column of "DOC_ADJUNTO", defined as follows:
CREATE INDEX IDX_DOC_ADJUNTO_DOCUMENTO ON DOC_ADJUNTO (DOCUMENTO) INDEXTYPE IS CTXSYS.CONTEXT PARAMETERS('Sync (on commit)') NOPARALLEL;
Just a week ago, a new table "CMP3$111448" appeared in the database.
The table "CMP3$111448" replicates the structure of "DOC_ADJUNTO" and its Oracle Text index, so it is consuming a lot of space in the tablespace and in backups of the whole database.
One difference between the two tables is that "DOC_ADJUNTO" has 136,782 rows while "CMP3$111448" has only 107,380.
I have the following questions about this fact:
a) Why did the table "CMP3$111448" appear?
b) Can I simply drop "CMP3$111448" to reclaim the space? Will that have any negative effect?
c) If I drop "CMP3$111448", will a similar table appear again? If so, how can I prevent it?
I would really appreciate some feedback before I take action to get the space back.
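Before dropping anything, it may help to see when the mystery table was created and how much space it actually occupies (a sketch; run as a DBA):

```sql
-- When was the CMP3$ table created, and in which schema?
SELECT owner, object_name, created, last_ddl_time
FROM   dba_objects
WHERE  object_name LIKE 'CMP3$%';

-- How much space is it really using?
SELECT owner, segment_name, SUM(bytes)/1024/1024 AS mb
FROM   dba_segments
WHERE  segment_name LIKE 'CMP3$%'
GROUP  BY owner, segment_name;
```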
Regards
I've got a similar mystery table that appears structurally identical to a table with some LOB columns. Same owner. The shadow/copy/mystery table has far fewer rows.
CMP3$160860 has 12193 rows, the "real" table, fatal_error_log, has 109589 rows.
I also would like to drop this table to reclaim space ... but I wonder if it is somehow being used by Oracle in some transient shadow kind of way. Maybe it is left over from an incomplete shutdown/startup where this table wasn't completely flushed or something.
We're also on 11.2.0.4 RDBMS. Running on Sun Solaris. Not using Toplink nor any java features (also of course Oracle is probably using some java in the db.)
If you learn anything about it, please share!
Thanks. -
Lots of waiting... Mid 2010 Mac Mini
I've got a new Mac Mini, with 2.4GHZ and 8 Gig of ram.... I spend lots and lots and lots of time waiting. Internet, saving files etc. It's a very clean machine without much on it, but really, it's a bit of a dog. The older Mac Mini was faster for many, many many tasks..... Any clue where to look?
BTW, the 5 GHz 802.11n network was unusable; I had to go back to 2.4 GHz. That helped with network hangups. I don't think any of the web slowness is the router, as I have an XP machine with no delays on the 5 GHz n setup. But really, when I save a file in Numbers, I get delays that shouldn't be there. When I boot iTunes, it takes forever compared to the old machine.
If this is the future, I'm worried.....
Matt
I think you have to break down the perceived slowness into manageable pieces.
First off, 30 seconds for software update to check is normal. So let's eliminate that as an issue.
Second, if you are dealing with newly established hard drives, spotlight indexing can really slow the Mini down during the initial index.
Third, don't ignore the network as being a source of slowness. I would eliminate Wifi as an issue by using ethernet cable to connect to your router until you get the slowness sorted out.
If ethernet cable doesn't make things better, then you need to see if there is a DNS issue. Sometimes manually adding the OpenDNS servers clears things up.
Other times, turning off (or on) IPv6 improves network issues.
You don't say which version your old Mini was. But if you are running old PPC apps, that could also be injecting slowness.
Finally, if you have needed files mounted on remote volumes and there is a slowdown in your LAN or WAN (if mounted on iDisk, for example), this could block your Mini's performance. -
Administrative wait for database during delete obsolete.
Hi Folks,
I am using Oracle 11g with an RMAN catalog database. I was just curious why deleting crosschecked/obsolete backups generates an administrative wait in the database. Correct me if I am wrong:
The details of the backups are in the catalog database.
A process verifies the physical location and some details of the backup files; this process runs under the catalog database instance.
It may be possible it needs to check the control file
But the wait lasts as long as the above command runs.
I could not find anything in the documentation.
Details about how this works would be appreciated.
978082 wrote:
It may be possible it needs to check the control file
Because the recovery catalog obtains its metadata from the target control file (and not the other way round), if RMAN needs to update metadata it will first write to the control file in addition to the recovery catalog.
Please read Resynchronizing the Recovery Catalog Before Control File Records Age Out in http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmcatdb.htm#BRADV89687. -
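A manual resynchronization, as described in that document, is a single RMAN command, run while connected to both the target and the catalog:

```
RMAN> RESYNC CATALOG;
```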
Hi friends,
As I'm a beginner at performance tuning, I don't know
what action I need to take.
I mean, how do I read the output given below?
This is the output of a database suffering buffer busy waits.
Could anyone please tell me?
CLASS TOTAL_WAITS TOTAL_TIME
data block 93303 58711
unused 0 0
system undo header 12 232
undo header 7847 6636
3rd level bmb 0 0
save undo header 0 0
bitmap index block 0 0
file header block 0 0
free list 0 0
undo block 68 207
segment header 422 399
extent map 0 0
2nd level bmb 0 0
system undo block 0 0
sort block 0 0
save undo block 0 0
1st level bmb 1 17
bitmap block 0 0
Thanks, Muhammed Thameem. S
Hello,
"Buffer busy waits" is contention for a buffer (representing a specific
version of a database block) within the Buffer Cache. So, in essence
it is block contention and thus it is most likely something to do with
the design of the tables and indexes supporting the application. A
built-in bottleneck. On indexes, it could be the age-old problem of
insertions into an index on a column with a monotonically-ascending
data value (i.e. timestamps or sequence numbers) which tends to cause
contention on the highest leaf node of the index. On tables, it might
have to do with many concurrent insertions into a table in a
freelist-managed tablespace where the table has only one freelist. It
could also be due to a home-grown implementation of sequence-number
generators (i.e. a small table with one row and one column which contains
the "last value" of a sequence, etc.) which lots of people use in the
name of being "portable across databases", which they think means not
using Oracle sequences (yadda yadda yadda).
I'd look for any SQL statement in the "SQL sorted by Elapsed Time"
section of the AWR report which exhibits high elapsed time but
relatively low CPU time, indicating a lot of wait time. Of course,
there are something like 800 possible wait events in current releases
of Oracle, of which "buffer busy waits" is only one, so this is just
inference and not a direct causal connection to your problem. But,
once I find such statements I'd check to see if they are
accessing/manipulating tables within the CUBS_DATA tablespace, and then
use "select * from table(dbms_xplan.display_awr('sql-id'))" to
get the execution plan(s), and then look for something ineffective
within the execution plan. You might find the script "sqlhistory.sql" helpful
here as well, to get a "historical perspective" on the execution of the
SQL statements over time, in case the buffer busy waits peaked at some
point in the past.
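The "high elapsed time, relatively low CPU time" pattern described above can also be pulled straight from the AWR tables with a query along these lines (a sketch; AWR requires the Diagnostics Pack license):

```sql
-- Top statements by wait time (elapsed minus CPU) across AWR snapshots.
SELECT *
FROM  (SELECT sql_id,
              SUM(elapsed_time_delta)/1e6 AS elapsed_s,
              SUM(cpu_time_delta)/1e6     AS cpu_s,
              (SUM(elapsed_time_delta) - SUM(cpu_time_delta))/1e6 AS wait_s
       FROM   dba_hist_sqlstat
       GROUP  BY sql_id
       ORDER  BY wait_s DESC)
WHERE ROWNUM <= 10;
```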
Please refer to:
http://www.pubbs.net/201003/oracle/51925-understanding-awr-buffer-waits.html
Also
http://www.remote-dba.net/oracle_10g_tuning/t_buffer_busy_waits.htm
kind regards
Mohamed -
Import WCS 7.0.164.0 Database into WCS 7.0.172.0
We are experiencing some issues while upgrading from WCS 7.0.164 to 7.0.172. For some reason, every time we try to import the 7.0.164 database, the 7.0.172 restore process aborts with:
Waiting for database to become idle
To Version 7.0.172.0
Invoking upgrade_general.sql
From Version 7.0.164.0
Updating Schema. This may take a while.
java.sql.SQLException: [Solid JDBC 04.50.0176] Data source rejected establishment of connection
Failed to export Hibernate schema
Hiblog.txt shows the following:
INFO 2011-07-05 10:55:03,143 [main] (TableMetadata.java:43) - indexes: [$$rrmauditdot11bconfig_primarykey_10795, rrmauditdot11bconfig_eventindex, fk254609e9ebca074c]
WARN 2011-07-05 10:55:03,175 [main] (JDBCExceptionReporter.java:77) - SQL Error: 25214, SQLState: 08003
ERROR 2011-07-05 10:55:03,175 [main] (JDBCExceptionReporter.java:78) - [Solid JDBC 04.50.0176] Connection not open
ERROR 2011-07-05 10:55:03,175 [main] (SchemaUpdate.java:165) - could not complete schema update
org.hibernate.exception.JDBCConnectionException: could not get table metadata: RrmConfig
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:74)
at com.cisco.server.persistence.hibernate.dialect.SolidDialect$2.convert(Unknown Source)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:29)
at org.hibernate.tool.hbm2ddl.DatabaseMetadata.getTableMetadata(DatabaseMetadata.java:105)
at org.hibernate.cfg.Configuration.generateSchemaUpdateScript(Configuration.java:948)
at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:140)
at com.cisco.server.persistence.util.PersistenceUtil.beginUpdateSchema(Unknown Source)
at com.cisco.packaging.DBAdmin.beginUpdateSchema(Unknown Source)
at com.cisco.packaging.MigrateOldDbToNewDb.upgradeDatabase(Unknown Source)
at com.cisco.packaging.SelectDirAndRestoreDb.restoreDatabase(Unknown Source)
at com.cisco.packaging.DBAdmin.restoreDB(Unknown Source)
at com.cisco.packaging.DBAdmin.runMain(Unknown Source)
at com.cisco.packaging.DBAdmin.main(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.zerog.lax.LAX.launch(DashoA10*..)
at com.zerog.lax.LAX.main(DashoA10*..)
Caused by: java.sql.SQLException: [Solid JDBC 04.50.0176] Connection not open
at solid.jdbc.SolidError.s_Create(Unknown Source)
at solid.jdbc.SolidConnection.s_AddAndThrowError(Unknown Source)
at solid.jdbc.SolidConnection.s_Chk(Unknown Source)
at solid.jdbc.SolidResultSet.s_Chk(Unknown Source)
at solid.jdbc.SolidResultSet.close(Unknown Source)
at com.mchange.v2.c3p0.impl.NewProxyResultSet.close(NewProxyResultSet.java:2999)
at org.hibernate.tool.hbm2ddl.DatabaseMetadata.getTableMetadata(DatabaseMetadata.java:101)
... 15 more
WARN 2011-07-05 10:55:03,175 [main] (NewProxyStatement.java:835) - Exception on close of inner statement.
java.sql.SQLException: [Solid JDBC 04.50.0176] Connection not open
at solid.jdbc.SolidError.s_Create(Unknown Source)
at solid.jdbc.SolidConnection.s_AddAndThrowError(Unknown Source)
at solid.jdbc.SolidConnection.s_Chk(Unknown Source)
at solid.jdbc.SolidStatement.s_Chk(Unknown Source)
at solid.jdbc.SolidStatement.close(Unknown Source)
at com.mchange.v2.c3p0.impl.NewProxyStatement.close(NewProxyStatement.java:831)
at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:171)
at com.cisco.server.persistence.util.PersistenceUtil.beginUpdateSchema(Unknown Source)
at com.cisco.packaging.DBAdmin.beginUpdateSchema(Unknown Source)
at com.cisco.packaging.MigrateOldDbToNewDb.upgradeDatabase(Unknown Source)
at com.cisco.packaging.SelectDirAndRestoreDb.restoreDatabase(Unknown Source)
at com.cisco.packaging.DBAdmin.restoreDB(Unknown Source)
at com.cisco.packaging.DBAdmin.runMain(Unknown Source)
at com.cisco.packaging.DBAdmin.main(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.zerog.lax.LAX.launch(DashoA10*..)
at com.zerog.lax.LAX.main(DashoA10*..)
WARN 2011-07-05 10:55:03,175 [main] (NewPooledConnection.java:425) - [c3p0] A PooledConnection that has already signalled a Connection error is still in use!
WARN 2011-07-05 10:55:03,175 [main] (NewPooledConnection.java:426) - [c3p0] Another error has occurred [ java.sql.SQLException: [Solid JDBC 04.50.0176] Connection not open ] which will not be reported to listeners!
java.sql.SQLException: [Solid JDBC 04.50.0176] Connection not open
at solid.jdbc.SolidError.s_Create(Unknown Source)
at solid.jdbc.SolidConnection.s_AddAndThrowError(Unknown Source)
at solid.jdbc.SolidConnection.s_Chk(Unknown Source)
at solid.jdbc.SolidStatement.s_Chk(Unknown Source)
at solid.jdbc.SolidStatement.close(Unknown Source)
at com.mchange.v2.c3p0.impl.NewProxyStatement.close(NewProxyStatement.java:831)
at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:171)
at com.cisco.server.persistence.util.PersistenceUtil.beginUpdateSchema(Unknown Source)
at com.cisco.packaging.DBAdmin.beginUpdateSchema(Unknown Source)
at com.cisco.packaging.MigrateOldDbToNewDb.upgradeDatabase(Unknown Source)
at com.cisco.packaging.SelectDirAndRestoreDb.restoreDatabase(Unknown Source)
at com.cisco.packaging.DBAdmin.restoreDB(Unknown Source)
at com.cisco.packaging.DBAdmin.runMain(Unknown Source)
at com.cisco.packaging.DBAdmin.main(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.zerog.lax.LAX.launch(DashoA10*..)
at com.zerog.lax.LAX.main(DashoA10*..)
ERROR 2011-07-05 10:55:03,175 [main] (SchemaUpdate.java:177) - Error closing connection
java.sql.SQLException: [Solid JDBC 04.50.0176] Connection not open
at solid.jdbc.SolidError.s_Create(Unknown Source)
at solid.jdbc.SolidConnection.s_AddAndThrowError(Unknown Source)
at solid.jdbc.SolidConnection.s_Chk(Unknown Source)
at solid.jdbc.SolidStatement.s_Chk(Unknown Source)
at solid.jdbc.SolidStatement.close(Unknown Source)
at com.mchange.v2.c3p0.impl.NewProxyStatement.close(NewProxyStatement.java:831)
at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:171)
at com.cisco.server.persistence.util.PersistenceUtil.beginUpdateSchema(Unknown Source)
at com.cisco.packaging.DBAdmin.beginUpdateSchema(Unknown Source)
at com.cisco.packaging.MigrateOldDbToNewDb.upgradeDatabase(Unknown Source)
at com.cisco.packaging.SelectDirAndRestoreDb.restoreDatabase(Unknown Source)
at com.cisco.packaging.DBAdmin.restoreDB(Unknown Source)
at com.cisco.packaging.DBAdmin.runMain(Unknown Source)
at com.cisco.packaging.DBAdmin.main(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.zerog.lax.LAX.launch(DashoA10*..)
at com.zerog.lax.LAX.main(DashoA10*..)
WARN 2011-07-05 10:56:03,518 [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2] (BasicResourcePool.java:1841) - com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@ea58e3 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: [Solid JDBC 04.50.0176] Unable to connect to data source
at solid.jdbc.SolidConnection.s_AddAndThrowError(Unknown Source)
at solid.jdbc.SolidConnection.serverConnect(Unknown Source)
at solid.jdbc.SolidConnection.<init>(Unknown Source)
at solid.jdbc.SolidDriver.connect(Unknown Source)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
OS is Windows Server 2003 Enterprise, 32bit, PAE Enabled
The strange thing is, we had a case open and the TAC engineer was actually able to import the database. Since it looked like an issue with our virtual machine, we agreed to close the case and continue testing on another server. However, after moving to a proper server the problem still exists.
Also, it doesn't matter whether we try a direct upgrade install with automated migration or a fresh install with a manual DB import; it's always the same error during the Hibernate export.
I think I will reopen the case later, but maybe there is a simpler solution to this problem?
Thanks a lot
Kind regards
Christian
Hi Jorge,
It looks like such a level of granularity may not be achievable yet.
Should you need such a feature to be evaluated for potentially being included in future software releases, I would recommend to contact your local Cisco account team.
They should be able to follow up and submit a product enhancement request (PER).
Business units will evaluate PERs and, if feasible for the market, they may consider implementing such an enhancement in future versions of the SW/HW.
Regards,
Fede -
SQL*Net message from client - huge wait in trace file
Dear All,
I am facing a performance issue in a particular operation (which previously completed in 21 minutes). Now the same operation takes more than 35 minutes. I took a trace of those sessions (10046 level 12 trace) and found a lot of waits on "SQL*Net message from client".
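For context, this kind of extended SQL trace is commonly enabled for a session like this (one common form of the event syntax):

```sql
-- Enable extended SQL trace (level 12 = binds + waits) for the current session.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the slow operation ...
ALTER SESSION SET EVENTS '10046 trace name context off';
```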
Elapsed times include waiting on following events:
Event waited on                          Times Waited  Max. Wait  Total Waited
---------------------------------------  ------------  ---------  ------------
SQL*Net message from client                    611927      10.00       1121.35
I copied only the highest wait from the tkprof output.
And I found, from the tkprof output and even in the raw trace file, that this event waits longest after executing:
SELECT sysdate AS SERVERDATE from dual;
Elapsed times include waiting on following events:
Event waited on                          Times Waited  Max. Wait  Total Waited
---------------------------------------  ------------  ---------  ------------
SQL*Net message to client                         115       0.00         0.00
SQL*Net message from client                       115      10.00        724.52
Please help me find out why this wait takes so long, especially on the above query.
Regards,
Vinodh
Vinodh Kumar wrote:
Hi,
This is what available in the trace file
PARSING IN CURSOR #2 len=38 dep=0 uid=60 oct=3 lid=60 tim=7052598842 hv=3788189359 ad='7d844fa0'
SELECT sysdate AS SERVERDATE FROM dual
END OF STMT
PARSE #2:c=0,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=7052598839
BINDS #2:
EXEC #2:c=0,e=42,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=7052599002
WAIT #2: nam='SQL*Net message to client' ela= 1 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=7052599058
FETCH #2:c=0,e=15,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,tim=7052599110
*** 2012-01-02 17:07:30.364
WAIT #2: nam='SQL*Net message from client' ela= 10007957 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=7062607120
Please see the last WAIT line in the complete trace after executing this query.
In the AWR report, this query took less than a second over more than 2000 executions.
Regards,
Vinodh
Good idea to check the raw trace file. It is important to notice that this particular wait event appears after the fetch of the result from the database. The client receives the SYSDATE from the database server, and then the client performs some sort of action for about 10 seconds before submitting its next request to the database. The SQL statements that immediately follow and immediately precede this section of the trace file might provide clues regarding what caused the delay, and where that delay resides in the client-side code. Maybe a creative developer added a "sleep for 10 seconds" routine when intending to sleep for 10 ms? Maybe the client CPU is close to 100% utilization?
Charles Hooper
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
What is happening in Database?
Hi All,
I am new to SAP BW. I am quite confused about the terms InfoObject, InfoCube, InfoProvider, PSA, and DSO. I want to know what physically happens in the database after each of these is created. Kindly point me to some reference videos, books, or links. Thanks for your support in advance.
Thanks & Regards
Shiva
Hi,
BW - Business Information Warehouse. This is where we store lots of information in the database and provide consolidated, cleansed data to queries when reporting.
During all of this we come across the objects you mention.
InfoObject: an InfoObject is the smallest available unit of information in SAP BW; information such as name, amount, or country is stored in the database in the form of InfoObjects. They are placeholders for such information. InfoObjects are mainly classified into key figures and characteristics; apart from these there are, for example, time characteristics to store time.
InfoCube: data is normally presented for reporting in many dimensions (customer, material, region, time, etc.) using an InfoCube. An InfoCube is structured so that data can be stored as a multidimensional star schema in the database, which is a very efficient way to store it. This whole structure is called an InfoCube in SAP BW.
InfoProvider: a collective term for InfoCubes, DSOs, MultiProviders, and so on, used whenever such an object provides data for reporting.
PSA (Persistent Staging Area): we can bring data into BW from different sources. Whatever the source, after data enters BW it is stored here temporarily; from here it can be moved to InfoProviders (DSO, InfoCube, etc.). The main purpose of the PSA is that data is stored exactly as it was delivered by the source system, and it can be modified here (if necessary) before being transferred to InfoProviders.
DSO (DataStore Object), also called ODS (Operational Data Store): another object in which cleansed and consolidated data is stored in detailed form. It is a two-dimensional, tabular structure, and that is how the data is stored in the database.
As you are new to SAP BW, you still have a long way to go to understand exactly how to deal with each of these objects in real time. Please go through
https://help.sap.com to learn more.
Hope this is clear for you.
Regards
Ramsunder -
Database.pl error in Ciscoworks LMS 3.2
Dear Friends,
I am getting the following error, only for database.pl, when I run Self Test under Common-->Services-->Admin:
database.pl
FAIL Self Test Fail to query dfmEpm.DbVersion, Error: Database server not found (DBD: login failed)
Self Test Fail to query dfmEpm.SYSTABLE, Error: Database server not found (DBD: login failed)
Please find enclosed the output of pdshow.
Can you please advise the recommended action for this?
Thanks a lot
Gautam
The DFM EPM database is down. This could indicate a corrupt database, or simply a damaged transaction log. Shut down Daemon Manager, then delete NMSROOT/databases/dfmEpm/dfmEpm.log if it exists. Then run:
NMSROOT/objects/db/win32/dbsrv10 -f NMSROOT/databases/dfmEpm/dfmEpm.db
Then restart Daemon Manager. If the EPMDbEngine process is still down, then you will need to reinitialize your EPM database, or restore LMS from a known good backup. To reinitialize the database, run:
NMSROOT/bin/perl NMSROOT/bin/dbRestoreOrig.pl dsn=dfmEpm dmprefix=EPM -
States in the thread dump "Waiting on monitor"
That is normal; it is just the execute threads waiting for requests.
Rob Baldassano wrote:
Hi.
I am trying to do some troubleshooting of the weblogic server,
Currently we are using WebLogic 6.1 SP 2.
In the thread dump we are seeing a lot of "waiting on monitor" messages for the
individual threads.
The application that we are running uses native threads to run the processes that
it needs to.
My question is this.
Is this "waiting on monitor" message normal?
What could cause these messages?
Is there a way to get around them, besides recoding the application? -
1) 'BAPI_PRODORD_CREATE'
2) CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
EXPORTING
wait = 'X'.
3) BAPI_PRODORD_CHECK_MAT_AVAIL
Sometimes it works fine; other times BAPI_PRODORD_CHECK_MAT_AVAIL returns an error that says "Order does not exist."
I tried a "WAIT UP TO 10 SECONDS". That makes the processing extremely slow, and even then it doesn't work 100% of the time.
Any suggestions?
From what I found,
1) The commit wait is performed automatically in many BAPIs, but not all. It is often duplication to add it yourself.
2) The commit-and-wait logic appears to end at the application server. Once the application server passes the commit to the database, the application server considers itself done and continues processing the program. In reality, the database is still processing and the record is still locked. This means the commit-and-wait code is not a 100% guarantee.
Solution:
1) Function module ENQUEUE_READ (or transaction SM12) checks whether the database lock is still in place. Set a break-point in the code and, in a separate window, run one of these; you will see the locks. Once you know the lock condition, add some ABAP code that loops, reading the results for the relevant lock, and exits to continue processing when the locks are removed. See my comments in step 2 about waiting a few seconds between loop iterations and providing an exit time to avoid infinite loops. I think solution 1 is better than solution 2, even though I implemented solution 2.
or
2) Loop on a SELECT from the database (you need to know the table being updated by the BAPI to do this) and exit the loop when the record is inserted. This is what I did, since I hadn't found the solution in step 1 when I implemented my fix. Don't put a WAIT statement in the loop, though: WAIT is taxing on the database because it executes a commit. I used GET TIME FIELD to handle a 2-second wait between database reads, and also to exit after 30 seconds to avoid an infinite loop.
Below is some sample code for solution 2. I used it between a BAPI production order create and a BAPI production order material availability check. I was getting an error saying the production order hadn't been created during the material availability check when in fact it had been created, just not yet committed to the database.
DO.
* Only select from DB every two seconds
CLEAR do_time3.
do_time3 = do_time1 - do_time2.
IF do_time3 GT gc_000002.
* Determine if confirmation has committed to database
SELECT SINGLE aufnr FROM resb INTO lv_aufnr WHERE aufnr = g_aufnr AND
xloek = space AND
xwaok = gc_x AND
kzear = gc_x AND
matnr IN r_range.
IF sy-subrc EQ 0.
EXIT.
ENDIF.
ENDIF.
GET TIME FIELD do_time1.
do_time2 = do_time2 + gc_000002.
do_exit = ( do_time1 - do_time ).
IF do_exit GT gc_000030.
EXIT.
ENDIF.
CLEAR do_exit.
ENDDO. -
Hi gurus,
I created the first operation as an in-process inspection for a HALB material, where I entered control key
QM01 and assigned the MIC and sampling procedure in the routing. When I release the production order,
the inspection lot is not created. Please guide me on where I went wrong.
Regards,
sekar chand
Hi Mr Tejas,
Thanks for your valuable ideas; my replies follow.
1) Maintain in Material Master -> QM View -> Inspection Setup:
03 In-process insp. for production order - maintained as above
2)In addition, you need to maintain under Detailed Information on inspection type on same tab page:
Tick the check box Post to inspection stock and then enter.
Now on main screen, you will find Inspection setup check box is ticked, further you need to maintain
- post to inspection stock check box not available under detailed information of inspection type, but available at main menu which i had activated.
3)Under Procurement Data tab,
Tick in check box : QM proc. active
and respective control key in QM Control Key field.
- I did as per the above reply, but the inspection lot is not created; only the production order is created, with
status ILAS (inspection lot assigned).
Waiting for your reply.
with regards,
sekar chand
Edited by: sekar chand on Nov 25, 2008 7:29 AM