Data Store Objects
Hi,
What is meant by the term "Data Store Object"?
What is the difference between a Data Store Object and an InfoCube?
Kumar
Hi,
The basic difference between an ODS and an InfoCube is that an ODS has an overwrite facility, while an InfoCube is additive (key figures for the same key are aggregated). A cube can have up to 16 dimensions, while an ODS is a flat structure consisting of three tables (activation queue, active data, and change log).
For more information, check out this link:
http://help.sap.com/saphelp_nw2004s/helpdata/en/b2/e50138fede083de10000009b38f8cf/frameset.htm
<b>under Modeling</b>
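To make the overwrite/additive distinction concrete, here is a small sketch of the two update modes. This is only an illustration in Python, not SAP code; the "document number" key is a made-up example:

```python
# ODS-style overwrite vs. InfoCube-style additive update for the same key
# (a document number here). Pure illustration, not SAP code.

def load_overwrite(store, records):
    """ODS behavior: a record with an existing key replaces the stored key figure."""
    for doc, amount in records:
        store[doc] = amount

def load_additive(store, records):
    """InfoCube behavior: key figures for an existing key are summed."""
    for doc, amount in records:
        store[doc] = store.get(doc, 0) + amount

deltas = [("4711", 100), ("4711", 30)]  # the same document arrives twice

ods, cube = {}, {}
load_overwrite(ods, deltas)
load_additive(cube, deltas)
# ods["4711"] == 30 (last value wins); cube["4711"] == 130 (values accumulate)
```

This is why a delta that corrects an earlier value is safe to load into an ODS but would double-count in a cube unless it is sent as a difference.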
Similar Messages
-
Error while activating Data Store Object
Hi Gurus,
When I try to activate a DataStore object I get the error message:
The creation of the export DataSource failed
No authorization to logon as trusted system (Trusted RC=2).
Error when creating the export DataSource and dependent objects; Program ID 4SYPYCOPQ94IXEGA3739L803Z retrieved for DataStore object ZODS_PRA
Hi,
You are facing an issue with your source system 'Myself'; check and repair it. Also check whether the communication user (normally ALEREMOTE) has all the permissions it needs.
kind regards
Siggi -
Error while Data Store Object activation
Dear Guru's
I'm new to BI 7.
I'm trying to load data into a Data Store Object from the PSA through a DTP, and it was successful up to the new-data table. But while activating the request to the active table, the overall status of the request turned red, even though in DETAILS all the messages were green.
Kindly let me know what the problem could be.
Points will surely be awarded.
Thanks in advance.
With regards,
Viswa
Dear KK,
Here is the log I printed; please go through it.
Job started
Step 001 started (program RSDELPART1, variant &0000000000000, user ID 160624)
Delete is running: Data target ZIC_EX1, from 57 to 57
FB RSM1_CHECK_DM_GOT_REQUEST called from PRG RSDELPART1; row 000875
Request '57'; DTA 'ZIC_EX1'; action 'D'; with dialog 'X'
Leave RSM1_CHECK_DM_GOT_REQUEST in row 70; Req_State ''
Overall status 'Red' (user 160624)
Incorrect data could be visible in Reporting (see long text)
Status transition 2 / 3 to 9 / 9 completed successfully
SQL: 15.05.2007 13:26:26 160624
ALTER TABLE "/BIC/B0000161000" TRUNCATE PARTITION
"/BIC/B00001610000000000002"
SQL-END: 15.05.2007 13:26:26 00:00:00
Request DTPR_6K1ZWLCJLY9ZF4NLHC91N1J10 deleted from PSA;REQICODS entry also deleted
Request DTPR_6K1ZWLCJLY9ZF4NLHC91N1J10 not found in IC ZIC_EX1;CODS also deleted
Request DTPR_6K1ZWLCJLY9ZF4NLHC91N1J10 deleted from PSA;EQICODS entry also deleted
Overall status 'Deleted' (user 160624)
Delete is finished: Data target ZIC_EX1, from 57 to 57
Job finished
with regards
Viswa -
BW Analytical Authorisations and Data Store Objects
Hello All
I am in the process of trying to figure out how BW analysis authorisations work, as I have to build some authorisations for a new BW project.
I understand the concept of BW analysis authorisations. I have created an object linked to hierarchies via an InfoProvider, assigned it to a user, and it works great. The problem is that I then ran a generation for hierarchies and specified the Z InfoProvider my analysis authorisation object was linked to. Now I find that all users on the system have access to my object, and I need to remove this. Even new users on the system automatically get this access.
I have read note 1052242, which explains that I can remove the authorisations using DataStore objects (DSOs). The thing is that I do not know how to maintain these DSOs.
Can anyone help with this? Once I know how to maintain the DSO, I can add the required D_E_L_E_T_E entry and re-run the generation, which will hopefully solve my problem.
Thank You In Advance
Best Regards
Hi Anwar,
if your question is how to update data into a DSO, then I recommend you read the documentation.
http://help.sap.com/saphelp_nw70/helpdata/en/f9/45503c242b4a67e10000000a114084/frameset.htm
You require basic BW knowledge for that.
If your background is more ABAP, consider making the DSO a "DSO for direct update".
That way you do not need BW staging knowledge and can use ABAP instead to modify the data in the DSO.
The following function modules of the API can be used:
● RSDRI_ODSO_INSERT: inserts new data (with keys not yet in the system)
● RSDRI_ODSO_MODIFY: inserts data with new keys; for data whose keys are already in the system, the data is changed
● RSDRI_ODSO_UPDATE: changes data with keys already in the system
● RSDRI_ODSO_DELETE_RFC: deletes data
More information about these function modules is here:
http://help.sap.com/saphelp_nw70/helpdata/en/c0/99663b3e916a78e10000000a11402f/frameset.htm
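As a rough illustration of how those four function modules treat keys: the FM names above are from the SAP help just linked, but the Python below is only an analogy over a plain dict, not an SAP API:

```python
# Analogy for the RSDRI_ODSO_* semantics using a dict keyed like the
# DSO's key fields. Not an SAP API -- just the key-handling contract.

def odso_insert(table, rows):
    """INSERT: only keys not yet in the table are allowed."""
    for key, data in rows:
        if key in table:
            raise KeyError("duplicate key %r" % (key,))  # INSERT rejects existing keys
        table[key] = data

def odso_modify(table, rows):
    """MODIFY: insert new keys, overwrite existing ones."""
    for key, data in rows:
        table[key] = data

def odso_update(table, rows):
    """UPDATE: only keys already in the table are changed; new keys are ignored."""
    for key, data in rows:
        if key in table:
            table[key] = data

def odso_delete(table, keys):
    """DELETE: remove rows by key."""
    for key in keys:
        table.pop(key, None)
```

So for the D_E_L_E_T_E entry described in the note, MODIFY is the forgiving choice: it works whether or not the key already exists.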
However, if that doesn't solve your original problem with the authorizations, here are some links that I found helpful when implementing BW analysis authorizations.
SDN area for Analysis Authorizations
http://wiki.sdn.sap.com/wiki/display/BI/AuthorizationinSAPNWBI#AuthorizationinSAPNWBI-Differencebetweenrssmandrsecadmin
Marc Bernard session
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/ac7d7c27-0a01-0010-d5a9-9cb9ddcb6bce
SAP release note for new Analysis Authorizations
http://help.sap.com/saphelp_nw04s/helpdata/en/80/d71042f664e22ce10000000a1550b0/frameset.htm
Best,
Ralf -
Planning Area to Data Store Object (ODS)
Hi,
The data source is the planning area and the target is the ODS object. The transfer rule mapping is 1:1 for all key figures; no routine is used. After executing the InfoPackage and the data transfer, I am checking the values in the ODS update table.
One key figure for a CVC always shows a lower value in the ODS update table than in the planning area. I checked the extractor in RSA3: it shows 10 records for a selection condition, but the ODS update table shows only 9 records for the same selection. One particular record is not updated in the ODS update table.
Kindly advise to overcome this issue.
thanks and regards
Murugesan
Hi,
Did you compare the data in the PSA? Are all 10 records there, or fewer? And how are the keys defined for your DSO? If two records share the same key combination, one overwrites the other. Try making all characteristics key fields and reload.
Why do you need a DSO for planning? Aren't you aggregating?
@AK -
Delete a request from the Data Store Object,
Hi Experts,
We recently upgraded our BW 3.5 system to NW2004s, and I tried to load data into my DataStore. I am trying to delete an invalid request, but the system does not allow me to delete it.
Can someone help me with this?
Thanks in advance
Try deleting this request from the PSA as well:
On the DataSource -> right-click -> Manage
Find the request number and delete it.
Also delete it from the DataStore Object.
The next time you run the DTP, you won't have this request.
Hope this helps. -
Error when Activating the data store object
Hi All,
I get the following errors when activating the ODS:
Transfer structure prefix for source system T90CLNT is not defined
Error when creating the export DataSource and dependent objects
Could anyone help me with this issue?
Regards,
Naveen.
Edited by: naveen naveen on Feb 2, 2009 6:35 AM
Hi Naveen,
Check the source system connection and run program RS_TRANSTRU_ACTIVATE_ALL in SE38, giving your source system ID and InfoSource (if you have one), or replicate your DataSource.
Also check whether the BW 'Myself' source system is active, then try activating the DSO.
hope this helps
Regards,
Daya Sagar
Edited by: Daya Sagar on Feb 2, 2009 11:13 AM -
Data Store Object Activation failure
Dear Friends,
I have a DSO which is updated directly from a flat file. The DTP seems fine, but activation of data in the DSO keeps failing. The error message is:
"Error when using the dynamic structures during SID determination"
I cross-checked the data in the file and the master data, and everything seems correct. I even deleted the request and repeated the load a couple of times, but got no positive result. Could anyone please tell me how to solve this?
Thanks and Regards
Neo
Hi Neo,
There is a setting at DSO level, "SID Generation upon Activation". Check whether that checkbox is selected; if it is, uncheck it, reload, and try again.
Also check for any special characters in the file; you will find the offending record in the activation log.
Hope this helps.
Thanks and Regards,
Venkat. -
Dynamically built query on execution: how to save the data in an Object Type
Hi,
In PL/SQL I am building and executing a query dynamically. How can I store the output of the query in an object type? I have defined the following object type and need to store the output of the query in it:
CREATE OR REPLACE TYPE DEMO.FIRST_RECORDTYPE AS OBJECT(
pkid NUMBER,
pkname VARCHAR2(100),
pkcity VARCHAR2(100),
pkcounty VARCHAR2(100)
);
CREATE OR REPLACE TYPE DEMO.FIRST_RECORDTYPETAB AS TABLE OF FIRST_RECORDTYPE;
Here is the query generated at runtime; it is inside a LOOP.
-- I initialize my object type
data := new FIRST_RECORDTYPETAB();
FOR some_cursor IN c_get_ids (username)
LOOP
x_context_count := x_context_count + 1;
-- here I build the query dynamically and the same query generated is
sql_query := 'SELECT pkid as pid ,pkname as pname,pkcity as pcity, pkcounty as pcounty FROM cities WHERE passed = <this value changes on every iteration of the cursor>'
-- and now I need to execute the above query but need to store the output
EXECUTE IMMEDIATE sql_query
INTO <I need to save the output in the Type I defined>
END LOOP;
How can I save the output of the dynamically built query in the Object Type. As I am looping so the type can have several records.
Any help is appreciated.
Thanks
Hi,
Here is a solution for "Dynamically built query on execution: how to save the data in an object type".
Step 1 (object creation):
SQL> ED
Wrote file afiedt.buf
1 Create Or Replace Type contract_details As Object(
2 contract_number Varchar2(15),
3 contrcat_branch Varchar2(15)
4* );
SQL> /
Type created.
Step 2 (table creation with the object):
SQL> Create Table contract_dtls(Id Number,contract contract_details)
2 /
Table created.
Step 3 (execute an anonymous block to insert the dynamic output into the object type):
Declare
LV_V_SQL_QUERY Varchar2(4000);
LV_N_CURSOR Integer;
LV_N_EXECUTE_CURSOR Integer;
LV_V_CONTRACT_BR Varchar2(15) := 'TNW'; -- change the branch name by making this as input parameter for a procedure or function
OV_V_CONTRACT_NUMBER Varchar2(15);
LV_V_CONTRACT_BRANCH Varchar2(15);
Begin
LV_V_SQL_QUERY := 'SELECT CONTRACT_NUMBER,CONTRACT_BRANCH FROM CC_CONTRACT_MASTER WHERE CONTRACT_BRANCH = '''||LV_V_CONTRACT_BR||'''';
LV_N_CURSOR := Dbms_Sql.open_Cursor;
Dbms_Sql.parse(LV_N_CURSOR,LV_V_SQL_QUERY,2);
Dbms_Sql.define_Column(LV_N_CURSOR,1,OV_V_CONTRACT_NUMBER,15);
Dbms_Sql.define_Column(LV_N_CURSOR,2,LV_V_CONTRACT_BRANCH,15);
LV_N_EXECUTE_CURSOR := Dbms_Sql.Execute(LV_N_CURSOR);
Loop
Exit When Dbms_Sql.fetch_Rows (LV_N_CURSOR)= 0;
Dbms_Sql.column_Value(LV_N_CURSOR,1,OV_V_CONTRACT_NUMBER);
Dbms_Sql.column_Value(LV_N_CURSOR,2,LV_V_CONTRACT_BRANCH);
Dbms_Output.put_Line('CONTRACT_BRANCH--'||LV_V_CONTRACT_BRANCH);
Dbms_Output.put_Line('CONTRACT_NUMBER--'||OV_V_CONTRACT_NUMBER);
INSERT INTO contract_dtls VALUES(1,CONTRACT_DETAILS(OV_V_CONTRACT_NUMBER,LV_V_CONTRACT_BRANCH));
End Loop;
Dbms_Sql.close_Cursor (LV_N_CURSOR);
COMMIT;
Exception
When Others Then
Dbms_Output.put_Line('SQLERRM--'||Sqlerrm);
Dbms_Output.put_Line('SQLERRM--'||Sqlcode);
End;
Step 4 (check that the values were inserted into the table containing the object):
SELECT * FROM contract_dtls;
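For comparison, the same idea, running a dynamically built query and storing each fetched row in a typed record collection, looks like this in Python with sqlite3. The table and column names are taken from the DBMS_SQL example above; the in-memory setup rows are made up for the demo:

```python
# Run a dynamically built query and collect each row into a typed object.
# Table/column names follow the CC_CONTRACT_MASTER example above;
# the sample data is invented for the demonstration.
import sqlite3
from collections import namedtuple

ContractDetails = namedtuple("ContractDetails", "contract_number contract_branch")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cc_contract_master (contract_number TEXT, contract_branch TEXT)")
conn.executemany("INSERT INTO cc_contract_master VALUES (?, ?)",
                 [("C001", "TNW"), ("C002", "TNW"), ("C003", "BLR")])

branch = "TNW"
# Build the statement dynamically, but pass the value as a bind variable
# rather than concatenating it into the SQL string.
sql_query = ("SELECT contract_number, contract_branch "
             "FROM cc_contract_master WHERE contract_branch = ?")
contracts = [ContractDetails(*row) for row in conn.execute(sql_query, (branch,))]
# contracts now holds one typed record per fetched row
```

Note the bind variable: the DBMS_SQL example concatenates the branch value into the SQL text, which works but invites SQL injection; binding is the safer default in any language.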
Regards
C.karukkuvel -
Business Objects(BOs) & Data Transfer Objects(DTOs)-both needed ?
In a J2EE system...
I know that "Business Objects" (BOs) are basically value objects (VOs) ...lots of getters and setters and some business logic. These are basically to model nouns in the system. Eg Student BO.
I know that "Data Transfer Objects" (DTOs) are value objects (VOs)....with getters and setters... with the purpose of avoiding multiple method calls...to avoid overhead...which effects performance. eg it's better to pass a Student DTO then say...pass the student ID and student name and student age etc etc.
Main question : Should a system have both ? If yes, why do I need a StudentBO.java and then another StudentDTO.java....when they are so similar ?...when both are basically VOs ? Can't I just use BOs to serve as DTOs ?
Thanks.Hi,
I've started using BO's and DTO's since 3 months .With my experiece i understand we nned both of them.
The BusinessObject represents the data client. It is the object that requires access to the data source to obtain and store data.
DTO
This represents a Transfer Object used as a data carrier. The DataAccessObject may use a Transfer Object to return data to the client. The DataAccessObject may also receive the data from the client in a Transfer Object to update the data in the data source.
The point from this is that we do not perform data-access operations directly on the BO; the data operations work with the DTO.
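To make the split concrete, here is a minimal sketch. Python is used for brevity (the division of responsibilities is the same in Java), and the Student fields are just the example from the question:

```python
# Minimal sketch of the BO vs. DTO split. The BO owns business rules;
# the DTO is a dumb carrier crossing a tier boundary in one call.
from dataclasses import dataclass

@dataclass
class StudentDTO:
    """Pure data carrier: no behavior, safe to serialize and ship."""
    student_id: int
    name: str
    age: int

class StudentBO:
    """Business object: holds state *and* enforces business rules."""
    ADULT_AGE = 18

    def __init__(self, student_id, name, age):
        if age < 0:
            raise ValueError("age must be non-negative")  # business rule lives here
        self.student_id, self.name, self.age = student_id, name, age

    def is_adult(self):
        return self.age >= self.ADULT_AGE

    def to_dto(self):
        # One method call transfers all fields across the boundary at once.
        return StudentDTO(self.student_id, self.name, self.age)
```

The payoff of keeping both: the BO can change its rules and internals freely, while the DTO stays a stable, logic-free contract for the remote interface.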
Ashwin -
Are Data Source and Data Store the same thing in BI? If not, can someone explain what each of these terms means?
Thanks for the help
A DataSource's Persistent Staging Area (PSA) is a transparent database table, the initial store in BI. In this table the requested data is saved unchanged from the source system.
DataStore Objects are the primary physical database storage objects used in BI. They are designed to store very detailed, transaction-level records.
Thanks -
Error when adding multiple source data stores in ODI Interface
I am trying to create an ODI interface with a couple of source tables and one target table. For example, I am using the following data structure in my target table:
Order (Target Table)
order id
product id
customer id
address id
warehouse id
shipment id
For the above target table I need to extract data from each of the following source tables:
orderitem
product
customer
address
warehouse
shipment
In total, I need to join 6 source tables and load the data into the target table.
When I drag the source data stores into the ODI interface mapping tab, it throws the NullPointerException below. Because of this error I am not able to map the target table to all the source tables.
Please suggest me what could be the reason for the error.
Error:
java.lang.NullPointerException
at oracle.odi.interfaces.interactive.support.clauseimporters.ClauseImporterDefault.importClauses(ClauseImporterDefault.java:81)
at oracle.odi.interfaces.interactive.support.actions.InterfaceActionAddSourceDataStore.performAction(InterfaceActionAddSourceDataStore.java:124)
at oracle.odi.interfaces.interactive.support.InteractiveInterfaceHelperWithActions.performAction(InteractiveInterfaceHelperWithActions.java:845)
at oracle.odi.interfaces.interactive.support.InteractiveInterfaceHelperWithActions.performAction(InteractiveInterfaceHelperWithActions.java:821)
at oracle.odi.ui.OdiSdkEntityFactory.dropSourceDataStore(OdiSdkEntityFactory.java:523)
at oracle.odi.ui.etlmodeler.diag.dragdrop.DiagramNodeDropHandler.dropObjects(DiagramNodeDropHandler.java:150)
at oracle.diagram.framework.dragdrop.handler.DelegateChooserDropHandler.dropSelected(DelegateChooserDropHandler.java:386)
at oracle.modeler.dnd.ModelerTCDropHandler.access$001(ModelerTCDropHandler.java:69)
at oracle.modeler.dnd.ModelerTCDropHandler$3.run(ModelerTCDropHandler.java:288)
at oracle.modeler.dif.GraphicAdder.addImpl(GraphicAdder.java:387)
at oracle.modeler.dif.GraphicAdder.addAndLayoutImpl(GraphicAdder.java:372)
at oracle.modeler.dif.GraphicAdder.addSelectAndLayout(GraphicAdder.java:348)
at oracle.modeler.dnd.ModelerTCDropHandler.dropSelected(ModelerTCDropHandler.java:284)
at oracle.diagram.framework.dragdrop.handler.DelegateChooserDropHandler.drop(DelegateChooserDropHandler.java:150)
at oracle.diagram.framework.dragdrop.DefaultDropPlugin.drop(DefaultDropPlugin.java:115)
at oracle.modeler.dnd.ModelerDropPlugin.drop(ModelerDropPlugin.java:100)
at oracle.diagram.framework.dragdrop.DropTargetHelper.drop(DropTargetHelper.java:188)
at oracle.diagram.framework.dragdrop.ManagerViewDragAndDropController$MyDropTargetListener.drop(ManagerViewDragAndDropController.java:802)
at java.awt.dnd.DropTarget.drop(DropTarget.java:434)
at sun.awt.dnd.SunDropTargetContextPeer.processDropMessage(SunDropTargetContextPeer.java:519)
at sun.awt.dnd.SunDropTargetContextPeer$EventDispatcher.dispatchDropEvent(SunDropTargetContextPeer.java:832)
at sun.awt.dnd.SunDropTargetContextPeer$EventDispatcher.dispatchEvent(SunDropTargetContextPeer.java:756)
at sun.awt.dnd.SunDropTargetEvent.dispatch(SunDropTargetEvent.java:30)
at java.awt.Component.dispatchEventImpl(Component.java:4487)
at java.awt.Container.dispatchEventImpl(Container.java:2099)
at java.awt.Component.dispatchEvent(Component.java:4460)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4577)
at java.awt.LightweightDispatcher.processDropTargetEvent(Container.java:4312)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4163)
at java.awt.Container.dispatchEventImpl(Container.java:2085)
at java.awt.Window.dispatchEventImpl(Window.java:2478)
at java.awt.Component.dispatchEvent(Component.java:4460)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:599)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
Hello,
does somebody have a solution for this issue?
I am new to ODI and have the same problem. I am using ODI Designer Standalone Edition Version 11.1.1.
Maybe there is something wrong with my configuration, or maybe I am doing something wrong while creating the mapping graphically. It does not help to save and re-open the mapping, nor to re-reverse-engineer the objects. Nor does it seem to depend on the number of objects joined: some objects always cause this error, while with others the error starts only when there are 10 or more objects on the map before they are added.
One possible workaround I have found is to do it all on the quick-edit tab, creating the joins and mappings one by one (plus source or lookup joins and filter mappings). That works with my configuration, but drag-and-drop on the mapping tab would sometimes be faster, more convenient, and just more natural.
Br,
Jaanus -
Reclaiming memory when using concurrent data store
Hello,
I use the concurrent data store in my python application and I'm noticing that
the system memory usage increases and is never freed when the application
is done. The following python code is a unit test that simulates my app's workload:
##########BEGIN PYTHON CODE##################
"""TestCases for multi-threaded access to a DB."""
#import gc
#gc.enable()
#gc.set_debug(gc.DEBUG_LEAK)
import os
import sys
import time
import errno
import shutil
import tempfile
from pprint import pprint
from random import random
try:
True, False
except NameError:
True = 1
False = 0
DASH = '-'
try:
from threading import Thread, currentThread
have_threads = True
except ImportError:
have_threads = False
import unittest
verbose = 1
from bsddb import db, dbutils
class BaseThreadedTestCase(unittest.TestCase):
dbtype = db.DB_UNKNOWN # must be set in derived class
dbopenflags = 0
dbsetflags = 0
envflags = 0
def setUp(self):
if verbose:
dbutils._deadlock_VerboseFile = sys.stdout
homeDir = os.path.join(os.path.dirname(sys.argv[0]), 'db_home')
self.homeDir = homeDir
try:
os.mkdir(homeDir)
except OSError, e:
if e.errno <> errno.EEXIST: raise
self.env = db.DBEnv()
self.setEnvOpts()
self.env.open(homeDir, self.envflags | db.DB_CREATE)
self.filename = self.__class__.__name__ + '.db'
self.d = db.DB(self.env)
if self.dbsetflags:
self.d.set_flags(self.dbsetflags)
self.d.open(self.filename, self.dbtype, self.dbopenflags|db.DB_CREATE)
def tearDown(self):
self.d.close()
self.env.close()
del self.d
del self.env
#shutil.rmtree(self.homeDir)
#print "\nGARBAGE:"
#gc.collect()
#print "\nGARBAGE OBJECTS:"
#for x in gc.garbage:
# s = str(x)
# print type(x),"\n ", s
def setEnvOpts(self):
pass
def makeData(self, key):
return DASH.join([key] * 5)
class ConcurrentDataStoreBase(BaseThreadedTestCase):
dbopenflags = db.DB_THREAD
envflags = db.DB_THREAD | db.DB_INIT_CDB | db.DB_INIT_MPOOL
readers = 0 # derived class should set
writers = 0
records = 1000
def test01_1WriterMultiReaders(self):
if verbose:
print '\n', '-=' * 30
print "Running %s.test01_1WriterMultiReaders..." % \
self.__class__.__name__
threads = []
for x in range(self.writers):
wt = Thread(target = self.writerThread,
args = (self.d, self.records, x),
name = 'writer %d' % x,
)#verbose = verbose)
threads.append(wt)
for x in range(self.readers):
rt = Thread(target = self.readerThread,
args = (self.d, x),
name = 'reader %d' % x,
)#verbose = verbose)
threads.append(rt)
for t in threads:
t.start()
for t in threads:
t.join()
def writerThread(self, d, howMany, writerNum):
#time.sleep(0.01 * writerNum + 0.01)
name = currentThread().getName()
start = howMany * writerNum
stop = howMany * (writerNum + 1) - 1
if verbose:
print "%s: creating records %d - %d" % (name, start, stop)
for x in range(start, stop):
key = '%04d' % x
#dbutils.DeadlockWrap(d.put, key, self.makeData(key),
# max_retries=12)
d.put(key, self.makeData(key))
if verbose and x % 100 == 0:
print "%s: records %d - %d finished" % (name, start, x)
if verbose:
print "%s: finished creating records" % name
## # Each write-cursor will be exclusive, the only one that can update the DB...
## if verbose: print "%s: deleting a few records" % name
## c = d.cursor(flags = db.DB_WRITECURSOR)
## for x in range(10):
## key = int(random() * howMany) + start
## key = '%04d' % key
## if d.has_key(key):
## c.set(key)
## c.delete()
## c.close()
if verbose:
print "%s: thread finished" % name
d.sync()
del d
def readerThread(self, d, readerNum):
time.sleep(0.01 * readerNum)
name = currentThread().getName()
for loop in range(5):
c = d.cursor()
count = 0
rec = c.first()
while rec:
count += 1
key, data = rec
self.assertEqual(self.makeData(key), data)
rec = c.next()
if verbose:
print "%s: found %d records" % (name, count)
c.close()
time.sleep(0.05)
if verbose:
print "%s: thread finished" % name
del d
def setEnvOpts(self):
#print "Setting cache size:", self.env.set_cachesize(0, 2000)
pass
class BTreeConcurrentDataStore(ConcurrentDataStoreBase):
dbtype = db.DB_BTREE
writers = 10
readers = 100
records = 100000
def test_suite():
suite = unittest.TestSuite()
if have_threads:
suite.addTest(unittest.makeSuite(BTreeConcurrentDataStore))
else:
print "Threads not available, skipping thread tests."
return suite
if __name__ == '__main__':
unittest.main(defaultTest='test_suite')
#print "\nGARBAGE:"
#gc.collect()
#print "\nGARBAGE OBJECTS:"
#for x in gc.garbage:
# s = str(x)
# print type(x),"\n ", s
##########END PYTHON CODE##################
Using the Linux command 'top' prior to and during the execution of the Python script above, I noticed that a considerable amount of memory is used up and never reclaimed when the script ends. If you delete db_home, however, the memory is reclaimed.
Am I conjuring up the bsddb concurrent data store incorrectly somehow?
I'm using python 2.5.1 and the builtin bsddb module.
Thanks,
Gerald
Message was edited by:
user590005
Message was edited by:
user590005
I think I am seeing what you are reporting, but I need to check further into the reason for this.
Running your program and monitoring with top/vmstat before/after the test, and after deleting db_home, gives:
BEFORE RUNNING PYTHON TEST:
++++++++++++++++++++++++++
top - 17:00:17 up 7:00, 6 users, load average: 0.07, 0.38, 0.45
Tasks: 111 total, 1 running, 109 sleeping, 0 stopped, 1 zombie
Cpu(s): 3.6% us, 0.7% sy, 0.0% ni, 95.5% id, 0.0% wa, 0.2% hi, 0.0% si
Mem: 1545196k total, 1407100k used, 138096k free, 20700k buffers
Swap: 2040212k total, 168k used, 2040044k free, 935936k cached
[swhitman@swhitman-lnx python]$ vmstat
procs -----------memory---------- ---swap-- -----io---- system ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 160 247096 20860 833604 0 0 31 22 527 675 7 1 91 1
AFTER RUNNING PYTHON TEST:
++++++++++++++++++++++++++
top - 17:02:00 up 7:02, 6 users, load average: 2.58, 1.36, 0.80
Tasks: 111 total, 1 running, 109 sleeping, 0 stopped, 1 zombie
Cpu(s): 3.7% us, 0.5% sy, 0.0% ni, 95.8% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 1545196k total, 1508156k used, 37040k free, 20948k buffers
Swap: 2040212k total, 168k used, 2040044k free, 1035788k cached
[swhitman@swhitman-lnx python]$ vmstat
procs -----------memory---------- ---swap-- -----io---- system ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 160 143312 21120 935784 0 0 31 25 527 719 7 1 91 1
AFTER RUNNING PYTHON TEST & DB_HOME IS DELETED:
++++++++++++++++++++++++++++++++++++++++++++++
top - 17:02:48 up 7:02, 6 users, load average: 1.22, 1.17, 0.76
Tasks: 111 total, 1 running, 109 sleeping, 0 stopped, 1 zombie
Cpu(s): 8.8% us, 0.5% sy, 0.0% ni, 90.5% id, 0.0% wa, 0.2% hi, 0.0% si
Mem: 1545196k total, 1405236k used, 139960k free, 21044k buffers
Swap: 2040212k total, 168k used, 2040044k free, 934032k cached
[swhitman@swhitman-lnx python]$ vmstat
procs -----------memory---------- ---swap-- -----io---- system ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 160 246208 21132 833852 0 0 31 25 527 719 7 1 91 1
So the top/vmstat memory usage summary is:
                     before test     after test      after rm db_home/*
Top (mem used)       1407100k        1508156k        1405236k
vmstat free/cache    247096/833604   143312/935784   246208/833852 -
TT0846: Data store connection invalid or not current
In one of our environments we started getting the TT0846 error randomly.
Every time we get this error, we have to restart our application to reconnect to the data store.
When we checked the error logs, we found the following:
08:32:00.56 Err : : 20084: 20089/0x198ede80: XXX: fstat returned with info uid=670 gid=673 mode=drwxr-xr-x size=12288
08:32:00.56 Err : : 20084: 20089/0x198ede80: XXX: fstat returned with info uid=670 gid=673 mode=-rw------- size=0
08:32:00.56 Err : : 20084: 20089/0x198ede80: Log flusher encountered error 906: TT0906: Cannot change mode on log file /mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2031, error Operation not permitted -- file "logfile.c", lineno 813, procedure "sbLogFileCreate". fslsn = 2031.0, disklsn = 2030.534409216, ckptInfo.existLFN = 2028, newestlfn = 2030.
08:32:01.56 Err : : 20084: 20089/0x198ede80: Log flusher reports success: previously-reported error 906 (TT0906: Cannot change mode on log file /mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2031, error Operation not permitted -- file "logfile.c", lineno 813, procedure "sbLogFileCreate") no longer pending.
08:40:01.67 Err : : 20084: 20089/0x198ede80: XXX: Dbhdr group: dba Logpath:/mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2032
08:40:01.67 Err : : 20084: 20089/0x198ede80: XXX: fstat returned with info uid=670 gid=673 mode=drwxr-xr-x size=12288
08:40:01.67 Err : : 20084: 20089/0x198ede80: XXX: fstat returned with info uid=670 gid=673 mode=-rw------- size=0
08:40:01.67 Err : : 20084: 20089/0x198ede80: Log flusher encountered error 906: TT0906: Cannot change mode on log file /mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2032, error Operation not permitted -- file "logfile.c", lineno 813, procedure "sbLogFileCreate". fslsn = 2032.0, disklsn = 2031.536557568, ckptInfo.existLFN = 2030, newestlfn = 2031.
08:40:01.67 Err : REP: 26549: UAT_DSN_DS:misc.c(247): TT16046: Failed to force log
08:40:01.67 Err : REP: 26549: UAT_DSN_DS:misc.c(247): TT722: TT0722: Log flusher reports error 906 (TT0906: Cannot change mode on log file /mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2032, error Operation not permitted -- file "logfile.c", lineno 813, procedure "sbLogFileCreate") -- file "logflusher.c", lineno 6136, procedure "sbLogBufOSErrorPush"
08:40:01.67 Err : REP: 26549: UAT_DSN_DS:receiver.c(1241): TT16160: Failed to flush log records. Replication Agent exiting; but will be restarted by TimesTen daemon
08:40:01.75 Warn: REP: 26549: UAT_DSN_DS:receiver.c(2870): TT16060: Failed to read data from the network. TimesTen replication agent is stopping
08:40:02.68 Err : : 20084: 20089/0x198ede80: Log flusher reports success: previously-reported error 906 (TT0906: Cannot change mode on log file /mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2032, error Operation not permitted -- file "logfile.c", lineno 813, procedure "sbLogFileCreate") no longer pending.
08:40:02.91 Err : : 20084: repagent says it has failed to start: Failed to flush log records. Replication Agent exiting; but will be restarted by TimesTen daemon
08:48:02.62 Err : : 20084: 20089/0x198ede80: XXX: Dbhdr group: dba Logpath:/mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2033
08:48:02.62 Err : : 20084: 20089/0x198ede80: XXX: fstat returned with info uid=670 gid=673 mode=drwxr-xr-x size=12288
08:48:02.62 Err : : 20084: 20089/0x198ede80: XXX: fstat returned with info uid=670 gid=673 mode=-rw------- size=0
08:48:02.62 Err : : 20084: 20089/0x198ede80: Log flusher encountered error 906: TT0906: Cannot change mode on log file /mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2033, error Operation not permitted -- file "logfile.c", lineno 813, procedure "sbLogFileCreate". fslsn = 2033.0, disklsn = 2032.534587392, ckptInfo.existLFN = 2031, newestlfn = 2032.
08:48:02.62 Err : : 20084: 20428/0x2aac1c17e850: sbXactCommit: Unable to sync log to disk. Errors/warnings follow.
08:48:02.62 Err : : 20084: 20428/0x2aac1c17e850: TT0722: Log flusher reports error 906 (TT0906: Cannot change mode on log file /mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2033, error Operation not permitted -- file "logfile.c", lineno 813, procedure "sbLogFileCreate") -- file "logflusher.c", lineno 6136, procedure "sbLogBufOSErrorPush"
08:48:02.62 Err : : 20084: 20428/0x2aac1c17e850: *** 20428: (Error 722): TT0722: Log flusher reports error 906 (TT0906: Cannot change mode on log file /mtsuatlog/timesten/UAT_DSN_logdir/UAT_DSN_ds.log2033, error Operation not permitted -- file "logfile.c", lineno 813, procedure "sbLogFileCreate") -- file "logflusher.c", lineno 6136, procedure "sbLogBufOSErrorPush"
08:48:02.62 Err : : 20084: 20428/0x2aac1c17e850: *** 20428: -- file "logflusher.c", lineno 6136, procedure "sbLogBufOSErrorPush"
08:48:02.63 Err : : 20084: 20428/0x2aac1c17e850: Data store marked invalid [xact.c:/st_timesten_11.2.1/3:sbXactCommit:6597] PID 20428 (timestenorad) CONN 13 (Refresher(S,60000)) Context 0x2aac1c17e850
08:48:03.57 Warn: : 20084: 2649/0x40b2590: Forced Disconnect /timesten/UAT_DSN_datastore/UAT_DSN_ds
08:48:03.57 Warn: : 20089: Stopping subdaemon HistGC thread for /timesten/UAT_DSN_datastore/UAT_DSN_ds because db is invalid.
08:48:03.57 Warn: : 20089: subd not sending crs notification, no valid socket
08:48:03.57 Warn: : 20089: Stopping subdaemon Log Marker thread for /timesten/UAT_DSN_datastore/UAT_DSN_ds because db is invalid.
08:48:03.57 Warn: : 20089: subd not sending crs notification, no valid socket
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:transmitter.c(9660): TT16127: Failed to read transaction logs
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:transmitter.c(9660): TT994: TT0994: Data store connection terminated. Please reconnect. -- file "dbAPI.c", lineno 9656, procedure "sb_dbLogReadQ"
08:48:03.57 Warn: : 20084: 2649/0x3f58850: Forced Disconnect /timesten/UAT_DSN_datastore/UAT_DSN_ds
08:48:03.57 Warn: : 20084: 2649 ----------: Disconnecting from an old instance
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:meta.c(604): TT16012: Data store is invalid. Replication Agent exiting but may be restarted by TimesTen daemon (depending on restart policy)
08:48:03.57 Warn: REP: 2649: UAT_DSN_DS:receiver.c(2870): TT16060: Failed to read data from the network. TimesTen replication agent is stopping
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:repagent.c(1237): TT16012: Data store is invalid. Replication Agent exiting but may be restarted by TimesTen daemon (depending on restart policy)
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:misc.c(247): TT16046: Failed to force log
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:repagent.c(3364): TT16005: Failed to disconnect from datastore '/timesten/UAT_DSN_datastore/UAT_DSN_ds' for 'TRANSMITTER' thread
08:48:03.57 Warn: : 20084: 20089/0x19901490: Forced Disconnect /timesten/UAT_DSN_datastore/UAT_DSN_ds
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:misc.c(247): TT994: TT0994: Data store connection terminated. Please reconnect. -- file "dbAPI.c", lineno 5166, procedure "sb_dbLogFlush"
08:48:03.57 Warn: : 20084: 20089 ----------: Disconnecting from an old instance
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:repagent.c(3364): TT846: TT0846: Data store connection invalid or not current -- file "dbAPI.c", lineno 3178, procedure "sb_dbDisconnect()"
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:repagent.c(1237): TT16012: Data store is invalid. Replication Agent exiting but may be restarted by TimesTen daemon (depending on restart policy)
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:repagent.c(3364): TT16005: Failed to disconnect from datastore '/timesten/UAT_DSN_datastore/UAT_DSN_ds' for 'LOGFORCE' thread
08:48:03.57 Err : REP: 2649: UAT_DSN_DS:repagent.c(3364): TT846: TT0846: Data store connection invalid or not current -- file "dbAPI.c", lineno 3178, procedure "sb_dbDisconnect()"
08:48:03.57 Warn: : 20084: 20089/0x19914aa0: Forced Disconnect /timesten/UAT_DSN_datastore/UAT_DSN_ds
08:48:03.57 Warn: : 20084: 20089 ----------: Disconnecting from an old instance
What I could gather from this is that the replication agent tried to change permissions on the transaction logs, was unable to, and the data store was marked invalid and disconnected.
But I have not been able to find a reason for this error occurring; the environment has not been touched. There have been changes made to database objects, but can they lead to this error?
If not, what is causing the error and how can we resolve it?
Thanks, your help is much appreciated.
You have a permission problem. For some reason your log files have the owner/group 'timesten:timesten', but based on the information from ttVersion and from the permissions on the checkpoint file they should be 'timesten:dba'. Also, the permissions should be rw-rw---- but they are set to rw-------. There could be several reasons for this; the most likely are:
1. Incorrect permissions set on the directory that holds the transaction log files.
2. TimesTen daemon processes running with incorrect userid/group.
3. Instance administrator user (timesten) no longer has group membership of the protection group (dba).
For (1), this would be 'user error'. For (2) and (3), this could only happen if permissions on the TimesTen install files have been manually changed (something that should not be done without a very clear understanding of how the permissions need to be set), if the uid/gid for the instance administrator user has been changed at the O/S level after installing TimesTen, or if the 'timesten' user has been removed from the 'dba' group.
I would suggest that you check all these and see what may have been done at the O/S level.
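To script the checks Chris suggests, a small sketch using java.nio.file can list the owner, group, and POSIX permission string of each transaction log file. The directory path and file glob below are placeholders for your environment (e.g. the UAT_DSN_logdir from the log above), not anything TimesTen-specific:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFileAttributes;
import java.nio.file.attribute.PosixFilePermissions;

public class LogPermCheck {
    public static void main(String[] args) throws IOException {
        // Placeholder: point this at your actual transaction log directory.
        Path logDir = Paths.get(args.length > 0 ? args[0] : ".");
        // Matches names like UAT_DSN_ds.log2033.
        try (DirectoryStream<Path> logs = Files.newDirectoryStream(logDir, "*.log*")) {
            for (Path log : logs) {
                PosixFileAttributes attrs =
                        Files.readAttributes(log, PosixFileAttributes.class);
                // Per the discussion above, a healthy file should show
                // owner timesten, group dba, permissions rw-rw----.
                System.out.printf("%s %s:%s %s%n",
                        PosixFilePermissions.toString(attrs.permissions()),
                        attrs.owner().getName(),
                        attrs.group().getName(),
                        log.getFileName());
            }
        }
    }
}
```

Any file whose printed line differs from the expected owner/group/mode is a candidate for cause (1), (2), or (3) above.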
Chris -
Deepak,
Can you throw some light on the 'Data Access Objects' pattern and how it can be used with entity beans? I am trying to apply this to the following scenario and having a hard time.
We need the data retrieval to be transparent to users, whether the source is an Oracle database, flat files, or legacy storage. If I use the Data Access Object pattern, it is easier to use Java classes in place of entity beans.
Thanks
Shreyas Kamat
Shreyas,
[For those not familiar with our Data Access Object Pattern, the
beta version of the pattern is available on JDC (needs JDC login) at:
http://developer.java.sun.com/developer/restricted/patterns/DataAccessObject.html]
Data Access Objects (or DAOs) are objects that hide the database
implementation from data clients. Data clients are any objects
that need to retrieve data from the data source. And the data source
could be anything that contains data, not necessarily only RDBMSs.
For example, an external system could be a data source.
Using DAOs with entity beans is applicable only in a bean-managed
persistence (BMP) scenario. In BMP, the entity beans are responsible
for providing the data load and store implementation in the ejbLoad()
and ejbStore() methods of the bean implementation. Without using
the DAOs, the entity bean class will contain all the JDBC code (assuming
that the data source is an RDBMS), SQL, etc. This makes the
entity bean class bloated and difficult to manage when changes
are made to the data logic. In addition, this tightly couples the
data source implementation with the entity bean implementation.
By using DAOs, the ejbStore() and ejbLoad() methods are much
simpler. Also, it is easier to change from one datasource implementation
to another by replacing the DAOs. Further flexibility is possible
by employing the DAO factory strategy as described in the DAO pattern
in our catalog.
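As a minimal sketch of the DAO and DAO factory strategy described above (every class name here is made up for illustration, and an in-memory map stands in for the JDBC/SQL code a real DAO would contain):

```java
import java.util.HashMap;
import java.util.Map;

// Data clients (entity beans, servlets) program against this interface,
// never against a concrete data source.
interface CustomerDAO {
    String findName(int customerId);
    void saveName(int customerId, String name);
}

// One concrete DAO per data source. A real RDBMS DAO would hold the
// JDBC and SQL here; the map is just a stand-in for this sketch.
class InMemoryCustomerDAO implements CustomerDAO {
    private final Map<Integer, String> table = new HashMap<>();
    public String findName(int customerId) { return table.get(customerId); }
    public void saveName(int customerId, String name) { table.put(customerId, name); }
}

// The factory hides which concrete DAO is in use. Switching from one
// data source to another means changing only this factory, not the
// entity bean or servlet code that uses the DAO.
class DAOFactory {
    static CustomerDAO getCustomerDAO() {
        return new InMemoryCustomerDAO();
    }
}

public class DaoDemo {
    public static void main(String[] args) {
        // An entity bean's ejbStore()/ejbLoad() would delegate to calls
        // like these instead of containing JDBC code directly.
        CustomerDAO dao = DAOFactory.getCustomerDAO();
        dao.saveName(42, "Shreyas");
        System.out.println(dao.findName(42));
    }
}
```

The same CustomerDAO could be handed to a servlet in an application that does not use entity beans at all, which is the reuse point made below.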
Coming back to your point about making the data access transparent
to the clients; it is possible to achieve this by using DAO and
DAO factory strategy. In the current version of the pattern,
included in the book, we provide sample code to show how
to design DAO classes and to apply DAO factory strategy.
On your last point about using Java classes instead of entity beans, this
is not the intention of the DAO pattern. Entity beans serve a different
purpose in the architecture as coarse-grained transactional components.
Entity beans use DAOs in BMP implementations, but are not replaced
by DAOs. However, DAO classes are reusable. The same DAO that is
used by an entity bean to retrieve some data in one application scenario,
can be reused by a servlet in another application scenario
that needs the same data, but does not use entity beans.
So the bottom line is that DAOs address the need for data access and
manipulation and work together with entity beans in BMP implementations.
thanks,
-deepak