Tlogs in 10.3
In WebLogic 8.1 the tlogs are generated under the serverName directory by default.
But under which directory are the tlogs generated in 10.3?
Any suggestions will be appreciated.
In the default persistent store.
If you change nothing then:
<DOMAIN_HOME>\servers\<SERVER_NAME>\data\store\default\
See:
http://download.oracle.com/docs/cd/E14571_01/web.1111/e13731/trxcon.htm#i1048638
http://download.oracle.com/docs/cd/E14571_01/apirefs.1111/e13952/taskhelp/jta/ConfigureTheDefaultStore.html
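A quick shell sketch of where to look. The file name pattern _WLS_<SERVERNAME>000000.DAT is the typical default-store naming, assumed here for illustration; the mkdir/touch lines only simulate a domain tree so the listing has something to show:

```shell
# Sketch: locating default-store files for a server. Paths and the
# _WLS_*.DAT file name are typical defaults; verify in your install.
DOMAIN_HOME=/tmp/demo_domain
SERVER_NAME=AdminServer
STORE_DIR="$DOMAIN_HOME/servers/$SERVER_NAME/data/store/default"

# Simulate a domain tree (in a real domain these already exist).
mkdir -p "$STORE_DIR"
touch "$STORE_DIR/_WLS_${SERVER_NAME}000000.DAT"

# The transaction records live inside the default store's .DAT file(s):
ls "$STORE_DIR"
```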
Similar Messages
-
Hi
I am getting the following error while creating a TLOG. Any suggestion for solving this problem would be of great help. (If I do not specify a block size while creating the log, the log is created without any problem.)
TLOGSIZE in UBBCONFIG is 600
Regards,
R. Thangamurugesan
============== COMMANDS EXECUTED ===========
tmadmin
crdl -z/speedcash/tuxadm/samba/devices/TLOG -b635
crlog -m SITE1
============================================
=================== ERRORS =================
100647.uxmad02!tmadmin.1602.1.-2: 12-15-2004: Tuxedo Version 8.1
100647.uxmad02!tmadmin.1602.1.-2: TMADMIN_CAT:1330: INFO: Command: crdl -z/speedcash/tuxadm/samba/devices -b635
100713.uxmad02!tmadmin.1602.1.-2: TMADMIN_CAT:1330: INFO: Command: crdl -z /speedcash/tuxadm/samba/devices/TLOG -b635
100726.uxmad02!tmadmin.1602.1.-2: TMADMIN_CAT:1330: INFO: Command: crlog -m SITE1
100726.uxmad02!tmadmin.1602.1.-2: LIBTUX_CAT:297: ERROR: tlogcreate: gpcrtbl: no space can be allocated for disk table or for VTOC/UDL
101118.uxmad02!tmadmin.1602.1.-2: TMADMIN_CAT:1330: INFO: Command: crlog -m SITE1
============================================
Hi Roopeshdubey,
I have created the TLOG with block size 1000, and now it is created successfully.
Please have a look at the other issue I raised in this forum with the subject "Tuxedo Connectivity problem with Sybase"; any suggestions on that issue would be very helpful too.
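For what it's worth, the sizing problem above can be sketched as arithmetic: the crdl device must hold the TLOG pages plus the VTOC/UDL bookkeeping, which is why -b635 failed for TLOGSIZE=600. The overhead and headroom figures below are assumptions for illustration, not documented values:

```shell
# Sketch: pick a crdl device size (-b, in pages) big enough for a TLOG
# of TLOGSIZE pages plus VTOC/UDL overhead. OVERHEAD and HEADROOM_PCT
# are illustrative assumptions; the exact overhead is version-dependent.
TLOGSIZE=600
OVERHEAD=100          # assumed pages reserved for VTOC/UDL bookkeeping
HEADROOM_PCT=10       # extra slack so crlog does not fail at the boundary

device_pages=$(( (TLOGSIZE + OVERHEAD) * (100 + HEADROOM_PCT) / 100 ))
echo "crdl -z /path/to/TLOG -b $device_pages"   # prints: ... -b 770
echo "crlog -m SITE1"
```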
Thanks & Regards,
R. Thangamurugesan -
Best way to delete large number of records but not interfere with tlog backups on a schedule
I've inherited a system with multiple databases, and there are DB and tlog backups that run on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach to use for deleting the old records?
I've been digging through old posts, reading best practices, etc., but I'm still not sure of the best way to attack it.
Approach #1
A one-time delete that did everything. Delete all the old records, in batches of say 50,000 at a time.
After each run through all the tables for that DB, execute a tlog backup.
Approach #2
Create a job that does a similar process as above, except don't loop. Only do the batch once. Have the job scheduled to start, say, on the half hour, assuming the tlog backups run every hour.
Note:
Some of these (well, most) are going to have relations on them.
Hi shiftbit,
According to your description, I have changed the type of this question to a discussion; that way, more experts will focus on this issue and assist you. When deleting a large number of records from tables, you can use bulk deletions so that the transaction log does not grow and run out of disk space. If you can
take the table offline for maintenance, a complete reorganization is always best because it does the delete and places the table back into a pristine state.
For more information about deleting a large number of records without affecting the transaction log, see:
http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
Hope it can help.
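The batched-deletion idea above can be sketched as a driver loop. This only simulates the batching arithmetic; the commented T-SQL shows the per-batch pattern you would actually run (table, column, database, and backup path names are hypothetical):

```shell
# Sketch: delete in fixed-size batches, taking a log backup after each
# batch so the tlog can be reused. Per batch you would run something like:
#   DELETE TOP (50000) FROM dbo.OldRecords WHERE CreatedAt < @cutoff;
#   BACKUP LOG MyDb TO DISK = 'X:\backup\mydb.trn';
# (names above are hypothetical). Here we only model the loop count.
total=175000      # rows to purge (example figure)
batch=50000
batches=0
while [ "$total" -gt 0 ]; do
  deleted=$(( total < batch ? total : batch ))
  total=$(( total - deleted ))
  batches=$(( batches + 1 ))
done
echo "$batches"   # prints 4: the number of batch/backup cycles needed
```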
Regards,
Sofiya Li
TechNet Community Support -
Is there a TLOG? What's wrong?
Hi,
How can I find out if my application has a TLOG device configured.
In the dmconfig I see that the parameter DMTLOGDEV is set.
Often I see this following error in the ULOG -
LIBGWT_CAT:1042: ERROR: Can't open domain log
What could be going on? Will I get this error if no TLOG is configured?
thanks
-Atul
Check for TLOGDEVICE in the tuxconfig file and DMTLOGDEV in the dmconfig file.
If they are there, then your application expects a TLOG and you should create one.
"kench" <[email protected]> wrote:
thanks for the response.
When I do tmunloadcf, I do see in the MACHINES section:
TLOGNAME="TLOG"
TLOGSIZE=100
Should I go in and issue the command?
crdl -d domain_id
thanks
"roopesh" <[email protected]> wrote:
The parameter you should check is DMTLOGDEV, not DMTLOGNAME. DMTLOGNAME
will be set by Tuxedo by default even if you do not specify it in the dmconfig
file, but if you have defined DMTLOGDEV then you must have the device created,
otherwise it will report an error.
Similarly, in the tuxconfig file the corresponding parameter is TLOGDEVICE,
not TLOGNAME.
"roopesh" <[email protected]> wrote:
You need to find out whether you are using transactions across domains.
If you are, then you need a domains TLOG; otherwise, just
unsetting DMTLOGNAME in the dmconfig file should solve the problem.
To check whether your application is using the TLOG, you can
do a tmunloadcf | grep TLOGNAME. If you get something back, that means you are
using it.
Thanks
Roopesh
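Roopesh's check can be sketched against a saved dump of the configuration. The dump content below is a fabricated example; in practice you would run tmunloadcf on the Tuxedo machine first:

```shell
# Sketch: grep a dumped configuration for TLOG parameters. In practice:
#   tmunloadcf > /tmp/ubb.txt
# The fragment below is fabricated so the check has something to scan.
cat > /tmp/ubb.txt <<'EOF'
*MACHINES
mach1 LMID=SITE1
      TLOGDEVICE="/app/devices/TLOG"
      TLOGNAME="TLOG"
      TLOGSIZE=100
EOF

if grep -q 'TLOGDEVICE' /tmp/ubb.txt; then
  echo "TLOG device configured"
else
  echo "no TLOG device in configuration"
fi
```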
"kench" <[email protected]> wrote:
Hi,
How can I find out if my application has a TLOG device configured.
In the dmconfig I see that the parameter DMTLOGDEV is set.
Often I see this following error in the ULOG -
LIBGWT_CAT:1042: ERROR: Can't open domain log
What could be going on? Will I get this error if no TLOG is configured?
thanks
-Atul -
Hello,
Usually the tlog backup goes to tape, but it's giving some error. I took the backup on disk and then tried a tlog shrink, but it's not happening.
Please advise.
Best regards,
Vishal
You might need to take the log backup twice to actually be able to shrink the log file. Please run DBCC LOGINFO('db_name') and check whether the last value in the status column is 0; unless it is zero, you won't be able to shrink.
You can also use the query below to see what is keeping your log from truncating. If it shows LOG_BACKUP, you need to take a log backup.
select name, log_reuse_wait_desc from sys.databases where name = 'db_name'
The problem with the tape log backup is that there are two options for taking a log backup: one takes just the log backup (plain), and the other takes the log backup and truncates the log. I hope you have selected the second option; otherwise, please take the log backup using T-SQL.
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
JTA - many tlogs being produced but with 0 bytes
Hi there,
We have an old application running on WebLogic 6.1. At the moment we are experiencing hundreds of 'myserver_xxxx.tlog' files (with a zero size) being created every day, which is causing disk directory problems. The WebLogic documentation (well, 8.1 anyway) implies that we have JTA transactions that never complete.
Would anyone know why this would be happening? Usually during garbage collection the logs, if not needed, get deleted.
Also, just to let you know, I am doing all these operations in ABAP and not in XI/PI.
OK...just one Q...are you currently using XI/PI for this scenario?...at least the ID part...
Can I know how do we configure the IR objects..
In IR you will need:
1) DataType at the Sender ---> XI side ...i.e. for the log file...this structure should be identical to the structure in the log file....if your log file is a CSV file (not in a XML format)then you need to configure File Content Conversion in the sender File Channel
2) Create a Message Type...this will include the DT that you created above..
3) Create an OutBound Asynchronous Message Interface (service interface in PI)
4) Create a DataType at the XI ---> Receiver...i.e. the folder where you need to post the log file...If you want the log file to have the same format as in the Sender Folder (CSV/Flat) then you need to have a File Content Conversion in the Receiver File Channel.
5) A mapping program seems to be optional in your case...not required...
This completes your IR part....
In ID...
1) There is an object called Receiver Determination....in this you can configure a condition to check if your message has a node or not....you can use this condition editor to check if your file has the root node or not....if not, then it means that the file is blank...i.e. no log data....in such a case you can get an error in XI (in transaction SXMB_MONI)...
Though i tried to explain what objects you need...i recommend you also have a look at these blogs to get a pictorial view:
/people/venkat.donela/blog/2005/03/02/introduction-to-simplefile-xi-filescenario-and-complete-walk-through-for-starterspart1
/people/venkat.donela/blog/2005/03/03/introduction-to-simple-file-xi-filescenario-and-complete-walk-through-for-starterspart2
Regards,
Abhishek. -
Hello everybody,
Does somebody know when POS DM decides to use the TLOGS tables (TLOG short) instead of TLOGL (TLOG long)?
Due to a high volume of information being loaded into the TLOGS tables, we need to use the TLOGL, but I don't know how to redirect the incoming information.
Thanks in advance
In the TLOG table fields, the information is loaded as follows:
header transaction fields + size of the following field + deep structure for the detail.
If the size of the deep structure (the detail of the transaction) exceeds 32K (the size is reported in the size field), the information is loaded in the TLOGL structure; otherwise it is loaded in the TLOGS structure.
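The 32K rule above can be sketched as a simple routing check (the threshold is taken as exactly 32768 bytes here, an assumption about the precise boundary):

```shell
# Sketch: POS DM keeps the transaction detail in the short table (TLOGS)
# when it fits, and spills to the long table (TLOGL) when the deep
# structure exceeds 32K. Exact boundary assumed to be 32768 bytes.
route_detail() {
  size_bytes=$1
  if [ "$size_bytes" -le 32768 ]; then
    echo "TLOGS"
  else
    echo "TLOGL"
  fi
}

route_detail 1024     # small transaction detail -> TLOGS
route_detail 40000    # oversized detail -> TLOGL
```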
Óscar de Dios
Edited by: Consultoria on Sep 5, 2008 11:04 AM -
Tlog files in startup directory
Running wls 6.1, I get the following files in the directory where
weblogic is running (which is the parent directory of the config dir):
drwxrwxr-x 3 farnaz staff 512 Aug 28 17:33 tmp_ejbcosmo7003/
-rw-rw-r-- 1 farnaz staff 4 Aug 28 17:32 tse.0000.tlog
-rw-rw-r-- 1 farnaz staff 4 Aug 28 17:32 tse.heur.0000.tlog
where tse is the name of my weblogic server.
Is there a way to configure the server so that these files are written to
a directory I want instead of the startup dir?
Check out my post about:
Running two instances on same machine WL6.0/WL6.1(.tlog files)
it will solve this problem...
Eric -
Hi,
I'm new to Tuxedo. I need to see .tlog file and I'm not able to start
the admin console.
Where can I find loadtlog / dumptlog utilities?
Or how can I view .tlog files?
Thanks in advance.
Anagha
Anagha,
The TLOG file is used to store information about transactions currently in
the second phase of commit (or the most recent such transaction(s) if no
transactions are currently in the second phase of commit.) The TLOG is used
internally by Tuxedo to recover transactions to a consistent state following
a machine failure, but customers usually do not need to look at it. It is
possible to dump the TLOG in text format using the tmadmin dumptlog
subcommand, but the only time I can think of that it would be necessary to
do so is when migrating the transaction log to a backup machine as explained
at http://e-docs.bea.com/tuxedo/tux91/ada/admigt.htm .
The ULOG file is the log file that customers are usually interested in. It
is located in $APPDIR/ULOG.`date +%m%d%y` (unless the ULOGPFX configuration
file parameter has been used to change the location.) The ULOG is in plain
text format.
Ed
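Ed's ULOG naming convention can be sketched in shell; this assumes ULOGPFX has not been changed and uses a placeholder APPDIR:

```shell
# Sketch: compute today's ULOG file name per the $APPDIR/ULOG.mmddyy
# convention described above. /app is a placeholder application dir.
APPDIR=${APPDIR:-/app}
ulog="$APPDIR/ULOG.$(date +%m%d%y)"
echo "$ulog"
# e.g. /app/ULOG.121504 for 15 December 2004
```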
-
Modify the data in POSDM table /POSDW/TLOGS
Hi Experts,
I'm looking for a way to modify the data in table /POSDW/TLOGS.
I've modified the totals data by updating column "TRANSTURNOVER";
this modified the totals.
Now I need to modify the line-item and article values.
Any advice will be appreciated and rewarded.
Thanks ,
Mohamed
This is a question for OSS.
Rob -
Hello, my new error when I try to boot my Tuxedo configuration is
320 ERROR: BB TLOGSIZE differs from number of pages in TLOG file
Description
An error has occurred opening the transaction log because the actual size of the
TLOG differs from the size maintained by the BB.
What can I do to change this?
Where can we change the size of the TLOG, and that of the BB?
Thanks
To add to Frank's excellent statements below:
The Transaction Log is a specific area within a VTOC, or Volume Table of
Contents. First you create the VTOC using the crdl command inside tmadmin.
This is created with a certain number of pages, lets say 1000. Never specify a
VTOC of less than 100, that's a minimum size. I use 1000 as a default. Then
the Transaction Log is created inside the VTOC using the crlog command inside
tmadmin. You must also provide a size for this, say 500 pages. This
Transaction Log is then given a name, such as TLOG.
In your UBBCONFIG, you can also specify the size of the Transaction Log, which
defaults to 100 (TLOGSIZE=100).
The specific error Didier is getting is because the size of the Transaction Log
specified in the UBBCONFIG is different (probably larger) than the size the
Transaction Log was actually created with by the crlog command. Normally crlog
gets this from the compiled UBBCONFIG file, tuxconfig. However, if you've
rebuilt your tuxconfig, or are using a VTOC created by another configuration,
you can get this error.
Hope this helps.
Frank Clarijs wrote:
Size of TLOG is defined by parameter TLOGSIZE, which is related to
the maximum number of transactions from your server. (TLOGSIZE
must be smaller than MAXGTT or your application will not boot
properly after a crash).
You cannot change the size of an existing TLOG. Stop the application,
use the tmadmin command dslog (on the master machine, for all
machines that have the problem), then start again.
The error message and the procedure are normally well documented
on the documentation CD or the BEA e-docs site.
Frank
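The consistency rule above (the TLOG's created size must match what the UBBCONFIG says) can be sketched as a pre-boot sanity check; the numbers below are illustrative, matching the mismatch scenario in this thread:

```shell
# Sketch: sanity-check TLOG sizes before booting. Values are
# illustrative; read the real ones from your UBBCONFIG and the crlog
# output (or the compiled tuxconfig).
UBB_TLOGSIZE=600      # TLOGSIZE in the UBBCONFIG
CREATED_PAGES=500     # pages the TLOG was actually created with

if [ "$UBB_TLOGSIZE" -ne "$CREATED_PAGES" ]; then
  echo "MISMATCH: recreate the TLOG (dslog, then crlog) or fix TLOGSIZE"
else
  echo "OK"
fi
```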
"didier" <[email protected]> wrote:
Hello, my new error when i try to boot my tuxedo configuration is
320 ERROR: BB TLOGSIZE differs from number of pages in TLOG file
Description
An error has occurred opening the transaction log because the actual size
of the
TLOG differs from the size maintained by the BB.
What can i do to change this ??
where could we change the size of the TLOG ? and those of the BB ?
Thanks
Brian Douglass
Transaction Processing Solutions, Inc.
8555 W. Sahara
Suite 112
Las Vegas, NV 89117
Voice: 702-254-5485
Fax: 702-254-9449
e-mail: [email protected]
-
Retail: In which POS system do we create SAP TLOG file
Hi Gurus,
Can anyone tell me in which POS system we create SAP TLOG files? Is it SAP Triversity, or some other POS system? I just want to know the name of the POS system where these SAP TLOG files are created. Hoping for a correct answer.
Thanks in advance
Thanks
Tonyrao
Hi,
The general landscape for SAP IS Retail is Store -> ETL -> POSDM -> ECC.
The TLOG file in the SAP architecture resides in POSDM, and the actual name of this deep-structure table is /POSDW/TLOG.
When the BAPI call runs, it pushes data into this table. When data processing takes place via the PIPE dispatcher
program /POSDW/PIPEDISPATCHER, it processes this TLOG data according to business rules or tasks as per the POSDM configuration.
This will push data to ECC via IDocs, or can push data to BW cubes as the case may be.
So it's POS data => POSDW TLOG => SAP ECC transactional data like billing doc / article doc / accounting doc etc.,
or it can be BW cube data as well. You can find this POS TLOG data in the daily cubes as well.
However, it is worthwhile to mention that it's quite difficult to read /POSDW/TLOG via SE16 in POSDM, as the data resides in a deep structure in this gigantic table.
regards
Amitava -
Estimating tlog impact before deleting records
Does this approach sound reasonable?
So I need to prune several tables of old data. One part of this will be to set up an ongoing maintenance job that removes old records. First I'm going to remove a larger-than-normal amount of records based on date ranges etc. Before I do that,
I want to get a rough estimate of the batch size to limit the amount of data being deleted, so as not to swell the tlog to the point that it has to grow. What I DO NOT want to happen is to freeze up the tlog by doing an operation big enough for
that to happen. I've done that before and it ain't fun.
So first, I get the free space left in the current log file
--Get the size of data actually used and the free space in the data/log files for a given database
select
name
, filename
, convert(decimal(12,2),round(a.size/128.000,2)) as FileSizeMB
, convert(decimal(12,2),round(fileproperty(a.name,'SpaceUsed')/128.000,2)) as SpaceUsedMB
, convert(decimal(12,2),round((a.size-fileproperty(a.name,'SpaceUsed'))/128.000,2)) as FreeSpaceMB
from dbo.sysfiles a
Then, one table at a time, select top(1) from the table(s) I'm going to delete from and turn on client statistics in SSMS. This will give me the amount of data coming back from the server. This would also be about the same amount being put
into the tlog upon deleting that same row, correct?
So, with the above info, I should be able to come up with a relatively "safe" estimate of the number of records I can delete from one or more tables.
Sound reasonable?
Read this link:
http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Batch%20processing%20records%20in%20MS%20SQL
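The estimate described above can be sketched as arithmetic. All three inputs are placeholders (get FreeSpaceMB from the sysfiles query above and the per-row cost from SSMS client statistics); the 0.5 safety factor is an assumption, and the actual log cost per delete can be higher than the raw row size:

```shell
# Sketch: rough batch-size estimate from free log space and per-row
# log cost. All figures are placeholders for illustration.
free_log_mb=2048        # FreeSpaceMB of the log file (sysfiles query)
row_cost_kb=2           # approx log KB generated per deleted row
safety=0.5              # assumed safety factor against underestimation

batch_rows=$(awk -v f="$free_log_mb" -v r="$row_cost_kb" -v s="$safety" \
  'BEGIN { printf "%d\n", (f * 1024 * s) / r }')
echo "$batch_rows"      # prints 524288: rows per batch that should fit
```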
Best regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
Reconfigure to multiple machines - what happens with TLOGs?
Hi,
I am maintaining an app based on Tuxedo and Oracle 8.1.7.
It sits on many machines, but until now only one (in fact
the master) calls the DB and uses the TLOG. It works quite fine,
but the core of the logic lies on the master (causing a SPOF).
Now, to achieve higher availability, there is an
idea to deploy same set of servers as on master
onto another machine (backup).
Can somebody explain
a) whether is TLOG necessary on each machine, even if there is
an option to use volume shared by both nodes ?
b) in case of failure of node A, is there any way to continue
transactions pending on A node? The A's and B's TLOGs
should be on array, accessible from both machines. Could
it be possible to do dumptlog TLOG-A and loadtlog, all
from the surviving node B?
c) can we do it without reworking/recompiling the application?
In ideal case, I would like to rewrite ubbconfig only.
Thank you in advance!
John
John,
I posted some information about the function of the TMS a few days ago, under the
topic "Optimum Number of TMS's". That should answer your questions about what a
TMS does.
For migration information, try this: http://edocs/tuxedo/tux81/ada/admigt.htm#1017862
Scott Orshan
-
Hi All,
Can anyone please share the POS TLOG structure?
Thanks in advance,
Hari.
Please look at the chapter "Oracle Retail Sales Audit Batch" in the RMS Operations Guide no. 1 (e.g. per http://download.oracle.com/docs/cd/E12448_01/rms/docset.html), and especially the section on "saimptlog (Sales Audit Import)". You may really want to sit down for this because it is a little much to take in; it was for me the first time, at least.
Best regards, Erik Ykema