Trace source of rapidly growing tablespace

A tablespace with about 24 datafiles of 1 GB each is growing fairly rapidly, and I would like to know whether there is a quick and easy way to identify the cause.
I appreciate that I could gather details of all tables in the tablespace and monitor their sizes over a period of time, but is there any other "immediate" way?
Thanks

Hi,
I have two suggestions for you:
1. Partition the table if it is huge, and
2. Use sampling (estimate statistics) when you analyze, to reduce the run time.
Thanks
--Raman
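
Two quick sketches related to the above. For the "immediate" check the question asks about, ranking the segments in the tablespace by size is a fast first cut, and on 10g+ the AWR segment statistics show what is actually growing (DBA_HIST_SEG_STAT needs the Diagnostics Pack licence). The tablespace and schema names below are placeholders:

-- Largest segments in the tablespace right now
SELECT *
  FROM (SELECT owner, segment_name, segment_type,
               ROUND(bytes/1024/1024) AS mb
          FROM dba_segments
         WHERE tablespace_name = 'APPDATA'
         ORDER BY bytes DESC)
 WHERE ROWNUM <= 20;

-- Space allocated per object over the AWR retention window (instance-wide)
SELECT o.owner, o.object_name,
       ROUND(SUM(s.space_allocated_delta)/1024/1024) AS mb_grown
  FROM dba_hist_seg_stat s, dba_objects o
 WHERE s.obj# = o.object_id
 GROUP BY o.owner, o.object_name
 ORDER BY 3 DESC;

-- For suggestion 2 above: gather statistics on a small sample to cut run time
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'BIG_TAB', estimate_percent => 5);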

Similar Messages

  • Fast growing tablespaces

    Hi Experts,
The following tablespaces are consuming the most space.
PSAPBTABD : 77.5 GB (about 50% of the total space acquired)
PSAPBTABI : 38.5 GB
PSAPCLUD : 15 GB
85% of the total space is consumed by these tablespaces.
The tables with the highest growth are:
BSIS, RFBLG, ACCTIT, ACCTCR, MSEG, RSEG, etc.
The average increase is about 2 GB per month.
    Kindly help me to find out the solution.
    Regards,
    Praveen Merugu

Hi Praveen,
Greetings!
I am not sure whether you are a BASIS or functional person, but if you are BASIS you can discuss with your functional team which archiving objects fit your project. Normally, functional consultants know which archiving object deletes entries from which tables. You can also search help.sap.com to identify the archiving objects.
Once you have identified the archiving objects, you need to discuss your archiving plan with your business heads and key users. This is to fix the data retention period in the production system and to fix the archiving cycle for every year.
Once these have been fixed, you can sit with the functional team to create variants for the identified archiving objects. Then use SARA to archive the objects concerned.
Initiating an archiving project is a time-consuming task. It is better to start a separate mini-project to kick off the initial archiving plan. You can test the entire archiving phase in the QA system by copying the PRD client.
The summary below will give you an idea of how to start the archiving project:
1. Identify the tables which grow rapidly, and their module.
2. Identify the relevant archiving object which will delete the entries in each rapidly growing table.
3. Prepare an archive server to store the archived data (get a 3rd-party archiving solution if possible). Remember, the old data must be retrievable from the archive server whenever the business needs it.
4. Finalize the archiving cycle in line with your business needs.
5. Archive the objects using SARA.
6. Reorganize the DB after archiving.
Hope this gives you some idea of the archiving project.
regards,
Vinodh.

  • ORA-01653: unable to extend table SYS.SOURCE$ by 64 in tablespace SYSTEM"

    Hi,
While creating a package, I got the following error:
"ORA-00604: error occurred at recursive SQL level 1
ORA-01653: unable to extend table SYS.SOURCE$ by 64 in tablespace SYSTEM"
Could anyone please explain how to solve this problem?
    Thank you,
    Regards,
    Gowtham Sen.

Solution: increase the size of the SYSTEM tablespace.
The text of all PL/SQL objects is stored in the database by SYS. Packages, procedures, and functions are stored in SYS.SOURCE$ (which is part of the USER_SOURCE view definition). So you've created a lot of PL/SQL, the table wants to extend, and there isn't room.
This is a major problem, because it means that nothing in SYSTEM can extend. Add another datafile, or put the tablespace on autoextend.
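
A sketch of the two fixes suggested above; the datafile paths are examples (check DBA_DATA_FILES for the real ones):

-- See what the SYSTEM tablespace currently holds
SELECT file_name, ROUND(bytes/1024/1024) AS mb, autoextensible
  FROM dba_data_files
 WHERE tablespace_name = 'SYSTEM';

-- Option 1: let an existing datafile grow
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/system01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 4G;

-- Option 2: add another datafile
ALTER TABLESPACE SYSTEM
  ADD DATAFILE '/u01/oradata/ORCL/system02.dbf' SIZE 1G;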

How to effectively manage a large table which is rapidly growing

    All,
My environment is a single-node database on a regular file system.
Oracle - 10.2.0.4.0
IBM - AIX
A tablespace in this database is growing rapidly. In particular, a single table in that tablespace with a LONG RAW column has grown from 4 GB to 900 GB in 6 months.
We had a discussion with the application team; they mentioned that the data volume has increased due to acquisitions, and we expect it to grow to 4 TB in the next 2 years.
In order to manage the table effectively and avoid performance issues, we are looking at the options below.
1) The table has a date column, so we thought of converting it to a range-partitioned table. I have never converted a 900 GB table to a partitioned table. Is this the best method?
     a) How can I move the data from the regular table to a partitioned table? I searched Google but could not find a good method for converting a regular table to a partitioned table. Can you help me out / share best practices?
2) In one article I read that BLOB is better than the LONG RAW datatype. How easy is it to convert from LONG RAW? Will BLOB yield better performance and use disk space more effectively?
3) The application team has a purging activity based on application logic. We thought of shrinking the tables with row movement enabled: "alter table <table name> shrink space cascade". But it returns an error that the table contains a LONG datatype. Any suggestions?
Any other methods / suggestions to handle this situation effectively?
    Note: By end of 2010, we have plans of moving to RAC with ASM.
    Thanks

To answer your question 2:
"2) In one of the articles I read, BLOB is better than the LONG RAW datatype. How easy is it to convert from LONG RAW? Will BLOB yield better performance and use disk space effectively?"
Yes, LOBs (BLOBs or CLOBs) are supposed to be better than raws (or long raws). In addition, I believe Oracle has desupported, or will shortly desupport, the use of long raws in favor of LOBs (CLOBs or BLOBs, as appropriate).
There is a function called TO_LOB that you have to use to convert. It's a pain, because you have to create a second table and then insert into it from the first table using the TO_LOB function.
    from my notes...
    =================================================
    Manually recreate the original table...
Next, recreate the table (based on a DESCRIBE of the original), except using CLOB instead of LONG:
SQL> create table SPACER_STATEMENTS (
       OWNER_NAME       VARCHAR2(30) NOT NULL,
       FOLDER           VARCHAR2(30) NOT NULL,
       SCRIPT_ID        VARCHAR2(30) NOT NULL,
       STATEMENT_ID     NUMBER(8) NOT NULL,
       STATEMENT_DESC   VARCHAR2(25),
       STATEMENT_TYPE   VARCHAR2(10),
       SCRIPT_STATEMENT CLOB,
       ERROR            VARCHAR2(1000),
       NUMBER_OF_ROWS   NUMBER,
       END_DATE         DATE
     )
     TABLESPACE SYSTEM;
Table created.
Try to insert the data using a SELECT from the original table...
    SQL> insert into SPACER_STATEMENTS select * from SPACER_STATEMENTS_ORIG;
    insert into SPACER_STATEMENTS select * from SPACER_STATEMENTS_ORIG
    ERROR at line 1:
    ORA-00997: illegal use of LONG datatype
    That didn't work...
Now, let's use TO_LOB:
SQL> insert into SPACER_STATEMENTS
       (OWNER_NAME, FOLDER, SCRIPT_ID, STATEMENT_ID, STATEMENT_DESC,
        STATEMENT_TYPE, SCRIPT_STATEMENT, ERROR, NUMBER_OF_ROWS, END_DATE)
     select OWNER_NAME, FOLDER, SCRIPT_ID, STATEMENT_ID, STATEMENT_DESC,
            STATEMENT_TYPE, TO_LOB(SCRIPT_STATEMENT), ERROR, NUMBER_OF_ROWS, END_DATE
     from SPACER_STATEMENTS_ORIG;
10 rows created.
    works well...
    ===============================================================
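
Question 1 (converting the 900 GB table to a partitioned table) is left open above. One offline route is a single CREATE TABLE ... AS SELECT that range-partitions the copy and converts the LONG RAW column to a BLOB with TO_LOB in the same pass; DBMS_REDEFINITION is the online alternative. A sketch only; the table, column, and partition names are invented for illustration:

-- One pass: partition by the date column and convert LONG RAW -> BLOB
CREATE TABLE big_table_new
PARTITION BY RANGE (load_date) (
  PARTITION p2009 VALUES LESS THAN (TO_DATE('2010-01-01','YYYY-MM-DD')),
  PARTITION p2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
)
AS
SELECT id, load_date, TO_LOB(payload) AS payload
  FROM big_table;

-- Rebuild indexes, constraints, and grants, verify row counts,
-- then swap the table names.

As a side effect, the LONG-datatype error from question 3 also goes away once the column is a BLOB rather than a LONG RAW.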

  • Rep.log rapidly growing on IPCCX

    Hi,
we have IPCCX HA, 5.0(2)SR02_Build045.
A few days ago rep.log started to grow rapidly (about 20 GB per day). While trying to find the cause, we found that slurp.exe and slapd.exe are taking about 50% of the CPU (and sometimes slapd.exe is flapping).
In the LDAPMonSrv trace we found errors like:
ERROR LM0012 Monitor::Run:: Slapd process died.
ERROR LM0012 Monitor::Run:: Slurpd process died.
ERROR LDAPMon CDVThread:: initializeToCVD failed , retring ...
You can find the whole log in the attachment.
The IP addresses are IPCC1 - 10.64.224.111 and IPCC2 - 10.64.224.112.
We also had a problem with an overloaded disk and services being down, but we managed to start them and everything seemed to be working fine; then we noticed that rep.log was actually still growing rapidly the whole time.
Do you have any idea what the problem could be, and how to solve it?
The IPCCXs are in production, so we have to find a solution ASAP.
    Thank you!
    BR,
    Jelena

    Hi Roy,
In the end we opened a TAC case, and we worked on this problem for a few months. When we stopped the growth of rep.log (and the problem with the slapd and slurpd processes), we then had a problem with backup. We did a lot of things, but the ones that "helped" the most were: the procedure for synchronizing directory services (a few times), page 40 in this PDF:
http://cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/crs/express_7_0/troubleshooting/guide/cad66tg-cm.pdf
and a patch that TAC sent. In the end the conclusion was that the problem was a corrupted LDAP credentials file. They copied that file from IPCC2 and applied it to IPCC1, and after that the problem was solved.
Hope this will help you.
Good luck
    BR,
    Jelena

Debugger won't trace source.

    What the!?
    The Flex environment is great. Debugs well. I especially like
    the "Expressions" watcher. However, if I place my script code into
    files then the debugger ignores every break point in those files.
    For example, if I have a component and use the command:
    <mx:Script source="theScriptsAreHere.as"/> then run
    the app it works fine. But if I want to trace it then too bad. The
    debugger simply refuses to enter the "theScriptsAreHere.as" script
    file.
    Perhaps there is a switch or something I have left out of
    somewhere. Does anyone else experience this problem? Any comments
    appreciated.
    Cheers.

    Thanks for your reply Tracy. It's good to know that I'm not
    the only one :-)
    I have searched the archives and found one other person
    (AlainF) who posted a month ago. Unfortunately no response was sent
    to them.

  • Random Account Lockout (How to trace source?)

In a Windows 2003 Server native domain environment: XP Pro machines have no issues, but all ~10 PCs that have Win7 Pro (in different offices) have their domain accounts locked out randomly throughout the day. The workstations have no passwords listed in credentials management.
I suspect something on the workstations is sending an incorrect logon and triggering the invalid-password lockout limit in the domain policy. I found MSFT tools to trace this in XP, but nothing for Win7. Does anyone know how to use Procmon or a similar tool to trace such a source on the workstations? Thank you.
(Procmon.exe from Sysinternals)

    Hi,
    The user account has been automatically locked because too many invalid logon attempts or password change attempts have been requested.
We can run LockoutStatus.exe on a domain controller to identify and investigate the account lockout issue.
    Troubleshooting tools:
By using this tool, we can gather and display information about the specified user account, including the domain admin's account, from all the domain controllers in the domain. In addition, the tool displays the user's badPwdCount value on each domain controller. The domain controllers that have a badPwdCount value that reflects the bad password threshold setting for the domain are the domain controllers that are involved in the lockout. These domain controllers always include the PDC emulator operations master.
    You may download the tool from the link
    Download Account Lockout Status (LockoutStatus.exe)
    http://www.microsoft.com/downloads/details.aspx?familyid=D1A5ED1D-CD55-4829-A189-99515B0E90F7&displaylang=en
Once we confirm the problematic computer, we can perform further research to locate the root cause. Actually, there are many possible causes for bad passwords, such as cached passwords, scheduled tasks, mapped drives, services, etc. Please remove the previous password cache, which may be used by some applications and therefore cause the account lockout problem.
    Troubleshooting steps:
    1. Click Start, click Run, type "control userpasswords2" (without the quotation marks), and then click OK.
    2. Click the Advanced tab.
    3. Click the "Manage Password" button.
    4. Check to see if these domain account's passwords are cached. If so, remove them.
    5. Check if the problem has been resolved now.
If any application or service is running as the problematic user account, please disable it and then check whether the problem still occurs.
For your convenience, I'd like to list the common troubleshooting steps and resolutions for account lockouts as follows:
    Common Causes for Account Lockouts
    To avoid false lockouts, please check each computer on which a lockout occurred for the following behaviors:
    Programs:
    Many programs cache credentials or keep active threads that retain the credentials after a user changes their password.
    Service accounts:
    Service account passwords are cached by the service control manager on member computers that use the account as well as domain controllers.
    If you reset the password for a service account and you do not reset the password in the service control manager, account lockouts for the service account occur. This is because the computers that use this account typically retry logon authentication by using
    the previous password. To determine whether this is occurring, look for a pattern in the Netlogon log files and in the event log files on member computers. You can then configure the service control manager to use the new password and avoid future account
    lockouts.
    Bad Password Threshold is set too low:
    This is one of the most common misconfiguration issues. Many companies set the Bad Password Threshold registry value to a value lower
    than the default value of 10. If you set this value too low, false lockouts occur when programs automatically retry passwords that are not valid. Microsoft recommends that you leave this value at its default value of 10. For more information, see "Choosing
    Account Lockout Settings for Your Deployment" in this document.
    User logging on to multiple computers:
    A user may log onto multiple computers at one time. Programs that are running on those computers may access network resources with
    the user credentials of that user who is currently logged on. If the user changes their password on one of the computers, programs that are running on the other computers may continue to use the original password. Because those programs authenticate when they
request access to network resources, the old password continues to be used and the user's account becomes locked out. To ensure that this behavior does not occur, users should log off of all computers, change the password from a single location, and then log
    off and back on.
    Stored user names and passwords retain redundant credentials:
    If any of the saved credentials are the same as the logon credential, you should delete those credentials. The credentials are redundant
    because Windows tries the logon credentials when explicit credentials are not found. To delete logon credentials, use the Stored User Names and Passwords tool. For more information about Stored User Names and Passwords, see online help in Windows XP and the
    Windows Server 2003 family.
    Scheduled tasks:
Scheduled processes may be configured to use credentials that have expired.
    Persistent drive mappings:
    Persistent drives may have been established with credentials that subsequently expired. If the user types explicit credentials when
    they try to connect to a share, the credential is not persistent unless it is explicitly saved by Stored User Names and Passwords. Every time that the user logs off the network, logs on to the network, or restarts the computer, the authentication attempt fails
when Windows attempts to restore the connection because there are no stored credentials. To avoid this behavior, configure net use so that it does not make persistent connections. To do this, at a command prompt, please type net use /persistent:no. Alternately,
    to ensure current credentials are used for persistent drives, disconnect and reconnect the persistent drive.
    Active Directory replication:
    User properties must replicate between domain controllers to ensure that account lockout information is processed properly. You should
    verify that proper Active Directory replication is occurring.
    Disconnected Terminal Server sessions:
    Disconnected Terminal Server sessions may be running a process that accesses network resources with outdated authentication information.
    A disconnected session can have the same effect as a user with multiple interactive logons and cause account lockout by using the outdated credentials. The only difference between a disconnected session and a user who is logged onto multiple computers is that
    the source of the lockout comes from a single computer that is running Terminal Services.
    Service accounts:
    By default, most computer services are configured to start in the security context of the Local System account. However, you can
manually configure a service to use a specific user account and password. If you configure a service to start with a specific user account and that account's password is changed, the service logon property must be updated with the new password or that service
    may lock out the account.
    Internet Information Services:
    By default, IIS uses a token-caching mechanism that locally caches user account authentication information. If lockouts are limited to users who try to gain access
    to Exchange mailboxes through Outlook Web Access and IIS, you can resolve the lockout by resetting the IIS token cache. For more information, see "Mailbox Access via OWA Depends on IIS Token Cache" in the
    Microsoft Knowledge Base.
    MSN Messenger and Microsoft Outlook:
    If a user changes their domain password through Microsoft Outlook and the computer is running MSN Messenger, the client may become locked out. To resolve this behavior,
    see "MSN Messenger May Cause Domain Account Lockout After a Password Change" in the
    Microsoft Knowledge Base.
    For more information, please refer to the following link:
    Troubleshooting Account Lockout
    http://technet.microsoft.com/en-us/library/cc773155.aspx
    Account Passwords and Policies in Windows Server 2003
    http://technet.microsoft.com/en-us/library/cc783860.aspx
    Hope this helps!
    Novak

Stupidly easy - trace source to debug output in unit test...

I'm convinced that this is stupidly easy, but I can't figure it out...
I have a class with TraceSource "classname", and when debugging it works fine; I see the output in the "output" window. However, when running unit tests I only see Debug.Write output...
What am I missing... do I need to add a listener route or something?!
    - sure I'm noJedi but that's no reason to stop trying to make stuff levitate! -
to clarify... this UnitTest:
[TestMethod()]
public void TestLogging()
{
    System.Diagnostics.Debug.WriteLine("this is a debug writeline");
    System.Diagnostics.Trace.WriteLine("this is a Trace writeline");
    var ts = new System.Diagnostics.TraceSource("classname");
    ts.TraceInformation("this is a ts.TraceInformation");
    throw new AssertInconclusiveException();
}
    outputs this:
    Test Name: TestLogging
    Test Outcome: Skipped
    Result Message: Exception of type 'Microsoft.VisualStudio.TestTools.UnitTesting.AssertInconclusiveException' was thrown.
    Result StandardOutput:
    Debug Trace:
    this is a debug writeline
    this is a Trace writeline
Now... for SOME reason, THIS is NOT outputting to the debug output either... when I step through the TEST...
however, stepping through code NOT in a unit test does put stuff in the debug output window...
What am I missing... why is the TraceSource not outputting? I can see while stepping through that the DefaultTraceListener is there (it's the only listener), and I was under the impression that it directed output to the debug output stream... is this not what I think it is...?!

    Hi Jack,
I think you are correct in that my expectations were wrong.
1) Stepping through, "SwitchLevel" is "Off" in the unit test, which is problematic.
Turning it on (to All) starts logging, but ONLY when I manually add a ConsoleTraceListener (and set the route-to-debug-error-stream option to true).
2) The doco you've pointed me at was what I was reading, but I think I read/misread it differently than you...
I think it's this that threw me:
    •A DefaultTraceListener emits Write and WriteLine messages to the OutputDebugString and to the Debugger.Log method. In Visual Studio, this causes the debugging messages to appear in the Output window. Fail and failed Assert messages also emit to the OutputDebugString Windows API and the Debugger.Log method, and also cause a message box to be displayed. This behavior is the default behavior for Debug and Trace messages, because DefaultTraceListener is automatically included in every Listeners collection and is the only listener automatically included.
To my thinking this means that by default a "Default -> aka Debug (when DEBUGGING)" listener is always created for you, and therefore, in the ABSENCE of config stuff, this would be perfect for UNIT TESTING - therefore a TraceSource with nothing but a name should effectively produce the same output as Debug.WriteLine...
that was my thinking, but it looks like even with SourceLevels.All you still need to fiddle with it...
Thanks for your input. For now, I've resolved this by adding the above stuff so that my existing TraceSources at least output something in my tests, and I can see more of what is going on without having to change my TraceSource calls to Debug calls everywhere.
    - sure I'm noJedi but that's no reason to stop trying to make stuff levitate! -

STP loop, not able to trace source

    Hello,
I am new to Cisco switches and learning about them now. We have a LAN with a 6509 as the core router and 2950s/3550s as access switches.
When I ran Wireshark on my machine, I saw what looks like an STP loop repeating from a Cisco device. I noted down the MAC address and tried in vain to find it in our LAN. I am seeing packets like Address: "Spanning-tree-(for-bridges)_00" and "loop reply". I am not able to see any of the MAC addresses found in this loop conversation on my LAN. I read that these loops are not good for the network. Where can I start to resolve this problem?
    Thanks in advance for your advice.

    There is likely no problem at all. ;-)
    If you were really experiencing a loop, you would have other problems.
Best for you will be to start making a study of spanning tree (STP) and its inner workings. Here is a good starting point:
    http://www.cisco.com/en/US/tech/tk389/tk621/tsd_technology_support_protocol_home.html
    Armed with this knowledge you can try to analyze the traffic that was observed by wireshark.
    regards,
    Leo

Impdp operation taking more tablespace size compared to expdp...

    Hi All,
I have an issue with an impdp operation. I am using an 11gR2 database, and the schema's dump file size is 5 GB. When I start loading data through impdp, the schema's tablespace grows to more than 5 GB. I had to stop the impdp operation because of the growing tablespace size. No compression parameter was passed during expdp. Finally I set the tablespace to MAXSIZE UNLIMITED, but it seems that was still not sufficient and I had to add one more dbf; so the tablespace size is now 60 GB and the impdp operation is still running.
Can anyone explain how, if the dump file size is 5 GB, the tablespace size can be more than 5 GB? My assumption was that if my dump file is 5 GB, then the tablespace into which I load the data (using impdp) should not need more than 5 GB.
    Thanks in advance.

I was facing the same problem. After giving the parameter TRANSFORM=SEGMENT_ATTRIBUTES:n, the problem was resolved. The dump carries each segment's original storage attributes (such as large INITIAL extents), so on import Oracle pre-allocates that space even though the row data itself is much smaller; dropping the segment attributes lets the segments be sized from scratch.
TRANSFORM = transform_name:value[:object_type]
The transform_name specifies the name of the transform. Some of the possible options are as follows:
SEGMENT_ATTRIBUTES - If the value is specified as y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is y. ====> IF THIS IS 'N', PHYSICAL STORAGE ATTRIBUTES ARE NOT INCLUDED.
STORAGE - If the value is specified as y, the storage clauses are included, with appropriate DDL. The default is y. This parameter is ignored if SEGMENT_ATTRIBUTES=n.
Although this thread is quite old, I am updating it in case someone needs to refer to it in the future. My system parameter deferred_segment_creation is set to TRUE.
    Here is the complete syntax, I have used
    impdp vygrdba/******* dumpfile=VYGRVS6I5_25DEC12.dmp logfile=VYGR_PT_09Jan13.log remap_schema= VYGRVS6I5:VYGR_PT TRANSFORM=SEGMENT_ATTRIBUTES:n
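To see the pre-allocation at work, compare each segment's allocated size with the size of its row data; a sketch reusing the VYGR_PT schema from the command above (statistics must be reasonably fresh for num_rows/avg_row_len to mean anything):

-- Segments whose allocated space far exceeds their row data
SELECT s.segment_name,
       ROUND(s.bytes/1024/1024)                    AS mb_allocated,
       ROUND(t.num_rows * t.avg_row_len/1024/1024) AS mb_row_data
  FROM dba_segments s, dba_tables t
 WHERE t.owner = s.owner
   AND t.table_name = s.segment_name
   AND s.owner = 'VYGR_PT'
   AND s.segment_type = 'TABLE'
 ORDER BY s.bytes DESC;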

Migrating a new partitioned table with transportable tablespace

I created a partitioned table with 2 partitions (2010 and 2011) and used transportable tablespace to migrate the data over to a new environment. My question is: if I decide to add a partition (2012) in the future, can I simply move that new partition along with the associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?

    user564785 wrote:
    I created a partitioned table with 2 partitions (2010 and 2011) and used transportable tablespace to migrate the data over to a new envionment. My question is, if I decide to add a partition (2012) in the future, can I simply move that new partition along with the associated datafile via transportable tablespace or would I have to move all the partitions (2010, 2011, 2012).Yes why not.
1) Create a table as CTAS from the 2012 data in a new tablespace on the source.
2) Transport the tablespace.
3) Add a partition to the existing partitioned table, or exchange the partition.
    Oracle has also documented this procedure:
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#i1007549
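
The exchange-partition step (3) looks roughly like this on the target side; a sketch only, with table, partition, and date names invented for illustration:

-- After transporting the tablespace that holds the staged 2012 copy:
ALTER TABLE sales
  ADD PARTITION p2012 VALUES LESS THAN (TO_DATE('2013-01-01','YYYY-MM-DD'));

ALTER TABLE sales
  EXCHANGE PARTITION p2012 WITH TABLE sales_2012
  INCLUDING INDEXES WITHOUT VALIDATION;

The exchange is a data-dictionary swap, so the staged data in the transported datafile is never copied again.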

Trace File - ST01

In QAS I am getting an excellent trace file in ST01, but on the PRD system I am not getting any; the file itself is not created. I am on 4.6C.
    Thanks

The specified value will override the default.
What this means is that rstr/max_diskspace has a default value of 16,384,000; if the current value of the parameter rstr/max_diskspace is 0, then that is the value the system will take...
You can either delete the parameter from your profile (which will return it to the default) or increase the value to more than 0 to give the trace some space to grow.
    Regards
    Juan

  • Debug/Packet Trace - flow options on the CSS11501 running 7.20 Build 3

    Hello all, I need to do some packet tracing on the CSS11501 running 7.2 and need to limit debugged packets to a particular IP.
    I know that you can set the IP by running: "flow trace-ip X.X.X.X"
But what HEX option would I need for the command "flow options" to limit the traffic to just one IP?
    From the help I get this:
    flow options ?
    00000020 Trace Route Changes
    00000010 Simulate DOS Attack
    00000008 Trace Spoof List
    00000004 Trace Source IP Address
    00000002 Trace UDP Flows
    00000001 Trace TCP Flows
I tried different options like 0x00000004 and others, but get the error: "%% Invalid hex string entered".
I can only get all TCP packets using "flow options 0x0000001", and that causes all TCP packets to be dumped onto the console, which risks crashing the CSS device.
Has anyone been able to dump packets from just one particular IP address?
    Thank you.
    Dmitry.

    Thank you Gilles, that worked!
    Don't know why I did not try that before :)
    Dmitry.

  • IGS Trace file growth problem

    Hello,
In /usr/sap/SID/Instance/igs/log I found the following file: wd_<SID>.trc. This trace file keeps growing to a very large size. I would like to stop this file from growing; I have already set the igs/tracelevel parameter to 0, but the file keeps growing. How do I stop it?
    Regards,
    Moshe

    Hi Moshe,
have you performed a system restart after setting igs/tracelevel to 0?
What is the content of the file wd_<SID>.trc? Maybe there is a general problem when starting the IGS, and an IGS patch solves the issue.
    Regards
    Matthias

  • How to trace a module that uses shared server sessions?

I have an app (Esri's ArcGIS Server 9.3.1) that I want to trace. Oracle is at 11.2.0.2.0. Our OEM is at 10.2.0.4.0.
When I initiate a trace for the module (ArcSOC.exe) in OEM, I can see a trace file start to grow larger. But it seems to be a trace file that already exists (example: <instance>s00022493.trc). To isolate activity to just the window of time I'm interested in, I'd like to start with a fresh trace file. Is there any way to do that? Must I delete the currently active trace files, or would that cause a failure?
When tracing a dedicated server session, I've noticed that a new trace file is generated with "ora" in its name. Evidently not so with shared server, whether tracing a module or a particular session. When the shared server model is used, Oracle seems to want to reuse existing trace files. And since trace files can get quite large, it would be difficult to open the trace file in an editor and remove the pre-existing, older activity that I don't want to work with.

    See http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php for various ways. Delete the trace file before you begin. See http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm#i20110
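
Since shared server spreads one logical session's work across many pre-existing trace files, the trcsess utility covered in the first link is the usual way to get a "fresh" view without deleting anything: enable tracing for the module, reproduce the activity, then consolidate only that module's lines into a new file. A sketch; the service name SYS$USERS is an assumption (check DBA_SERVICES for the real one):

-- 10g+: trace every session working on behalf of the module
BEGIN
  DBMS_MONITOR.serv_mod_act_trace_enable(
    service_name => 'SYS$USERS',
    module_name  => 'ArcSOC.exe',
    waits        => TRUE,
    binds        => FALSE);
END;
/

-- ...reproduce the activity, then switch tracing off:
BEGIN
  DBMS_MONITOR.serv_mod_act_trace_disable(
    service_name => 'SYS$USERS',
    module_name  => 'ArcSOC.exe');
END;
/

-- At the OS prompt, consolidate just that module from all trace files:
--   trcsess output=arcsoc.trc module=ArcSOC.exe *.trc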
