Best practice to reduce size of BIA trace files

Hi,
I saw an alert on the BIA monitor that says 'check size of trace files'. Most of my trace files are above 20 MB. When I click on details it says "Check the size of your trace files. Remove or move the trace files with the memory usage that is too high or trace files that are no longer needed."
I would like to reduce these trace files but I'm not sure what the safest way to do it is. Any suggestions would be appreciated!
Thanks.
Mimosa

Mimosa,
Let's be clear here first: the tracing set via sm50 is for tracing on the ABAP side of BI, not the BIA.
Yes, it is safe to move/delete TrexAlertServer.trc, TrexIndexServer.trc, etc. from the OS level. You can also right-click an individual trace on the "Trace" tab in the TREX Admin Tool (python), and I believe there are options to delete them there, but it is certainly okay to do this at the OS level. They are simply recreated when new traces are generated.
I would recommend that you simply zip the files and move the .zip files to another folder in case SAP support needs them to analyze an issue. As long as they aren't huge, and if hard disk space permits, this shouldn't be an issue. After this you will then need to delete the trace files. Keep in mind that if a trace file has an open handle registered to it, it won't let you delete/move it. Therefore it might be a good idea to do this task when system activity is low or non-existent.
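If you want to script that zip-and-move step, here is a minimal sketch in Java. The trace directory and archive paths are placeholders (not your real BIA paths), so treat it as an illustration of the approach rather than a ready-made tool.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    /**
     * Minimal sketch: zip the TREX/BIA trace files into an archive folder and
     * then delete the originals. Both paths are placeholders -- substitute your
     * real trace directory and an archive location with enough disk space.
     */
    public class ArchiveTraceFiles {
        public static void main(String[] args) throws IOException {
            Path traceDir = Paths.get("/usr/sap/TRX/trace");     // placeholder
            Path archiveDir = Paths.get("/archive/bia_traces");  // placeholder
            Files.createDirectories(archiveDir);

            List<Path> traces;
            try (Stream<Path> s = Files.list(traceDir)) {
                traces = s.filter(p -> p.getFileName().toString().endsWith(".trc"))
                          .collect(Collectors.toList());
            }

            Path zipFile = archiveDir.resolve("trex_traces_" + System.currentTimeMillis() + ".zip");
            try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile))) {
                for (Path trc : traces) {
                    zos.putNextEntry(new ZipEntry(trc.getFileName().toString()));
                    Files.copy(trc, zos);
                    zos.closeEntry();
                    try {
                        // A trace file with an open handle may refuse deletion;
                        // skip it and retry when system activity is low.
                        Files.delete(trc);
                    } catch (IOException e) {
                        System.err.println("Could not delete (probably still open): " + trc);
                    }
                }
            }
            System.out.println("Archived " + traces.size() + " trace files to " + zipFile);
        }
    }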
2 things also to check:
1. Make sure the python trace is not on.
2. In the python TREXAdmin Tool, check the Alerts tab and click "Alert Server Configuration". Make sure the trace level is set to "error".
Hope that helps. As always check the TOM for any concerns:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/46e11c10-0e01-0010-1182-b02db2e8bafb
Edited by: Mike Bestvina on Apr 1, 2008 3:59 AM - revised some statements to be more clear

Similar Messages

  • Best practice for reducing the number of XML files in an IDML file for translation?

Our engineering team is looking for ways to reduce the number of XML files produced when a (relatively) simple 2-page INDD file is saved out as IDML for translation.

    IDML contains quite a few XML files, but I suspect you're only interested in the Stories folder if you're working on a translation. The way to do that is... to reduce the number of stories. If it's a two-pager, chances are that you have a whole bunch of unthreaded text frames. Thread them in logical reading order. This will help the translator(s) as well - by threading frames in logical reading order, they don't have to work to read the document in the same order as the target audience.

Best practices to reduce downtime for Database releases (rolling changes)

    Hi,
    What are best practices to reduce downtime for database releases on 10.2.0.3? What DB changes can be rolling and what can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers against one or the other database. When you want to upgrade, you point all the middle tier servers against database A other than one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you configure all the app servers to point at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to the normal state of balancing between databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin
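    To make the ordering of that cutover explicit, here is a runnable skeleton. Every helper in it (routeMiddleTier, upgradeDatabase, smokeTest) is a hypothetical stand-in for your load balancer or middle-tier admin API, Streams administration, and upgrade scripts, so read it as a sketch of the sequence rather than working tooling.

        /*
         * Skeleton of the cutover sequence described above. Every helper below is
         * a hypothetical stand-in; the only point is to make the ordering explicit.
         */
        public class RollingUpgradeSketch {

            public static void main(String[] args) {
                // 1. Point all middle-tier servers at database A, except one node on a special URL.
                routeMiddleTier("A", "all traffic except the smoke-test node");

                // 2. Upgrade B while Streams keeps replicating changes from A's old
                //    data model into B's new data model.
                upgradeDatabase("B");

                // 3. Smoke test the upgraded B through the special URL.
                smokeTest("B");

                // 4. Once B checks out, flip every app server to B; replication now
                //    has to carry the changes made on B while A is being worked on.
                routeMiddleTier("B", "all traffic");

                // 5. Upgrade A, smoke test it, then return to normal balancing across both.
                upgradeDatabase("A");
                smokeTest("A");
                routeMiddleTier("A and B", "normal load balancing");
            }

            // Hypothetical stand-ins: replace with real administration calls.
            private static void routeMiddleTier(String target, String note) {
                System.out.println("Routing middle tier -> " + target + " (" + note + ")");
            }

            private static void upgradeDatabase(String db) {
                System.out.println("Upgrading database " + db);
            }

            private static void smokeTest(String db) {
                System.out.println("Smoke testing database " + db);
            }
        }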

  • What are the best practices for average size of an Adobe PDF file on a Web Server to be downloaded?

We know that at a certain point an extremely large file becomes slow to download or access from a web site and lessens the end-user's experience. How large is too large? What is a good average or suggested size for Adobe PDF files in general?
    Thank you for any guidance.

I'm sure it varies depending on one's opinion, but we have a policy that they should be under 1 to 1.5 MB. This isn't always possible, of course. Our PDFs are mostly informational and contain little or no graphics.
    I would say that it greatly depends on the PDF. If you get up over that size, you should do your best to optimize it for your users. Sometimes you can only do so much.

  • How to reduce size of a PDF file without loss of sharp graphics

Hi, I'm working in Acrobat Pro X. I have combined a file of graphics and need to reduce the size. The original graphics were created in Photoshop CS6 and saved as .jpgs. What would be the best way to reduce the overall size of the PDF file and still maintain the clarity of the graphics? Advice would be appreciated.

When you are dealing with graphics, you are almost always talking about reducing the resolution or color depth. A lot depends on the graphics themselves. You might check one of the graphics and see what the color depth and resolution are. The 300 dpi setting is fine in most cases, unless you want folks to zoom in. For many applications a color depth of 256 colors is fine rather than millions of colors. If those are acceptable options, be sure your graphics are set that way. You can try the Optimize feature to see if you can improve the size and still have adequate resolution (File > Save As > Optimize PDF).
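    If you prefer to script the downsampling instead of using the Acrobat dialog, one commonly used alternative is Ghostscript's pdfwrite device (Ghostscript is a separate free tool, not part of Acrobat). This is a sketch of that route with placeholder file names; the /ebook preset downsamples images to roughly 150 dpi, while /screen is smaller but visibly softer.

        import java.io.IOException;
        import java.util.Arrays;
        import java.util.List;

        /**
         * Sketch: shrink a PDF by letting Ghostscript re-write it with
         * downsampled images. File names are placeholders.
         */
        public class ShrinkPdf {
            public static void main(String[] args) throws IOException, InterruptedException {
                List<String> cmd = Arrays.asList(
                        "gs",                       // "gswin64c" on Windows
                        "-sDEVICE=pdfwrite",
                        "-dCompatibilityLevel=1.4",
                        "-dPDFSETTINGS=/ebook",     // downsample images to ~150 dpi
                        "-dNOPAUSE", "-dBATCH", "-dQUIET",
                        "-sOutputFile=graphics-small.pdf",
                        "graphics-original.pdf");
                Process p = new ProcessBuilder(cmd).inheritIO().start();
                System.out.println("Ghostscript exit code: " + p.waitFor());
            }
        }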

  • How to reduce size or compress PDF files?

    Hi guys,
Does anyone know if there is a way to reduce the size of a PDF file with good quality, even for pictures and scanned documents? I know about the 'Export' option with the 'Reduce Size' Quartz filter, but the compression is so extreme that many files can't be read, the quality is so bad. Is there an additional app, piece of software, or extension that lets the user play with the compression of PDF files and make it more personalized, like dopdf V7 available for Windows? I am very disappointed with this. Please help or offer a suggestion. I will appreciate it. Thanks

I have used the excellent PDF Toolkit app for a while, and it's performed well. Occasionally it makes the document into a negative image... which I'm not sure why...
Mathishk, I'm trying your online version out, and I'm impressed and admire the fact that you have done this.
    Well done. 
A TRICK I use often: once all the hi-res images are placed in the designed document - and it's coming in at 15-20 MB - just change the link to the images so they are 'missing', then make a PDF.
i.e. rename the images folder from 'Hi Res images' to 'Hi Res images - old'.
The screen resolution of the images is still great, but the file comes in at a fraction of the size.
You 'trick' the document into using the screen-preview images only. So my 18 MB file comes in at 5.2 MB.
Just remember to change the images folder name back, so the files relink.
    Andy

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is for configuring the "file server" used in the TREX distributed environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems that must be mounted on all the TREX systems (master, backup and slaves), but we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage; in the case of RedHat we have GFS or even OCFS.
Basically we would like to know what the best practice is and how other companies are doing this for a distributed TREX environment, using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should be the initial question:
Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • Best practice for importing non-"Premiere-ready" video files

    Hello!
I work with internal clients that provide me with a variety of different video types (could be almost ANYTHING: WMV, MP4, FLV). I of course ask for AVIs when possible, but unfortunately I have no control over the type of file I'm given.
    And, naturally, Premiere (just upgraded to CS5) has a hard time dealing with these files.  Unpredictable, ranging from working fine to not working at all, and everything in between.  Naturally, it's become a huge issue for turnaround time.
    Is there a best practice for preparing files for editing in Premiere?
I've tried almost everything I can think of: converting the file(s) to .AVIs using a variety of programs/methods. Most recently, I tried creating a Watch Folder in Adobe Media Encoder and setting it for AVI with the proper aspect ratio. It makes sense to me that that should work: using an Adobe product to render the file into something Premiere can work with.
    However, when I imported the resulting AVI into Premiere, it gave me the Red Line of Un-renderness (that is the technical term, right?), and had the same sync issue I experienced when I brought it in as a WMV.
    Given our environment, I'm completely fine with adding render time to the front-end of projects, but it has to work.  I want files that Premiere likes.
    THANK YOU in advance for any advice you can give!
    -- Dave

I use an older conversion program (my PrPro has a much older internal AME, unlike yours), DigitalMedia Converter 2.7. It is shareware, and has been replaced by newer versions from Deskshare, but my old one works fine. I have not tried the newer versions yet. One thing that I like about this converter is that it ONLY uses system CODECs, and does not install its own, like a few others do. This DOES mean that if I get footage with an oddball CODEC, I need to go get it and install it on the system.
I can batch process AV files of most types/CODECs, and convert to DV-AVI Type II w/ 48 kHz 16-bit PCM/WAV audio and at 29.97 FPS (I am in NTSC land). So far, 99% of the resultant converted files have been perfect, whether from DivX, WMV, MPEG-2, or almost any other format/CODEC. If there is any OOS, my experience has been that it will be static, so I just have to adjust the sync offset by a few frames, and that takes care of things.
In a few instances, the PAR flag has been missed (Standard 4:3 vs Widescreen 16:9), but Interpret Footage has solved those few issues.
The only oddity that I have observed (mostly with DivX or WMVs) is that occasionally PrPro cannot get the file's Duration correct. I found that if I Import those problem files into PrElements, and then just do an Export to the same exact specs, the resulting file (seems to be 100% identical, but something has to be different - maybe in the header info?) Imports perfectly into PrPro. This happens rarely, and I have the workaround, though it is one more step for those. I have yet to figure out why one very similar file will convert with the Duration info perfect, and then a companion file will not. Nor have I figured out exactly what is different after running through PrE. Every theory that I have developed has been shot down by my experiences. A mystery still.
AME works well for most, as a converter, though there are just CODECs that Adobe programs do not like, such as DivX and Xvid. I doubt that any Adobe program will handle those suckers easily, if at all.
    Good luck,
    Hunt

  • What are Best Practice Recommendations for Java EE 7 Property File Configuration?

Where does application configuration belong in modern Java EE applications? What best-practice recommendations do people have?
    By application configuration, I mean settings like connectivity settings to services on other boxes, including external ones (e.g. Twitter and our internal Cassandra servers...for things such as hostnames, credentials, retry attempts) as well as those relating business logic (things that one might be tempted to store as constants in classes, e.g. days for something to expire, etc).
    Assumptions:
    We are deploying to a Java EE 7 server (Wildfly 8.1) using a single EAR file, which contains multiple wars and one ejb-jar.
We will be deploying to a variety of environments: unit testing, local dev installs, cloud-based infrastructure for UAT, stress testing, and production. Many of our properties will vary with each of these environments.
    We are not opposed to coupling property configuration to a DI framework if that is the best practice people recommend.
    All of this is for new development, so we don't have to comply with legacy requirements or restrictions. We're very focused on the current, modern best practices.
    Does configuration belong inside or outside of an EAR?
    If outside of an EAR, where and how best to reliably access them?
    If inside of an EAR we can store it anywhere in the classpath to ease access during execution. But we'd have to re-assemble (and maybe re-build) with each configuration change. And since we'll have multiple environments, we'd need a means to differentiate the files within the EAR. I see two options here:
Utilize expected file names (e.g. cassandra.properties) and then build multiple environment-specific EARs (e.g. appxyz-PROD.ear).
Build one EAR (e.g. appxyz.ear) and put all of our various environment configuration files inside it, appending an environment name to each config file name (e.g. cassandra-PROD.properties), and of course adding an environment variable (to the VM or otherwise) so that the code will know which file to pick up.
    What are the best practices people can recommend for solving this common challenge?
    Thanks.
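    To make the second option concrete, here is a minimal sketch. The APP_ENV variable, the app.env system property, and the EnvironmentConfig class name are hypothetical; the file naming simply follows the cassandra-PROD.properties convention from the question.

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        /**
         * Sketch of option 2: one EAR carries one properties file per environment
         * on the classpath (cassandra-DEV.properties, cassandra-PROD.properties, ...)
         * and a system property or environment variable picks which one to load.
         * The names app.env / APP_ENV are hypothetical.
         */
        public final class EnvironmentConfig {

            public static Properties load(String baseName) throws IOException {
                // Prefer -Dapp.env=PROD, fall back to an APP_ENV environment variable.
                String env = System.getProperty("app.env", System.getenv("APP_ENV"));
                if (env == null) {
                    throw new IllegalStateException("Neither app.env nor APP_ENV is set");
                }
                String resource = baseName + "-" + env + ".properties"; // e.g. cassandra-PROD.properties

                Properties props = new Properties();
                try (InputStream in = EnvironmentConfig.class.getClassLoader().getResourceAsStream(resource)) {
                    if (in == null) {
                        throw new IOException("Missing classpath resource: " + resource);
                    }
                    props.load(in);
                }
                return props;
            }

            public static void main(String[] args) throws IOException {
                Properties cassandra = load("cassandra");
                System.out.println("cassandra.hosts = " + cassandra.getProperty("hosts"));
            }
        }

    A bean or JSP would then just call EnvironmentConfig.load("cassandra") and never care which environment it is running in; in a CDI setting the same lookup could back a producer method so values are simply injected.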

Hi Bob,
Sometimes when you create a model using a local WSDL file, instead of referring to the URL mentioned in the WSDL file it refers to, say, the "C:\temp" folder from where you picked up that file; you can check the target address of the logical port. Because of this, when you deploy the application on the server it tries to search the "c:\temp" path instead of the path specified at the soap:address location in the WSDL file.
The best way is to re-import your Adaptive Web Services model using the URL specified in the WSDL file as the soap:address location,
like http://<IP>:<PORT>/XISOAPAdapter/MessageServlet?channel<xirequest>
or you can ask your XI developer to give you the URL for the web service and the username and password of the server.

  • How can I reduce size of a PDF file?

    My PDF file is 20MB.  How can I reduce its size?

    I used the"File > Save as Other > Reduced Size PDF" method and reduced a 4 MB pdf down to about 2.7 MB.
    I then just printed the same file out to the Print PDF (not sure of the exact name) print driver.  When you install Adobe Acrobat, it adds a virtual printer to your list of available printers.  When you print to it, you don't get an actual print out, you just get a new PDF file.  This method reduced the same 4 MB pdf down to about 1.5 MB.
    I find it weird that their Reduced Size PDF method gave inferior results.  Why don't they just have that function do whatever printing to the virtual printer does?

Best practice - store images outside the WAR file?

    I have an EAR project with several thousand images that are constantly changing. I do not want to store the images in the WAR project since it will take an extremely long time to redeploy with every image change. What is the best practice for storing images? Is it proper to put them in the WAR and re-deploy? Or is there a better solution?

    Perryier wrote:
Can you expand on this? Where do they get deployed and in what format? How do I point to them on a JSP? I am using Sun Application Server 9.0, and I don't really think this has a "stand alone" web server. How will this impact it?
You could install any web server you want (Apache?). The request comes in, and if the request matches something like .jpg or .gif or whatever, you serve up the file. If you have a request for a JSP or whatnot, you forward the request to the app server (Sun App Server in your case); i.e. your web server acts as a content-aware proxy.
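    If a separate web server isn't available, another way to get the same effect is a small servlet that streams image files from a directory outside the WAR. This is only a sketch: the directory, URL pattern, and class name are hypothetical, and an older Servlet 2.5 container (like Sun Application Server 9.0) would map the servlet in web.xml instead of using the annotation.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        /**
         * Sketch: stream images from a directory that lives outside the WAR, so
         * redeploying the application never touches them. A JSP would simply
         * reference <img src="images/logo.jpg">.
         */
        @WebServlet("/images/*")
        public class ExternalImageServlet extends HttpServlet {

            private static final Path IMAGE_DIR = Paths.get("/opt/app-data/images"); // outside the WAR

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // getPathInfo() is e.g. "/logo.jpg"; normalize() blocks "../" escapes.
                String name = req.getPathInfo() == null ? "" : req.getPathInfo().substring(1);
                Path file = IMAGE_DIR.resolve(name).normalize();

                if (name.isEmpty() || !file.startsWith(IMAGE_DIR) || !Files.isReadable(file)) {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                    return;
                }

                String type = getServletContext().getMimeType(file.toString());
                resp.setContentType(type != null ? type : "application/octet-stream");
                resp.setContentLength((int) Files.size(file));
                Files.copy(file, resp.getOutputStream());
            }
        }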

  • Best practice on storing the .as and .mxml files

I have some custom components, and they use their own .as ActionScript files. The custom components are placed in the "src/component" folder right now. Should I place the associated .as files in the same "src/component" folder? What are the suggested best practices?
Thanks,

Not quite following what you mean by "associated .as files", but yes, that sounds fine.
    Tracy

  • Best approach to reduce size of 400GB PO (yikes!)

    Hi fellow Groupwise gurus,
Am taking a position with a new company that has just informed me they have a 400GB PO message store. I informed them that, uh yea, this is a bit of a problem. So, I am starting to consider best way(s) to deal with this. They are on NetWare 6.5 with GroupWise 7.03. My inclination is to take them to GW 8, because we can do an in-place upgrade leaving GW on NetWare for the time being (they can't do the 2012 upgrade and Linux migration, they say, until next year). I would rather have them on GW 8 for stability reasons and better support, 7.03 being so old now.
I know we could (and perhaps should) create another PO and move users to reduce the size of the main PO. Not sure yet, however, if we have hardware resources to setup another server (although I guess we could just load a 2nd instance of the POA at, say, Port 1678). Retention is obviously a problem here, as it is likely that nobody has deleted or purged anything for many, many years (and there are quite possibly fewer than 50 users, I believe, on this PO!). I would love to get them on Retain, but again that is not an option until next year. We could have them start archiving using the GroupWise client archiving feature, but that does create another set of files outside of the PO that need to be backed up reliably. And finally, we could have people delete, but I am not sure that is a viable option.
Any suggestions? FYI - they are having to restart the POA on a regular basis due to POA instability and unresponsiveness. Plus, backups take forever, of course. Thanks.
    Don

    On 04.05.2012 02:26, djhess wrote:
    >
    > Hi fellow Groupwise gurus,
    > Am taking a position with a new company that has just informed me they
    > have a 400GB PO message store. I informed them that, uh yea, this is a
    > bit of a problem.
    And that is a problem why?
The biggest PO I support currently is 2.5 TB. It runs without a hitch.
    > So, I am starting to consider best way(s) to deal
    > with this.
Leave it alone, and reconsider your stance on what is a "problem" for GroupWise. ;)
    > Not sure yet, however, if we have hardware
    > resources to setup another server (although I guess we could just load a
    > 2nd instance of the POA at, say, Port 1678).
And the latter, running on the same server, would only further increase the total size of the data, but otherwise not have any positive effect.
    > Any suggestions? FYI - they are having to
    > restart the POA on a regular basis due to POA instability and
    > unresponsiveness.
Which is almost certainly unrelated to its size. Before I'd make any
    changes, I would want to *know* the cause of those problems. Post some
    details of what exactly happens, and we'll be able to help with that.
    CU,
    Massimo Rosen
    Novell Knowledge Partner
    No emails please!
    http://www.cfc-it.de

Best practice to reduce downtime for full load in production system

Hi Guys,
We have options like "Initialization without data transfer" and "Initialization with data transfer".
To reduce the downtime of the production system for loading the setup tables, I will first trigger an InfoPackage for initialization without data transfer so that the delta pointer is set on the table; from that point onwards any new record is captured as a delta record. I will then trigger an InfoPackage for delta to get the delta records into BW. Once the delta is successful, I will trigger an InfoPackage for the repair full request to get all the historical data into the setup tables, so that the downtime of the production system is reduced.
Please let me know your thoughts and correct me if I am wrong.
Please also let me know about the "Early delta initialization" option.
    Kind regards.
    hari

    Hi,
You have some of this wrong.
An InfoPackage just loads data from the setup tables into the PSA.
The setup tables need to be filled manually using the related transaction codes.
I am assuming you are using an LO DataSource.
In this case a source system lock is mandatory; otherwise you need to go with the early delta initialization option.
Early delta initialization is useful for loading data into BW without downtime at the source.
It sets the delta pointer and loads data at the same time, according to your settings (init with or without data transfer).
If the source system cannot be locked as the client requires, then it is better to go with the early delta initialization option.
    Thanks

  • Best Practice - Message Store size

    Is there a recommended max size for the message store?
    iplanet 5.2

The official recommendation is mostly "that depends"...
    If you intend to do backups, then store partition size depends on backup parallelism and how long you will tolerate backups running.
    If you're talking about maximum size of the store itself, there's no hard limit, only performance limits.
We have seen large systems, with up to 500,000 users on single boxes and many hundreds of gigs of store split up into 40 or 50 partitions, work very successfully. On large boxes.
    Better to give us some idea of your needs, and perhaps we can offer advice.
