Huge mirror file

Hello,
After installing Leopard I noticed a really huge file labeled Users/myname/Library/Mirrors/0016cb.......9/myname.dmg. It is almost 10 GB in size. Do I need it? I'd like to erase it or move it to an external HD.
Then, I have about 50 files, each 8 MB in size, in Users/myname/Library/FileSync/0016cb...9/myname_iDisk.sparsebundle/bands/number. Same question: do I need them?
Lastly, since installing Leopard I get an error message every half hour saying "PubSubAgent quit". It apparently causes no specific problems, but it is annoying. Does anybody know about it?
Thanks for any help; I see you guys have much more serious problems to deal with.
Alex

Alec312 wrote:
Lastly, since installing Leopard I get an error message every half hour saying "PubSubAgent quit". It apparently causes no specific problems, but it is annoying. Does anybody know about it?
This is a new function in Leopard which, judging from the feedback about it, has some problems. Apple introduced an RSS reader in the Mail application (Mail.app), and the Mail preferences now include an RSS section.
Try this:
In Mail, open Preferences (menu Mail | Preferences) and go to the "RSS" tab. Check what is set as the "Default RSS Reader", and change "Check for updates:" from "Every 30 minutes" to "Manually".
Let us know if this solves this issue for you.

Similar Messages

  • Mirror file??

    hi
    I'm having a tidy-up of my hard drive, and I just found a huge (10 GB) file in my user/library/mirrors folder
    that appears as my user name with a .dmg extension.
    Any ideas on what this might be? When I try to open it, OmniOutliner opens but no information is shown.
    regards
    Steve

    Hi,
    I have copied an answer from another thread, but it doesn't really answer our question completely. I too am running low on drive space on my G4, but from the paste below, it looks as if this mirror dmg will continually get recreated if I keep syncing turned on (a necessity for me). Does anyone know definitively whether we can get rid of this file yet keep syncing? I'd love to have my 10 GB back since I only have 75 GB on that computer.
    Amy J
    Paste:
    (from thread: http://discussions.apple.com/thread.jspa?messageID=7354337&#7354337)
    From Limnos: Re: 'mirrors'
    Posted: Jun 10, 2008 1:46 PM in response to: BurntMonkey
    <snip>
    This web site (http://macosx.com/forums/mac-os-x-system-mac-software/297290-problem-users-library-mirrors.html) has some discussions which may help in understanding what the file does. Basically it is a local copy of your iDisk contents so you can get to it when not connected to the Internet. I am guessing you could delete it but it might be re-created unless you configure .Mac not to do so.
    Message was edited <and written by> by: Limnos
    <end paste>

  • Java Programming: Any ideas for breaking a huge class file into smaller ones?

    Hello Java pros,
    I have some very huge class files, some with dozens of methods, each method containing on average a screen-page full of code.
    Obviously, such huge class files are difficult to maintain even when using an IDE, especially when changes have to be made to a whole category of methods scattered all over the class.
    I am wondering if there are ways/best practices out there to make the core class file smaller/smarter - for example:
    <a> by retaining the real core definitions within the core class and moving the detailed implementation outside the core class
    <b> by breaking down the file into more manageable pieces - something to the effect of using 'include' files that some languages support
    etc.
    Thanks for your help in advance.
    Sree Nidhi

    If you have huge class files with dozens of methods, maybe the design of your application is not so sound. You could use all kinds of OO design techniques to design your application so that it is easier to maintain.
    Start by learning about design patterns. The most famous book about design patterns is this one: http://www.amazon.com/exec/obidos/ASIN/0201633612/qid=1029971487/sr=2-1/ref=sr_2_1/102-4299125-5141710
    Here is also a nice book about anti-patterns: http://www.antipatterns.com/
    Jesper
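
    To make option <a> above a little more concrete, here is a minimal, hypothetical Java sketch (all class and method names are invented for illustration): the core class keeps only the high-level workflow and delegates the detailed implementation to small collaborator classes, so changes to one category of methods stay in one small file.

        // Hypothetical example: the original huge class kept parsing, formatting
        // and workflow logic in one file. Here the details move to collaborators.
        class ReportParser {
            ParsedReport parse(String raw) {
                // detailed parsing logic lives here, in its own small file
                return new ParsedReport(raw);
            }
        }

        class ReportFormatter {
            String format(ParsedReport report) {
                // detailed formatting logic lives here
                return report.toString();
            }
        }

        class ParsedReport {
            private final String content;
            ParsedReport(String content) { this.content = content; }
            @Override public String toString() { return content; }
        }

        // The "core" class stays small and readable; changes to parsing or
        // formatting no longer touch this file.
        class ReportService {
            private final ReportParser parser = new ReportParser();
            private final ReportFormatter formatter = new ReportFormatter();

            String buildReport(String raw) {
                return formatter.format(parser.parse(raw));
            }
        }

    The same decomposition shows up in several of the patterns in the book Jesper mentions (Strategy, Facade and friends): the core class exposes a small, stable interface while the detailed implementation lives in separate, independently maintainable classes.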

  • Need help -To Restrict Huge temp file, which grows around 3 GB in OBIEE 11g

    Hi Team,
    I am working on OBIEE version 11.1.1.5 for a client-specific BI application. We have an issue concerning massive space consumption in our OBIEE 11g Linux environment whenever we try to run certain detail-level drill down reports. While investigating, we found that whenever a user runs the drill down report, a temp file named nQS_xxxx_x_xxxxxx.TMP is created and keeps growing in size under the folder structure given below:
    <OBIEE_HOME>/instances/instance1/tmp/OracleBIPresentationServicesComponent/coreapplication_obips1/obis_temp/
    The size of this temp file grows to as much as around 3 GB and it gets erased automatically when the drill down report output is displayed in the UI. Hence, when multiple users simultaneously try to access these sorts of drill down reports, the environment runs out of space.
    Regarding the drill down reports:
    * The drill down report has around 55 columns, is configured to display only 25 rows on the screen, and allows the user to download the whole data set as Excel output.
    * The total number of rows fetched by the query ranges from 1,000 to above 100k. Based on the rows fetched, the temp file size keeps growing; i.e., if around 4,000 rows are fetched, a temp file of around 60 MB is created and gets erased when the report output is generated on screen (similarly, for around 100k rows, the temp file grows up to 3 GB before it gets deleted automatically).
    * The report output has only one table view alongside the Title & Filters view. (No pivot table view is used to generate this report.)
    * The cache settings for BI Server & BI Presentation services cache are not configured or not enabled.
    My doubts or Questions:
    * Is there any way to control or configure this temp file generation in OBIEE 11g?
    * Why does the growing temp file automatically get deleted immediately after the report output is generated on screen? Are there any default server-specific settings governing this behaviour?
    * As per certain OBIEE article references for OBIEE 10g, I learnt that for large pivot-table-based reports the temp file generation is quite normal because of the huge in-memory calculations involved. However, we have used only a table view in the output and it still creates huge temp files. Is this behaviour normal in OBIEE 11g? If not, can anyone please suggest any specific settings to consider to avoid generating these huge files, or at least to generate a compressed temp file?
    * Is there any other workaround available for generating a report of this type without generating temp files in the environment?
    Any help/suggestions/pointers or document references in this regard will be much appreciated. Please advise.
    Thanks & Regards,
    Guhan
    Edited by: 814788 on 11-Aug-2011 13:02

    Hello Guhan,
    The temp files are used to prepare the final result set for OBI Presentation Server processing, so as long as your dataset is big the tmp files will also be big; you can only avoid this by reducing your dataset, for example by filtering your report.
    You can also control the size of your temp files by reducing the work done by the BI Server. By this I mean: if you are using any functions, for example sorting, that can be handled by your database, just push them down to the DB.
    Once the report is finished, the BI Server automatically removes the tmp files because they are no longer necessary. You can see them as files used for internal calculations; once the calculation is done, the server gets rid of them.
    Hope this helps
    Adil

  • Reader 10.1 update fails, creates huge log files

    Last night I saw the little icon in the system tray saying an update to Adobe Reader was ready to be installed.
    I clicked it to allow the install.
    Things seemed to go OK (on my Windows XP Pro system), although very slowly, and it finally got to copying files.
    It seemed to still be doing something and was showing that it was copying file icudt40.dll.  It still displayed the same thing ten minutes later.
    I went to bed, and this morning it still showed that it was copying icudt40.dll.
    There is no "Cancel" button, so this morning I had to stop the install through Task Manager.
    Now, in my "Local Settings\TEMP" directory, I have a file called AdobeARM.log that is 2,350,686 KB in size and a file MSI38934.LOG that is 4,194,304 KB in size.
    They are so big I can't even look at them to see what's in them.  (Too big for Notepad.  When I tried to open the smaller log file, AdobeARM.log, with Wordpad it was taking forever and showing only 1% loaded, so after five minutes, I terminated the Wordpad process so I could actually do something useful with my computer.)
    You would think the installer would be smart enough to stop at some point when the log files begin to get enormous.
    There doesn't seem to be much point to creating log files that are too big to be read.
    The update did manage to remove the Adobe Reader X that was working on my machine, so now I can no longer read PDF files.
    Maybe I should go back to Adobe Reader 9.
    Reader X never worked very well.
    Sometimes the menu bar showed up, sometimes it didn't.
    PDF files at the physics e-print archive always loaded with page 2 displayed first.  And if you forgot to disable the look-ahead capability, you could get banned from the e-print archive site altogether.
    And I liked the user interface for the search function a lot better in version 9 anyway.  Who wants to have to pop up a little box for your search phrase when you want to search?  Searching is about the most important and routine activity one does, other than going from page to page and setting the zoom.

    Hi Ankit,
    Thank you for your e-mail.
    Yesterday afternoon I deleted the > 2 GB AdobeARM.log file and the > 4.194 GB MSI38934.LOG file. So I can't upload them. I expect I would have had a hard time doing so anyway.
    It would be nice if the install program checked the size of the log files before writing to them and gave up if the size was, say, three times larger than some maximum expected size.
    The install program must have some section that permits infinite retries or some other way of getting into an endless loop. So another solution would be to count the number of retries and terminate after some reasonable number of attempts (a rough sketch of this idea appears after this message).
    Something had clearly gone wrong and there was no way to stop it, except by going into the Task Manager and terminating the process.
    If the install program can't terminate when the log files get too big, or if it can't get out of a loop some other way, there might at least be a "Cancel" button so the poor user has an obvious way of stopping the process.
    As it was, the install program kept on writing to the log files all night long.
    Immediately after deleting the two huge log files, I downloaded and installed Adobe Reader 10.1 manually.
    I was going to turn off Norton 360 during the install and expected there would be some user input requested between the download and the install, but there wasn't. The window showed that the process was going automatically from download to install. When I noticed that it was installing, I did temporarily disable Norton 360 while the install continued.
    The manual install went OK. I don't know if temporarily disabling Norton 360 was what made the difference or not.
    I was happy to see that Reader 10.1 had kept my previous preference settings.
    By the way, one of the default settings in "Web Browser Options" can be a problem. I think it is the "Allow speculative downloading in the background" setting.
    When I upgraded from Reader 9 to Reader 10.0.x in April, I ran into a problem. I routinely read the physics e-prints at arXiv.org (maintained by the Cornell University Library) and I got banned from the site because "speculative downloading in the background" was on. [One gets an "Access denied" HTTP response after being banned.]
    I think the default value for "speculative downloading" should be unchecked and users should be warned that one can lose the ability to access some sites by turning it on.
    I had to figure out why I was automatically banned from arXiv.org, change my preference setting in Adobe Reader X, go to another machine and find out who to contact at arXiv.org [I couldn't find out from my machine, since I was banned], and then exchange e-mails with the site administrator to regain access to the physics e-print archive.
    The arXiv.org site has followed the standard for robot exclusion since 1994 (http://arxiv.org/help/robots), and I certainly didn't intend to violate the rule against "rapid-fire requests," so it would be nice if the default settings for Adobe Reader didn't result in an unintentional violation.
    Richard Thomas
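
    Purely as an illustration of the bounded-retry suggestion in the message above (this is a made-up sketch, not Adobe's installer code; the limits, class name and file handling are all assumptions), such an install loop could cap both the number of attempts and the log size instead of looping and logging forever:

        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;

        // Hypothetical sketch only: cap retries and log growth rather than
        // retrying (and logging) indefinitely.
        public class BoundedRetryCopy {
            private static final int MAX_ATTEMPTS = 5;                   // assumed limit
            private static final long MAX_LOG_BYTES = 10L * 1024 * 1024; // assumed 10 MB cap

            public static boolean copyWithRetries(Runnable copyStep, File log) {
                for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                    try {
                        copyStep.run();
                        return true;                     // success, stop retrying
                    } catch (RuntimeException e) {
                        if (log.length() > MAX_LOG_BYTES) {
                            return false;                // give up: the log is already huge
                        }
                        appendLog(log, "attempt " + attempt + " failed: " + e.getMessage());
                    }
                }
                return false;                            // give up after MAX_ATTEMPTS
            }

            private static void appendLog(File log, String line) {
                try (FileWriter w = new FileWriter(log, true)) {
                    w.write(line + System.lineSeparator());
                } catch (IOException ignored) {
                    // if even logging fails, don't loop on it
                }
            }
        }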

  • Huge size file processing in PI

    Hi Experts,
    1. I have seen blogs which explain processing huge files for File and SFTP:
    SFTP Adapter - Handling Large File
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    Here also we have the constraint that we cannot do any mapping; it has to be EOIO QoS.
    Would it be possible to process a 1 GB file and do mapping? Which hardware factor decides whether a system is capable of processing a large file with mapping?
    Is it the number of CPUs, the application servers (Java and ABAP), the number of server nodes, or the Java heap size?
    If my system is able to process a 10 MB file with mapping, there should be something which determines that capability.
    This kind of huge file processing will only fit some scenarios. For example, a proxy-to-SOAP scenario with a 1 GB message exchange does not make sense; I have no idea if there is any web service that will handle such a huge file.
    2. Consider that PI is able to process a 50 MB message with mapping. What are the options we have in PI to increase the performance?
    I have come across these two points many times during the design phase of my project and am looking for your suggestions.
    Thanks.

    Hi Ram,
    You have not mentioned what sort of integration it is, only FILE, so I presume it is a File-to-File scenario. In this case, in PI 7.11 I am able to process a 100 MB (more than 1 million records) file with mapping (the file is a delta extract in SAP ECC AL11). In the sender file adapter I chose recordsets per message and processed the messages in bits and pieces. Please note this is not the actual standard chunk mode. The initial run of the sender adapter will load the 100 MB file into memory, and after that messages will be sent to the IE based on the recordsets per message. If it is more than 100 MB, PI Java starts bouncing because of memory issues. Later we redesigned the interface from proxy to file async, and the proxy sends the messages to PI in chunks; in a single run it sends 5000 messages.
    For PI 7.11 I believe we have the memory limitation of the cluster node. Each cluster node can't be more than 5 GB, and processing again depends on the number of Java app servers. I think this is no longer the limitation from PI 7.30 onwards, where we can use 16 GB of memory for the cluster node.
    This kind of huge file processing will only fit some scenarios. For example, a proxy-to-SOAP scenario with a 1 GB message exchange does not make sense; I have no idea if there is any web service that will handle such a huge file.
    If I understand this correctly: if it is async communication then 1 GB of data can definitely be sent to the web service; however, messages from the proxy should be sent to PI in batches. Maybe the same idea can work for sync communication as well, but timeouts in the receiver channel will be the next issue. Increasing timeouts globally is not best practice; however, if you are on 7.30 or a later version you can increase timeouts specific to your scenario.
    To handle a 50 MB file size, make sure you have additional Java app servers. I don't remember exactly how many app servers we had in my case to handle a 100 MB file size.
    Thanks

  • Word docs become huge RTF files in RoboHelp 8.02

    Every Word document that I add to the RoboHelp 8 project becomes huge in file size.  For example, a 100k Word 2007 document that consists of a couple of pages of text with a couple of small JPEG images (around 50k each) becomes a 15-30 MB RTF file in RoboHelp after importing.  The same happens if I create the page/document from within RoboHelp.  Is there a setting to keep the RTF files from swelling to such gigantic sizes?

    I found out what the problem was.  For some reason, the RTF files swell up when a JPEG image is added and embedded into the Word document.  I created a test document, converted some of the JPEG images to bitmap (BMP) files, and used the image import tool that comes with RoboHelp.  The RTF documents stayed tiny even when the same images were added, but now as BMPs.   The difference is that the RoboHelp image tool creates a link to the bitmap image as opposed to embedding it in the document, which is what happened with the JPEGs.  I read about this in another post in one of the RoboHelp forums.
    Thanks!

  • Reading huge xml files in OSB 11gR1 (11.1.1.6.0)

    Hi,
    I want to read a huge XML file of about 1 GB in size in OSB (11.1.1.6.0).
    I will be creating a (JCA) file adapter in JDeveloper and importing the artifacts into OSB.
    Please let me know the maximum file size that can be handled in OSB.
    Thanks in advance.
    Regards,
    Suresh

    Depends on what you intend to do after reading the file.
    Do you want to parse the file contents and maybe do some transformation? Or do you just have to move the file from one place to another, for example reading from the local system and moving it to a remote system using FTP?
    If you just have to move the file, I would suggest using the JCA File/FTP adapter's Move operation.
    If you have to parse and process the file contents within OSB, then it may be possible depending on the file type and what logic you need to implement. For example, for very large CSV files you can use JCA File Adapter batching to read a few records at a time.

  • Huge .dmg File in Library

    I've got a .dmg file, same name as my .mac account name, sitting in my users/library/mirrors/001b639659db folder: patrickfifth.dmg
    It is 20GB in size! I tried to throw it away and it says it is in use.
    Any idea what this is and why it showed up?
    Thanks.

    That would be because you have turned on mirroring for your iDisk. Open your .Mac preferences in System Preferences, click on the iDisk tab and turn off mirroring the iDisk. If the mirrored file doesn't get deleted automatically you can safely delete it after you've turned off iDisk mirroring.

  • How do I compress a huge flash file for a web banner?

    Hi there,
    I am having trouble figuring out how to compress a huge Flash file for a small web banner. I have 3 JPEG sequences (low quality) that are used as background animations, and the SWF file is now over 900 KB. I need a file under 200 KB or even 100 KB. 
    thank you for your attention,
    Ayka

    The only way to reduce the size of the file is to reduce the size of the content it contains.  If you go into your Flash Publish Settings and choose the option to generate a size report, it will publish a report that lets you know how the weight of the file is distributed, and you can focus on eliminating that weight from there.

  • How to break up a huge XML file and generate serialized JSP pages

    I have a huge xml file, with about 100 <item> nodes.
    I want to process this xml file with pagination and generate jsp pages.
    For example:
    Display items from 0 to 9 on page 1 page1.jsp , 10 to 19 on page2.jsp, and so on...
    Is it possible to generate JSP pages like this?
    I have heard of Velocity but don't know if it is the right technology for this kind of job.

    Thank you for your reply, I looked at the display tag library and it looks pretty neat with a lot of features, I could definitely use it in a different situation.
    The xml file size is about 1.35 MB, and the size is unpredictable; it could shrink or grow periodically depending on the number of items available.
    I was hoping to create documentation-style (static) pages from the xml feed instead of having one JSP with dynamic pages.
    I was looking at Anakia: http://jakarta.apache.org/velocity/docs/anakia.html ; maybe it has features that enable me to create static pages, but I am not very sure.
    I think for now I will transform the xml with an xsl file and pass the page numbers as input parameters to the xsl file.
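    As a rough illustration of that XSL approach (the file names and the pageNumber/pageSize parameter names below are assumptions; the stylesheet would have to declare matching xsl:param elements and select the corresponding <item> nodes), a JAXP driver might look like this:

        import java.io.File;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        // Hypothetical sketch: run the same stylesheet once per page, passing
        // the page number and page size as XSLT parameters.
        public class PaginateItems {
            public static void main(String[] args) throws Exception {
                int pageSize = 10;
                int totalItems = 100;                       // roughly 100 <item> nodes, per the post
                int pages = (totalItems + pageSize - 1) / pageSize;

                TransformerFactory factory = TransformerFactory.newInstance();
                for (int page = 1; page <= pages; page++) {
                    Transformer t = factory.newTransformer(
                            new StreamSource(new File("items-to-page.xsl")));   // assumed stylesheet name
                    t.setParameter("pageNumber", page);     // assumed <xsl:param name="pageNumber"/>
                    t.setParameter("pageSize", pageSize);   // assumed <xsl:param name="pageSize"/>
                    t.transform(new StreamSource(new File("items.xml")),
                                new StreamResult(new File("page" + page + ".jsp")));
                }
            }
        }

    Each pass of the loop writes one static page, so the whole set can be regenerated whenever the feed changes.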

  • How to compare two huge xml files(50MB+) using Java Code

    I want to compare two huge XML files using Java code and need to find the differences between those XML files.
    Is there any API for that?

    You should look for a third-party API for this.
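
    One possibility (my own suggestion, not part of the reply above): for files of 50 MB and more, a streaming comparison avoids loading both documents into memory. The sketch below uses the standard Java StAX API and compares only element names and trimmed text, which is a simplification; serious XML diffing usually calls for a dedicated library such as XMLUnit.

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        // Rough sketch: stream both documents in parallel and stop at the first
        // difference in element names or text. Attributes, namespaces and
        // element ordering are deliberately ignored to keep the example short.
        public class StreamingXmlCompare {
            public static boolean sameContent(String fileA, String fileB) throws Exception {
                XMLInputFactory f = XMLInputFactory.newInstance();
                f.setProperty(XMLInputFactory.IS_COALESCING, true);   // merge adjacent text events
                try (FileInputStream a = new FileInputStream(fileA);
                     FileInputStream b = new FileInputStream(fileB)) {
                    XMLStreamReader ra = f.createXMLStreamReader(a);
                    XMLStreamReader rb = f.createXMLStreamReader(b);
                    while (ra.hasNext() && rb.hasNext()) {
                        int ea = nextInteresting(ra);
                        int eb = nextInteresting(rb);
                        if (ea != eb) return false;
                        if (ea == XMLStreamConstants.START_ELEMENT
                                && !ra.getLocalName().equals(rb.getLocalName())) return false;
                        if (ea == XMLStreamConstants.CHARACTERS
                                && !ra.getText().trim().equals(rb.getText().trim())) return false;
                        if (ea == XMLStreamConstants.END_DOCUMENT) return true;
                    }
                    return !ra.hasNext() && !rb.hasNext();
                }
            }

            // Skip whitespace-only text and comments; return the next significant event.
            private static int nextInteresting(XMLStreamReader r) throws Exception {
                while (r.hasNext()) {
                    int e = r.next();
                    if (e == XMLStreamConstants.CHARACTERS && r.getText().trim().isEmpty()) continue;
                    if (e == XMLStreamConstants.COMMENT) continue;
                    return e;
                }
                return XMLStreamConstants.END_DOCUMENT;
            }
        }

    Only two small buffers are ever in memory, so the file size hardly matters; the trade-off is that this sketch reports only "same or different", not a structured diff.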

  • 11g SOA with AIA suddenly creates huge temp files(sar files)!!

    Hi All,
    For one of our clients on 11g SOA with AIA, the team observed while deploying applications that it suddenly creates huge temp files (sar files), and the server slows down and then shuts down. Has anyone seen such behavior, or possible reasons for it?
    If anyone could share prior experience of this, it would be appreciated!
    Thanks for your time!
    Regards,

    Hi Ajay,
    Could you check the managed server logs on the server you are deploying to? I prefer the soa_server1.out file if it's available. Hopefully there is something more telling on that side.
    My gut feeling is that a schema required by the ProcessFulfillmentOrderBillingBRMCommsAddSubProcess process has not been deployed (which sometimes happens with this PIP in particular).

  • Schema advice for huge csv file

    Guys, I need some advice: I had a huge csv file (500 million rows) to load into a table, and I did. But now I need to alter the columns (they all came in as varchar(50)). I'm changing just one column and it's taking ages... what kind of schema should I adopt? So far I applied a simple data flow, but I am wondering if I should do something like:
    drop table
    create table (all varchar)
    data flow
    alter table
    Not sure about it.

    Is this a once off/ad-hoc load or something that'll be ongoing/BAU?
    If it's ongoing then Arthur's post is the standard approach.
    Create a staging table with varchar(50)s or whatever, load into that, then from that staging table go into your 'normal' table that has the correct column types.
    If it's a once-off, what I'd do is create a new table with the correct data types and do a bulk insert from your table with 500 million rows.
    Then drop the old table and rename the new table.
    Converting the columns in your 500-million-row table one by one is going to take a very long time; it'll be faster to do one bulk insert into a table with the correct schema.
    Jakub @ Adelaide, Australia Blog

  • Read a huge wav file

    Hi,
    We have a problem playing a huge WAV file of around 70 MB.  If we read it directly with "Snd Read Wav File.vi", the waiting time is very long. Also, it says "LabVIEW: Memory is full.  This operation is incomplete." 
    If the WAV is played in Cool Edit (Adobe Audition), it works quickly and smoothly.  Would anyone give us a suggestion? 
    thx.

    Hello
    In this link, at the end, you can find VIs to read huge WAV files in chunks.
    http://forums.ni.com/ni/board/message?board.id=170&message.id=142895&query.id=46434#M142895
    Hope it helps.
    Alipio
    "Qod natura non dat, Salmantica non praestat"
