Quotas file increase to 1GByte

Hi All,
I have a
SunOS isb 5.9 Generic_112233-11 sun4u sparc SUNW,Ultra-250
box.
I have enabled quotas on a 34 GB allocated file system and have 21000 users in my /etc/passwd file.
I observed that the size of the "quotas" file is approximately 1 GB.
Is this normal? Also, please let me know the anatomy of the quotas file, i.e., what content this file holds.
Your answer in this regard will be highly appreciated.
Thanx
Farooq Bhatti

You could either hope to get lucky and that someone on this forum knows the answer (I see you've been waiting for about two weeks now), or you could contact Sun Support. They could probably give you an answer.
Also: have you searched through the Sun docbase at http://docs.sun.com?
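For reference: on classic UFS the quotas file is just a flat array of fixed-size per-UID records (struct dqblk, 32 bytes on Solaris), with each record stored at offset UID * 32, so its apparent size tracks the highest UID that has a record rather than the number of users, and the file is normally sparse (compare ls -l with du -k). A rough back-of-envelope sketch of that relationship, with hypothetical UIDs for illustration:

public class QuotasFileSize {
    static final long DQBLK_BYTES = 32;   // assumed record size (sizeof(struct dqblk) on Solaris UFS)

    static long estimatedSize(long highestUid) {
        return (highestUid + 1) * DQBLK_BYTES;   // a record lives at offset uid * DQBLK_BYTES
    }

    public static void main(String[] args) {
        System.out.println(estimatedSize(21_000L));      // ~672 KB: 21000 roughly sequential UIDs
        System.out.println(estimatedSize(60_001L));      // ~1.9 MB: the standard "nobody" UID
        System.out.println(estimatedSize(33_554_431L));  // ~1 GB: a single hypothetical huge UID
    }
}

So if ls -l really reports about 1 GB, the first thing to check would be whether some account in /etc/passwd has an unusually large UID; du -k quotas will show how much of that size is actually allocated on disk.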

Similar Messages

  • Index file increase with no corresponding increase in block numbers or Pag file size

    Hi All,
    Just wondering if anyone else has experienced this issue and/or can help explain why it is happening....
    I have a BSO cube fronted by a Hyperion Planning app, in version 11.1.2.1.000
    The cube is in its infancy, but already contains 24M blocks, with a PAG file size of 12GB.  We expect this to grow fairly rapidly over the next 12 months or so.
    After performing a simple agg script that aggregates the sparse dimensions, the Index file sits at 1.6GB.
    When I then perform a dense restructure, the index file reduces to 0.6GB.  The PAG file remains around 12GB (a minor reduction of 0.4GB occurs).  The number of blocks remains exactly the same.
    If I then run the Agg script again, the number of blocks again remains exactly the same, the PAG file increases by about 0.4GB, but the index file size leaps back to 1.6GB.
    If I then immediately re-run the Agg script, the # blocks still remains the same, the PAG file increases marginally (less than 0.1GB) and the Index remains exactly the same at 1.6GB.
    Subsequent passes of the Agg script have the same effect - a slight increase in the PAG file only.
    Performing another dense restructure reverts the Index file to 0.6GB (exactly the same number of bytes as before).
    I have tried running the Aggs using parallel calcs, and also in series (i.e. single-threaded), and get exactly the same results.
    I figured there must be some kind of fragmentation happening on the Index, but can't think of a way to prove it.  At all stages of the above test, the Average Clustering Ratio remains at 1.00, but I believe this just relates to the data, rather than the Index.
    After a bit of research, it seems older versions of Essbase used to suffer from this Index 'leakage', but that it was fixed way before 11.1.2.1. 
    I also found the following thread which indicates that the Index tags may be duplicated during a calc to allow a read of the data during the calc;
    http://www.network54.com/Forum/58296/thread/1038502076/1038565646/index+file+size+grows+with+same+data+-
    However, even if all the Index tags are duplicated, I would expect the maximum growth of the Index file to be 100%, right?  But I am getting more than 160% growth (1.6GB / 0.6GB).
    And what I haven't mentioned is that I am only aggregating a subset of the database, as my Agg script fixes on only certain members of my non-aggregating sparse dimensions (ie only 1 Scenario & Version)
    The Index file growth in itself is not a problem.  But the knock-on effect is that calc times increase - if I run back-to-back Aggs as above, the 2nd Agg calc takes 20% longer than the 1st.  And with the expected growth of the model, this will likely get much worse.
    Anyone have any explanation as to what is occurring, and how to prevent it...?
    Happy to add any other details that might help with troubleshooting, but thought I'd see if I get any bites first.
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.

    alan.d wrote:
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.
    I haven't tried Direct I/O for quite a while, but I never got it to work properly. Not exactly the same issue that you have, but it would spawn tons of .pag files in the past. You might try duplicating your cube, changing it to buffered I/O, and running the same processes to see if it does the same thing.
    Sabrina

  • Why does the size of the archive log file increase during a MERGE?

    My database is running in archive log mode.
    Someone is running an Oracle MERGE statement; it is still running.
    He will issue a commit after the operation.
    During that period the redo log is growing.
    My question is: why is the size of the archive log increasing along with the redo log?
    I thought archive logs should only be generated after a commit (maybe that is wrong).
    Please suggest.
    Edited by: 855516 on Mar 13, 2012 11:18 AM

    855516 wrote:
    My database is running in archive log mode.
    Someone is running an Oracle MERGE statement; it is still running.
    He will issue a commit after the operation.
    During that period the redo log is growing.
    My question is: why is the size of the archive log increasing along with the redo log?
    I thought archive logs should only be generated after a commit (maybe that is wrong). No, that is not correct; archive logs are not generated only after a commit. A MERGE statement causes an insert (if the data is not already present) or an update (if it is). Obviously these operations will generate a lot of redo if the amount of data being processed is high.
    If you feel that this operation is causing excessive redo, then a root-cause analysis should be done.
    For that, use LogMiner (an excellent tool for providing a segment-level breakdown of redo size). V$LOGMNR_CONTENTS has columns for the redo block and redo byte address associated with the current redo change (a JDBC sketch follows after the guidelines below).
    There are some guidelines for reducing redo (which may vary by environment):
    1) Check whether there are unwanted indexes on the tables referenced in the MERGE. If so, removing them could bring down the redo.
    2) Use global temporary tables to reduce redo (if there is a need to keep data only temporarily in a session).
    3) Use NOLOGGING if possible (but be aware of its implications).
    Hope this helps
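    To make the LogMiner suggestion above concrete, here is a hedged JDBC sketch (the connection details and redo log path are hypothetical; it assumes the Oracle JDBC driver on the classpath and a user with EXECUTE on DBMS_LOGMNR and SELECT on V$LOGMNR_CONTENTS):

    import java.sql.*;

    public class RedoBySegment {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";   // hypothetical connection details
            try (Connection con = DriverManager.getConnection(url, "system", "password");
                 Statement st = con.createStatement()) {

                // Register one redo log and start LogMiner with the online catalog as the dictionary
                try (CallableStatement start = con.prepareCall(
                        "begin " +
                        "  dbms_logmnr.add_logfile(logfilename => ?, options => dbms_logmnr.new); " +
                        "  dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog); " +
                        "end;")) {
                    start.setString(1, "/u01/oradata/ORCL/redo01.log");   // hypothetical redo log path
                    start.execute();
                }

                // Segment-level breakdown of the redo generated by the MERGE (its INSERT/UPDATE pieces)
                try (ResultSet rs = st.executeQuery(
                        "select seg_owner, seg_name, operation, count(*) redo_records " +
                        "from v$logmnr_contents " +
                        "where operation in ('INSERT','UPDATE') " +
                        "group by seg_owner, seg_name, operation " +
                        "order by redo_records desc")) {
                    while (rs.next()) {
                        System.out.printf("%s.%s %s -> %d redo records%n",
                                rs.getString(1), rs.getString(2), rs.getString(3), rs.getLong(4));
                    }
                }

                try (CallableStatement end = con.prepareCall("begin dbms_logmnr.end_logmnr; end;")) {
                    end.execute();
                }
            }
        }
    }

    Counting rows per segment is only a proxy for redo volume, but it is usually enough to show which table or index the MERGE is hammering.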

  • Files increase sixfold or more when importing as "original size"

    Simple question really - why does a 4GB .MTS file from my camera end up as a 35GB .MOV file in my Events folder when choosing the "keep original size" option when importing? Why does the same file increase from 4GB to 9GB when I choose the lighter "large file" size that converts the movie from HD to SD?
    Is there a way around this problem?
    Background info - I'm using a Panasonic HDC-SD10 camera and either importing straight into iMovie '09 Events, or importing as an archive and then importing later. As I've only got an 8GB SD card for my camera, I have to get the files off there and onto my Macbook on a daily basis while I'm on holiday (as I am right now).
    Unfortunately the large size of the eventual files in iMovie means that I have run out of disk space on my Macbook, and have had to resort to just copying the files onto my hard disk, rather than importing them into iMovie, so I'm wondering if there's an intermediate stage I could introduce that would stop the files growing in size so much?

    This isn't a problem. Video on your camera is in a highly compressed format and as such can't be edited without decompressing it first. Your import routine is decompressing your files and copying them to your hard drive so that iMovie can edit them.
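    As a back-of-envelope illustration of the ratio involved (the bitrates below are hypothetical, since the thread does not say exactly which codecs are in play; AVCHD cameras record at very low bitrates and editing-friendly intermediate codecs run roughly an order of magnitude higher):

    public class FootageSize {
        static double gigabytes(double megabitsPerSecond, double seconds) {
            return megabitsPerSecond * seconds / 8 / 1024;   // Mbit/s over a duration -> approximate GB
        }

        public static void main(String[] args) {
            double seconds = 4.0 * 8 * 1024 / 17;            // duration implied by a 4 GB file at ~17 Mbit/s
            System.out.printf("duration     ~ %.0f min%n", seconds / 60);
            System.out.printf("camera file  ~ %.1f GB at ~17 Mbit/s%n", gigabytes(17, seconds));
            System.out.printf("edit format  ~ %.1f GB at ~150 Mbit/s%n", gigabytes(150, seconds));
        }
    }

    With those assumed numbers, about half an hour of footage goes from roughly 4 GB to roughly 35 GB, which is the same order of magnitude as the growth described above.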

  • Any news on the problem of large CHC files increasing size of roaming profiles?

    Any news on the problem of large CHC files increasing the size of roaming profiles?  Thank you.

    I talked to the product manager.  Here's what he said.
    This was addressed in our last release.  CHC 4.0 changes the location of the locally stored documents so that they are no longer included when users synchronize their roaming profiles.  The CHC will even move previous files to the new location when users upgrade.

  • File is not deleted from original location, and size of file increases when copied

    Hi!
    I have 2 exes, both 1.15MB, which I want to move to another folder. The files are copied to the folder, but the size of each exe increases to around 350MB. What can be the problem?
    BufferedOutputStream bos=new BufferedOutputStream(new FileOutputStream(filename));
    int o=bis.read();
    do{bos.write(o);}while (o!=-1);
    boolean del=this.fpara.delete();
    System.out.println(del);
    Also, the file is not deleted from the original location even though I have used the delete function. The last line produces null output.

    And do follow the coding conventions:
    http://java.sun.com/docs/codeconv/html/CodeConvTOC.doc.html
    If your original source is all jammed up like the snippet you posted, I'm not surprised that you can't spot your mistakes.
    db
    Edited by: Darryl.Burke on Mar 9, 2008 12:40 AM
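    For what it's worth, the posted snippet has two separate bugs: read() is called only once, so the loop keeps writing that same first byte because the loop variable never changes (hence the copies ballooning to hundreds of megabytes), and neither stream is closed before delete() is called, so the source file is still held open, which on Windows makes delete() return false. A minimal corrected sketch, reusing the hypothetical names from the post (fpara is the source File, filename the destination path):

    import java.io.*;

    public class MoveByCopy {
        static boolean moveByCopy(File fpara, String filename) throws IOException {
            BufferedInputStream bis = new BufferedInputStream(new FileInputStream(fpara));
            BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(filename));
            int o;
            while ((o = bis.read()) != -1) {   // read a fresh byte each pass; stop at end of file
                bos.write(o);
            }
            bis.close();                       // both streams must be closed (or try-with-resources used)
            bos.close();                       // before delete() has a chance to succeed
            return fpara.delete();
        }
    }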

  • Help! SQL server database log file increasing enormously

    I have 5 SSIS jobs running in the SQL Server job agent, and some of them pull transactional data into our database every 4 hours. The problem is that the log file of our database is growing rapidly: in a day it eats up 160GB of disk space.
    Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE; even though I set it to SIMPLE, the log data still consumes more than 160GB in a day. Because the disk fills up, the scheduled jobs often fail. Temporarily I am using the DETACH approach to clean up the log.
    FYI: all the SSIS packages in the jobs use transactions on some tasks, e.g. a Sequence Container.
    I want a permanent solution to keep the log file within a particular size limit, and as I said earlier I don't need my log data for future point-in-time recovery, so there is no need to take log backups at all.
    And one more problem: in our database the transactional table has 10 million records and some master tables have over 1000 records each, but our mdf file size is now about 50GB. I don't believe these 10 million records should amount to 50GB of space. What's the problem here?
    Help me on these issues. Thanks in advance.

    And one more problem: in our database the transactional table has 10 million records and some master tables have over 1000 records each, but our mdf file size is now about 50GB. I don't believe these 10 million records should amount to 50GB of space. What's the problem here?
    Help me on these issues.
    For the SSIS part of the question it would be better to ask in the SSIS forum, although nothing is going to change about the logging behaviour. You can add some space to the log file, and you should also batch your transactions, as already suggested.
    Regarding the memory question about SQL Server: once it takes memory it is not going to release it unless the Windows OS faces memory pressure and SQLOS asks SQL Server to trim down its memory consumption. So again, if you have set max server memory to somewhere near 50, SQL Server will eventually utilize that much memory. What you are seeing is totally normal. Remember it is a costly task for SQL Server to release and take memory, so it avoids it by caching as much as possible, and it caches more so as to avoid physical reads, which are costly.
    When the log file is getting full, what does the query below return?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    Can you manually introduce a CHECKPOINT in the ETL query? Try it, it might help you (see the sketch below).
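    For reference, a hedged JDBC sketch of the checks suggested above (server, database, credentials and the logical log file name are all hypothetical, and it assumes the Microsoft JDBC driver, mssql-jdbc, on the classpath):

    import java.sql.*;

    public class LogCheck {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:sqlserver://dbhost;databaseName=MyEtlDb;user=etl;password=secret;encrypt=false";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement()) {

                // Why can't the log be truncated? Under SIMPLE recovery this should normally be
                // NOTHING or CHECKPOINT; ACTIVE_TRANSACTION points at long-running SSIS transactions.
                try (ResultSet rs = st.executeQuery(
                        "select recovery_model_desc, log_reuse_wait_desc " +
                        "from sys.databases where name = 'MyEtlDb'")) {
                    if (rs.next()) {
                        System.out.println("recovery model: " + rs.getString(1)
                                + ", log reuse wait: " + rs.getString(2));
                    }
                }

                // Force a checkpoint so inactive log records can be truncated, then shrink the file once.
                st.execute("use MyEtlDb; checkpoint;");
                st.execute("dbcc shrinkfile ('MyEtlDb_log', 10240)");   // target size in MB, purely illustrative
            }
        }
    }

    Shrinking is a one-off cleanup; if log_reuse_wait_desc keeps reporting ACTIVE_TRANSACTION, the lasting fix is to break the SSIS work into smaller transactions as suggested above.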

  • Pdf file increase on print command

    When I go to print a PDF file, I have noticed that a 7 megabyte file will increase to 100 megabytes, which overwhelms my printer's buffer, and the print time goes from less than a minute to fifteen minutes.  Why? Is there a fix?  Thanks

    Hi Leonard,
    Thank you very much for your prompt reply. The answer has not solved the problem yet. My users don't want to manually merge the files. They want it to happen automatically.
    The application which prints the documents is a simple MS Access database. It is available on each user's personal desktop. As soon as the user opens MS Access, it is programmed in such a way that the Access database runs as a program. It uses a DSN to connect to the Sybase database and get the details of documents that are pending for print.
    At this point the MS Access database starts an iterative process: it fetches the data from Sybase, calls the correct report template in MS Access, populates it, and prints the report using the default printer of the machine on which the database resides. Thus it prints the documents one by one. By setting the default printer to Adobe PDF Converter, we were successful in creating PDF documents rather than printing to paper, but the disadvantage was that each document got printed to a separate file.
    The whole process is coded as a VBA module of MS Access. My question is: by using an API from the Adobe SDK, will I be able to make changes in the VBA code of the MS Access database so that all the files requested for print come out as a single PDF file?

  • Read time of tdm file increases

    Hello,
    I recently recorded a lot of data in TDM format. I originally saved it in an excessively organised and complex way which resulted in large header files and was very slow. I changed it and most of the data is fine, except 6 files in the old format. Just trying to read them to get the data out and then convert it into an easier format is taking far too long. Each file has ~15000 channel groups, each with 3 channels (I won't do this again...). I have written a small VI to test the reading speed. Excluding opening and closing of the file, if I just find a channel group by name, reading one channel from that group takes about 1 second. This would still take days to read it all but would be acceptable. The strange thing is that if I do this in a loop over successive channel groups, which is clearly necessary, it takes longer and longer each iteration. This is behaviour that I have always seen when reading TDM files, but it hasn't caused a serious problem before. I can't find any information about why though, so I hope I'm just doing something stupid. I have attached the speed testing VI (LabVIEW 2010). Running it for 5 iterations, the time taken to read the channels is
    0.82304 s
    1.58809 s
    2.42514 s
    3.56820 s
    5.60632 s
    The channels it is reading all have the same number of values, and it makes no difference if I start at a different channel group (by adding a number to i in the loop).
    Does anyone have any explanation for this behaviour?
    Thank you very much for your help, and I'm sorry if it's something that has been asked before,
    James 
    Attachments:
    ReadBadTDMSpeedTest.vi (44 KB)

    TDMS is an all-binary format whereas TDM has an XML header; otherwise they are very similar. Being a binary format, you would expect it to read faster, especially as you mentioned that the header was complex.
    There are a number of un-closed references in your code and I wonder if this was partly the issue, although this wouldn't explain why converting to TDMS fixed the problem?
    Beyond that, it would be nice to try this without using the Express VIs to see if the problem still occurs.
    Also, were you able to see if this was a memory issue? And did the read time continue to increase beyond what you posted?
    Anyway, I am glad we found a solution to your problem.
    Nick C.
    Cardiff University

  • Trying to find mysterious 23GB file increase

    While I was out at work today, my Library seems to have increased from 45GB to 68GB, meaning that suddenly I'm bumping up against the limit of my available disk space (though I thought I had way more than 20GB left to play with). I can't even Time Machine my way out of this, as I don't have enough free disk space to overwrite with the older, smaller Library.
    So: how do I search to find out what has ballooned so grotesquely in my Library? I spent quite a while opening individual files in it (800KB, 300KB etc) before realising that I could keep doing that all night without necessarily hitting the answer.
    I set up a Finder Find window and told it to look for files modified within the last day, making sure that System files were included. All it found were a couple of innocuous downloaded documents from this morning before I went to work, and I've verified that they're not problematic. It reckons no System files have changed, though the size of the Library suggests that that's not accurate.
    (I ran Clamxav last night, which told me that my machine contains five viruses -- BUT they're all safely quarantined in old unopened emails, and it recommended leaving them well alone.)
    How do I find out which file(s) grew or multiplied in my absence, if the Finder search that I described failed to do so? I'm really at a loss here.
    Thanks in advance for any advice that anyone can give me.

    This was extremely useful and I have bookmarked it, although in the event I didn't need its more in-depth advice for this issue. I didn't previously know about Show View Options/Calculate All Sizes, and as soon as that was switched on the problem was revealed -- a glitch in Mac Mail that was creating bogus "recovered messages" at 51.4MB a time. I've deleted them and my disk space is restored. Thanks for the tip!
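    If you prefer to script it, the same per-folder breakdown can be computed; a minimal sketch that sums the file sizes under each top-level folder of ~/Library (path and output format are illustrative, and folders you lack permission for are simply skipped):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.*;

    public class LibrarySizes {
        public static void main(String[] args) throws IOException {
            Path library = Paths.get(System.getProperty("user.home"), "Library");
            try (var top = Files.list(library)) {
                top.filter(Files::isDirectory).forEach(dir -> {
                    try (var walk = Files.walk(dir)) {
                        long bytes = walk.filter(Files::isRegularFile)
                                         .mapToLong(p -> p.toFile().length())
                                         .sum();
                        System.out.printf("%8d MB  %s%n", bytes / (1024 * 1024), dir.getFileName());
                    } catch (IOException | UncheckedIOException e) {
                        System.err.println("skipped " + dir + ": " + e.getMessage());   // permissions etc.
                    }
                });
            }
        }
    }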

  • IMovie files increasing in size in latest version?

    I opened iMovie for the first time in a couple of years, and for the first time since the last upgrade (10.0.6).
    I opened a file and it started to convert it for some reason.
    I then noticed that my hard drive usage has increased by 40GB.
    This is a problem as my hard drive is now completely full.
    There is a remote chance something else happened to my Mac to cause the increase but I can't identify anything else unusual or different.
    Any ideas what has happened?
    Thank you.

    This issue corrected itself but now it's back again.
    About 40GB of space has disappeared.
    I'm sure it's not iMovie but not sure what it is.
    I am getting messages that my hard drive is full but it shouldn't be.
    Any ideas what it is?
    Thank you

  • POSIX paths, quotes, & file links

    This script almost works to add a clickable link to a file:
    <code>
    tell application "OmniOutliner"
        set MyDoc to front document
        set theReply to (choose file) as alias
        set note of selected rows of MyDoc to "file://localhost" & POSIX path of theReply
    end tell
    </code>
    As long as the path or filename contains no spaces, it works fine.
    I've tried using "quoted form of", but that inserts apostrophes into the link, which breaks the link.
    How can I tweak this script so that it returns a clickable file:// link?
    Thanks,
    CB

    Thanks red_menace and Tony!
    Your suggestions did the trick. I'll have to read up on delimiters to understand exactly how that function works, but it does exactly what I needed. For the record, my final, functioning program is:
    on ReplaceText(theString, fString, rString)
        set current_Delimiters to text item delimiters of AppleScript
        set AppleScript's text item delimiters to fString
        set sList to every text item of theString
        set AppleScript's text item delimiters to rString
        set newString to sList as string
        set AppleScript's text item delimiters to current_Delimiters
        return newString
    end ReplaceText
    tell application "OmniOutliner"
        set MyDoc to front document
        set theReply to (choose file) as alias
        set myPath to "file://localhost" & POSIX path of theReply
        set myString to ReplaceText(myPath as string, " ", "%20") of me
        set note of selected rows of MyDoc to myString
    end tell
    Thanks,
    CB
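    For comparison, the same idea in Java (the path is hypothetical): building the link through a URI escapes spaces and the other reserved characters in one go, rather than replacing " " with "%20" by hand.

    import java.io.File;
    import java.net.URI;

    public class FileLink {
        public static void main(String[] args) {
            File chosen = new File("/Users/cb/My Documents/Project Notes.oo3");   // hypothetical file
            URI link = chosen.toURI();   // e.g. file:/Users/cb/My%20Documents/Project%20Notes.oo3
            System.out.println(link.toASCIIString());
        }
    }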

  • Suddenly flashback file increased

    Hi All,
    I have two databases, ora1 and ora2. All parameters are the same in these two databases. In these two DBs the total flashback size is normally 40 GB each. But suddenly the flashback files of the ora1 DB have increased to 139 GB, while the ora2 flashback files are still 40 GB. In both ora1 and ora2 the flashback retention target is 1 day, but the ora1 flashback logs are 9 to 10 days old. We also execute a daily RMAN backup in the two DBs. Please let me know why the flashback area has grown so large and why it has flashback logs a week old when the flashback retention target is 1 day.
    Oracle Version 10g
    OS Linux

    Hi Ashif,
    Please find the output :
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/10.2.0/dbs/snapcf_ora1.f'; # default
    Edited by: user1013607 on Jan 30, 2012 12:30 AM
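    One common reason for flashback logs outliving the retention target is a guaranteed restore point, which pins every flashback log needed to flash back to it regardless of DB_FLASHBACK_RETENTION_TARGET. A hedged JDBC sketch to check for that and to see what is actually filling the flash recovery area (connection details are hypothetical; the user needs SELECT on the V$ views):

    import java.sql.*;

    public class FlashbackCheck {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//ora1host:1521/ora1";   // hypothetical connection details
            try (Connection con = DriverManager.getConnection(url, "system", "password");
                 Statement st = con.createStatement()) {

                // A row with GUARANTEE_FLASHBACK_DATABASE = 'YES' keeps flashback logs indefinitely
                try (ResultSet rs = st.executeQuery(
                        "select name, guarantee_flashback_database, time from v$restore_point")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + "  guaranteed=" + rs.getString(2)
                                + "  created=" + rs.getTimestamp(3));
                    }
                }

                // What is consuming the flash recovery area: flashback logs, backups or archive logs?
                try (ResultSet rs = st.executeQuery(
                        "select file_type, percent_space_used, number_of_files " +
                        "from v$flash_recovery_area_usage")) {
                    while (rs.next()) {
                        System.out.printf("%-20s %5.1f%%  %d files%n",
                                rs.getString(1), rs.getDouble(2), rs.getLong(3));
                    }
                }
            }
        }
    }

    If no guaranteed restore point exists, comparing the two outputs between ora1 and ora2 should at least show whether the extra space really is flashback logs or something else in the recovery area.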

  • AAC audio file increase in size when imported into DVD Studio Pro

    Hello all!
    I just encountered a very unusual issue with DVD Studio Pro. I used Compressor to export my video as an MPEG-2 file and my audio as an AAC file. After the compression my audio file had an .mpg4 extension and was about 160MB in size. When I imported it into my DVD Studio Pro project, however, the audio increased in size to 1.2 GB. I tried many different compressions, but the problem really is (I think) that the file bumps up in size once imported into the DVD SP project. Is there anything in preferences that I need to adjust or change? Please help!
    Thank you.

    Hi
    I'm a little lost now with the version history, but if you have A.Pack in your Applications folder, then you must use it. Check this tutorial: Encoding AC3 with A.Pack.
    Compressor 2 and 3 have AC3 encoding inside the main application.
      Alberto
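    For what it's worth, a jump from a ~160MB AAC file to ~1.2GB is about what uncompressed 48 kHz / 16-bit stereo PCM would occupy for the same running time, which would be consistent with DVD Studio Pro converting the non-DVD-compliant AAC track on import. The arithmetic below is a back-of-envelope check with all figures assumed for illustration.

    public class DvdAudioSize {
        public static void main(String[] args) {
            double pcmBitsPerSecond = 48_000 * 16 * 2;                 // 1.536 Mbit/s for 16-bit stereo PCM
            double pcmGigabytes = 1.2;
            double seconds = pcmGigabytes * 1024 * 1024 * 1024 * 8 / pcmBitsPerSecond;
            System.out.printf("implied running time ~ %.0f minutes%n", seconds / 60);

            double aacMegabytes = 160;
            double aacKbps = aacMegabytes * 1024 * 1024 * 8 / seconds / 1000;
            System.out.printf("implied AAC bitrate  ~ %.0f kbit/s%n", aacKbps);
        }
    }

    Both implied numbers (roughly 110 minutes and roughly 200 kbit/s) are plausible, which is why encoding the audio to AC3 before import, as suggested above, keeps the size down.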

  • IDOC to file: Increase sequence number for each item in a segment

    Dear All,
    I have an IDoc-to-file scenario where one IDoc with multiple items for a header will be sent.
    In the destination structure I am creating multiple rows, one for each item segment. I have a field (INC_NUM) in the target structure in which I have to start the line number from 1 for each item.
    Also I have a condition - for item x or y I should only have the same line number.
    Thanks in advance,
    Swarna.
    Edited by: Swarna on Oct 24, 2011 8:15 AM

    Hello,
    You can use the statistic function called index for your requirement. Can you give a sample input/output so that we can visualize this further?
    Hope this helps,
    Mark

Maybe you are looking for

  • How to iterate a symbol name and use "push" to add it to an array?

    I'm having trouble with the push statement. On my stage I have ten blocks labeled Block1, Block2,...Block10. I added these blocks manually by dragging and dropping. Did this because it is easier for me to figure where they should be on the stage than

  • Displayport for macbook - dvi connector

    Hey everyone, When I purchased the latest MBP, I also purchased a mini displayport to dvi adapter. (Model A1305) I am trying to find out whether or not this adapter is defective, or if I have the wrong type of female connector. My monitor dvi cable w

  • Glyph GT050 Firewire External Drive in Windows

    Glyph support doesn't know the answer to this, so hopefully someone has experience with this. My question is can I partition this 120 gig drive with say a 20 gig fat32 partition and then save data on it from my windows side? The other partition would

  • Garageband - using apogee Jam and acoustic guitar sound

    Hi I am trying to record an acoustic guitar on garageband (using my actual acoustic guitar), however the sound of the low strings sounds really muffled and low quality. i am using the built-in mic on my Mac to do this, not any external equipment. Is

  • The invitees option in my iPhone calendar disappeared?

    The invitees option in my iPhone calendar disappeared.  Not sure what happened. I do not see an option in iPhone settings to turn on/off invitees when creating or editing a calendar event on iPhone. How do I get back invitees?