Autoextend on next size

hi experts,
I would like to know: what is the best tablespace "AUTOEXTEND ON NEXT" size for the performance of the DB?
Oracle 11gR2 on Red Hat.
users.dbf (the only tablespace for tables) is 330 GB in size.
Daily DB growth is between 1 GB and 2 GB.
tablespace ddl is:
CREATE BIGFILE TABLESPACE "USERS" DATAFILE
'path/users01.dbf' SIZE 268435456000
AUTOEXTEND ON NEXT 1310720 MAXSIZE 33554431M
LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO
ALTER DATABASE DATAFILE
'path/users01.dbf' RESIZE 351655690240
Do I understand it right that the unit of "AUTOEXTEND ON NEXT 1310720" is bytes?
That means the autoextend increment is 1.25 MB.
What is the maximum autoextend size per autoextend operation? I haven't found any info on this in the documentation.
How often should the autoextend operation happen for good performance in a DB with heavy DML?
thanks in advance!
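For scale (a hedged sketch, not a recommendation): NEXT 1310720 is in bytes, i.e. 1.25 MB per extension against 1-2 GB of daily growth, so the file would extend hundreds of times a day. Raising the increment to something like 1 GB cuts that to one or two operations. The path is taken from the DDL above; the 1G value is only an example:

```sql
-- Example only: raise the autoextend increment on the bigfile datafile.
ALTER DATABASE DATAFILE 'path/users01.dbf'
  AUTOEXTEND ON NEXT 1G MAXSIZE 33554431M;

-- Check the resulting increment; INCREMENT_BY is reported in blocks,
-- so multiply by the 8192-byte block size for MB.
SELECT file_name,
       autoextensible,
       increment_by * 8192 / 1024 / 1024 AS next_mb
  FROM dba_data_files
 WHERE tablespace_name = 'USERS';
```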

920748 wrote:
1) there is a SYSTEM EVENT trigger available for extent allocation?
No.
2) the next (maybe increasing) extent size is predictable for segments in LM tablespaces with ALLOCATION_TYPE=SYSTEM?
Yes, it is predictable. The algorithm isn't published and potentially changes in different versions of Oracle. You can easily enough, though, create a test table, fill it with tons of data, and see the sizes of the extents. However, at the point that you want to do this, you're probably at a point where you shouldn't be using automatic extent allocation. If you want to be able to predict the size of the next extent, use uniform extents.
Here's an example from askTom that discusses the algorithm in Oracle 9.2.
3) Oracle will allocate smaller extents if the requested size is not available continuously?
No.
I would like to check remaining tablespace after every extent allocation
It seems highly unlikely that you really want to do this sort of thing every time a new extent is allocated. Presumably, you know roughly how quickly your application's tables are supposed to grow. So, presumably, it's not terribly hard to ensure that you have, say, a month's worth of growth allocated as free space in the tablespace.
If you keep a reasonable amount of free space around in general, you can then monitor the free space on a periodic basis (daily, maybe even hourly) via a scheduled job, and add more free space at a convenient time whenever your free space drops below whatever threshold you set (in this example, a month's worth). That should give you plenty of time between the threshold being tripped and the system actually running short of space, and plenty of buffer in case you get swamped by a legitimate increase in activity.
Justin
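The periodic check Justin describes can be as simple as a scheduled query against DBA_FREE_SPACE; the threshold and any alerting are left to the job (a sketch, not a complete monitoring solution):

```sql
-- Free space per tablespace, for comparison against your own
-- threshold (e.g. a month's worth of expected growth).
SELECT ts.tablespace_name,
       NVL(SUM(fs.bytes), 0) / 1024 / 1024 AS free_mb
  FROM dba_tablespaces ts
  LEFT JOIN dba_free_space fs
    ON fs.tablespace_name = ts.tablespace_name
 GROUP BY ts.tablespace_name
 ORDER BY free_mb;
```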

Similar Messages

  • How to calculate AUTOEXTEND ON NEXT in tablespace clause

    hi
    Please explain how to calculate AUTOEXTEND ON NEXT in the tablespace clause:
    should it be AUTOEXTEND ON NEXT 50M, 100M, or 500M?
    Thanks

    174313 wrote:
    hi
    Please explain How to calculate AUTOEXTEND ON NEXT in tablespace clause.
    whether AUTOEXTEND ON NEXT 50M or 100M or 500M
    Thanks
    The autoextend size depends on the following:
    1) Tablespace type: dictionary-managed (DMT) or locally managed (LMT).
    2) Segment space management: manual or auto.
    3) Extent allocation management: autoallocate or uniform.
    If your tablespace is locally managed with automatic segment space management (ASSM) and uniform extent management, all extents (initial and next) will be the same uniform size.
    If your tablespace is locally managed with ASSM and autoallocate extent management, Oracle sizes the extents automatically, starting at 64 KB and going up to 64 MB.
    If you have a dictionary-managed tablespace, you specify the NEXT size at tablespace creation.
    Hope this helps.
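    Assuming a locally managed autoallocate tablespace, the progression the answer describes can be observed directly; the table name below is just an example:

```sql
-- Group a loaded test table's extents by size; in an autoallocate
-- tablespace you typically see 64K extents first, then 1M, 8M, 64M.
SELECT bytes / 1024 AS extent_kb, COUNT(*) AS extents
  FROM user_extents
 WHERE segment_name = 'BIG_TEST_TABLE'
 GROUP BY bytes
 ORDER BY bytes;
```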

  • *** glibc detected *** free(): invalid next size (fast):

    Hello,
    I am working with LabVIEW 8.2 PDS on a RedHawk Enterprise Linux 4.0 system.
    I have a VI consisting of two strings: a control and an indicator. Both are connected to a CIN object. The control is the input, whereas the indicator is the output. Within the C code, the input string is used as the name of a parameter, whereas the value of that parameter is saved in the indicator string. This works together with an API from a third-party supplier.
    The whole process of getting the value of the given parameter name works quite well, but from time to time the VI, and LabVIEW itself, crash after the CIN code has executed, with the following error message:
    *** glibc detected *** free(): invalid next size (fast): 0xXXXXXXXX ***
    Aborted
    Of course it could be the third-party API, but I put a message at the end of the CIN code which is printed every time, even when LabVIEW crashes.
    Has anybody experienced the same behaviour?
    Thanks in advance for any help or suggestions.
    Johannes

    Hello Johannes,
    I did a database search for the error message in our support database, but nothing was found.
    A quick google search showed some hits, but nothing was specific enough to point to a reason why LabVIEW would crash.
    I'm afraid I can't really help you with this.
    Regards,
    Johannes
    NI Germany

  • Lv71/labview: realloc(): invalid next size:

    Linux Weenies;
    I've had good luck running LV7.1 on Suse 10.1 and 10.3 (x86_64). I have installed the boxed version of Suse 10.3 from Novell and now get the following errors:
    *** glibc detected *** /usr/local/lv71/labview: realloc(): invalid next size: 0x223b1bf0 ***
    (no debugging symbols found)
    ======= Backtrace: =========
    /usr/local/lv71/lib/libc.so.6[0xf7bea4b6]
    /usr/local/lv71/lib/libc.so.6[0xf7bed7ed]
    /usr/local/lv71/lib/libc.so.6(realloc+0x10a)[0xf7bee1aa]
    /usr/local/lv71/labview[0x822752d]
    /usr/local/lv71/labview(DSSetHandleSize+0x69)[0x82262c9]
    /usr/local/lv71/labview(FAppendName+0x60)[0x823c970]
    /usr/local/lv71/labview(_Z9FListDir2PP6Path_tPP5CPStrPP10FMFileTypei+0x24f)[0x81d972f]
    /usr/local/lv71/labview[0x86a5996]
    /usr/local/lv71/labview[0x86a5a8e]
    BTW: I have tried placing earlier versions of glibc within /usr/local/lv71/lib/; it then can't find all the older X libraries, etc.
       ...Dan

    For what it's worth, running gdb on LabView shows an infinite loop at:
    #0  0xffffe405 in __kernel_vsyscall ()
    #1  0xf7c8f8d6 in __lxstat () from /lib/libc.so.6
    #2  0x081d9797 in FListDir2 ()
    #3  0x086a5996 in ?? ()
    #4  0x086a5a8e in ?? ()
    -- snip --
    #4248 0x086a5a8e in ?? ()
    #4249 0x086a5948 in ?? ()
    #4250 0x086d6052 in OMIVIClassMgr::Init ()
    #4251 0x086c0264 in ObjManagers::RegisterObjClassManager ()
    #4252 0x086bfe78 in RegisterObjClassManager ()
    #4253 0x086bff99 in InitObjMgr ()
    #4254 0x08597a42 in InitApp ()
    #4255 0x085b2146 in ?? ()
    #4256 0x085b51d7 in ?? ()
    #4257 0x08282835 in ?? ()
    #4258 0x08282cca in WSendEvent ()
    #4259 0x0822c9f4 in InitializeApp ()
    #4260 0x08230a8e in ?? ()
    #4261 0x08230b03 in main ()
    Hopefully, someone familiar with the internals can help figure this out.
       ...Dan

  • Autoextend without next

    Hello
    Regarding the autoextend issue: if I specify AUTOEXTEND ON and omit the NEXT clause, by default the datafile will extend by 1 block (not OMF).
    Consider a datafile that is full, with segments extended uniformly or by autoallocate.
    When Oracle attempts to extend the segment, does it have to extend the datafile 1 block at a time, block by block, until the uniform or autoallocate extent size is reached, and only then allocate the new extent to the segment?
    Is that right?
    Or does Oracle have an internal algorithm to prevent this poor-performance issue?

    Hello,
    With a locally managed tablespace you have 2 options:
    - Uniform size
    - Autoallocate
    With uniform size you set the size of the extent (for instance 1 MB). This is useful if you have an idea of the size of your tables/indexes.
    With autoallocate, the size of the extent depends on the size of the table/index: the first extents are about 64 KB, then (after the 16th extent) 1 MB, then after the N1th extent 8 MB, and after the N2th extent 64 MB.
    So in that way small tables have only small extents, and large tables can have larger extents.
    This is for the segment.
    As for the datafile: if you didn't set the NEXT option in the AUTOEXTEND clause on the datafile, then the datafile will extend by a default value.
    Hope it can help,
    Best Regards,
    Jean-Valentin Lubiez
    Edited by: Lubiez Jean-Valentin on Nov 7, 2009 7:49 AM
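    Whether a given datafile really would extend one block at a time can be checked from the data dictionary; the tablespace name below is a placeholder:

```sql
-- INCREMENT_BY is the autoextend increment in database blocks;
-- a value of 1 is the block-at-a-time case discussed above.
SELECT file_name, autoextensible, increment_by
  FROM dba_data_files
 WHERE tablespace_name = 'USERS';
```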

  • Temporary tablespace grows rapidly. . .

    Dear All,
    When I run a large query, the temporary tablespace grows continuously.
    The max size of the temporary tablespace is 6 GB.
    I set AUTOEXTEND ON, with NEXT 10M and MAXSIZE 6291456M.
    I also set SORT_AREA_SIZE = 256M.
    How can I stop the rapid growth of the temporary tablespace?
    Thanks in advance,
    PratHamesh.

    If your Oracle version is 9i, you can set PGA_AGGREGATE_TARGET to an appropriate value (e.g. PGA_AGGREGATE_TARGET=1G) to avoid sorting in the TEMP tablespace.
    Sorting happens frequently on your DB, and with this set the TEMP usage should shrink.
    You can also think of allocating particular users to a specific TEMP tablespace, but you have to identify which users those are.
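    A hedged sketch of the suggestion above; the 1G target is illustrative, and it assumes WORKAREA_SIZE_POLICY is AUTO and an spfile is in use:

```sql
-- Give the instance a PGA target so more sorts complete in memory
-- instead of spilling to the TEMP tablespace.
ALTER SYSTEM SET pga_aggregate_target = 1G SCOPE = BOTH;

-- See which sessions are currently consuming TEMP space
-- (an 8K block size is assumed for the MB conversion):
SELECT s.sid, s.username, u.tablespace,
       u.blocks * 8192 / 1024 / 1024 AS temp_mb
  FROM v$sort_usage u
  JOIN v$session s ON s.saddr = u.session_addr;
```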

  • Appropriate size for Autoextending datafile in oracle 10g

    Hi,
    I am using Oracle 10g 10.2.0.3.0 on 64-bit Linux with 16 GB RAM. One thing I want to find out: my schema datafile is set to autoextend with a NEXT size of 100 MB. When the file fills up, does allocating the next 100 MB make sessions wait for long, or should I reduce the NEXT size from 100 MB to 10 MB?
    What would be an appropriate NEXT size for autoextend for an Oracle database in my case? And if there is a performance problem, such as a login delay for the schema due to the growing datafile, how do I find out? I do not have database diagnostic tools.
    Today there was a delay in logging in to the schema and EM was showing a wait, but I could not find out why the wait was there. When I checked, the datafile size was 1203 MB. I thought it might be due to the extension of the datafile, but how do I confirm this?
    The table DBA_ADVISOR_FINDINGS reports the following:
    Problem PL/SQL execution consumed significant database time.
    Symptom Wait class "Network" was consuming significant database time.
    Problem Wait event "TCP Socket (KGAS)" in wait class "Network" was consuming significant database time.

    A slow connect is not related to datafile extension.
    Regarding the wait event "TCP Socket (KGAS)", please read Metalink note 416451.1.
    How fast is tnsping to the database, and ping to the database server?

  • Could we autoextend a datafile when its size hits a defined threshold

    Hi all,
    I have one doubt: can we autoextend a tablespace datafile when its size hits a defined threshold? For example, if my datafile size is 10G and autoextend is on:
    1. Is there any option to grow its size automatically (say, when usage reaches 9.5G, the size of the datafile should automatically be extended to 11G)?
    2. And if we are using autoextend for a datafile, is there any performance impact at the moment the datafile fills up during some transactions and is about to extend to the next size? I mean, from the application point of view, will it behave the same way as when transactions (insertions) take place in a datafile that already has enough free space?
    Thanks in advance

    Just to add a few points to what Sybrand has mentioned:
    You can't ask Oracle to add a specific amount of free space; it extends extent by extent.
    There will be some performance impact either way, as the process trying to write the data has to wait until Oracle allocates free blocks.
    So if you expect your datafiles to fill up frequently, it is better to use a fairly large value for NEXT. Then whenever more space is needed, Oracle adds a big chunk at once, which reduces the frequency of extent-allocation operations.
    BTW, please mark the question as 'answered' if you feel you got the needed information. This will save the time of people who might otherwise answer an already-answered question.
    regards,
    CSM
    Edited by: CSM.DBA on Aug 11, 2012 8:36 PM

  • Tablespace size and autoextend

    hi,
    if I create a tablespace with one datafile, I can size it at, say, 100M and set autoextend on. If it reaches 100M and then keeps growing, does the constant growth make things run slower? Would it be better to do some analysis and size it at 2000M up front? And if autoextend does slow things down, by how much?
    thanks

    My personal preference is to create at 100m with autoextend on next 100m maxsize 16000m or so. But I don't think I'd do that for a tablespace that was going to be on the receiving end of thousands of concurrent inserts a minute in a heavy OLTP environment. For that sort of situation, I think I would prefer to create at 2000m autoextend on next 2000m maxsize 16000m.
    Because the poor transaction that causes the autoextend obviously suffers from a nasty bout of waiting ...but no-one else will for quite a long while, thanks to that large 2GB jump in size. Meanwhile, my hard disk is not stuck there with an over-sized monster of a datafile that was over-sized by an overly-cautious DBA.
    The key phrase in your question, in other words, is "does it make things run slower as it is constantly growing".
    If your files are truly CONSTANTLY growing, yes, you're in deep trouble and you will be suffering from severe slow-down. But if you choose an appropriate NEXT clause, you can have an OCCASIONAL growth of the file, sufficient for maybe a week or more's use, that only slows down one or two people for just a relatively short time.
    Where you have heavy inserts going on, increase the NEXT clause to make the incidence of file growth keep at the 'not too often' level.
    In the days of multi-hundred-gig disks, I would think that any DBA sitting there still trying to predict the future as far as disk space usage is concerned is just wasting his time. As jgarry put it, there are lots of other things more deserving of your attention. There's no pressing need to ration out carefully something (ie, space) which is abundant, in other words.

  • Setting the size of a tablespace prior to its creation

    Hi guys,
    Oracle version: 10.2.0.x
    OS: Windows 32bit
    I was wondering if there are any 'conditions' on setting the initial size of a tablespace prior to its creation. This is because one of our customers wants to export and import from one schema into another, but set an initial size for a tablespace during its creation, prior to importing data into it. However, we have a database wizard tool that creates the accounts/schemas by using the following underlying SQL:
    CREATE TABLESPACE <tbl_name> datafile <path><datafile_name>.dbf SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 32000M DEFAULT STORAGE (INITIAL 256 NEXT 256K MINEXTENTS 1 MAXEXTENTS UNLIMITED PCTINCREASE 0) ONLINE;
    Under normal circumstances, what he would do is to use our wizard tool and create an account and then import data from an export dump.
    But he wants to create a tablespace using a SQL something like:-
    CREATE TABLESPACE <tbl_name> datafile <path><datafile_name>.dbf SIZE 2000M AUTOEXTEND ON NEXT 500M MAXSIZE 32000M DEFAULT STORAGE (INITIAL 256 NEXT 256K MINEXTENTS 1 MAXEXTENTS UNLIMITED PCTINCREASE 0) ONLINE;
    He thinks this should be OK because his export dump is about 2 GB in size, and he wants to import all of it 'at once', i.e. without actually using the autoextend feature for the first 2 GB.
    Does this make any sense? Do you see any performance difference between the two CREATE TABLESPACE statements?
    I think the second SQL (with SIZE 2000M initially) is the more logical approach, but I am not really sure of the performance implications down the line.
    Any help/input much appreciated!
    Thanks guys

    1) Provide the facility in your tool to change the default size of datafiles and keep autoextend off.
    You won't believe it... we had this option in a previous version, but now it isn't there anymore!
    But why turn off AUTOEXTEND? I mean, our customers are not meant to modify anything in the database other than via our software, hence we would want them to just use the software as it is, but have their DBAs monitor the growth of the datafiles etc.
    2) Let your tool do what it is doing, and then manually increase the size of the datafiles and turn autoextend off once you have created the tablespace and before doing the import. This can be done by your tool or by logging in to the database.
    We could do this, but we don't want our customers to manually touch the database, as they might not be Oracle-proficient, and we wouldn't want production systems to crash for that reason.
    Hence we could just give them the SQL to replace the 'default' one, so the tool creates a tablespace with an initial size of 2 GB+ and a larger autoextend increment (such as AUTOEXTEND ON NEXT 500M rather than 100M), so that there are fewer autoextends.
    Also, in your point 2, are you saying to turn off autoextend only for the import and set it back on afterwards with ALTER TABLESPACE...?
    Thank you.
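    The approach discussed above can be sketched as a single statement: pre-size the tablespace to the expected import volume so the load never waits on autoextend, and keep a larger NEXT for growth afterwards (names, path, and sizes are placeholders):

```sql
CREATE TABLESPACE app_data
  DATAFILE '/path/app_data01.dbf' SIZE 2000M
  AUTOEXTEND ON NEXT 500M MAXSIZE 32000M;
```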

  • Default Storage versus Uniform size at creating Tablespace

    Can anybody explain whether these two concepts mean the same thing? In my practice I need to move all data from a 9i to a 10g database. In 9i the tablespace was created by:
    CREATE TABLESPACE DATA_TS
    DATAFILE '/opt/oracle/u01/data_04.dbf' SIZE 11M AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED
    EXTENT MANAGEMENT DICTIONARY
    LOGGING
    DEFAULT STORAGE(INITIAL 112K
    NEXT 112K
    MINEXTENTS 1
    MAXEXTENTS 1017
    PCTINCREASE 0)
    ONLINE
    PERMANENT
    SEGMENT SPACE MANAGEMENT MANUAL
    so in 10g I created the tablespace like:
    CREATE TABLESPACE DATA_TS
    DATAFILE '/opt/oracle/u10/data_04.dbf' SIZE 11M AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 112K
    LOGGING
    ONLINE
    SEGMENT SPACE MANAGEMENT MANUAL
    are they same?

    Hi,
    The two concepts are not the same.
    Check this link:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96521/tspaces.htm
    Starting with Oracle9i, the default for extent management when creating a tablespace is locally managed. However, you can explicitly specify that you want to create a dictionary-managed tablespace. For dictionary-managed tablespaces, Oracle updates the appropriate tables in the data dictionary whenever an extent is allocated, or freed for reuse.
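    If recreating the tablespace is not an option, Oracle also provides a migration procedure for dictionary-managed tablespaces; note that a migrated tablespace keeps user-managed extent sizing, so it is not identical to a freshly created LOCAL UNIFORM tablespace:

```sql
-- Convert an existing dictionary-managed tablespace to locally managed.
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('DATA_TS');
```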

  • Reducing oracle data file sizes to a bare minimum

    Hi,
    How can I create an Oracle instance with minimal file sizes?
    Using the database configuration wizard, I specified 10-20 MB sizes for SYSTEM01, UNDOTBS01, etc., but when the instance was created the files were still huge:
    SYSAUX01 was 83 MB,
    SYSTEM01 was 256 MB,
    UNDOTBS01 was 158 MB.
    The only files that seem to have complied were the redo log files.
    It's a testing environment that's constantly restored, so I'm trying to fit the files onto a RAM disk (using ImDisk) in favour of performance.
    The data required/created by each test is not large, and because it's only a testing DB, reliability/consistency is not an issue; rollbacks are also minimal (I've set the undo retention to 15 seconds so far).
    So if anyone can shed some light on how to keep file sizes to a minimum, I'm all ears.
    If there's a way to run without some of these files, I'm keen to hear that too =)
    Thanks

    So if anyone can shed some light on how to keep filesizes to a minimal, I'm all ears.
    SQL below works.
    reduce SIZE value & see what works for yourself
    spool V888.log
    set term on echo on
    startup mount;
    CREATE DATABASE "V888"
        MAXLOGFILES 16
        MAXLOGMEMBERS 3
        MAXDATAFILES 100
        MAXINSTANCES 8
        MAXLOGHISTORY 292
    LOGFILE
      GROUP 1 '/u01/app/oracle/product/10.2.0/oradata/V888/redo01.log'  SIZE 50M,
      GROUP 2 '/u01/app/oracle/product/10.2.0/oradata/V888/redo02.log'  SIZE 50M,
      GROUP 3 '/u01/app/oracle/product/10.2.0/oradata/V888/redo03.log'  SIZE 50M
    EXTENT MANAGEMENT LOCAL
    SYSAUX DATAFILE '/u01/app/oracle/product/10.2.0/oradata/V888/aux88801.dbf'
         SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M
    DEFAULT TABLESPACE USERS DATAFILE
      '/u01/app/oracle/product/10.2.0/oradata/V888/system01.dbf'
         SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M
    DEFAULT TEMPORARY TABLESPACE TEMP888 TEMPFILE '/u01/app/oracle/product/10.2.0/oradata/V888/temp88801.dbf'
         SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M
    UNDO TABLESPACE UNDO888 DATAFILE
      '/u01/app/oracle/product/10.2.0/oradata/V888/undo88801.dbf'
         SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M
    CHARACTER SET AL32UTF8
    NATIONAL CHARACTER SET AL16UTF16
    spool off
    But WHY are you doing so?

  • Is there any limit on the size of a datafile if we specify MAXSIZE UNLIMITED

    Hi,
    We have ASM with Oracle Enterprise Linux (Linux xxxxx.net 2.6.18-92.1.17.0.1.el5 #1 SMP Tue Nov 4 17:10:53 EST 2008 x86_64 x86_64 x86_64 GNU/Linux) and the database is 11gR2.
    The question is: if I add a datafile with MAXSIZE UNLIMITED (example syntax below), what is the maximum size the datafile can grow to?
    alter database datafile '+DATA_DG/user/datafile/useridx.376.752379021' autoextend on next 8k maxsize unlimited;
    Thanks

    if i add a datafile with maxsize unlimited ( example below syntax), What will be the maxsize of the datafile, it can grow to?
    For smallfile tablespaces (the default) the limit is 4M blocks; for a DB with an 8K block size that is 32 GB.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17110/limits002.htm#i287915
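    The 32 GB figure follows from the block limit and the block size (the smallfile limit is 4194303 blocks; bigfile tablespaces allow 4G blocks, i.e. 32 TB at 8K):

```sql
-- Maximum smallfile datafile size for an 8K block size, in GB (just
-- under 32).
SELECT 4194303 * 8192 / 1024 / 1024 / 1024 AS max_gb FROM dual;
```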

  • SIZE TABLESPACE

    I'm doing an import, but I get the following errors:
    ORA-01659: unable to allocate MINEXTENTS beyond 1 in tablespace OPC_DATOS
    ORA-01658: unable to allocate INITIAL for segment in tablespace OPC_DATOS
    I created the tablespace like this:
    CREATE TABLESPACE OPC_DATOS DATAFILE 'C:\BASES\OPC_DATOS.DBF' SIZE 500M AUTOEXTEND ON NEXT 500 MAXSIZE 7000M;
    I don't know what sizes to allocate for the tablespace storage parameters; I am referring to MINEXTENTS, INITIAL, MAXEXTENTS, PCTINCREASE.

    Post results of
    SELECT * from v$version;
    Is the tablespace DICTIONARY or LOCALLY managed?

  • How autoextend affects the performance of a big data load

    I'm doing a bit of reorganization on a data warehouse, and I need to move almost 5 TB worth of tables and rebuild their indexes. I'm creating a tablespace for each month, using BIGFILE tablespaces, and assigning 600 GB to each, which is the approximate size of the tables for each month. The process of just assigning the space takes a lot of time, so I decided to try a different approach: change the datafile to AUTOEXTEND ON NEXT 512M and then run the ALTER TABLE MOVE commands to move the tables. The database is Oracle 11g Release 2, and it uses ASM. I was wondering which would be the better approach between these two:
    1. Create the tablespace, with AUTOEXTEND OFF, and assign 600GB to it, and then run the ALTER TABLE MOVE command. The space would be enough for all the tables.
    2. Create the tablespace, with AUTOEXTEND ON, and without assigning more than 1GB, run the ALTER TABLE MOVE command. The diskgroup has enough space for the expected size of the tablespace.
    With the first approach my database takes approximately 10 minutes to move each partition (there's one for each day of the month). Would this number be impacted in a big way if the database has to do an autoextend every 512 MB?

    If you measure the performance as the time required to allocate the initial 600 GB data file plus the time to do the load, and compare that to allocating a small file and doing the load while letting the data file autoextend, it's unlikely that you'll see a noticeable difference. You'll get far more variation just in moving 600 GB around than you'll lose waiting on the data file to extend. If there is a difference, allocating the entire file up front will be slightly more efficient.
    More likely, however, is that you wouldn't count the time required to allocate the initial 600 GB data file since that is something that can be done far in advance. If you don't count that time, then allocating the entire file up front will be much more efficient.
    If you may need less than 600 GB, on the other hand, allocating the entire file at once may waste some space. If that is a concern, it may make sense to compromise and allocate a 500 GB file initially (assuming that is a reasonable lower bound on the size you'll actually need) and let the file extend in 1 GB chunks. That won't be the most efficient approach and you may waste up to a GB of space but that may be a reasonable compromise.
    Justin
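    With a bigfile tablespace, the up-front allocation Justin recommends is a single resize that can be done well before the load; the tablespace name and size below are illustrative:

```sql
-- Pre-grow the month's bigfile tablespace ahead of the ALTER TABLE
-- MOVE run, so the load itself never waits on autoextend.
ALTER TABLESPACE ts_month_2012_08 RESIZE 600G;
```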
