TABLE --- BLOCK SIZE - HELP

Hi,
Good day to all.
There are four tables in total (EMP, DEPT, STORE_INFO, WAREHOUSE_INFO) to which range partitioning has been applied. (The partitioning is on a date range keyed to the current date, i.e. a new partition is created automatically every day.)
A timer has also been set up so that the purge runs automatically based on the number of days we pass in.
But I can see the status in my table as "11", which indicates that the table is overloaded.
To my surprise, NUM_ROWS and BLOCKS are NULL for all the tables except EMP, for which NUM_ROWS is NULL but BLOCKS is 1,540,356.
1) Will this be an issue during the purge?
2) Why is NUM_ROWS NULL while BLOCKS is so high?
Please help....
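For reference: NUM_ROWS and BLOCKS in DBA_TABLES are optimizer statistics, and NULL there normally just means statistics have never been gathered for the table (or were only partially collected, which might explain BLOCKS appearing without NUM_ROWS). A minimal sketch of gathering and re-checking them, assuming the MOQ.EMP names used later in this thread:
BEGIN
   -- gather table, partition and index statistics for MOQ.EMP
   DBMS_STATS.gather_table_stats (ownname     => 'MOQ',
                                  tabname     => 'EMP',
                                  granularity => 'ALL',
                                  cascade     => TRUE);
END;
/
SELECT num_rows, blocks
  FROM dba_tables
 WHERE owner = 'MOQ' AND table_name = 'EMP';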

Thanks Hemant for your prompt reply.
Sorry, I forgot to mention that this information is from our own records: "11" is the status a record moves to when the table is overloaded.
I was actually under the assumption that the "table overloaded" issue was being raised because of the BLOCKS value.
Is the below code right?
When I run it, I get the following error:
Error
===
ORA-20000: TABLE "MOQ"."EMP" does not exist or insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 2105
ORA-06512: at "SYS.DBMS_STATS", line 5210
ORA-06512: at "SYS.DBMS_STATS", line 5243
ORA-06512: at line 8
DECLARE
   num_rows      NUMBER;
   num_blocks    NUMBER;
   avg_row_len   NUMBER;
BEGIN
   -- retrieve the values of table statistics on MOQ.EMP
   -- statistics table name: EMP    statistics ID: TEST1
   -- MOQ - SCHEMA_NAME
   DBMS_STATS.get_table_stats ('MOQ'
                              ,'EMP'
                              ,NULL
                              ,'EMP'
                              ,'TEST1'
                              ,num_rows
                              ,num_blocks
                              ,avg_row_len);
   -- print the values
   DBMS_OUTPUT.put_line (   'num_rows='
                         || num_rows
                         || ',num_blocks='
                         || num_blocks
                         || ',avg_row_len='
                         || avg_row_len);
END;
/
Or does only a DBA have the privilege?
Please help
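For what it's worth, DBMS_STATS.GET_TABLE_STATS raises ORA-20000 both when the target table is missing and when the statistics table named in the STATTAB argument cannot be found - and a user statistics table cannot be called EMP in the MOQ schema anyway, because the real MOQ.EMP table already owns that name. A minimal sketch under that assumption, using a hypothetical statistics table name STATS_TAB in place of 'EMP' (no DBA role needed, just privileges on the schema):
BEGIN
   -- one-time step: create the user statistics table
   -- (STATS_TAB is a hypothetical name; it must not clash with
   -- an existing table in MOQ)
   DBMS_STATS.create_stat_table (ownname => 'MOQ', stattab => 'STATS_TAB');
   -- copy the current dictionary statistics for MOQ.EMP into it,
   -- tagged with statistics ID TEST1
   DBMS_STATS.export_table_stats (ownname => 'MOQ',
                                  tabname => 'EMP',
                                  stattab => 'STATS_TAB',
                                  statid  => 'TEST1');
END;
/
After this, the GET_TABLE_STATS block above should work once its STATTAB argument is changed to 'STATS_TAB'.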

Similar Messages

  • Using large block sizes for index and table spaces

    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
    How to find the current block size used for tables and index? is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
    What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?

    user3390467 wrote:
    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? This is a generic statement used by some consultants. Unfortunately, it is riddled with exceptions and other considerations.
    One consultant in particular seems to have anecdotal evidence that using different block sizes for index (big) and data (small) can yield almost miraculous improvements. However, that can not be backed up due to NDA. Many of the rest of us can not duplicate the improvements, and indeed some find situations where that results in a degradation (esp with high insert/update rates from separated transactions).
    I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
    How to find the current block size used for tables and index? is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
    What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?
    I'd strongly recommend that you
    1) stop using generic tools to analyze specific problems
    2) define your problem in detail (what are you really trying to accomplish? - it seems like performance tuning, but you never really state that)
    3) define the OS and DB version - in detail. Give rev levels and patch levels.
    If you are having a serious performance issue, I strongly recommend you look at some performance tuning specialists like "http://www.method-r.com/", "http://www.miracleas.dk/", "http://www.hotsos.com/", "http://www.pythian.com/", or even Oracle's Performance Tuning consultants. Definitely worth the price of admission.
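    On the "how to find the current block size" question above, a short data-dictionary sketch (plain queries, nothing version-specific):
    -- block size per tablespace (applies to every table and index stored in it)
    SELECT tablespace_name, block_size FROM dba_tablespaces;
    -- database default block size
    SELECT value FROM v$parameter WHERE name = 'db_block_size';
    -- index height check: BLEVEL is the depth from the root block to the leaf blocks
    SELECT index_name, blevel FROM dba_indexes WHERE owner = USER;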

  • HT2559 Help with setting raid block size after the fact

    I screwed up and created my RAID 1 with the block size set at 32. I need 256... it won't let me change it. What do I do? Do I delete and re-configure it?

    thanks for the reply.  I am editing huge photo files (HDR Pano's) off the drive.  Doesn't that mean I need 256?  Anyway, when I go to erase it, it says "Deleting a mirrored RAID set changes each of its slices into a partition that contains a complete copy of the data from the deleted RAID set".   Is that a problem?

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a Data Warehouse on Oracle 11g R2, and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice on tablespaces and block size in such a scenario? I did some research and it is hard to find a clear answer; some resources advise that block size is not important and can be left small (8KB), others state that it is crucial and should be the biggest possible (64KB). The other question is which data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice that may decrease performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that's why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    Any help highly appreciated and thank you in advance.

    Wojtus-J wrote:
    We are preparing to implement a Data Warehouse on Oracle 11g R2, and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that. With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    The question is: what is the general advice on tablespaces and block size in such a scenario? If you need to ask about block sizes, use the default (i.e. 8KB).
    I did some research and it is hard to find a clear answer. But if you get contradictory advice from this forum, how would you decide which bits to follow?
    A couple of sensible guidelines when researching on the internet - look for material that is datestamped with recent dates (last couple of years), or references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, give greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    The other question is which data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice that may decrease performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that's why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance thing was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition data so that a large fraction of the data can eventually be made read-only: using tablespaces to mark time-boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary - e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
    Regards
    Jonathan Lewis
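    A small sketch of the aging pattern Jonathan describes (all names are illustrative): move a closed period into its own tablespace, rebuild its local index partitions, and make the tablespace read-only.
    ALTER TABLE sales_fact MOVE PARTITION p2007_01 TABLESPACE ts_2007_01;
    -- MOVE marks local index partitions UNUSABLE, so rebuild them alongside
    ALTER INDEX sales_fact_ix REBUILD PARTITION p2007_01 TABLESPACE ts_2007_01;
    ALTER TABLESPACE ts_2007_01 READ ONLY;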

  • Specifying segments and block size manually

    Hi, just a quick question,
    Could anyone help me understand why someone might manually add segments to a tablespace (or is it a data file they would be added to)? Doesn't autoextend take care of this?
    And secondly... why would you increase or decrease the block size of a segment? Is this because you may have small or large rows within a table and want a block size to accompany this?
    Any help would be appreciated.

    Hi,
    In Oracle, free space can be managed automatically or manually. You specify automatic segment-space management when you create a locally managed tablespace.
    Free space can be managed automatically inside database segments. The in-segment free/used space is tracked using bitmaps, as opposed to free lists. Automatic segment-space management offers the following benefits:
    -Ease of use
    -Better space utilization, especially for the objects with highly varying size rows
    -Better run-time adjustment to variations in concurrent access
    -Better multi-instance behavior in terms of performance/space utilization
    For manually managed tablespaces, two space management parameters, PCTFREE and PCTUSED, enable you to control the use of free space for inserts and updates to the rows in all the data blocks of a particular segment. Specify these parameters when you create or alter a table or cluster (which has its own data segment). You can also specify the storage parameter PCTFREE when creating or altering an index (which has its own index segment).
    see this link
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96524/b_deprec.htm#634923 :)
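    A minimal sketch of the two approaches described above (datafile names and sizes are illustrative):
    -- automatic segment-space management: in-segment free space tracked by bitmaps
    CREATE TABLESPACE assm_ts
       DATAFILE 'assm_ts01.dbf' SIZE 100M
       EXTENT MANAGEMENT LOCAL
       SEGMENT SPACE MANAGEMENT AUTO;
    -- manual segment-space management: PCTFREE/PCTUSED control block usage
    CREATE TABLESPACE manual_ts
       DATAFILE 'manual_ts01.dbf' SIZE 100M
       EXTENT MANAGEMENT LOCAL
       SEGMENT SPACE MANAGEMENT MANUAL;
    CREATE TABLE demo_tab (id NUMBER, payload VARCHAR2(200))
       TABLESPACE manual_ts
       PCTFREE 20    -- keep 20% of each block free for future row growth
       PCTUSED 40;   -- a block rejoins the free list when usage drops below 40%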

  • How to change existing database block size in all tablespaces

    Hi,
    I need help changing the block size for my existing database, which currently has an 8KB block size.
    I have read that we can only set the block size during database creation, but I want to change it after installation,
    because for some reason I don't want to change the database installation script.
    Can anyone list the steps to change the database block size for all existing tablespaces (except SYSTEM and TEMP)?
    I want to change it to 32KB.
    Thank you for you time.
    -Rushang Kansara

    > We are facing more and more physical reads, I thought by using 32K block size
    we would resolve that..
    A physical read reported by Oracle may not be one - it could well be a logical read from the O/S file system cache and not a true physical read. With raw devices, for example, a physical I/O reported by Oracle is indeed one, as there is no O/S cache for raw devices. So one needs to be careful how one interprets numbers like physical reads.
    Lots of physical reads may not necessarily be a bad thing. In contrast, a high percentage of "good/fast" logical reads (i.e. a high % buffer cache hit ratio) may indicate a serious problem with application design - as the application is churning through the exact same data again and again and again. Applications should typically only make a single pass through a data set.
    The best way to deal with physical reads is to make fewer of them. Simple example: a database deals with a lot of inserts. Some bright developer decided to over-index a table. Numerous indexes for the same columns exist in different physical column orders.
    Oracle now spends a lot of time dealing (reading) with these indexes when inserting (or updating a row). A single write I/O may incur a 100 read I/Os as a result of these indexes needing to be maintained.
    The bottom line is that "more and more physical I/O" is merely a symptom of a problem. Trying to speed these up could well be a wasted exercise. Besides, the most optimal approach to "lots of I/O" is to tune it to make less I/O.
    I/O is the most expensive operation for a RDBMS. It is very difficult to make this expense less (i.e. make I/Os faster). It is more effective to make sure that you use this expensive resource in an optimal way.
    Simple example. Single very large table with 4 indexes. Not very efficient design I/O wise. Single very large partitioned table with local indexes. This can reduce I/O on that table by up to 80% in my experience.
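    A sketch of that last example (names are illustrative): a range-partitioned table with a local index, so both index maintenance and scans touch only the relevant partitions.
    CREATE TABLE sales_fact (
       sale_date  DATE,
       amount     NUMBER
    )
    PARTITION BY RANGE (sale_date) (
       PARTITION p2007q1 VALUES LESS THAN (DATE '2007-04-01'),
       PARTITION p2007q2 VALUES LESS THAN (DATE '2007-07-01'),
       PARTITION p_max   VALUES LESS THAN (MAXVALUE)
    );
    -- a LOCAL index is equipartitioned with the table
    CREATE INDEX sales_fact_dt_ix ON sales_fact (sale_date) LOCAL;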

  • How to install 10g database on windows with db block size 16k

    Hi,
    Can someone help me install an Oracle 10g database on Windows XP with the DB block size set to 16K?
    I need this database because it is one of the recommendations for installing OWB (Oracle Warehouse Builder).
    Thanks,
    Philip.

    1) In the initialization parameter file - one way:
    DB_BLOCK_SIZE=16384
    2) The other way: once you install Oracle 10g, at the time you create the database with the Database Configuration Assistant you can modify the block size on the Initialization Parameters screen.
    -     Sizing tab
    -     Block Size: 8K/16K
    *** Remember, by default Oracle 10g uses an 8K block size.
    You use the options on the Sizing tab to configure the block size of your database
    and the number of processes that can connect to this database. The Block Size setting corresponds to the smallest unit of storage within the Oracle database. All storage of database objects (tables, indexes, and so on) are governed by the block size. The block size defaults to 8KB, but you can modify it. Once the database is created, you cannot modify this setting.
    The maximum and minimum size of an Oracle block depends on the operating system. Generally, 8KB is sufficient for most transaction-oriented applications, and larger block sizes such as 16KB and higher are used in data warehouse–type applications.
    The Processes setting specifies the maximum number of simultaneous operating system processes that can be connected to this Oracle database. You must include at least six processes for each of the Oracle background processes. You can increase this number on the Initialization parameter screen.
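    A short sketch of option 1 (the parameter must be in place before CREATE DATABASE runs; as noted above, it cannot be changed afterwards):
    -- initialization parameter file entry, set before database creation:
    --   db_block_size=16384
    -- verify once the database is up:
    SELECT value FROM v$parameter WHERE name = 'db_block_size';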

  • Any reliable cases where larger block sizes are useful?

    So I did some googling around to read up on 16KB or larger block sizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the use of larger block sizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good material is buried beneath all the trash.
    So have any of the Oak Table people, or other folks who write articles with quality testing, found cases where larger block sizes are useful?
    I don't have a specific need. I'm just curious. Every time I look this up I get buried in generic copy-and-paste blog posts that copy the docs, the generic test cases that were debunked by the guys above, and other junk. So it's hard to research this.

    Guess2 wrote:
    So I did some googling around to read up on 16KB or larger block sizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the use of larger block sizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good material is buried beneath all the trash.
    So have any of the Oak Table people and other guys who write articles where they do quality testing find cases where larger block sizes are useful?
    Lurking in the various things I've written about block sizes there are a couple of comments about using different block sizes (occasionally) for LOBs - though this might be bigger or smaller depending on the sizes of the actual LOBs and the usage pattern: it also means you automatically separate the LOB cache from the main cache, which can be very helpful.
    I've also suggested that for IOTs (index organized tables), where the index entries can be fairly large and you don't want to create an overflow segment, you may get some benefit if the larger block size typically allows all rows for a given (partial) key value to reside in just one or two blocks.  The same argument can apply, though with slightly less strength, for "fat" indexes (i.e. ones you've added columns to in order to avoid visiting the table for very important time-critical queries).  The drawback in these two cases is that you're second-guessing, and to an extent choking, the LRU algorithms, and you may find that the gain on the specific indexes is obliterated by the loss on the rest of the caching activity.
    Regards
    Jonathan Lewis
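    For concreteness, a sketch of the LOB arrangement mentioned above (names and sizes are illustrative; a non-default block size tablespace needs the matching cache parameter set first):
    ALTER SYSTEM SET db_16k_cache_size = 128M;
    CREATE TABLESPACE lob_ts DATAFILE 'lob_ts01.dbf' SIZE 1G BLOCKSIZE 16K;
    CREATE TABLE docs (
       id  NUMBER PRIMARY KEY,
       doc CLOB
    )
    LOB (doc) STORE AS (TABLESPACE lob_ts);  -- LOB data cached in the separate 16K pool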

  • Finding appropriate block size?

    Hi All,
    I believe this might be a basic question: how do you find an appropriate block size when building a database for a specific application?
    I have always seen the default 8K block size used everywhere (in the 300-350 databases I have seen so far)... but why, and how do people settle on this block size before creating a production database?
    In the same way, how are memory settings finalized before creating a database?
    -Yasser

    Yasser,
    I have been very fortunate to buy and read several very high quality Oracle books which not only correctly state the way something works, but also manage to provide a logical, reasoned explanation for why things happen as they do, when it is appropriate, and when it is not.
    While not the first book I read on the topic of Oracle, the book “Oracle Performance Tuning 101” by Gaja Vaidyanatha marked the start of logical reasoning in performance tuning exercises for me. A couple of years later I learned that Gaja was a member of the Oaktable Network.
    I read the book “Expert Oracle One on One” by Tom Kyte and was impressed with the test cases presented in the book, which help readers understand the logic of why Oracle behaves as it does, and I also enjoyed the performance tuning stories in the book. A couple of years later I found Tom Kyte’s “Expert Oracle Database Architecture” book at a book store and bought it without a second thought; some repetition from his previous book, fewer performance tuning stories, but a lot of great, logically reasoned information. A couple of years later I learned that Tom was a member of the Oaktable Network.
    I read the book “Optimizing Oracle Performance” by Cary Millsap, a book that once again marked a distinct turning point in the method I used for performance tuning - the logic made all of the book easy to understand. A couple of years later I learned that Cary was a member of the Oaktable Network.
    I read the book “Cost-Based Oracle Fundamentals” by Jonathan Lewis, a book whose title made it seem too much like a beginner’s book until I read the review by Tom Kyte. Needless to say, the book also marked a turning point in the way I approach problem solving through logical reasoning, asking and answering the question “What is Oracle thinking?”. Jonathan is a member of the Oaktable Network - a pattern is starting to develop here.
    At this point I started looking for anything written in book or blog form by members of the Oaktable Network. I found Richard Foote’s blog, which somehow managed to make Oracle indexes interesting for me - probably through the use of logic and test cases which allowed me to reproduce what I was reading about. I found Jonathan Lewis’ blog, which covers many interesting topics about Oracle, all of which use logical approaches to aid understanding. I also found the blogs of Kevin Closson, Greg Rahn, Tanel Poder, and a number of other members of the Oaktable Network.
    The draw to the performance tuning side of Oracle administration was primarily a search for the elusive condition known as Compulsive Tuning Disorder, a term coined in the book written by Gaja. There were, of course, many other books which contributed to my knowledge - I have reviewed at least 8 of the Oracle related books on the amazon.com website.
    Motivation… it is interesting to read what people write about Oracle. Sometimes what is written directly contradicts what one knows about Oracle. In such cases, it may be a fun exercise to determine if what was written is correct (and why it is logically correct), or why it is wrong (and why it is logically incorrect). Take, for example, the “Top 5 Timed Events” seen in this book (no, I have not read this book, I bumped into it a couple times when performing Google searches):
    http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA17#v=onepage&q=&f=false
    The text of the book states that the “Top 5 Timed Events” shown indicates a CPU Constrained Database (side note: if a database is a series of files stored physically on a disk, can it ever be CPU constrained?). From the “Top 5 Timed Events”, we see that there were 4,851 waits on the CPU for a total time of 4,042 seconds, and this represented 55.76% of the wait time. Someone reading the book might be left thinking one of:
    * “That obviously means that the CPU is overwhelmed!”
    * “Wow 4,851 wait events on the CPU, that sure is a lot!”
    * “Wow wait events on the CPU, I didn’t know that was possible?”
    * “Hey, something is wrong with this ‘Top 5 Timed Events’ output as Oracle never reports the number of waits on CPU.”
    * “Something is really wrong with this ‘Top 5 Timed Events’ output as we do not know the number of CPUs in the server (what if there are 32 CPUs), the time range of the statistics, and why the average time for a single block read is more than a second!”
    A Google search then might take place to determine if anyone else reports the number of waits for the CPU in an Oracle instance:
    http://www.google.com/search?num=100&q=Event+Waits+Time+CPU+time+4%2C851+4%2C042
    So, it must be correct… or is it? What does the documentation show?
    Another page from the same book:
    http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA28#v=onepage&q=&f=false
    Shows the command:
    alter system set optimizer_index_cost_adj=20 scope = pfile;
    Someone reading the book might be left thinking one of:
    * That looks like an easy to implement solution.
    * I thought that it was only possible to alter parameters in the spfile with an ALTER SYSTEM command, neat.
    * That command will never execute, and should return an “ORA-00922: missing or invalid option” error.
    * Why would the author suggest a value of 20 for OPTIMIZER_INDEX_COST_ADJ and not 1, 5, 10, 12, 50, or 100? Are there any side effects? Why isn’t the author recommending the use of system (CPU) statistics to correct the cost of full table scans?
    A Google search finds this book (I have not read this book either, just bumped into it during a search) by a different author which also shows that it is possible to alter the pfile through an ALTER SYSTEM command:
    http://books.google.com/books?id=ufz5-hXw2_UC&pg=PA158#v=onepage&q=&f=false
    So, it must be correct… or is it? What does the documentation show?
    Regarding the question of updating my knowledge, I read a lot of books on a wide range of subjects including Oracle, programming, Windows and Linux administration, ERP systems, Microsoft Exchange, telephone systems, etc. I also try to follow Oracle blogs and answer questions in this and other forums (there are a lot of very smart people out there contributing to forums, and I feel fortunate to learn from those people). As long as the book or blog offers logical reasoning, it is fairly easy to tie new material into one’s pre-existing knowledge.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Mirrored RAID:  MediaKit reports block size error

    I am trying to create a 2nd set of backup drives for my photos.  I have two new Iomega 2TB drives, which look essentially identical to the drives I'm currently using as my primary backups in a mirrored RAID set.
    I can start the process with freshly erased and reformatted drives (with the default mac format, extended, journaled, unencrypted, not case-sensitive).  And after a minute or three, I see
    "MediaKit reports block size error, usually caused by not being a multiple of 512."
    The RAID options are Mirrored RAID, Mac extended journaled, and options settings are default.
    I see several series of posts with complaints about encrypting RAIDs and disk block sizes, but not unencrypted errors.   I actually started out trying to do this with the 2006 MBP running 10.6.8 and got a different error:  "POSIX reports:  the operation couldn't be completed. Operation not permitted."  I wasn't sure whether the 2TB RAID I already have was set up with the older or newer computer - it was definitely before I put Lion on this one - so I tried this one and now have a different error.
    Any idea what the problem might be? 

    Update:  I spent some time on the phone with an Apple support RAID expert, and we couldn't figure out what the error was; we couldn't bypass it by playing with partitions on the drives, or any of another couple of maneuvers that I've already forgotten.  He noted that his own searches were showing a lot of mentions of similar problems, but only with Iomega drives, and he was finding the same links I found earlier about problems creating encrypted drives.  Now I'm trying to decide if it's worth throwing more good money after bad on a call with Iomega support, and waiting to see if the Iomega forum is at all helpful.

  • OSD-04001: invalid logical block size (OS 2800189884)

    My Windows 2003 machine, which was running Oracle XE, crashed.
    I installed Oracle XE on Windows XP on another machine.
    I copied my D:\oracle\XE10g\oradata folder from the Win2003 machine to the same location on the WinXP machine.
    When I start the database in WinXP using SQL*Plus I get the following message:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 146800640 bytes
    Fixed Size 1286220 bytes
    Variable Size 62918580 bytes
    Database Buffers 79691776 bytes
    Redo Buffers 2904064 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    In my D:\oracle\XE10g\app\oracle\admin\XE\bdump\alert_xe I found the following errors:
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 4 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    Wed Apr 25 18:38:36 2007
    ALTER DATABASE MOUNT
    Wed Apr 25 18:38:36 2007
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Wed Apr 25 18:38:36 2007
    ORA-205 signalled during: ALTER DATABASE MOUNT...
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Please help.
    Regards,
    Zulqarnain

    Hi Zulqarnain,
    Error OSD-04001 is a Windows NT-specific Oracle message. It means that the logical block size is not a multiple of 512 bytes, or it is too large.
    So what can you do? Well, you should try to change the value of DB_BLOCK_SIZE in the initialization parameter file.
    Regards

  • Best Raid Block Size for video editing

    I cannot seem to get my head around which RAID block size I should set my striped RAID 50 configuration to.
    There seems to be very little info about this, but what info there is seems to imply that it could seriously affect the performace of the Raid.
    I have initialized two RAID arrays as RAID 5 and was about to stripe them together using Disk Utility, when I decided to click on Options in the bottom left of the Disk Utility window. This is where you can set the RAID block size.
    The default is 32K, but it states that there could be 'performance benefits' if this setting is changed to better match my configuration.
    What exactly does this mean?
    I want to read multiple DV streams from my RAID 50 - any ideas which block size I should allocate?
    Should I just leave it as the default 32K??
    Any help will be appreciated
    Cheers
    Adam

    My main concern is really to have as many editors as possible reading DV footage from the Raid simultaneously (up to 5 at once).
    I understand that we may struggle at times, but Xsan isn't an option and I just need to get the best out of a limited budget!
    Cheers
    Adam

  • Choosing block size for RAID 0 & Final Cut

    Hi.
    I now have 3 500GB internal Seagate drives in bays 2/3/4 and want to make a striped 1.5TB RAID to use with Final Cut Studio 2. The help page talks about choosing a "large" data block size for use with video, but makes no specific size suggestion. What value would you recommend that I select for the block size? I haven't been in there yet so I don't know what the choices are.
    Any other settings I should be aware of that will optimize the RAID performance for video capture and editing? Thanks!
    Fred

    If you're using Disk Utility to set up your RAID, when you go to the RAID tab you'll see an Options button near the bottom of the window... clicking this will open a small menu where you can set the data block size... the largest is 256K, which is what you'd want to use.
    As for your other question... have a look at this website: http://bytepile.com/raid_class.php
    Note that Disk Utility can only set up RAID 0 & RAID 1 (if I remember rightly).

  • Buffer pool keep and multiple db block sizes

    I have a tablespace with 8k block size (database default) and a tablespace with 16k block size. I have db_cache_size and db_16k_cache_size set (obviously).
    Also i have buffer cache keep set in the database.
    Question: if a table is placed in a tablespace with a 16K block size, and its buffer pool is set to KEEP, does it end up in the KEEP pool (like tables from an 8K tablespace with the KEEP pool set), or does it end up in the 16K buffer cache?

    You can find the answer in the following online manual:
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm#i16408
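    For reference, a short sketch of the parameters involved (per the manual linked above, the KEEP and RECYCLE pools exist only for the standard block size, so a segment in a 16K tablespace is cached in the 16K buffer regardless of its BUFFER_POOL setting; sizes here are illustrative):
    ALTER SYSTEM SET db_cache_size      = 512M SCOPE = BOTH;  -- standard (8K) block buffers
    ALTER SYSTEM SET db_keep_cache_size = 128M SCOPE = BOTH;  -- KEEP pool, standard block size only
    ALTER SYSTEM SET db_16k_cache_size  = 256M SCOPE = BOTH;  -- all segments in 16K tablespaces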

  • DASYLAB QUERIES on Sampling Rate and Block Size

    HELP!!!! I have been dwelling on DASYLab for a few weeks over certain problems I've faced, yet haven't come to any conclusion. I hope that someone will be able to help. Lots of thanks!
    1. I need to have more data points, so I increase the sampling rate (SR). When the sampling rate is increased, the block size (BS) increases correspondingly.
    For a low sampling rate (SR < 100 Hz) and a block size of 1, the recorded time in DASYLab and the real experimental time are the same. But the problem starts when SR > 100 Hz with BS = 1: I realized that the recorded time in DASYLab differs from the real time. To solve the time difference problem, I decided to use "AUTO" block size.
    Qn 1: Is there any way to solve the time difference problem for high SR?
    Qn 2: For Auto block size, is the recorded result in DASYLab at one moment in time the actual value, or has it been overwritten by the value from the previous block when AUTO BS is chosen?
    2. I have tried getting the result both for BS = 1 and for auto BS. Regardless of the sampling rate, the values obtained when BS = 1 are always larger than those with Auto block size. Qn 1: Which is the actual result of the test?
    Qn2: Is there any best combination of the block size and sampling rate that can be used?
    Hope someone is able to help me with the above problem.
    Thanks-a-million!!!!!

    Generally, the DASYLab sampling rate to block size ratio should be between 2:1 and 10:1.
    If your sample rate is 1000, the block size should be 500 to no smaller than 100.
    Very large block sizes that encompass more than 1 second worth of data often cause display delays that frustrate users.
    Very small block sizes that have less than 10 ms of data cause DASYLab to bog down.
    Sample rate of 100 samples / second and a block size of 1 is going to cause DASYLab to bog down.
    There are many factors that contribute to performance, or lack there of - the speed and on-board buffers of the data acquisition device, the speed, memory, and video capabilities of the computer, and the complexity of the worksheet. As a result, we cannot be more specific, other than to provide you with the rule of thumb above, and suggest that you experiment with various settings, as you have done.
    Usually the only reason that you want a small block size is for closed loop control applications. My usual advice is that DASYLab control is around 1 to 10 samples/second. Much faster, and delays start to set in. If you need fast, tight control loops, there are better solutions that don't involve Microsoft Windows and DASYLab.
    Q1 - without knowing more about your hardware, I cannot answer the question, but, see above. Keep the block size ratio between 2:1 and 10:1.
    Q2 - without knowing more about your hardware and the driver, I'm not sure that I can fully answer the question. In general, the DASYLab driver instructs the DAQ device driver to program the DAQ device to a certain sampling rate and buffer size. The DASYLab driver then retrieves the data from the intermediate buffers and feeds it to the DASYLab A/D Input module. If the intermediate buffers are too small, or the sample rate exceeds the capability of the built-in buffers on the hardware, then data might be overwritten. You should have received warning or error messages from the driver.
    Q3 - See above.
    It may be that your hardware driver is not configured correctly. What DAQ device, driver, DASYLab version, and operating system are you using? How much memory do you have? How complex is your worksheet? Are you doing control?
    Have you contacted your DASYLab reseller for more help? They should know your hardware better than I do.
    - cj
    Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab.
