Data Block Size Differences

My block size is 92160, but the logs state the following:
[Tue Jan 10 21:52:48 2012]Local/RFPWork/RFP/admin@Native Directory/2564/Info(1010008)
Maximum Declared Blocks is [428197512113280] with data block size of [27132]
[Tue Jan 10 21:52:48 2012]Local/RFPWork/RFP/admin@Native Directory/2564/Info(1010007)
Maximum Actual Possible Blocks is [377251820487360] with data block size of [11520]
Why is there a difference between these three values?
Please advise

Correction: the value in the logs is in bytes, while your block size is in kilobytes. Sorry.
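One reading that makes the three numbers line up (this is my assumption, based on a BSO cube storing 8 bytes per cell, not something the log states): the log reports block size as a number of cells, while the 92160 figure from the database properties is in bytes, so
11520 stored cells * 8 bytes per cell = 92160 bytes.
The declared size of 27132 cells would count every member of the dense dimensions, whereas the actual size of 11520 would count only the stored members.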

Similar Messages

  • Data block size

    I have just started reading the concepts and learned that Oracle database data is stored in data blocks. The standard block size is specified by the
    DB_BLOCK_SIZE init parameter. Additionally, we can specify up to 5 other block sizes using the DB_nK_CACHE_SIZE parameters.
    Let us say I define in the init.ora
    DB_BLOCK_SIZE = 8K
    DB_CACHE_SIZE = 4G
    DB_4K_CACHE_SIZE=1G
    DB_16K_CACHE_SIZE=1G
    Questions:
    a) Does this mean I can create tablespaces with 8K, 4K and 16K block sizes only?
    b) Whenever I query data from these tablespaces, will the blocks be cached in the respective buffer caches?
    Thanks in advance.
    Neel

    Yes, it will give an error message if you create a tablespace with a non-standard block size without specifying the DB_nK_CACHE_SIZE parameter in the init parameter file.
    Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a tablespace with a block size different from the database standard block size. In order for the BLOCKSIZE clause to succeed, you must have already set the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE initialization parameter. Further, the integer you specify in the BLOCKSIZE clause must correspond to the setting of one DB_nK_CACHE_SIZE parameter. Although redundant, specifying a BLOCKSIZE equal to the standard block size, as specified by the DB_BLOCK_SIZE initialization parameter, is allowed.
    The following statement creates tablespace lmtbsb, but specifies a block size that differs from the standard database block size (as specified by the DB_BLOCK_SIZE initialization parameter):
    CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
    BLOCKSIZE 8K;
    Reference: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tspaces003.htm
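    To tie this back to the questions above, here is a minimal sketch of how the 16K case would look with the quoted init settings (the tablespace name and datafile path below are invented for the example, and the ALTER SYSTEM assumes an spfile is in use):
    -- the matching 16K buffer cache must exist before the tablespace can be created
    ALTER SYSTEM SET db_16k_cache_size = 1G SCOPE=BOTH;
    CREATE TABLESPACE demo_16k
      DATAFILE '/u02/oracle/data/demo_16k01.dbf' SIZE 100M
      EXTENT MANAGEMENT LOCAL
      BLOCKSIZE 16K;
    On question a): with those settings you could use 8K (the standard size), 4K and 16K; any other BLOCKSIZE would fail until the matching DB_nK_CACHE_SIZE is set. On question b): yes, blocks from the 4K and 16K tablespaces are cached in their respective DB_nK_CACHE_SIZE pools rather than in the default buffer cache.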

  • Query Execution/Elapsed Time and Oracle Data Blocks

    Hi,
    I have created 3 tables with one column only. As an example Table 1 below:
    SQL> create table T1 ( x char(2000));
    So 3 tables are created in this way i.e. T1,T2 and T3.
    T1 = in the default database tablespace of 8k (11g v11.1.0.6.0 - Production) (O.S=Windows).
    T2 = I created in a Tablespace with Blocksize 16k.
    T3 = I created in a Tablespace with Blocksize 4k. In the same Instance.
    Each table has approx. 500 rows (so the table sizes are the same in all cases, to test the query execution time). As these 3 tables are created under different data block sizes, the allocated number of data blocks is different in each case:
    T1 = 8k  = 256 blocks = 00:00:04.76 (query execution time/elapsed time)
    T2 = 16k = 121 blocks = 00:00:04.64
    T3 = 4k  = 490 blocks = 00:00:04.91
    Table Access is FULL i.e. I have used select * from table_name; in all 3 cases. No Index nothing.
    My question is: why is the query execution time nearly the same in all 3 cases, given that Oracle has to read all the data blocks in each case to fetch the records, and there is a big difference in the allocated number of blocks?
    In the 16k example, Oracle has to read just 121 blocks, and it takes nearly the same time as reading the 490 blocks of the 4k case.
    This is just one example with different data block sizes. I have around 40 tables in each block size tablespace and the results are nearly the same. It's very strange for me, because there is a big difference in the number of allocated blocks but the execution time is almost the same; the only difference is in milliseconds.
    I'll highly appreciate the expert opinions.
    Bundle of thanks in advance.
    Best Regards,

    Hi Chris,
    No, I'm not using separate databases; it's an 8k database with non-standard blocksizes of 16k and 4k.
    Actually I wanted to test the elapsed time of these 3 tables, so for that I tried to create tables of the same size.
    And the way I equalized them is that I created a one-column table with char(2000).
    555 MB is the figure I wanted to use for these 3 tables (no special figure, just to make it bigger than the RAM used for my db at startup, to be sure of not retrieving the records from the cache).
    so row size with overhead is 2006 * 290,000 rows = 581740000(bytes) / 1024 = 568105KB / 1024 = 555MB.
    Through this calculation I thought it would be the total table size, so I created the same number of rows in all 3 blocksizes.
    If it's wrong, then what a mess, because I have been calculating table sizes this way for the last few months.
    Can you please explain a little how you found out the table sizes in the different block sizes? Though I understood how you calculated the size in MB from these 3 block sizes:
    T8K = 97177 blocks = 759MB (97177 * 8 = 777416KB / 1024 = 759MB)
    T16K = 41639 blocks = 650MB
    BT4K = 293656 blocks = 1147MB
    Calculating the size of a table is new to me. Can you please tell me how many rows I should create in each of these 3 tables to make them equal in MB, to test the elapsed time?
    Then I'll run my test again and put the results here, because if I've calculated the table sizes wrongly there is no need to talk about elapsed time; first I must equalize the table sizes properly.
    SQL> select sum(bytes)/1024/1024 "Size in MB" from dba_segments
      2  where segment_name = 'T16K';
    Size in MB
    655
    Is above SQL is correct to calculate the size or is it the correct alternative way to your method of calculating the size??
    I created the same table again with everything same and the result is :
    SQL> select num_rows,blocks from user_tables where table_name = 'T16K';
    NUM_ROWS BLOCKS
    290000 41703
    64 more blocks are allocated this time, so maybe that's why it's showing a total size of 655 instead of 650.
    Thanks a lot for your help.
    Best Regards,
    KAm
    Edited by: kam555 on Nov 20, 2009 5:57 PM
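    A quick way to compare the three segments straight from the data dictionary would be something like the sketch below (T8K, T16K and BT4K are the table names used in this thread; adjust them to your own):
    SQL> select segment_name, blocks, bytes/1024/1024 "Size in MB"
      2  from user_segments
      3  where segment_name in ('T8K','T16K','BT4K');
    Dividing each size by NUM_ROWS from USER_TABLES gives the real bytes per row in each block size, and from that you can scale the row counts so all three tables land on the same MB figure before re-running the timing test.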

  • Choosing block size for RAID 0 & Final Cut

    Hi.
    I now have 3 500GB internal Seagate drives in bays 2/3/4 and want to make a striped 1.5TB RAID to use with Final Cut Studio 2. The help page talks about choosing a "large" data block size for use with video, but makes no specific size suggestion. What value would you recommend that I select for the block size? I haven't been in there yet so I don't know what the choices are.
    Any other settings I should be aware of that will optimize the RAID performance for video capture and editing? Thanks!
    Fred
    Message was edited by: FredGarvin

    If you're using Disk Utility to set up your RAID, when you go to the RAID tab you'll see an Options button near the bottom of the window. Clicking this will open a small menu where you can set the data block size; the largest is 256K, which is what you'd want to use.
    As for your other question, have a look at this website: http://bytepile.com/raid_class.php
    Note that Disk Utility can only set up RAID 0 and RAID 1 (if I remember rightly).

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase, EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, then it is worth investigating whether this is simply due to standard growth or to a recent change that has had an unexpectedly significant impact.

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a Data Warehouse on Oracle 11g R2 and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice in such considerations regarding tablespaces and block size? I did some research and it is hard to find a clear answer; some resources advise that block size is not important and can be left small (8 KB), while others state that it is crucial and should be the biggest possible (64 KB).
    The other thing is what part of the data should be placed where? Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may lead to a decrease in performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that's why they should be split out. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently.
    How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    Any help highly appreciated and thank you in advance.

    Wojtus-J wrote:
    We are preparing to implement Data Warehouse on Oracle 11g R2 and currently I am trying to set up some storage strategy - unfortunately I have very little experience with that.
    With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    The question is what are general advices in such considerations according table spaces and block size?
    If you need to ask about block sizes, use the default (i.e. 8KB).
    I made some research and it is hard to find some clear answer,
    But if you get contradictory advice from this forum, how would you decide which bits to follow?
    A couple of sensible guidelines when researching on the internet - look for material that is datestamped with recent dates (last couple of years), or references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, and give greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    The other thing is what part of data should be placed where? ... How should I organize partitions in terms of table spaces? Is it a good idea to have "old" data (read only) partitions on separate table spaces?
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance thing was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition data so that a large fraction of the data can eventually be made read-only: using tablespaces to mark time-boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary - e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
    Regards
    Jonathan Lewis
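    As a small, concrete illustration of the time-specific tablespace idea above (only a sketch - the tablespace, table and partition names are invented, and the MOVE assumes a range-partitioned fact table):
    -- a tablespace dedicated to one year of history
    CREATE TABLESPACE sales_2010 DATAFILE '/u02/oracle/data/sales_2010_01.dbf' SIZE 10G;
    -- pack the old partition into it, then freeze it
    ALTER TABLE sales MOVE PARTITION p_2010 TABLESPACE sales_2010;
    ALTER TABLESPACE sales_2010 READ ONLY;
    Once a tablespace is read-only it only needs to be backed up once, which is much of the attraction for the older part of a warehouse.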

  • Block size in tt for writing data to transaction log and checkpoint files

    Hello,
    what block size is TimesTen using when its writing data to transaction log and checkpoint files? Does it use some fixed block size during filesystem writes?

    Although in theory logging can write 2 KB blocks, in almost all circumstances it will write 4 KB or larger, so yes, a filesystem with a 4 KB block size is fine for both checkpointing and logging.
    Chris

  • Space utilization and optimal row size in Data Blocks in Oracle 11g

    Hi,
    My main concern so far is to find the optimal number of rows/tuples per Oracle data block of size 8192. For that purpose I'm doing different kinds of testing.
    I created one table:
    SQL> create table t5
    2 (x char(2000));
    Table created.
    Inserted 4 rows in it.
    Queried to check in which block no's these 4 rows exist.
    SQL> SELECT x,DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) "Block No."
    2 from t5;
    x Block No.
    A 422
    B 422
    C 422
    D 423
    SQL> analyze table t5 compute statistics;
    Table analyzed.
    SQL> select LOGGING,BACKED_UP,NUM_ROWS,BLOCKS,EMPTY_BLOCKS, AVG_SPACE,CHAIN_CNT, AVG_ROW_LEN
    2 from user_tables
    3 where table_name = 'T5';
    LOG B NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_SPACE CHAIN_CNT AVG_ROW_LEN
    YES N 4 5 3 6466 0 2006
    8 data blocks are initially allocated, but my question is: even when creating tables of small sizes, every time I see that BLOCKS (used) is 5.
    Why are 3 blocks reported as EMPTY_BLOCKS all the time?
    AVG_SPACE is the free space in every used block, if I'm not wrong?
    In this scenario 3 rows (each of around 2006 bytes) are inserted into one block of 8192, i.e. 8192 - 6006 = 2186. Is that 2186 the PCTFREE, or what?
    How can I see this CHAIN_CNT column with a value other than 0?
    How can I actually find the optimal number of rows/tuples in one data block that will not increase overhead on the system?
    I'll highly appreciate the genuine help and solid suggestions.
    Thanks in Advance.
    Best Regards,
    Kam

    kamy555 wrote:
    If you want to suggest something it'll be nice of you. I can't disclose my main concern :)
    If you can't disclose what you mean by "optimal" and you can't disclose what "overhead" you are concerned about, I'm not sure that anyone could answer your question.
    In this scenario 3 rows (each of around 2006 bytes) are inserted into one block of 8192, i.e. 8192 - 6006 = 2186. Is that 2186 the PCTFREE, or what?
    PCTFREE is a percentage. You haven't specified what you set it to, but it decreases the space in a block available for inserts to 8192 * (1 - PCTFREE/100).
    How can I see this CHAIN_CNT column with a value other than 0?
    Why do you believe there would be chained rows? You would need to use the ANALYZE command to populate the CHAIN_CNT column.
    How can I actually find the optimal number of rows/tuples in one data block that will not increase overhead on the system?
    Since you can't disclose what "optimal" means or what "overhead" you're concerned with, I don't see how this question could be answered.
    Justin
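    On the CHAIN_CNT point, here is a minimal sketch of how chained rows could actually be observed (T5 is the table from this thread; the CHAINED_ROWS table comes from the standard utlchain.sql script, and chaining will only show up once a row no longer fits in one block, e.g. rows longer than 8K):
    SQL> @?/rdbms/admin/utlchain.sql
    SQL> analyze table t5 list chained rows into chained_rows;
    SQL> select count(*) from chained_rows where table_name = 'T5';
    -- and a rough rows-per-block figure, assuming the default PCTFREE of 10
    -- and ignoring the block header overhead:
    SQL> select trunc(8192 * 0.90 / avg_row_len) "Approx rows per block"
      2  from user_tables where table_name = 'T5';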

  • How to find Tuple(row) size in a Data Block

    hi,
    I want to find out the size of one row, and the total size of a table, in a data block in Oracle 11g. For example, I have an Employee table with 300 rows. I want to find out how much space these rows occupy in a data block (8192 default) and also the size of one individual row.
    I'll appreciate the valuable suggestions.
    Thanks a lot.
    Regards,
    Kamran

    Hi,
    Welcome to the forum!
    See [Calculating Oracle average row length|http://www.dba-oracle.com/t_average_row_length.htm], I hope it helps
    Regards,
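    A small sketch of both numbers straight from the dictionary (this assumes statistics are gathered, the default 8K block size, and that the table is called EMPLOYEE as in the question; adjust the name to suit):
    SQL> exec dbms_stats.gather_table_stats(user, 'EMPLOYEE');
    -- average size of one row, in bytes
    SQL> select avg_row_len from user_tables where table_name = 'EMPLOYEE';
    -- space occupied by the row data versus space used/allocated by the segment
    SQL> select num_rows * avg_row_len "Row data (bytes)", blocks * 8192 "Used blocks (bytes)"
      2  from user_tables where table_name = 'EMPLOYEE';
    SQL> select bytes "Allocated (bytes)" from user_segments where segment_name = 'EMPLOYEE';
    DBMS_ROWID.ROWID_BLOCK_NUMBER, used earlier in this thread, will additionally show which block each individual row landed in.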

  • Buffer data before chart it and block size

    I hope you can help me with this situation, because I have been stuck on it for two days and I think I won't see the light any time soon, and time is a scarce resource.
    I want to use an NI DAQCard-AI-16XE-50 (20 kS/s according to the specifications). To acquire data in DASYLab I've been using the OPC DA system, but when I try to get a chart from the signal I get awful results.
    I guess the origin of the problem is that the PC is not powerful enough to generate a chart in real time, so is there a block to save the data and then graph it, without using the "Write Data" block, to avoid writing data to disk?
    Another cause of the problem could be an incorrectly set value of the block size, but in my point of view, with 10 kHz and a block size of 4096, that is more than necessary to acquire a signal of 26 Hz (shown in the photo). If I reduce the block size to 1, the signal shown in the graph is a constant at the first value acquired. Why could this be?
    Thanks in advance for your answers, 
    Solved!
    Go to Solution.
    Attachments:
    data from DAQcard.PNG ‏95 KB

    Is there someone who can help me?
    I connect a CN606TC2 and DASYLab 11... using the RS232 cable, and have followed the instruction manual of the CN606TC2.
    In the DASYLab RS232 module, there are boxes for:
    1) Measurement data request,
    2) Measurement data format ...
    What should I write in these boxes so that the DASYLab digital meter modules can read from the CN606TC2?
    To start communication, the Command Module must send alert code ASCII [L] hex 4C. 
    Commands requesting data from the scanner (CN606TC2):
     ASCII [A] hex 41 = Zones/ Alarms/ Scan time
     ASCII [M] hex 4D = Model/ Password/ ID#/ # of zones
     ASCII [S] hex 53 = Setpoints
     ASCII [T] hex 54 = Temperature
    I did not understand the program and the ASCII code.
    I send [T] in the RS232 monitor and DASYLab returns 54.
    I am very grateful for your help

  • Size difference for expdp estimate and from data dictionaries

    Hello
    We have one large table with around 70 million rows, and when we check the size of this table using the query below
    select sum(bytes)/1024/1024/1024 "size in GB" from user_segments where segment_name='MYTABLE';
    we get the expected size of 17GB.
    However, when we exported the table using EXPDP it showed an estimated size of 34GB.
    Can anyone help me identify why there is this difference from the expected table size in GB?
    Note: the table contains a BLOB column.
    Oracle10G
    Sun Solaris10
    Edited by: aps on Feb 20, 2009 7:28 AM

    The estimate is for table row data only; it does not include metadata.
    BLOCKS - The estimate is calculated by multiplying the number of database blocks used by the target objects times the appropriate block sizes.
    DBA_LOB_PARTITIONS will give you more accurate info.
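    To see where the extra gigabytes in the estimate come from, you could sum the table segment and its LOB segment from the dictionary; this is a minimal sketch (MYTABLE is the table name from the question, and the query assumes the objects are in your own schema):
    SQL> select s.segment_name, s.segment_type, sum(s.bytes)/1024/1024/1024 "Size in GB"
      2  from user_segments s
      3  where s.segment_name = 'MYTABLE'
      4     or s.segment_name in (select l.segment_name from user_lobs l where l.table_name = 'MYTABLE')
      5  group by s.segment_name, s.segment_type;
    Data Pump's default ESTIMATE=BLOCKS method works from allocated blocks; running expdp with ESTIMATE=STATISTICS bases the estimate on gathered statistics instead, which can give a quite different number.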

  • Difference between records in two data blocks

    Can you suggest the best method to find the difference (minus) between the records in two similar data blocks?
    Thanks!

    What's your database version? Does this work? It can make the job a lot easier as you only have to loop through each block once.
    -- create collection types
    CREATE TYPE myobj AS object(col1 NUMBER, col2 NUMBER);
    CREATE TYPE mytab AS TABLE OF myobj;
    -- in the form:
    DECLARE
      b1 mytab := mytab();
      b2 mytab := mytab();
    BEGIN
      -- loop through blocks and populate collections
      b1.extend;
      b1(1) := myobj(1,1);
      b1.extend;
      b1(2) := myobj(2,1);
      b1.extend;
      b1(3) := myobj(3,5);
      b1.extend;
      b1(4) := myobj(4,2);
      b2.extend;
      b2(1) := myobj(1,1);
      b2.extend;
      b2(2) := myobj(2,2);
      b2.extend;
      b2(3) := myobj(3,5);
      -- find the difference (you might need to pass collections as parameters to a
      -- server-side function and do the query there)
      FOR rec IN (
        SELECT col1, col2 FROM TABLE(CAST(b1 AS mytab))
        MINUS
        SELECT col1, col2 FROM TABLE(CAST(b2 AS mytab))
      ) LOOP
        Dbms_Output.put_line(rec.col1||'-'||rec.col2);
      END LOOP;   
    END;
    /

  • Install Recommendations (RAID, ASM, Block Size etc)

    Hello,
    I am about to set up a new Oracle 10.2 database server. In the past, I used RAID 5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) it gave high availability, and 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow, to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a block size of 16KB versus our normal tablespace's block size of 8KB, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I setup the directory structure? How about partitioning?
    2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!

    The way I usually handle databases of that size if you don't feel like migrating to ASM redundancy is to use RAID-10. RAID5 is HORRIBLY slow (your redo logs will hate you) and if your controller is any good, a RAID-10 will be the same speed as a RAID-0 on reads, and almost as fast on writes. Also, when you create your array, make the stripe blocks as close to 1MB as you can. Modern disks can usually cache 1MB pretty easily, and that will speed the performance of your array by a lot.
    I just never got into ASM, not sure why. But I'd say build your array as a RAID-10 (you have the capacity) and you'll notice a huge difference.
    16k block size should be good enough. If you have recordsets that are that large, you might want to consider tweaking your multiblock read count.
    ~Jer
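    If you do look at tweaking the multiblock read count mentioned above, it is an ordinary dynamic init parameter; a minimal sketch (128 is only an example value, and the second statement assumes an spfile):
    SQL> show parameter db_file_multiblock_read_count
    SQL> alter system set db_file_multiblock_read_count = 128 scope=both;
    On recent releases you can also leave it unset and let Oracle derive it from the block size and I/O capabilities.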

  • Data Blocks and the Elapsed Time

    Hi,
    I have created 3 tables with one column only. As an example Table 1 below:
    SQL> create table T8k( x char(2000));
    So 3 tables are created in this way, i.e. T8K, T16K and T4K:
    T8K = in the default database tablespace of 8k (11g v11.1.0.6.0 - Production) (O.S. = Windows).
    T16K = created in a tablespace with blocksize 16k.
    T4K = created in a tablespace with blocksize 4k, in the same instance.
    Each table has 290,000 rows, and all 3 tables have an equal size of 555MB (2006 bytes (row size with overhead) * 290,000 / 1024 / 1024 = 555MB), to test the elapsed time (set timing on).
    As these 3 tables are created under different block sizes so the allocated no. of data blocks are different as below:
    T8K = 97177 blocks = 00:41:20.21 (elapsed time)
    T16K = 41639 blocks = 00:44:11.59
    BT4K = 293656 blocks = 00:37:29.06
    Please note the difference. For the first table, i.e. the 8k block size, the allocated blocks are 97177 and it takes around 41 minutes. For the third table, i.e. BT4K (in a 4k block size tablespace), the allocated blocks (293656) are almost 3 times as many, yet it takes around 37 minutes to execute the query. I mean, the difference is hardly 4 minutes while the difference in blocks is 3 times.
    In case of any doubt: I've created these tables bigger than the memory used for my db (memory_max_size 408M), i.e. 555 MB, because if the blocks were already in the cache then reading the blocks and counting the rows would take nearly the same time regardless of the block size or the number of blocks.
    I need solid suggestions and, if possible, links to some serious discussions of this issue.
    Bundle of thanks.
    Best Regards,

    Because I was not completely satisfied with the answers last time; it doesn't seem as simple to me as people said.
    I thought maybe some guru would have a look today, not just point out that I have posted it again.
    And I have compacted my question to avoid confusion.

  • DASYLAB QUERIES on Sampling Rate and Block Size

    HELP!!!! I have been dwelling on DASYLab for a few weeks regarding certain problems I've faced, yet haven't come to any conclusion. I hope that someone will be able to help. Lots of thanks!
    1. I need to have more data points, so I increase the sampling rate (SR). When the sampling rate is increased, the block size (BS) increases correspondingly.
    For a low sampling rate (SR < 100 Hz) and a block size of 1, the recorded time in DASYLab and the real experimental time are the same. But the problem starts when SR > 100 Hz with BS = 1: I realized that the recorded time in DASYLab differs from the real time. To solve the time difference problem, I've decided to use "AUTO" block size.
    Qn1: Is there any way to solve the time difference problem for high SR?
    Qn2: For auto block size, is the recorded result in DASYLab at a given moment the actual value, or has it been overwritten by the value from the previous block when AUTO BS is chosen?
    2. I've tried getting the result both for BS = 1 and when BS is auto. Regardless of the sampling rate, the values obtained when BS = 1 are always larger than those with auto block size. Qn1: Which is the actual result of the test?
    Qn2: Is there any best combination of block size and sampling rate that can be used?
    Hope someone is able to help me with the above problem.
    Thanks-a-million!!!!!
    Message Edited by JasTan on 03-24-2008 05:37 AM

    Generally, the DASYLab sampling rate to block size ratio should be between 2:1 and 10:1.
    If your sample rate is 1000, the block size should be 500 to no smaller than 100.
    Very large block sizes that encompass more than 1 second worth of data often cause display delays that frustrate users.
    Very small block sizes that have less than 10 ms of data cause DASYLab to bog down.
    Sample rate of 100 samples / second and a block size of 1 is going to cause DASYLab to bog down.
    There are many factors that contribute to performance, or lack thereof - the speed and on-board buffers of the data acquisition device; the speed, memory, and video capabilities of the computer; and the complexity of the worksheet. As a result, we cannot be more specific, other than to provide you with the rule of thumb above and suggest that you experiment with various settings, as you have done.
    Usually the only reason that you want a small block size is for closed loop control applications. My usual advice is that DASYLab control is around 1 to 10 samples/second. Much faster, and delays start to set in. If you need fast, tight control loops, there are better solutions that don't involve Microsoft Windows and DASYLab.
    Q1 - without knowing more about your hardware, I cannot answer the question, but, see above. Keep the block size ratio between 2:1 and 10:1.
    Q2 - without knowing more about your hardware and the driver, I'm not sure that I can fully answer the question. In general, the DASYLab driver instructs the DAQ device driver to program the DAQ device to a certain sampling rate and buffer size. The DASYLab driver then retrieves the data from the intermediate buffers and feeds it to the DASYLab A/D Input module. If the intermediate buffers are too small, or the sample rate exceeds the capability of the built-in buffers on the hardware, then data might be overwritten. You should have received warning or error messages from the driver.
    Q3 - See above.
    It may be that your hardware driver is not configured correctly. What DAQ device, driver, DASYLab version, and operating system are you using? How much memory do you have? How complex is your worksheet? Are you doing control?
    Have you contacted your DASYLab reseller for more help? They should know your hardware better than I do.
    - cj
    Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab.
